What is Disha?
Disha is a neural dynamics engine that demonstrates how artificial intelligence can learn physics. Instead of being programmed with physics equations, Disha's neural network learns the behavior of projectile motion by observing examples.
🎯 Key Concept
Traditional software uses equations: y = v₀ sin(θ) t - ½gt²
Neural networks learn patterns from data without explicit equations.
How Neural Networks Learn Physics
1. The Architecture
Inputs (4):
- Time (t)
- Velocity (v)
- Angle (θ)
- Gravity (g)

Outputs (2):
- X position
- Y position
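This architecture can be sketched as a small feed-forward network. Only the 4 inputs and 2 outputs come from the description above; the hidden-layer widths (16 units each) and the tanh activation are illustrative assumptions, not Disha's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed layer sizes: 4 inputs -> 16 -> 16 -> 2 outputs
sizes = [4, 16, 16, 2]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Pass inputs [t, v, theta, g] through the network."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(x @ W + b)               # hidden layers
    return x @ weights[-1] + biases[-1]      # linear output: [x_pos, y_pos]

pred = forward(np.array([1.0, 20.0, 0.8, 9.81]))
print(pred.shape)  # (2,)
```

With random weights the two outputs are meaningless; training (described below) is what turns them into position predictions.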
2. What Are Weights?
Weights are numbers that control how strongly one neuron influences another. Think of them as the "knowledge" stored in the network.
- Positive weights: Excitatory connections (strengthen signals)
- Negative weights: Inhibitory connections (weaken signals)
- Large weights: Important relationships
- Small weights: Less important relationships
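The effect of a weight's sign shows up in a single weighted sum. The weight values here are made up purely for illustration:

```python
# One neuron computing a weighted sum of two equal input signals.
signal = [1.0, 1.0]

excite = sum(x * w for x, w in zip(signal, [0.9, 0.8]))    # both positive
inhibit = sum(x * w for x, w in zip(signal, [0.9, -0.8]))  # one negative

# Two positive weights reinforce each other, while the negative
# weight nearly cancels the first input's contribution.
print(excite, inhibit)
```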
3. The Training Process
Generate Training Data
Create 2,000 examples with random inputs (velocity, angle, gravity, time) and calculate their correct outputs using physics equations.
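A minimal sketch of this data-generation step. The sampling ranges below are illustrative assumptions (the ranges Disha actually uses are not specified here); the output equations are the standard projectile-motion formulas:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 2000

# Random inputs (ranges are assumptions for illustration)
v = rng.uniform(5, 50, N)             # initial velocity (m/s)
theta = rng.uniform(0, np.pi / 2, N)  # launch angle (rad)
g = rng.uniform(1, 20, N)             # gravity (m/s^2)
t = rng.uniform(0, 5, N)              # time (s)

# Correct outputs from the physics equations
x = v * np.cos(theta) * t
y = v * np.sin(theta) * t - 0.5 * g * t**2

X_train = np.stack([t, v, theta, g], axis=1)  # shape (2000, 4)
Y_train = np.stack([x, y], axis=1)            # shape (2000, 2)
```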
Initialize Network
Start with random weights. The network knows nothing about physics yet.
Make Predictions
Feed an input through the network and get a prediction.
Calculate Error
Compare prediction to correct answer. Large error = bad prediction.
Error = (Predicted - Actual)²
Adjust Weights
Use backpropagation to adjust weights slightly to reduce error.
Repeat
Repeat steps 3-5 (predict, measure error, adjust weights) for 100 epochs (complete passes through all the training data).
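The whole cycle can be sketched as a tiny NumPy training loop. The network below is deliberately much smaller than the one described above (one input, one hidden layer, one output, with velocity and gravity fixed), but the predict / error / backpropagate cycle is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: learn y = v*t - 0.5*g*t^2 with v=10 and g=9.81 fixed,
# so time is the only input (a simplification for illustration).
t = rng.uniform(0.0, 2.0, (200, 1))
X, Y = t, 10.0 * t - 0.5 * 9.81 * t**2

# One hidden layer of 16 units (sizes are illustrative assumptions)
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []

for epoch in range(100):                  # 100 complete passes over the data
    h = np.tanh(X @ W1 + b1)              # step 3: make predictions
    pred = h @ W2 + b2
    err = pred - Y
    losses.append(np.mean(err ** 2))      # step 4: squared error
    # Step 5: backpropagation - chain rule applied layer by layer
    d_pred = 2.0 * err / len(X)
    dW2, db2 = h.T @ d_pred, d_pred.sum(axis=0)
    d_h = (d_pred @ W2.T) * (1.0 - h ** 2)   # derivative of tanh
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1        # nudge every weight downhill
    W2 -= lr * dW2; b2 -= lr * db2

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Printing `losses` over the run shows the pattern described in the next section: steep drops early, then diminishing improvements.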
4. What Happens During Training?
As training progresses:
- Epoch 1-20: Large adjustments, loss drops rapidly
- Epoch 20-60: Refinement, finding optimal patterns
- Epoch 60-100: Fine-tuning, diminishing improvements
The network discovers relationships like:
- Higher velocity → longer range
- 45° angle → maximum range (when initial and final heights are equal)
- Vertical drop due to gravity grows quadratically with time (the ½gt² term)
- Horizontal motion is independent of vertical motion
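The velocity and 45° relationships can be checked directly against the flat-ground range formula R = v² sin(2θ)/g, which follows from the same equations used to generate the training data:

```python
import math

def range_flat(v, theta, g=9.81):
    """Flat-ground range: R = v**2 * sin(2*theta) / g."""
    return v**2 * math.sin(2 * theta) / g

# Sweep launch angles from 1 to 89 degrees at a fixed speed
angles = [math.radians(a) for a in range(1, 90)]
best = max(angles, key=lambda a: range_flat(20.0, a))

print(math.degrees(best))                               # peaks at 45 degrees
print(range_flat(30.0, best) > range_flat(20.0, best))  # faster -> farther
```

A trained network rediscovers these relationships from examples alone, without ever seeing the formula.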
Simple Explanation: How AI Learns
🎓 For Non-Technical Users
Think of it like learning to catch a ball:
- At first, you miss: The neural network starts with random guesses.
- You observe: The network sees many examples of throws and where balls land.
- Your brain adjusts: The network adjusts its internal numbers (weights) to improve.
- You get better: After enough practice, the network makes accurate predictions.
What makes this special?
Instead of telling the computer "use this equation," we show it examples and let it discover the patterns on its own. This is how modern AI learns everything from recognizing faces to playing chess.
The Magic of Hidden Layers
Hidden layers are like "thinking steps" between input and output:
- First hidden layer: Combines basic inputs (like "fast speed + steep angle")
- Second hidden layer: Combines these combinations into complex patterns
- Output layer: Makes the final prediction
Why Does Loss Go Down?
"Loss" measures how wrong the network is. When training:
- High loss = predictions are way off
- Low loss = predictions are accurate
- The training process automatically finds ways to reduce loss
Trained Models
Export Reports
Generate professional documentation of your trained models.