Interactive Numerical Methods
This module provides interactive tools to help you visualize the algorithms that power modern scientific computing. See firsthand how we can approximate solutions to problems that are difficult or impossible to solve analytically.
AI Tutor Setup
To use the AI tutor, you need a Google AI API key.
- Go to Google AI Studio.
- Click "Create API key" to get your key.
- Copy the key and paste it into the field above.
Learning Objectives
- Approximate definite integrals using various quadrature rules (Riemann, Trapezoid, Simpson's) and analyze their accuracy.
- Implement Euler's method to find numerical solutions for ordinary differential equations (ODEs).
- Identify the challenges of stiff equations and compare the stability of explicit and implicit solution methods.
- Analyze the behavior of numerical methods in multidimensional systems, such as the energy conservation properties of Euler's method for pendulum dynamics.
- Apply Newton's method to find the local optima of a function and understand the role of the initial guess.
Numerical Integration (Quadrature)
[Interactive demo: readouts for the approximate area and the error]
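For reference, here is a minimal sketch of the three rules the demo compares. The integrand $e^x$ on $[0, 1]$ is an assumed test case (its exact integral is $e - 1$, so the error is easy to read off); the demo's function may differ.

```python
import math

def riemann(f, a, b, n):
    """Left Riemann sum: sample f at the left edge of each subinterval."""
    h = (b - a) / n
    return h * sum(f(a + i * h) for i in range(n))

def trapezoid(f, a, b, n):
    """Trapezoid rule: average f over the endpoints of each subinterval."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    """Simpson's rule: fit a parabola to each pair of subintervals (n must be even)."""
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of subintervals")
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))
    even = sum(f(a + i * h) for i in range(2, n, 2))
    return h * (f(a) + 4 * odd + 2 * even + f(b)) / 3

# Integral of e^x on [0, 1] is exactly e - 1.
exact = math.e - 1
for rule in (riemann, trapezoid, simpson):
    approx = rule(math.exp, 0.0, 1.0, 10)
    print(f"{rule.__name__:9s} approx = {approx:.8f}  error = {abs(approx - exact):.2e}")
```

Running this shows the expected accuracy ordering: the left Riemann sum's error shrinks like $O(h)$, the trapezoid rule's like $O(h^2)$, and Simpson's like $O(h^4)$.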
Need help with Numerical Integration?
Stuck on this topic? Start a session with the AI tutor for a hint or guidance.
Euler's Method (1D)
[Interactive demo: step-size slider, Δt = 0.200]
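Here is a minimal sketch of the method. The test problem $y' = -y$, $y(0) = 1$ is an assumption (the demo's ODE isn't stated in the text); its exact solution $e^{-t}$ makes the error visible at each step.

```python
import math

def forward_euler(f, y0, t0, t_end, dt):
    """Forward Euler: step along the tangent, y_{n+1} = y_n + dt * f(t_n, y_n)."""
    t, y = t0, y0
    history = [(t, y)]
    while t < t_end - 1e-12:
        y += dt * f(t, y)
        t += dt
        history.append((t, y))
    return history

# Assumed test problem: y' = -y, y(0) = 1, with exact solution e^{-t}.
for t, y in forward_euler(lambda t, y: -y, 1.0, 0.0, 2.0, 0.2):
    print(f"t = {t:.1f}  euler = {y:.4f}  exact = {math.exp(-t):.4f}")
```

Halving $\Delta t$ roughly halves the error at a fixed time: Euler's method is first-order accurate.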
Need help with Euler's Method?
Stuck on this topic? Start a session with the AI tutor for a hint or guidance.
Stiff Equations & Stability
A stiff differential equation has components that change on very different time scales. This poses a challenge for explicit solvers like Forward Euler. For the equation $y' = -20y$, the solution decays very rapidly. Watch what happens when the number of steps is too low (i.e., the step size $\Delta t$ is too large).
[Interactive demo: step-size slider, Δt = 0.111]
Food for thought:
- The stability of Forward Euler depends on the term $|1 - 20\Delta t|$. When is this greater than 1, causing oscillations to grow?
- Backward Euler uses the future value $y_{n+1}$ to calculate the slope. Why does this "implicit" approach lead to such a stable result, regardless of step size?
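A sketch that answers both questions numerically. Because this equation is linear, each method reduces to multiplying by a fixed factor per step, so the implicit update can be solved in closed form rather than with a nonlinear solver; the step size $\Delta t = 1/9 \approx 0.111$ mirrors the demo's slider and sits just past Forward Euler's stability limit of $0.1$.

```python
# y' = -20 y, y(0) = 1; the exact solution e^{-20 t} decays almost instantly.
# Forward Euler:  y_{n+1} = (1 - 20*dt) * y_n   -> unstable when |1 - 20*dt| > 1
# Backward Euler: y_{n+1} = y_n / (1 + 20*dt)   -> |factor| < 1 for every dt > 0
dt = 1.0 / 9.0
yf = yb = 1.0
for n in range(1, 11):
    yf *= 1.0 - 20.0 * dt          # factor ~ -1.22: oscillates and grows
    yb /= 1.0 + 20.0 * dt          # factor ~ 0.31: decays monotonically
    print(f"step {n:2d}: forward = {yf:12.4f}   backward = {yb:.6f}")
```

With $\Delta t = 1/9$, the forward factor is $1 - 20/9 \approx -1.22$, so $|1 - 20\Delta t| > 1$ and each step flips sign while growing in magnitude. The backward factor $1/(1 + 20\Delta t)$ has magnitude below 1 for any positive step size, which is why the implicit method stays stable regardless of $\Delta t$.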
Need help with Stiff Equations?
Stuck on this topic? Start a session with the AI tutor for a hint or guidance.
Explicit Euler (2D): Pendulum Dynamics
The standard (Explicit/Forward) Euler method applied to the pendulum system. The recurrence is: $\theta_{n+1} = \theta_n + \Delta t \cdot \omega_n$ and $\omega_{n+1} = \omega_n - \Delta t \cdot (g/L) \sin(\theta_n)$. Notice how the numerical solution spirals outwards, indicating that the system is gaining energy; a frictionless pendulum conserves energy exactly, so this growth is purely a numerical artifact.
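A minimal sketch of this recurrence, with an energy readout to make the outward spiral quantitative. The constants ($g = 9.81$, $L = 1$), step size, and starting state are assumptions; the demo's values may differ.

```python
import math

g, L = 9.81, 1.0  # assumed gravity and pendulum length

def explicit_euler_pendulum(theta, omega, dt, steps):
    """Forward Euler for theta' = omega, omega' = -(g/L) sin(theta)."""
    traj = [(theta, omega)]
    for _ in range(steps):
        theta, omega = (theta + dt * omega,
                        omega - dt * (g / L) * math.sin(theta))
        traj.append((theta, omega))
    return traj

def energy(theta, omega):
    """Total energy per unit mass; the true dynamics conserve this exactly."""
    return 0.5 * (L * omega) ** 2 + g * L * (1.0 - math.cos(theta))

traj = explicit_euler_pendulum(theta=0.5, omega=0.0, dt=0.05, steps=200)
print(f"energy: {energy(*traj[0]):.4f} -> {energy(*traj[-1]):.4f}")  # grows
```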
[Plots: phase space, angle vs. time, angular velocity vs. time]
Need help with Explicit Euler (2D)?
Stuck on this topic? Start a session with the AI tutor for a hint or guidance.
Implicit Euler (2D): Pendulum Dynamics
The Backward (Implicit) Euler method uses future values to determine the step: $\theta_{n+1} = \theta_n + \Delta t \cdot \omega_{n+1}$ and $\omega_{n+1} = \omega_n - \Delta t \cdot (g/L) \sin(\theta_{n+1})$. This creates a non-linear system that must be solved at each step. The result is a trajectory that spirals inwards, indicating numerical energy dissipation, but it is much more stable than the explicit method.
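A sketch of one implicit step, using fixed-point iteration as a simple stand-in for whatever nonlinear solver the demo uses; the constants and starting state are the same assumptions as in the explicit sketch above.

```python
import math

g, L = 9.81, 1.0  # assumed constants, matching the explicit sketch

def implicit_euler_step(theta, omega, dt, iters=50, tol=1e-12):
    """One backward Euler step. The unknowns theta_{n+1} and omega_{n+1} appear
    on both sides, so we iterate the update to a fixed point (converges for
    small dt); a production solver would typically use Newton's method here."""
    th, om = theta, omega  # initial guess: the current state
    for _ in range(iters):
        th_new = theta + dt * om
        om_new = omega - dt * (g / L) * math.sin(th_new)
        if abs(th_new - th) + abs(om_new - om) < tol:
            break
        th, om = th_new, om_new
    return th_new, om_new

theta, omega = 0.5, 0.0
e = lambda th, om: 0.5 * (L * om) ** 2 + g * L * (1.0 - math.cos(th))
e0 = e(theta, omega)
for _ in range(200):
    theta, omega = implicit_euler_step(theta, omega, dt=0.05)
print(f"energy: {e0:.4f} -> {e(theta, omega):.4f}")  # shrinks: the inward spiral
```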
[Plots: phase space, angle vs. time, angular velocity vs. time]
Need help with Implicit Euler (2D)?
Stuck on this topic? Start a session with the AI tutor for a hint or guidance.
Newton's Method for Optimization
Newton's method is a powerful root-finding algorithm that can be adapted for optimization. To find a local minimum or maximum of a function $f(x)$, we look for points where the derivative $f'(x) = 0$. Applying Newton's method to the derivative gives the iterative update rule $x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}$, which uses the second derivative (the one-dimensional analog of the Hessian).
This demo uses the function $f(x) = x^4 + x^3 - x^2 - x$. See how the initial guess affects which minimum the algorithm converges to. A poor initial guess (e.g., try $x_0 = 0$) can even lead to convergence to a maximum!
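A minimal sketch of the iteration on this exact function (the starting points and fixed step count are arbitrary choices for illustration):

```python
def newton_optimize(x0, steps=20):
    """Newton's method applied to f'(x) = 0 for f(x) = x^4 + x^3 - x^2 - x."""
    fp = lambda x: 4 * x**3 + 3 * x**2 - 2 * x - 1   # f'(x)
    fpp = lambda x: 12 * x**2 + 6 * x - 2            # f''(x)
    x = x0
    for _ in range(steps):
        x -= fp(x) / fpp(x)
    return x

for x0 in (-2.0, 2.0, 0.0):
    x = newton_optimize(x0)
    print(f"x0 = {x0:+.1f}  ->  x* = {x:+.4f}  f''(x*) = {12 * x**2 + 6 * x - 2:+.2f}")
```

Starting from $x_0 = -2$ the iteration finds the minimum at $x = -1$, and from $x_0 = 2$ the minimum near $x \approx 0.64$; from $x_0 = 0$ it converges to the local maximum near $x \approx -0.39$, where $f''(x) < 0$, because Newton's method only seeks stationary points and cannot tell minima from maxima.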
[Interactive demo: readouts for the iteration, current x, and current f(x)]
Need help with Newton's Method?
Stuck on this topic? Start a session with the AI tutor for a hint or guidance.
Regularization in Optimization
A major issue with the standard Newton's method arises when the second derivative, $f''(x_n)$, is negative or zero. A negative $f''(x_n)$ causes the algorithm to move towards a maximum (an ascent direction), which is the opposite of our goal. To fix this, we can add a non-negative constant, $\lambda$, to the denominator. This is a form of regularization, giving the update $x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n) + \lambda}$.
This ensures the denominator is positive (for a sufficiently large $\lambda$), forcing the update step to be in a descent direction. Try starting with an initial guess of $x_0=0$. With $\lambda=0$, this is standard Newton's method, which will converge to the local maximum. Increase $\lambda$ to see how regularization pushes the algorithm away from the maximum and towards a true minimum.
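A sketch of the regularized update on the same function; the $\lambda$ values are arbitrary illustrations:

```python
def regularized_newton(x0, lam, steps=50):
    """Damped Newton: x_{n+1} = x_n - f'(x_n) / (f''(x_n) + lambda)."""
    fp = lambda x: 4 * x**3 + 3 * x**2 - 2 * x - 1   # f'(x)
    fpp = lambda x: 12 * x**2 + 6 * x - 2            # f''(x)
    x = x0
    for _ in range(steps):
        x -= fp(x) / (fpp(x) + lam)
    return x

for lam in (0.0, 5.0, 10.0):
    print(f"lambda = {lam:4.1f}  ->  x* = {regularized_newton(0.0, lam):+.4f}")
```

With $\lambda = 0$ the run reproduces standard Newton and lands on the local maximum near $x \approx -0.39$; with $\lambda = 5$ or $10$ the denominator stays positive everywhere (here $f''(x) \geq -2.75$, so any $\lambda > 2.75$ suffices), every step is a descent step, and the iteration settles at the minimum near $x \approx 0.64$.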
[Interactive demo: readouts for the iteration, current x, and current f(x)]
Need help with Regularization?
Stuck on this topic? Start a session with the AI tutor for a hint or guidance.