Differential equations are an important class of equations for describing physical systems and phenomena. Let’s take a look at what they mean, some examples of their use, and how to solve them.
What is a Differential Equation?
Let’s say a ball is dropped from the top of a tall building, ignoring air friction and wind. What is the equation that describes the ball’s position as a function of time? Unfortunately, when describing physical phenomena or systems, it’s usually easier to describe the change of a system than its current state. In other words, the derivative of a value is easier to describe than the value itself. In our case, the acceleration of the ball, caused by gravity, is far more apparent than the position of the ball. Therefore, in order to calculate the position, we start with the derivative of position, then work back to the position equation.
Simply put, differential equations are equations that relate a function to its own derivatives. For example: “the sum of a function and its derivative equals zero,” or “the third derivative and the first derivative combined create a sine wave.” Given a differential equation, the end goal is to solve for the underlying function.
Let’s take a look at the ball drop again. We know the acceleration due to gravity (9.81 m/s^2), and acceleration is the second derivative of position. This gives us the following differential equation, and its subsequent integrals:
x'' = −9.81
x' = −9.81·t + c1
x = −4.905·t² + c1·t + c2
The top line gives an easy and obvious equation, the acceleration due to gravity. However, this is the second derivative of position, which isn’t what we’re after. To get the equation for position, we have to integrate twice, thus solving the differential equation.
But wait: what are c1 and c2? This brings up an important point: initial conditions are necessary to solve differential equations. In our case, c1 describes the ball’s initial velocity (whether it’s merely let go, or thrown downward), and c2 describes the initial position (from what height the ball was dropped). Indeed, it’s impossible to say with any confidence what the ball’s position is if we don’t know c1 and c2, which means that differential equations are not truly solved until initial conditions are accounted for.
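To make the role of the initial conditions concrete, here’s a small Python sketch of the solved position equation (taking up as positive, so gravity contributes −9.81 m/s²; the function name is just for illustration):

```python
def position(t, v0, x0, g=9.81):
    """Height of the ball at time t, given initial velocity v0 and height x0.

    This is the solution of x'' = -g, integrated twice:
    x = -g*t^2/2 + v0*t + x0, where v0 and x0 play the roles of c1 and c2.
    """
    return -0.5 * g * t**2 + v0 * t + x0

# A ball merely let go (v0 = 0) from a height of 100 m:
print(position(0.0, 0.0, 100.0))  # 100.0 (the initial height)
print(position(1.0, 0.0, 100.0))  # about 95.095 (fallen ~4.9 m after 1 s)
```

Changing v0 or x0 gives a different trajectory, which is exactly why the constants matter.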
The ball drop example is a very simple differential equation, as the second derivative only depends on a constant. This allowed us to directly integrate the equation to get rid of the derivatives. However, more complex differential equations do not allow this; see some examples below:
x'' + 3·x' + 2·x = 0
x'' + x = sin(t)
x'' denotes the second derivative of x, x' the first derivative, and so on.
As you can see, we cannot directly integrate both sides of the equation to solve for x. So what do we do?
Solving Differential Equations
There are two ways to solve differential equations: analytically and numerically. The analytical approach uses mathematics to find the underlying function, giving a closed-form answer. The numerical approach uses raw computing power to approximate a solution. The analytical approach provides a cleaner, simpler answer, but for many differential equations an analytical solution is impossible. Let’s look at analytical solutions first.
Analytical Solutions
Analytical solutions are commonly found in two ways: memorization and the Laplace transform.
Memorization
Memorization is what it sounds like. Many forms of differential equations have already been solved, so simply memorizing those forms and their associated answers is sufficient. Let’s look at homogeneous second order differential equations with constant coefficients. Homogeneous signifies that the equation equates to 0, and second order means the highest derivative is second order:
a·x'' + b·x' + c·x = 0
So we have to solve for x(t).
Let’s take a step back. What function, x(t), has clean and simple relationships to its own derivative(s)? Exponentials and sinusoids come to mind. The derivative of an exponential is a scaled version of that exponential, and a sinusoid is a scaled, negated version of its own second derivative:
d/dt e^(r·t) = r·e^(r·t)
d²/dt² sin(t) = −sin(t)
This is relevant because exponentials and sinusoids come up a lot for solutions to differential equations. Keeping that in mind, let’s go back to solving for x(t). Let’s take an educated guess and assume x(t) is an exponential:
x(t) = e^(r·t)
a·r²·e^(r·t) + b·r·e^(r·t) + c·e^(r·t) = 0
e^(r·t)·(a·r² + b·r + c) = 0
If e^(r·t)·(a·r² + b·r + c) = 0, then (a·r² + b·r + c) = 0, since e^(r·t) is never zero. Since we’re equating a polynomial to zero, this boils down to finding the roots of that polynomial. In our case, we’re looking at a quadratic equation, so let’s use the quadratic formula!
r = (−b ± √(b² − 4ac)) / (2a)
Depending on a, b, and c, there are three possible cases: two real, distinct roots; two complex, distinct roots; and repeated roots. Let’s look at each case:
- If b^2 – 4ac > 0, then there are two distinct, real roots. In this case, the solution is the sum of two exponentials:
  x(t) = c1·e^(r1·t) + c2·e^(r2·t)
- If b^2 – 4ac < 0, then there are two distinct, complex roots. In this case, the solution is a sinusoid scaled by an exponential (a decaying oscillation when the real part of the roots is negative):
  with roots r = α ± β·i:  x(t) = e^(α·t)·(c1·cos(β·t) + c2·sin(β·t))
- If b^2 – 4ac = 0, then there are repeated roots. In this case, the answer is a sum of an exponential and an exponential multiplied by t:
  x(t) = c1·e^(r·t) + c2·t·e^(r·t)
There are similar known solutions for other types of differential equations. The key takeaway for the memorization approach is:
- Recognize the differential equation
- Remember the associated solution
- Solve for constants in solution
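These steps can be sketched in code: compute the characteristic roots with the quadratic formula, then classify them by the discriminant. A minimal sketch for the constant-coefficient equation a·x'' + b·x' + c·x = 0 (the function name is just for illustration):

```python
import cmath  # complex square root handles the b^2 - 4ac < 0 case

def characteristic_roots(a, b, c):
    """Roots of a*r^2 + b*r + c = 0, plus which solution form applies."""
    disc = b * b - 4 * a * c
    r1 = (-b + cmath.sqrt(disc)) / (2 * a)
    r2 = (-b - cmath.sqrt(disc)) / (2 * a)
    if disc > 0:
        form = "c1*exp(r1*t) + c2*exp(r2*t)"                      # two real roots
    elif disc < 0:
        form = "exp(alpha*t)*(c1*cos(beta*t) + c2*sin(beta*t))"   # complex pair
    else:
        form = "c1*exp(r*t) + c2*t*exp(r*t)"                      # repeated root
    return r1, r2, form

# x'' + 3x' + 2x = 0  ->  roots -1 and -2, two-exponential solution
print(characteristic_roots(1, 3, 2))
```

Solving for c1 and c2 then proceeds from the initial conditions, just as in the ball drop example.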
Laplace Transform
The Laplace transform is a different approach to solving differential equations. Differential equations are difficult to work with due to derivatives, but the Laplace transform changes the problem into an algebra problem. The Laplace transform is as follows:
F(s) = L{f(t)} = ∫₀^∞ f(t)·e^(−s·t) dt
The Laplace transform has the convenient property that if f(t) becomes F(s), then the derivative of f(t) becomes sF(s), if initial conditions are 0:
L{f'(t)} = s·F(s)
L{f''(t)} = s²·F(s)
Assuming all initial conditions are 0, this allows us to use the Laplace transform to easily rearrange the equation:
y'' + 2·y' + y = u(t)
s²·Y + 2·s·Y + Y = 1/s
Y·(s² + 2·s + 1) = 1/s
Y = 1/(s·(s+1)²)
The Laplace transform of u(t) is 1/s
We start with a differential equation, then take the Laplace transform of both sides. On the left side, Y can be factored out, isolating it. We can then use algebra to solve for Y. Using partial fraction expansion:
Y = 1/(s·(s+1)²) = 1/s − 1/(s+1) − 1/(s+1)²
Great! Using the Laplace transform, we’ve solved for Y(s). But what we’re looking for is y(t), so how do we get that? Well, we got Y(s) from y(t) using the Laplace transform; we can get y(t) from Y(s) using the inverse Laplace transform! It turns out that’s a pain in the butt; rather than doing the computations ourselves, we use a table of transforms that’s already done the math for us. Conveniently, we can do the inverse Laplace transform one term at a time, and then add everything up at the end.
L{1} = 1/s
Here, we see that the Laplace transform of 1 is 1/s, our first term. Therefore, the inverse Laplace transform of 1/s is 1.
L{e^(a·t)} = 1/(s − a)

Our second term is 1/(s+1), which is 1/(s−a) when a = −1. Therefore, the inverse Laplace transform of 1/(s+1) is e^(−t).
L{tⁿ·e^(a·t)} = n!/(s − a)^(n+1)

The last term is 1/(s+1)², which is n!/(s−a)^(n+1) when n = 1 and a = −1. Therefore, the inverse Laplace transform of the final term is t·e^(−t). All combined:
y(t) = 1 − e^(−t) − t·e^(−t)
The Laplace transform, therefore, allows us to use tables and algebra to solve differential equations, which is often preferable to solving the differential equation directly. The procedure is to use the Laplace transform on the differential equation, algebraically solve for the function of interest, then use the inverse Laplace transform table to undo the transformation.
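As a sanity check, the transform table entries can be verified by evaluating the defining integral numerically. This sketch truncates the infinite integral at T = 50, an approximation that works when e^(−s·t)·f(t) has died out by then:

```python
import math

def laplace_numeric(f, s, T=50.0, n=100_000):
    """Approximate the Laplace transform integral of f at s
    using the trapezoid rule on [0, T]."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, n):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 2.0
print(laplace_numeric(lambda t: 1.0, s))           # ≈ 0.5, matching 1/s
print(laplace_numeric(lambda t: math.exp(-t), s))  # ≈ 0.3333, matching 1/(s+1)
```

The numbers land on the table entries, which is exactly why the table is trustworthy shorthand.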
Numerical Solutions
Analytical solutions are great for finding closed-form solutions to differential equations. However, this isn’t always possible. Fortunately, we have another approach: the numerical solution. Using Euler’s method, it is possible to approximate the solution to nearly any differential equation.
f'(x) = lim (h→0) [f(x+h) − f(x)] / h
f(x+h) ≈ f(x) + h·f'(x)
The top equation is the definition of the derivative. Below, the equation is rearranged to calculate f(x+h).
For some small step h, we can compute the new value f(x+h) using the current value f(x) and the derivative at that position, f'(x). This is a linear approximation of the function, which becomes exact as h becomes infinitely small.
How does this help us? Well let’s say we have some first order differential equation:
x' + x = 0,   x(0) = 1
The differential equation gives us a way to find the slope at a given point. Let’s find the slope at the starting condition, x(0) = 1. Plugging that into the differential equation, we get x' + 1 = 0, or x' = −1. Let’s take a small step of 0.1. The new value is the current value plus the step times the derivative: 1 + 0.1·(−1) = 0.9.
Now, we have x(0.1) = 0.9. The process repeats all over again: use the current value to get the new derivative, then calculate the new value, ad infinitum. Below shows a couple of steps of this process:
t = 0.0:  x = 1.000,  x' = −1.000
t = 0.1:  x = 0.900,  x' = −0.900
t = 0.2:  x = 0.810,  x' = −0.810
t = 0.3:  x = 0.729,  …
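These hand calculations are easy to automate. A minimal sketch of Euler’s method for x' = −x with x(0) = 1 and h = 0.1, matching the worked steps:

```python
def euler(deriv, x0, h, steps):
    """Euler's method: repeatedly take x_new = x + h * x'."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + h * deriv(xs[-1]))
    return xs

# x' = -x  (i.e. x' + x = 0), starting at x(0) = 1, step 0.1
xs = euler(lambda x: -x, 1.0, 0.1, 3)
print(xs)  # approximately [1.0, 0.9, 0.81, 0.729]
```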
Let’s plot x(t):

So we’ve found x(t). Let’s try smaller and larger steps, and see how that changes the solution:

red: h = 0.01, black: h = 0.1, blue: h = 1
The red plot has the smallest step, so it is closest to the true solution. The black plot is very close, but has far fewer computations. The blue solution is very rough, and probably shouldn’t be used.
This example shows that the smaller the step you take, the more accurate your solution; if the step size is too large, the solution can be completely wrong. This results in a balancing act: if your steps are too small, then you need tons of computations; if the step size is too large, then the solution is inaccurate.
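The tradeoff is easy to demonstrate: integrate x' = −x from t = 0 to t = 1 with several step sizes and compare against the exact answer, e^(−1):

```python
import math

def euler_final(h):
    """Integrate x' = -x from x(0) = 1 up to t = 1 with step h; return x(1)."""
    x, steps = 1.0, round(1.0 / h)
    for _ in range(steps):
        x += h * (-x)
    return x

exact = math.exp(-1.0)
for h in (1.0, 0.1, 0.01):
    print(h, abs(euler_final(h) - exact))  # error shrinks as h shrinks
```

The h = 1 run needs one computation but is wildly off; the h = 0.01 run needs a hundred but lands close.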
So we can solve first order differential equations. But how do we solve second order differential equations?
Fortunately, we can decompose second order (and higher) differential equations into multiple first order equations.
Given x'' = f(x, x'), define v = x'. Then:
x' = v
v' = f(x, v)
We have transformed a single 2nd order equation into two first order equations. The process is the same as before:
- Use the current values of x and v to calculate x’ and v’
- Use x’ and v’, and a small step, to calculate the new values of x and v
- Use the new values of x and v to calculate x’ and v’
- Repeat process as many times as desired
Let’s look at an example problem:

Using h=0.05:


At t=0, x=0 and v=5. We calculate v'=-20. Using a time step of 0.05, we calculate:
- x = 0 + 0.05 * 5 = 0.25
- v = 5 + 0.05 * -20 = 4
Above shows the calculation for t = 0.05; this process is repeated as much as necessary.
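The update above fits in a tiny step function: given the current x and v, the computed v', and the step size, it returns the new x and v.

```python
def euler2_step(x, v, v_prime, h):
    """One Euler step for a decomposed second-order equation:
    x' = v, so x_new = x + h*v;  v_new = v + h*v'."""
    return x + h * v, v + h * v_prime

# The worked step: t = 0, x = 0, v = 5, v' = -20, h = 0.05
print(euler2_step(0.0, 5.0, -20.0, 0.05))  # (0.25, 4.0)
```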
Simulation
Differential equations are great for modeling physical systems; solving the differential equations allows us to run simulations. Fortunately, Euler’s method works great for this purpose!
Let’s look at a damped spring mass system:

The mass m is connected to the wall by a spring with stiffness k, and a dashpot with drag c. x denotes displacement of the mass from equilibrium; when x is 0, the spring is at its natural length.
m·x'' = −k·x − c·x'
x' = v
v' = −(k·x + c·v)/m
The spring applies force proportional to displacement, and the dashpot applies force proportional to velocity. This gives the force equation, using Newton’s F = ma. Acceleration is the second derivative of position, so we can decompose the force equation into two first order differential equations. Let’s run some simulations, with various c!

c = 2 (red), 8 (black), 32 (blue)
Initial condition: x(0) = 1, x'(0) = 0
We see three cases above. The red graph shows the spring mass oscillating for a long time before returning to equilibrium; this is the underdamped scenario. This means the dashpot is dissipating very small amounts of energy, so the system remains active for a long time. The black graph shows the spring mass returning to equilibrium very quickly without any oscillation; this is the critically damped case. This is the quickest the mass can return to equilibrium without any overshoots. The blue graph shows no oscillation, but the mass returns to equilibrium very slowly; this is the overdamped case. The dashpot is dissipating energy so quickly the mass takes a long time to reach its final position.
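Here’s a minimal Euler simulation of the damped spring-mass system; m = 1 and k = 16 are assumed values (chosen so that c = 8 lands exactly on critical damping, c² = 4mk, like the black curve):

```python
def simulate_spring(c, k=16.0, m=1.0, x0=1.0, v0=0.0, h=0.001, t_end=10.0):
    """Euler integration of m*x'' = -k*x - c*x', decomposed as
    x' = v and v' = -(k*x + c*v)/m. Returns a list of (t, x) samples."""
    x, v = x0, v0
    history = [(0.0, x)]
    steps = int(t_end / h)
    for i in range(1, steps + 1):
        a = -(k * x + c * v) / m      # v' from the force equation
        x, v = x + h * v, v + h * a   # one Euler step for both variables
        history.append((i * h, x))
    return history

for c in (2.0, 8.0, 32.0):            # under-, critically, over-damped
    print(c, simulate_spring(c)[-1][1])  # final displacement, decayed toward 0
```

Plotting each history reproduces the three damping behaviors described above.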
Now let’s try varying k:

k = 8 (red), 16 (black), 32 (blue)
Initial condition: x(0) = 0, x'(0) = 5
We see that as the spring constant k increases, the oscillation frequency, as well as the number of oscillations, increases.
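This trend is consistent with the damped natural frequency formula ω_d = √(k/m − (c/(2m))²); m = 1 and c = 2 are assumed values for illustration:

```python
import math

def damped_frequency(k, m=1.0, c=2.0):
    """Oscillation frequency (rad/s) of an underdamped spring-mass system."""
    return math.sqrt(k / m - (c / (2 * m)) ** 2)

for k in (8.0, 16.0, 32.0):
    print(k, damped_frequency(k))  # frequency grows as k grows
```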
As you can see, solving differential equations numerically allows us to simulate physical systems, vary parameters, and see how they change the system’s behavior.
Conclusion
Differential equations are powerful tools for describing and modeling physical systems. Unfortunately, they’re not very easy to work with, as they give the relationship between functions and their derivatives, rather than the equation for the underlying function. To address this issue, many common forms of differential equations have already been solved by mathematicians past. The Laplace transform also provides a way to solve differential equations, using algebra and transform tables rather than dealing with derivatives directly. Yet another approach is to use computing power and Euler’s method to approximate the solution. The last approach is very common, as it allows simulating reality, such as the damped spring-mass system above.