A differential equation is an algebraic relationship between a quantity and its rate of change.
A differential-algebraic equation is one in which the relationship between a quantity and its rate of change is given implicitly, as an algebraic constraint.
The solution to the first-order differential equation dy/dt = a y should not be thought of merely as the exponential function y = exp(a t). Rather, the differential equation should be thought of as the process generating the exponential function.
The exponential function describes a quantity that grows or diminishes in proportion to the quantity itself — continuously compounded interest and viscous damping are examples. The rate of growth or decline (positive a or negative a) is the coefficient in the differential equation, and this property defines the exponential function in a way that allows calculating it.
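As a quick illustration of that generating process, here is a minimal sketch of my own (the rate a, the step size h, and the choice of Euler's method are all assumptions for illustration, not anything from the text above): stepping dy/dt = a y forward numerically reproduces the exponential.

```python
import math

# Step the generating equation dy/dt = a*y forward with Euler's method
# and compare against the closed-form exponential. The values of a and
# h below are arbitrary illustrative choices.
a = 0.5       # growth rate (assumed for illustration)
h = 1e-4      # step size (assumed for illustration)
t_end = 2.0

y = 1.0       # initial condition y(0) = 1
t = 0.0
while t < t_end:
    y += h * a * y    # growth proportional to the quantity itself
    t += h

print(y)                    # ~2.7182 (Euler approximation)
print(math.exp(a * t_end))  # ~2.7183 (exact exp(1.0))
```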
The unit imaginary number i = sqrt(-1) (j if you are an electrical engineer, because i already means current) defines a 90-degree rotation in the plane. Squaring i means performing two 90-degree rotations (two left turns in succession), which turns you around and points you in the opposite direction.
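A three-line check of the two-left-turns picture (the sample point is my own arbitrary choice):

```python
# Multiplying by 1j rotates a point 90 degrees counterclockwise in the
# complex plane; doing it twice reverses the point.
p = 3 + 4j             # arbitrary sample point
print(1j * p)          # (-4+3j): p turned 90 degrees
print(1j * (1j * p))   # (-3-4j): two left turns, i.e. -p
print(1j * 1j)         # (-1+0j): i squared is -1
```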
The complex differential equation dr/dt = j a r describes a relationship where the rate of change of a quantity is turned 90 degrees from the current value of the quantity in question. The position of the tip of a stick attached to a pivot point obeys that relationship: when the stick turns about its pivot, the velocity of the tip is turned 90 degrees from the x-y quantity describing the tip's position.
This complex-coefficient differential equation defines the complex exponential r = exp(j a t), describing “harmonic motion” for a constant rate of rotation of the stick about its pivot point. Expressing r = x + j y, where x is the position of the tip along a reference direction and y is its position along a direction turned 90 degrees from that reference, the identity r = exp(j a t) = cos(a t) + j sin(a t) follows from the geometric definitions of cos and sin going back to classical antiquity.
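Here too a hedged numerical sketch (my own, with arbitrary a and step size): integrating dr/dt = j a r with small Euler steps makes the tip of the stick trace out cos(a t) + j sin(a t).

```python
import cmath

# Integrate dr/dt = 1j*a*r with small Euler steps. Multiplying by 1j
# turns the velocity 90 degrees from r, so r sweeps around a circle.
a = 1.0               # rotation rate (assumed for illustration)
h = 1e-5              # step size (assumed for illustration)
t_end = cmath.pi / 3  # evaluate at a*t = 60 degrees

r = 1.0 + 0.0j        # stick starts along the reference direction
for _ in range(int(t_end / h)):
    r += h * (1j * a * r)   # velocity is r turned 90 degrees, scaled by a

print(r)                          # ~0.5 + 0.866j
print(cmath.exp(1j * a * t_end))  # cos(pi/3) + j sin(pi/3)
```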
So, how do you calculate cos(a t) or sin(a t) in exp(j a t) from harmonic motion, or exp(a t) for real-valued a in the compound-interest problem? You could express any of these functions as a power series, substitute it into the generating differential equation, and determine the power-series coefficients that satisfy the equation; those coefficients give the power-series expansion of the function you want. Evaluate the series, and you get an accurate numerical value of cos, sin, or exp.
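A sketch of that power-series route for the real exponential (my own minimal version; the term count is arbitrary): substituting y = sum of c_n t^n into dy/dt = a y forces the recurrence c_{n+1} = a c_n / (n + 1), and evaluating the truncated series gives the numerical value.

```python
import math

def exp_series(a, t, n_terms=30):
    """Evaluate exp(a*t) from the coefficient recurrence that
    dy/dt = a*y imposes on a power series in t."""
    c = 1.0        # c_0 = 1 from the initial condition y(0) = 1
    total = 0.0
    for n in range(n_terms):
        total += c * t**n
        c = a * c / (n + 1)   # recurrence forced by dy/dt = a*y
    return total

print(exp_series(1.0, 1.0))   # ~2.718281828...
print(math.exp(1.0))          # reference value
```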
Maybe you did that until the mid-1950s, when an engineer named Jack Volders, working at Convair on a digital navigation system for the B-58 supersonic bomber, came up with a better way. His algorithm performs a stepwise geometric refinement of the sin and cos functions that is in many ways better than the power-series expansions that had been laboriously used in the past to generate cos, sin and exp (and log) tables. Volders’ CORDIC algorithm is how these functions are calculated on your scientific calculator, and it was incorporated into the original 8087 math coprocessor chip that enhanced the IBM PC.
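A hedged sketch of the CORDIC idea in rotation mode (a textbook floating-point rendering for illustration, not the original fixed-point shift-and-add implementation; the iteration count is my choice): rotate a unit vector through a fixed table of shrinking arctangent steps, choosing each step's direction so the accumulated angle homes in on the target.

```python
import math

N = 32                                            # iteration count (assumed)
ANGLES = [math.atan(2.0**-i) for i in range(N)]   # fixed table of step angles

# Pre-computed gain of N pseudo-rotations; starting at x = K makes the
# final vector come out with unit length.
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0**(-2 * i))

def cordic_sin_cos(theta):
    """Return (sin(theta), cos(theta)) for |theta| <= pi/2."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0               # steer toward residual angle
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * ANGLES[i]
    return y, x

s, c = cordic_sin_cos(math.pi / 6)
print(s, c)   # ~0.5 and ~0.8660: sin and cos of 30 degrees
```

In hardware the multiplications by 2^-i are just bit shifts, which is why the method beat power series on 1950s digital hardware.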
Correction, the man is Jack Volder, and CORDIC is known as Volder’s Algorithm.
As both an engineer and physicist, I would rank the ability to teach DEs in the following order:
1: engineers
2: physicists
3: mathematicians
Group 3 always spent their time on useless stuff like existence and uniqueness theorems and integrating factors. I learned much more about actually solving real-world DEs in a circuit theory class, where Laplace transforms were the bread and butter. I’m afraid Heaviside had the last laugh.
I always remember my PhD supervisor’s derivation of the solution to an important and tricky DE that’s used a lot in the Earth Sciences. He would say:
“At this point, we will reverse the order of integration. If we were mathematicians, we would agonize over whether this is permissible. But we’re physicists, so we’ll do it anyway.”
It is my observation that a lot of useful mathematics was invented not by mathematicians but by physicists and engineers. Mathematicians then come along and tidy things up. Unfortunately, they also tend to think that they should teach the techniques, even though they are far removed from the original motivation for using them. They tend to be poor mathematics teachers.
I guess I have gotten through life with knowledge restricted to linear constant-coefficient differential equations.
I had never heard of an integrating factor before, and yeah, yeah, Wikipedia is your friend. I guess it is a mathematics “trick”, but I’ll be hornswoggled if I can figure out what motivates that trick or come up with a general rule for where it can be applied.
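For the record, here is my summary of the standard textbook motivation (not something the commenter wrote): for a linear first-order equation, the trick is to multiply by a factor whose own derivative recreates the coefficient, so the left-hand side collapses into a single derivative by the product rule.

```latex
% Standard integrating-factor derivation (textbook material, my summary).
% Given a linear first-order equation
%   y' + p(t) y = q(t),
% multiply through by mu(t) = exp(int p dt). Since mu' = p mu,
% the product rule collapses the left side:
\[
  \mu y' + \mu p\, y = (\mu y)' = \mu q,
  \qquad \mu(t) = e^{\int p(t)\,dt},
\]
\[
  y(t) = \frac{1}{\mu(t)}\left( \int \mu(t)\, q(t)\, dt + C \right).
\]
% Rule of thumb: the trick applies exactly when the equation is linear
% and first order, because only then does mu' = p*mu turn the left
% side into a perfect derivative.
```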
About mathematics invented by engineers and physicists, you’re spot on. Oliver Heaviside was primarily a physicist and electrical engineer, and he invented operational calculus to solve differential equations. Dismissed by mathematicians, it later proved to be equivalent to Laplace transforms, and was quite sound. Then Albert Einstein came up with Einstein notation to compactly express tensors, reducing a half page of expressions to a single line. And Paul Dirac introduced the delta function, eschewed by mathematicians until Laurent Schwartz’s theory of distributions put it on a rigorous footing.
Integrating factors were the first technique I learned (but never mastered) for solving differential equations. A really obscure technique, which I remember only by name, was “the method of annihilators.” Some people in my sophomore class found that really interesting, but I never could comprehend it.
All this was in 1973, in my sophomore Mechanical Engineering curriculum. Today, I just put my differential equation, or tough integral, or whatever, into Maple, and it’s solved in a second or two. Yes, yes, I do understand that it’s better to have insight into the mathematics to make sure the Maple solution is reasonable. And I do, but I also need answers quickly in some cases, and I know how to judge the reasonableness of numbers that I get. See, I started off life using a slide rule.
Differentiation is playing a game of chicken with division by zero.
This was always my hang-up…
Rota, yeah, famous… that’s what happens when you write a textbook.
I hadn’t heard that one. 🙂
I have to disagree on some of that. In particular, differentials probably should be a bigger part of differential equations courses than they are now. In addition to the weak integrating-factor example given in the article, there are the laws of electromagnetism and Stokes’ Theorem, which relates the integral of a differential form over a space to the integral of a related form over that space’s boundary (written out below).
One also has more advanced stuff, like geodesics on curved surfaces, which can readily be described by differentials.
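For reference, the Stokes relation mentioned above, in its modern general form (standard statement, not the commenter's wording):

```latex
% Generalized Stokes theorem: the integral of the exterior derivative
% of a form over a region equals the integral of the form over the
% region's boundary.
\[
  \int_{\Omega} d\omega \;=\; \int_{\partial \Omega} \omega ,
\]
% with the classical curl theorem of vector calculus as a special case:
\[
  \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}
    \;=\; \iint_{S} (\nabla \times \mathbf{F}) \cdot d\mathbf{S} .
\]
```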