Kalman Filter
There are many articles about the Kalman filter, but it is hard to find one that explains where all the filtering formulas come from. I think that without such an explanation this subject remains completely incomprehensible. In this article I will try to explain everything in a simple way.
The Kalman filter is a very powerful tool for filtering many kinds of data. The main idea is that the filter should use information about the physical process. For example, if you are filtering data from a car's speedometer, then the car's inertia gives you the right to treat a large speed deviation as a measurement error. The Kalman filter is also interesting because, in a certain sense, it is the best possible filter; we will discuss precisely what that means. At the end of the article I will show how it is possible to simplify the formulas.
Preliminaries
First, let's recall some definitions and facts from probability theory.
Random variable
When we say that a random variable $\xi$ is given, it means that it may take random values. Different values come with different probabilities. For example, when someone rolls a die, the set of values is discrete: $\{1,2,3,4,5,6\}$. When you deal with the speed of a moving particle, you obviously have to work with a continuous set of values. The values which come out of each experiment (measurement) we will denote by $x_1, x_2, \ldots$, but sometimes we will use the same letter as for the random variable itself, $\xi$. In the case of a continuous set of values, a random variable is characterized by its probability density function $\rho(x)$. This function gives the probability that the random variable falls into a small neighbourhood $dx$ of the point $x$. As we can see in the picture, this probability is equal to the area $\rho(x)\,dx$ of the hatched rectangle below the graph.
Quite often in real life, random variables follow the Gaussian distribution, whose probability density is
$$\rho(x) \sim e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$$

We can see that the bell-shaped function $\rho(x)$ is centered at the point $\mu$ and its characteristic width is around $\sigma$.
Since we are talking about the Gaussian distribution, it would be a sin not to mention where it comes from. Just as the numbers $e$ and $\pi$ permeate mathematics and show up in the most unexpected places, the Gaussian distribution has deep roots in probability theory. The following remarkable statement partly explains the presence of the Gaussian distribution in so many processes:
Let a random variable $\xi$ have an arbitrary distribution (in fact there are some restrictions on this arbitrariness, but they are not restrictive at all). Let's perform $n$ experiments and compute the sum $\xi_1 + \ldots + \xi_n$ of the values that come out. Let's repeat this many times. It is clear that every time we will get a different value of the sum; in other words, this sum is a random variable with its own distribution law. It turns out that for sufficiently large $n$ the distribution law of this sum tends to a Gaussian distribution (by the way, the characteristic width of the bell grows like $\sqrt{n}$). Read more in Wikipedia: Central limit theorem. In real life there are a lot of quantities which are a sum of a large number of independent and identically distributed random variables, so these quantities have a Gaussian distribution.
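As a quick illustration (a minimal sketch, not from the original text; the number of experiments and the number of terms are chosen arbitrarily), one can sum independent uniform random variables in MATLAB and watch the histogram of the sums take a bell shape:
N=10000; % number of experiments
n=50; % number of terms in each sum
sums=sum(rand(n,N),1); % each column is one experiment: xi_1+...+xi_n
hist(sums,50); % the histogram is close to a bell curve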
Mean Value
By definition, the mean value of a random variable is the value we get in the limit if we perform more and more experiments and compute the mean of the outcomes. The mean value is denoted in different ways: mathematicians denote it by $E\xi$ (expectation), physicists denote it by $\bar{\xi}$ or $\langle\xi\rangle$. We will denote it as mathematicians do.
For instance, the mean value of the Gaussian distribution $\rho(x) \sim e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ is equal to $\mu$.
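This definition is easy to check numerically (a minimal sketch, not from the original text): the average of many die rolls approaches the expectation 3.5.
rolls=randi(6,1,100000); % 100000 rolls of a fair die
mean(rolls) % close to 3.5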
Variance
For the Gaussian distribution, we clearly see that the random variable tends to fall within a certain region around its mean value $\mu$. Let us enjoy the Gaussian distribution once again:

In the picture one can see that the characteristic width of the region where values mostly fall is $\sigma$. How can we estimate this width for an arbitrary random variable? We could draw a graph of its probability density function and just visually evaluate the characteristic range. However, it would be better to choose a precise algebraic way for this evaluation. We could take the mean length of the deviation from the mean value: $E|\xi - E\xi|$. This value is a good estimate of the characteristic deviation of $\xi$. However, we know very well how problematic it is to use absolute values in formulas, so this formula is rarely used in practice.
A simpler approach (simple from the computational point of view) is to calculate
$$E(\xi - E\xi)^2.$$
This value is called the variance and is denoted by $\sigma_\xi^2$. The square root of the variance is a good estimate of the random variable's characteristic deviation; it is called the standard deviation.
For instance, one can compute that for the Gaussian distribution $\rho(x) \sim e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ the variance is equal to $\sigma^2$, so the standard deviation is $\sigma$. This result really corresponds to our geometrical intuition. In fact, a small cheat is hidden here. In the definition of the Gaussian distribution you see the number $2$ in the denominator of the expression $-\frac{(x-\mu)^2}{2\sigma^2}$. This $2$ stands there on purpose, so that the standard deviation $\sigma_\xi$ is equal exactly to $\sigma$. So the formula of the Gaussian distribution is written in a way which anticipates that one would compute its standard deviation.
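As a quick numerical check (a sketch, not from the original text; the parameters are arbitrary), the sample standard deviation of Gaussian draws comes out close to the $\sigma$ used to generate them:
sigma=2; mu=0;
samples=mu+sigma*randn(1,100000); % draws from a Gaussian with mean mu and standard deviation sigma
sqrt(mean((samples-mean(samples)).^2)) % close to sigma=2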
Independent random variables
Random variables may depend on each other or not. Imagine that you are throwing a needle on the floor and measuring coordinates of its both ends. This two coordinates are random variables, but they depend on each other, since a distance between them should be always equal to the length of the needle. Random variables are independent from each other if falling results of the first one doesn’t depend on results of the second. For two independent variables
ξ1 and
ξ2 the mean of their product is equal to the product of their mean:
E(ξ1⋅ξ2)=Eξ1⋅ξ2
Proof
For instance, having blue eyes and finishing school with higher honors are independent random variables. Say that $20\% = 0.2$ of people have blue eyes and $5\% = 0.05$ of people graduate with higher honors. Then $0.2 \cdot 0.05 = 0.01 = 1\%$ of people have both blue eyes and higher honors. This example helps us to understand the following: for two independent random variables $\xi_1$ and $\xi_2$ which are given by their probability densities $\rho_1(x)$ and $\rho_2(y)$, the joint probability density $\rho(x,y)$ (the first variable falls at $x$ and the second at $y$) is given by the formula
$$\rho(x,y) = \rho_1(x) \cdot \rho_2(y)$$
It means that
$$E(\xi_1 \cdot \xi_2) = \int xy\,\rho(x,y)\,dx\,dy = \int xy\,\rho_1(x)\rho_2(y)\,dx\,dy = \int x\,\rho_1(x)\,dx \int y\,\rho_2(y)\,dy = E\xi_1 \cdot E\xi_2$$
As you can see, the proof is done for random variables which have a continuous spectrum of values and are given by their probability density function. The proof is similar in the general case.
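A quick numerical check of this property (a minimal sketch, not from the original text; the two distributions are arbitrary): for independent samples, the sample mean of the product is close to the product of the sample means.
xi1=randn(1,100000); % independent draws with mean 0
xi2=5+2*randn(1,100000); % independent draws with mean 5
mean(xi1.*xi2) % close to mean(xi1)*mean(xi2), i.e. close to 0
mean(xi1)*mean(xi2)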
Kalman filter
Problem statement
Let us denote by $x_k$ the value which we intend to measure and then filter. It can be a coordinate, velocity, acceleration, humidity, temperature, pressure, etc.
Let us start with a simple example, which will lead us to the formulation of the general problem. Imagine that you have a radio-controlled toy car which can only run forward and backward. Knowing its mass, shape, and other parameters of the system, we have computed how the controlling joystick acts on the car's velocity $v_k$.

Then the coordinate of the car would follow the formula
$$x_{k+1} = x_k + v_k\,dt$$
In real life we can't have a precise formula for the coordinate, since small disturbances such as wind, bumps, and stones on the road act on the car, so the real speed of the car will differ from the calculated one. Therefore we add a random variable $\xi_k$ to the right-hand side of the last equation:
$$x_{k+1} = x_k + v_k\,dt + \xi_k$$
We also have a GPS sensor on the car which tries to measure the coordinate of the car $x_k$. Of course, there is an error in this measurement, which is a random variable $\eta_k$. So from the sensor we get noisy data:
$$z_k = x_k + \eta_k$$
Our aim is to find a good estimate of the true coordinate $x_k$, knowing the inaccurate sensor data $z_k$. We will denote this good estimate by $x^{opt}_k$.
In general, the coordinate $x_k$ may stand for any value (temperature, humidity, ...), and the controlling term we will denote by $u_k$ (in the example with the car, $u_k = v_k \cdot dt$). The equations for the coordinate and the sensor measurements are the following:
$$x_{k+1} = x_k + u_k + \xi_k$$
$$z_{k+1} = x_{k+1} + \eta_{k+1} \qquad (1)$$
Let us discuss what we know in these equations.
- $u_k$ is a known value which controls the evolution of the system. We know it from the model of the system.
- The random variable $\xi$ represents the error in the model of the system, and the random variable $\eta$ is the sensor's error. Their distribution laws don't depend on time (on the iteration index $k$).
- The means of the errors are equal to zero: $E\eta_k = E\xi_k = 0$.
- We might not know the distribution laws of the random variables, but we do know their variances $\sigma_\xi^2$ and $\sigma_\eta^2$. Note that the variances don't depend on time (on $k$), since the corresponding distribution laws don't either.
- We suppose that all random errors are independent of each other: the error at time $k$ doesn't depend on the error at time $k'$.
Note that the filtering problem is not a smoothing problem. Our aim is not to smooth the sensor data; we just want to get a value which is as close as possible to the real coordinate $x_k$.
Kalman algorithm
We will use induction. Imagine that at step $k$ we have already found the filtered sensor value $x^{opt}_k$, which is a good estimate of the real coordinate $x_k$. Recall that we know the equation which controls the real coordinate:
$$x_{k+1} = x_k + u_k + \xi_k$$
Therefore, before getting the sensor value we may state that it will show something close to $x^{opt}_k + u_k$. Unfortunately, so far we can't say anything more precise. But at step $k+1$ we will have an imprecise reading from the sensor, $z_{k+1}$.
Kalman's idea is the following. To get the best estimate of the real coordinate $x_{k+1}$, we should find a golden middle between the reading of the imprecise sensor $z_{k+1}$ and our prediction $x^{opt}_k + u_k$, i.e. what we expected to see on the sensor. We give the weight $K$ to the sensor value and $(1-K)$ to the predicted value:
$$x^{opt}_{k+1} = K \cdot z_{k+1} + (1-K) \cdot (x^{opt}_k + u_k)$$
The coefficient $K$ is called the Kalman coefficient. It depends on the iteration index, and strictly speaking we should rather write $K_{k+1}$, but to keep the formulas in a nice shape we will omit the index of $K$.
We should choose the Kalman coefficient in such a way that the estimated coordinate $x^{opt}_{k+1}$ is as close as possible to the real coordinate $x_{k+1}$.
For instance, if we know that our sensor is super precise, then we would trust its reading and give it a big weight ($K$ close to one). If, conversely, the sensor is not precise at all, then we would rather rely on our theoretically predicted value $x^{opt}_k + u_k$.
In the general situation, we should minimize the error of our estimate:
$$e_{k+1} = x_{k+1} - x^{opt}_{k+1}$$
We use equations (1) to rewrite the expression for the error:
$$e_{k+1} = (1-K)(e_k + \xi_k) - K\eta_{k+1}$$
Proof
$$\begin{aligned} e_{k+1} &= x_{k+1} - x^{opt}_{k+1} = x_{k+1} - K z_{k+1} - (1-K)(x^{opt}_k + u_k) \\ &= x_k + u_k + \xi_k - K(x_k + u_k + \xi_k + \eta_{k+1}) - (1-K)(x^{opt}_k + u_k) \\ &= (1-K)(x_k - x^{opt}_k + \xi_k) - K\eta_{k+1} = (1-K)(e_k + \xi_k) - K\eta_{k+1} \end{aligned}$$
Now it is time to discuss what the expression "to minimize the error" actually means. We know that the error is a random variable, so each time it takes a different value. Actually, there is no unique answer to this question; it is similar to the case of the variance of a random variable, when we were trying to estimate the characteristic width of its probability density function. So we choose a simple criterion: we minimize the mean of the square of the error,
$$E(e_{k+1}^2) \rightarrow \min.$$
Let us rewrite the last expression:
$$E(e_{k+1}^2) = (1-K)^2(E e_k^2 + \sigma_\xi^2) + K^2\sigma_\eta^2$$
Key to the proof
From the fact that all random variables in the equation for $e_{k+1}$ don't depend on each other, and that the mean values $E\eta_{k+1} = E\xi_k = 0$, it follows that all cross terms in $E(e_{k+1}^2)$ vanish:
$$E(\xi_k\eta_{k+1}) = E(e_k\xi_k) = E(e_k\eta_{k+1}) = 0.$$
Indeed, for instance, $E(e_k\xi_k) = E(e_k)E(\xi_k) = 0$.
Also note that the formulas for the variances look much simpler: $\sigma_\xi^2 = E\xi_k^2$ and $\sigma_\eta^2 = E\eta_{k+1}^2$ (since $E\eta_{k+1} = E\xi_k = 0$).
The last expression takes its minimal value when its derivative with respect to $K$ is zero, that is when
$$K_{k+1} = \frac{E e_k^2 + \sigma_\xi^2}{E e_k^2 + \sigma_\xi^2 + \sigma_\eta^2}$$
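The intermediate step, filled in here for completeness: differentiating the quadratic expression for $E(e_{k+1}^2)$ with respect to $K$ and setting the derivative to zero gives
$$\frac{d}{dK}E(e_{k+1}^2) = -2(1-K)(E e_k^2 + \sigma_\xi^2) + 2K\sigma_\eta^2 = 0 \quad\Rightarrow\quad K = \frac{E e_k^2 + \sigma_\xi^2}{E e_k^2 + \sigma_\xi^2 + \sigma_\eta^2}.$$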
Here we write the Kalman coefficient with its subscript to emphasize the fact that it does depend on the iteration step. Substituting into the equation for the mean square error $E(e_{k+1}^2)$ the Kalman coefficient $K_{k+1}$ which minimizes it, we get:
$$E(e_{k+1}^2) = \frac{\sigma_\eta^2(E e_k^2 + \sigma_\xi^2)}{E e_k^2 + \sigma_\xi^2 + \sigma_\eta^2}$$
So we have solved our problem: we have obtained an iterative formula for computing the Kalman coefficient.
All formulas in one place:
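Collecting the three relations derived above (the update of the estimate, the Kalman coefficient, and the mean square error):
$$x^{opt}_{k+1} = K_{k+1} \cdot z_{k+1} + (1-K_{k+1}) \cdot (x^{opt}_k + u_k)$$
$$K_{k+1} = \frac{E e_k^2 + \sigma_\xi^2}{E e_k^2 + \sigma_\xi^2 + \sigma_\eta^2}$$
$$E(e_{k+1}^2) = \frac{\sigma_\eta^2(E e_k^2 + \sigma_\xi^2)}{E e_k^2 + \sigma_\xi^2 + \sigma_\eta^2}$$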
Example
The plot at the beginning of this article shows filtered data from a fictional GPS sensor mounted on a fictional car which moves with constant acceleration $a$:
$$x_{t+1} = x_t + a t \cdot dt + \xi_t$$
Look at the filtered results once again:
The code in MATLAB:
clear all;
N=100;        % number of samples
a=0.1;        % acceleration
sigmaPsi=1;   % standard deviation of the model error xi
sigmaEta=50;  % standard deviation of the sensor error eta
k=1:N;
% simulate the true coordinate x and the noisy measurements z
x=zeros(1,N);
x(1)=0;
z(1)=x(1)+normrnd(0,sigmaEta);
for t=1:(N-1)
    x(t+1)=x(t)+a*t+normrnd(0,sigmaPsi);
    z(t+1)=x(t+1)+normrnd(0,sigmaEta);
end
% Kalman filter
xOpt(1)=z(1);
eOpt(1)=sigmaEta; % eOpt(t) is the square root of the mean square error (variance); it is not a random variable
for t=1:(N-1)
    eOpt(t+1)=sqrt((sigmaEta^2)*(eOpt(t)^2+sigmaPsi^2)/(sigmaEta^2+eOpt(t)^2+sigmaPsi^2));
    K(t+1)=(eOpt(t+1))^2/sigmaEta^2;
    xOpt(t+1)=(xOpt(t)+a*t)*(1-K(t+1))+K(t+1)*z(t+1);
end
plot(k,xOpt,k,z,k,x)
Analysis
If one looks at how the Kalman coefficient $K_k$ changes with the iteration $k$, it is possible to see that it stabilizes to a certain value $K^{stab}$. For instance, if the root mean square errors of the sensor and of the model relate to each other as ten to one, then the plot of the Kalman coefficient as a function of the iteration step looks like this:

In the next example we will discuss how this can simplify our life. The stabilized value itself can also be computed directly, as in the sketch below.
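A minimal numerical sketch of this stabilization (not from the original text; it assumes the same ten-to-one ratio as in the plot above): iterate the error recursion until the Kalman coefficient stops changing.
sigmaPsi=1; sigmaEta=10; % model and sensor errors in the 1:10 ratio
e2=sigmaEta^2;           % E(e^2) at the first step
for t=1:50
    K=(e2+sigmaPsi^2)/(e2+sigmaPsi^2+sigmaEta^2); % Kalman coefficient
    e2=sigmaEta^2*K;                              % new mean square error
end
K % stabilized value K_stab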
Second example
In practice it often happens that we know almost nothing about the physical model of what we are filtering. Imagine you have decided to filter measurements from your favourite accelerometer. You don't know in advance how the accelerometer is going to be moved. The only thing you might know is the variance of the sensor's error $\sigma_\eta^2$. In this difficult problem we can put all the unknown information about the physical model into the random variable $\xi_k$:
$$x_{k+1} = x_k + \xi_k$$
Strictly speaking, this kind of system does not satisfy the conditions that we imposed on the random variable $\xi_k$, since it now holds the physics of the movement that is unknown to us. We cannot say that at different moments of time the errors are independent of each other and that their means are equal to zero. In other words, for this kind of situation the Kalman theory does not really apply. But we can still try to use the machinery of the Kalman theory, just by choosing some reasonable values for $\sigma_\xi^2$ and $\sigma_\eta^2$ that give a nice graph of filtered data.
But there is a much simpler way. We saw that with increasing step $k$ the Kalman coefficient always stabilizes to a certain value $K^{stab}$. So instead of guessing the values of the coefficients $\sigma_\xi^2$ and $\sigma_\eta^2$ and computing the Kalman coefficient $K_k$ by difficult formulas, we can assume that this coefficient is constant and simply select that constant. This assumption does not affect the filtering results much: firstly, the Kalman machinery is not exactly applicable to our problem anyway, and secondly, the Kalman coefficient quickly stabilizes to a constant. In the end everything becomes very simple. We need almost no formulas from the Kalman theory; we just need to select a reasonable value $K^{stab}$ and insert it into the iterative formula
$$x^{opt}_{k+1} = K^{stab} \cdot z_{k+1} + (1 - K^{stab}) \cdot x^{opt}_k$$
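A minimal sketch of this simplified filter (not from the original text; the measurements z and the constant Kstab are arbitrary choices for illustration):
Kstab=0.1;                              % hand-picked constant coefficient
z=cumsum(randn(1,100))+10*randn(1,100); % imaginary noisy measurements of a random walk
xOpt(1)=z(1);
for t=1:(length(z)-1)
    xOpt(t+1)=Kstab*z(t+1)+(1-Kstab)*xOpt(t);
end
plot(1:length(z),z,1:length(z),xOpt)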
In the next graph you can see measurements from an imaginary sensor filtered in two different ways. The first method is the honest one, with all the formulas from the Kalman theory. The second method is the simplified one.

We see that there is not a big difference between these two methods. There is a small variation at the beginning, when the Kalman coefficient has not yet stabilized.
Discussion
We have seen that the main idea of the Kalman filter is to choose the coefficient $K$ in such a way that the filtered value
$$x^{opt}_{k+1} = K z_{k+1} + (1-K)(x^{opt}_k + u_k)$$
is, on average, as close as possible to the real coordinate $x_{k+1}$. We see that the filtered value $x^{opt}_{k+1}$ is a linear function of the sensor's measurement $z_{k+1}$ and of the previous filtered value $x^{opt}_k$. But the previous filtered value $x^{opt}_k$ is itself a linear function of the sensor's measurement $z_k$ and of the pre-previous filtered value $x^{opt}_{k-1}$, and so on until the end of the chain. So the filtered value depends linearly on all previous sensor readings:
$$x^{opt}_{k+1} = \lambda + \lambda_0 z_0 + \ldots + \lambda_{k+1} z_{k+1}$$
That is the reason the Kalman filter is called a linear filter. It is possible to prove that the Kalman filter is the best of all linear filters, in the sense that it minimizes the mean square of the error.
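To make the linearity explicit, one can unroll the recursion one step (the coefficients are written here without iteration indices only to keep the expression short):
$$x^{opt}_{k+1} = K z_{k+1} + (1-K)\bigl(K z_k + (1-K)(x^{opt}_{k-1} + u_{k-1})\bigr) + (1-K)u_k$$
Repeating this down to $x^{opt}_0$ leaves only a constant term plus a linear combination of the measurements $z_0, \ldots, z_{k+1}$.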
Multidimensional case
It is possible to generalise all of the Kalman theory to the multidimensional case. The formulas there look a bit more elaborate, but the idea of their derivation remains the same as in one dimension. For instance, in this nice video you can see an example.
Literature
The original article written by Kalman can be downloaded here:
http://www.cs.unc.edu/~welch/kalman/media/pdf/Kalman1960.pdf