Kalman Filter

Chapter 11

Tutorial: The Kalman Filter
Tony Lacey.

11.1 Introduction
The Kalman filter [1] has long been regarded as the optimal solution to many tracking and data prediction
tasks [2]. Its use in the analysis of visual motion has been documented frequently. The standard Kalman
filter derivation is given here as a tutorial exercise in the practical use of some of the statistical techniques
outlined in previous sections. The filter is constructed as a mean squared error minimiser, but an alternative
derivation of the filter is also provided showing how the filter relates to maximum likelihood statistics.
Documenting this derivation furnishes the reader with further insight into the statistical constructs within
the filter.
The purpose of filtering is to extract the required information from a signal, ignoring everything else.
How well a filter performs this task can be measured using a cost or loss function. Indeed we may define
the goal of the filter to be the minimisation of this loss function.

11.2 Mean squared error
Many signals can be described in the following way;

$$y_k = a_k x_k + n_k$$


where; $y_k$ is the time-dependent observed signal, $a_k$ is a gain term, $x_k$ is the information-bearing signal
and $n_k$ is the additive noise.
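As an illustrative aside, the model can be simulated directly; the following minimal Python sketch assumes a sinusoidal $x_k$, a constant gain $a_k = 1$ and zero-mean Gaussian noise $n_k$, none of which are specified by the text:

    import numpy as np

    rng = np.random.default_rng(seed=0)

    k = np.arange(200)                       # discrete time index k
    x = np.sin(0.05 * k)                     # information-bearing signal x_k (assumed form)
    a = np.ones_like(x)                      # gain term a_k, taken to be 1 throughout
    n = rng.normal(0.0, 0.2, size=k.size)    # additive Gaussian noise n_k
    y = a * x + n                            # observed signal y_k = a_k x_k + n_k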
The overall objective is to estimate $x_k$. The difference between the estimate $\hat{x}_k$ and $x_k$ itself is termed
the error;

$$f(e_k) = f(x_k - \hat{x}_k)$$


The particular shape of $f(e_k)$ is dependent upon the application, however it is clear that the function
should be both positive and increase monotonically [3]. An error function which exhibits these characteristics is the squared error function;

$$f(e_k) = (x_k - \hat{x}_k)^2$$



Since it is necessary to consider the ability of the filter to predict many data over a period of time, a more
meaningful metric is the expected value of the error function;

$$\text{loss function} = E\left[ f(e_k) \right]$$


This results in the mean squared error (MSE) function;

$$\epsilon(t) = E\left[ e_k^2 \right]$$
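Continuing the illustrative sketch above, the expectation can be approximated by a sample average over the simulated data. The estimator used here, $\hat{x}_k = y_k$, is an arbitrary stand-in chosen only to show how the metric is evaluated:

    # Approximate E[e_k^2] by averaging the squared error over the simulated
    # samples, using the raw observation as a naive estimate of x_k.
    x_hat = y                    # naive estimator x_hat_k = y_k (illustrative only)
    e = x - x_hat                # error e_k = x_k - x_hat_k
    mse = np.mean(e ** 2)        # empirical mean squared error
    print(f"empirical MSE of the naive estimator: {mse:.4f}")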


11.3 Maximum likelihood
The above derivation of mean squared error, although intuitive, is somewhat heuristic. A more rigorous
derivation can be developed using maximum likelihood statistics. This is achieved by redefining the goal
of the filter to finding the $\hat{x}$ which maximises the probability or likelihood of $y$. That is;

$$\max_{\hat{x}} P(y \mid \hat{x})$$


Assuming that the additive random noise is Gaussian distributed with a standard deviation of $\sigma_k$ gives;

$$P(y_k \mid \hat{x}_k) = K_k \exp\left( - \frac{(y_k - a_k \hat{x}_k)^2}{2\sigma_k^2} \right)$$

where $K_k$ is a normalisation constant. The maximum likelihood...
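As a hedged sketch of the step this points towards (not reproduced from the original): since the logarithm is monotonic, maximising the likelihood above over $\hat{x}_k$ is equivalent to minimising the squared residual in its exponent, tying the maximum likelihood view back to the mean squared error criterion of Section 11.2;

$$\log P(y_k \mid \hat{x}_k) = \log K_k - \frac{(y_k - a_k \hat{x}_k)^2}{2\sigma_k^2}
\quad\Longrightarrow\quad
\arg\max_{\hat{x}_k} P(y_k \mid \hat{x}_k) = \arg\min_{\hat{x}_k} \left( y_k - a_k \hat{x}_k \right)^2$$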