Object tracking with LIDAR, Radar, sensor fusion and Extended Kalman Filter

In the sixth project of the Self-Driving Car Engineer program designed by Udacity, we use an Extended Kalman Filter to estimate the state of a moving object of interest from noisy LIDAR and Radar measurements.

This post builds up from a very simple Kalman Filter implementation for 1D motion smoothing to object motion tracking in 2D by fusing noisy measurements from LIDAR and Radar sensors:

  • A minimal implementation of the Kalman Filter in Python for the simplest 1D motion model
  • A formal implementation of the Kalman Filter in Python using state and covariance matrices for the simplest 1D motion model
  • A formal implementation of the Kalman Filter in C++ and Eigen using state and covariance matrices for the simplest 1D motion model (same as above)
  • Object motion tracking in 2D by fusing noisy measurements from LIDAR and Radar sensors

Accurate motion tracking is a critical component of any robotics application. Tracking systems suffer from noise and small distortions that corrupt the measured positions.

To handle these imperfections, filtering is often applied to the tracked measurements so the application can obtain more accurate estimates of the motion.
The Kalman filter (KF) is a popular choice for estimating motion in robotics. Since the position measurement is a linear function of the state, standard Kalman filtering can be applied to the tracking problem without much difficulty.

However, most robotic motion models also contain nonlinearities, requiring a modification to the KF.

The extended Kalman filter (EKF) provides this modification by linearizing all nonlinear models (i.e., process and measurement models) so the traditional KF can be applied.

Unfortunately, the EKF has two important potential drawbacks. First, the derivation of the Jacobian matrices, the linear approximations to the nonlinear functions, can be complex, causing implementation difficulties. Second, these linearizations can lead to instability if the time-step intervals are not sufficiently small.

To address these limitations, the unscented Kalman filter (UKF) was developed. The UKF operates on the premise that it is easier to approximate a Gaussian distribution than an arbitrary nonlinear function. Instead of linearizing with Jacobian matrices, the UKF uses a deterministic sampling approach to capture the mean and covariance estimates with a minimal set of sample points.
The UKF is a powerful nonlinear estimation technique and has been shown to be a superior alternative to the EKF in a variety of applications.
This post describes in detail how the standard linear Kalman filter works. The Udacity platform offers the class Artificial Intelligence for Robotics, in which a large topic is dedicated to the linear Kalman filter.
A comparison of the three filter variants can be found in the publication A Comparison of Unscented and Extended Kalman Filtering for Estimating Quaternion Motion.

A minimal implementation of the Kalman Filter in Python for the simplest 1D motion model

The code below implements the simplest Kalman filter for estimating motion in 1D. The inputs are the noisy 1D position measurements and the sigmas used for weighting during the update and predict steps.

The outputs are the estimated position (mu) and its uncertainty (sigma) at a given time.

The initial position is set to 0 and its sigma to 10000 (very high uncertainty); after only a few steps the estimated position gets close to the measurements, with a smaller sigma (lower uncertainty).
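A minimal sketch of such a filter (the measurement sequence, motion commands, and noise variances below are illustrative values, not the project's data):

```python
# Minimal 1D Kalman filter: alternate measurement updates and motion predictions.

def update(mu, sigma2, z, r2):
    """Measurement update: fuse the prior (mu, sigma2) with measurement z of variance r2."""
    new_mu = (r2 * mu + sigma2 * z) / (r2 + sigma2)
    new_sigma2 = 1.0 / (1.0 / r2 + 1.0 / sigma2)
    return new_mu, new_sigma2

def predict(mu, sigma2, u, q2):
    """Motion update: shift the estimate by motion u and grow the variance by q2."""
    return mu + u, sigma2 + q2

measurements = [5.0, 6.0, 7.0, 9.0, 10.0]  # noisy 1D position measurements
motions = [1.0, 1.0, 2.0, 1.0, 1.0]        # motion between measurements
measurement_sig2 = 4.0                     # measurement noise variance
motion_sig2 = 2.0                          # motion (process) noise variance

mu, sig2 = 0.0, 10000.0                    # start at 0 with very high uncertainty
for z, u in zip(measurements, motions):
    mu, sig2 = update(mu, sig2, z, measurement_sig2)
    mu, sig2 = predict(mu, sig2, u, motion_sig2)
    print(f"mu={mu:.3f}  sigma2={sig2:.3f}")
```

After the very first update sigma drops from 10000 to roughly the measurement variance, reflecting how quickly a confident measurement dominates an uncertain prior.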

A formal implementation of the Kalman Filter in Python using state and covariance matrices for the simplest 1D motion model

The code below implements a multi-dimensional Kalman filter for estimating motion in 1D, with the state defined by position and velocity.

The input is the initial state x (position and velocity), both set to 0. To estimate the state at a later time, the state transition matrix F is used; this matrix embeds the 1D motion model:

x_{k+1} = x_k + velocity * dt

When predicting into the future the uncertainty increases; it decreases when new measurements are incorporated.

The output is defined by the state x at a certain time, and its uncertainty matrix P.
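A sketch of this filter in Python with NumPy (the measurement sequence and noise values are illustrative):

```python
import numpy as np

# State x = [position, velocity]^T, starting at 0 with large uncertainty P.
dt = 1.0
x = np.array([[0.0], [0.0]])
P = np.array([[1000.0, 0.0], [0.0, 1000.0]])  # initial uncertainty
F = np.array([[1.0, dt], [0.0, 1.0]])         # transition: pos += vel * dt
H = np.array([[1.0, 0.0]])                    # only position is measured
R = np.array([[1.0]])                         # measurement noise
I = np.eye(2)

for z in [1.0, 2.0, 3.0]:                     # noisy position measurements
    # measurement update
    y = np.array([[z]]) - H @ x               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y
    P = (I - K @ H) @ P
    # prediction
    x = F @ x
    P = F @ P @ F.T

print(x)  # estimated position and velocity after the last prediction
```

With these measurements the filter infers a velocity close to 1 and predicts the next position close to 4, even though velocity was never measured directly.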

A formal implementation of the Kalman Filter in C++ and Eigen using state and covariance matrices for the simplest 1D motion model

The code below implements the same multi-dimensional Kalman filter as in the previous example for estimating motion in 1D, this time in C++ with Eigen, with the state defined by position and velocity.

The input is the initial state x (position and velocity), both set to 0. To estimate the state at a later time, the state transition matrix F is used; this matrix embeds the 1D motion model:

x_{k+1} = x_k + velocity * dt

When predicting into the future the uncertainty increases; it decreases when new measurements are incorporated.

The output is defined by the state x at a certain time, and its uncertainty matrix P.

Object motion tracking in 2D by fusing noisy measurements from LIDAR and Radar sensors

This part uses noisy measurements from LIDAR and Radar sensors and an Extended Kalman Filter to estimate the 2D motion of a bicycle. The source code can be found here.

The state has four elements: position in x and y, and the velocity in x and y.

object_tracking_state_prediction

Linear motion model

The linear motion model in the matrix form:

object_tracking_state_tranzition_matrix

Motion noise and process noise refer to the same case: uncertainty in the object’s position when predicting location. The model assumes velocity is constant between time intervals, but in reality we know that an object’s velocity can change due to acceleration. The model includes this uncertainty via the process noise.
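Under the constant-velocity assumption, F and a process-noise covariance Q driven by random accelerations can be sketched as follows (the function name and the acceleration noise parameters are assumptions for illustration):

```python
import numpy as np

def make_F_Q(dt, noise_ax=9.0, noise_ay=9.0):
    """State = [px, py, vx, vy]. Returns the constant-velocity transition
    matrix F and the process-noise covariance Q derived from random
    accelerations with variances noise_ax, noise_ay."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    dt2, dt3, dt4 = dt**2, dt**3, dt**4
    Q = np.array([
        [dt4/4*noise_ax, 0,              dt3/2*noise_ax, 0],
        [0,              dt4/4*noise_ay, 0,              dt3/2*noise_ay],
        [dt3/2*noise_ax, 0,              dt2*noise_ax,   0],
        [0,              dt3/2*noise_ay, 0,              dt2*noise_ay]])
    return F, Q
```

The larger dt becomes, the larger the entries of Q, capturing the growing uncertainty between predictions.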

Laser Measurements

The LIDAR sensor output is a point cloud but in this project, the point cloud is pre-processed and the x,y state of the bicycle is already extracted.

Laser Measurements


Definition of LIDAR variables:

  • z is the measurement vector. For a lidar sensor, the z vector contains the px and py position measurements.
  • H is the matrix that projects your belief about the object’s current state into the measurement space of the sensor. For lidar, this is a fancy way of saying that we discard velocity information from the state variable since the lidar sensor only measures position: The state vector x contains information about [px,py,vx,vy] whereas the z vector will only contain [px,py]. Multiplying Hx allows us to compare x, our belief, with z, the sensor measurement.
  • What does the prime notation in the x vector represent? The prime notation, as in px′, means you have already done the prediction step but have not done the measurement update yet. In other words, the object was at px; after time Δt, you calculate where you believe the object will be based on the motion model and get px′.
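A sketch of the lidar measurement update with this H (the noise values in R are illustrative):

```python
import numpy as np

# Lidar measures only position, so H drops the velocity components of the state.
H_lidar = np.array([[1, 0, 0, 0],
                    [0, 1, 0, 0]], dtype=float)
R_lidar = np.array([[0.0225, 0.0],
                    [0.0, 0.0225]])  # illustrative lidar noise covariance

def lidar_update(x, P, z):
    """Standard linear KF measurement update for a lidar (px, py) measurement."""
    y = z - H_lidar @ x                       # innovation: measurement vs. belief
    S = H_lidar @ P @ H_lidar.T + R_lidar
    K = P @ H_lidar.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H_lidar) @ P
    return x, P
```

When P is large relative to R, the gain K is close to 1 and the updated position essentially adopts the measurement.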

Radar Measurements

The Radar sensor output is defined by the measured distance to the object, its bearing, and its radial velocity.

object_tracking_radar_measurements

Definition of Radar variables:

  • The range, ρ, is the distance to the object: the magnitude of the position vector, ρ = sqrt(px² + py²).
  • The bearing, φ = atan2(py, px), is measured counter-clockwise from the x-axis, so it can be negative.
  • The range rate, ρ˙, is the projection of the velocity, v, onto the line, L, from the sensor to the object.

To fuse Radar measurements, defined in the polar coordinate system, with LIDAR measurements, defined in the cartesian coordinate system, one of the representations must be transformed.

In this project the predicted state, defined in cartesian coordinates, is mapped into the polar coordinate system using this nonlinear mapping:

Cartesian to polar coordinate system

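A sketch of this nonlinear mapping h(x) and its Jacobian Hj, which the EKF uses to linearize the radar update (the small-value guards against division by zero are an implementation assumption):

```python
import numpy as np

def h_radar(x):
    """Map the cartesian state [px, py, vx, vy] to polar (rho, phi, rho_dot)."""
    px, py, vx, vy = x.flatten()
    rho = np.sqrt(px**2 + py**2)
    phi = np.arctan2(py, px)                      # handles all quadrants, unlike atan
    rho_dot = (px*vx + py*vy) / max(rho, 1e-6)    # guard against division by zero
    return np.array([[rho], [phi], [rho_dot]])

def jacobian(x):
    """Jacobian Hj of h_radar with respect to the state, evaluated at x."""
    px, py, vx, vy = x.flatten()
    c1 = max(px**2 + py**2, 1e-6)
    c2 = np.sqrt(c1)
    c3 = c1 * c2
    return np.array([
        [px/c2,                  py/c2,                  0,     0],
        [-py/c1,                 px/c1,                  0,     0],
        [py*(vx*py - vy*px)/c3,  px*(px*vy - py*vx)/c3,  px/c2, py/c2]])
```

In the radar update, h_radar replaces the matrix product Hx when forming the innovation, while the Jacobian replaces H in the gain and covariance equations.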

Overview of the Kalman Filter Algorithm Map

object_tracking_general_flow

The Kalman Filter algorithm

The Kalman Filter algorithm will go through the following steps:

  • first measurement – the filter will receive initial measurements of the bicycle’s position relative to the car. These measurements will come from a radar or lidar sensor.
  • initialize state and covariance matrices – the filter will initialize the bicycle’s position based on the first measurement.
  • then the car will receive another sensor measurement after a time period Δt.
  • predict – the algorithm will predict where the bicycle will be after time Δt. One basic way to predict the bicycle location after Δt is to assume the bicycle’s velocity is constant; thus the bicycle will have moved velocity * Δt. In the extended Kalman filter lesson, we will assume the velocity is constant; in the unscented Kalman filter lesson, we will introduce a more complex motion model.
  • update – the filter compares the “predicted” location with what the sensor measurement says. The predicted location and the measured location are combined to give an updated location. The Kalman filter will put more weight on either the predicted location or the measured location depending on the uncertainty of each value.
  • then the car will receive another sensor measurement after a time period Δt. The algorithm then does another predict and update step.
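The flow above can be sketched as follows; the class and method names are illustrative, not the project's API, and the measurement update is left out for brevity:

```python
import numpy as np

class Tracker:
    """Sketch of the measurement-processing loop: initialize on the first
    measurement, then predict forward by dt on each subsequent one."""

    def __init__(self):
        self.initialized = False
        self.x = np.zeros((4, 1))                      # [px, py, vx, vy]
        self.P = np.diag([1.0, 1.0, 1000.0, 1000.0])   # unknown initial velocity
        self.t = None

    def process_measurement(self, t, z, sensor):
        if not self.initialized:
            # initialize position from the first measurement
            if sensor == "lidar":
                self.x[0, 0], self.x[1, 0] = z
            else:  # radar: convert (rho, phi) to cartesian
                rho, phi = z[0], z[1]
                self.x[0, 0], self.x[1, 0] = rho*np.cos(phi), rho*np.sin(phi)
            self.t = t
            self.initialized = True
            return self.x
        dt = t - self.t
        self.t = t
        self.predict(dt)
        # ... update step: linear KF for lidar, EKF (Jacobian) for radar
        return self.x

    def predict(self, dt):
        F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                      [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T  # plus process noise Q in the full filter
```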

Video result

Lidar measurements are red circles, radar measurements are blue circles with an arrow pointing in the direction of the observed angle, and estimation markers are green triangles. The video below shows what the simulator looks like when a C++ program uses its Kalman filter to track the object. The simulator provides the program with the measured data (either lidar or radar), and the program feeds back the estimation marker and the RMSE values from its Kalman filter.

Resources

https://github.com/udacity/CarND-Extended-Kalman-Filter-Project

https://github.com/paul-o-alto/CarND-Extended-Kalman-Filter-Project

https://en.wikipedia.org/wiki/Kalman_filter

https://en.wikipedia.org/wiki/Extended_Kalman_filter

https://en.wikipedia.org/wiki/Kalman_filter#Unscented_Kalman_filter

https://ai2-s2-pdfs.s3.amazonaws.com/df70/ce9aed1d8b6fe3a93cb4e41efcb8979428c0.pdf

https://www.udacity.com/course/artificial-intelligence-for-robotics--cs373

http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
