# Robot Localization with Particle Filter and C++

In the 8th project of the Udacity Self-Driving Car Engineer program, we implemented a 2-dimensional, 3-DOF (x, y, yaw) particle filter in C++ to localize our vehicle in a known map.

Our vehicle starts in an unknown location, and using the particle filter approach we need to determine where it is. The particle filter is given a map and some initial localization information (analogous to what a GPS would provide), and at each time step it also receives observation and control data.

This post includes the following:

• Bayes Filter and Markov Localization
• Simple 1D Markov Localization example implemented in C++
• Particle Filter Localization implemented in 3DOF (x,y,yaw) and C++

### Bayes Filter and Markov Localization

Markov Localization, or the Bayes Filter for Localization, is a generalized filter for localization: all other localization approaches, including Kalman Filters and Particle Filters, are realizations of the Bayes Filter.

The Bayes Localization Filter (Markov Localization) is a general framework for recursive state estimation: it lets us estimate the new state (at time t) from the previous state (at t-1) using only the current observations and controls (at t), rather than the entire data history (from 0 to t).

The motion model describes the prediction step of the filter, while the observation model describes the update step. State estimation with the Bayes filter therefore depends on the interaction between the prediction (motion model) and update (observation model) steps.

Generally, we can think of our vehicle's location as a distribution: each time we move, the distribution becomes more diffuse (wider). At each time step we pass our variables (map data, observation data, and control data) into the filter to concentrate (narrow) this distribution. The state before applying the filter represents our prior, and the narrowed distribution represents our Bayes posterior.
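This prediction/update cycle can be sketched as a minimal discrete Bayes filter over a 1D grid. The cyclic, noiseless motion model below is a simplifying assumption for illustration, not the project's actual code:

```cpp
#include <vector>

// One predict/update cycle of a discrete Bayes filter over a 1D grid.
// belief[i] is the probability the vehicle is in cell i.
// Prediction: shift the belief by `step` cells (noiseless motion model).
// Update: multiply by the likelihood of the current observation, normalize.
std::vector<double> bayes_step(const std::vector<double>& belief,
                               int step,
                               const std::vector<double>& likelihood) {
  const int n = static_cast<int>(belief.size());
  std::vector<double> predicted(n, 0.0);
  for (int i = 0; i < n; ++i) {
    predicted[(i + step) % n] = belief[i];        // prediction (motion model)
  }
  std::vector<double> posterior(n);
  double norm = 0.0;
  for (int i = 0; i < n; ++i) {
    posterior[i] = predicted[i] * likelihood[i];  // update (observation model)
    norm += posterior[i];
  }
  for (double& p : posterior) p /= norm;          // normalize to a distribution
  return posterior;
}
```

Note how an informative observation concentrates a uniform prior into a narrow posterior, exactly the narrowing described above.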

### Simple 1D Markov Localization example implemented in C++

Following is a simple implementation of a 1D Markov localization.

At the very beginning, as shown in the image below, the car starts next to a tree landmark, so its position could be at any of the tree locations with equal probability. In the example below there are 6 solutions with the same probability. Here is our 1D map of landmarks; from left to right the landmark positions are [9, 15, 25, 31, 59, 77].

As the car moves, the beliefs are updated and some of the solutions are dropped. In the example below, the car has moved 5 meters to the right, in front of the 2nd tree, and only 2 solutions survived, with the same but higher probability. When the car is in front of the 4th tree, there is only one solution left, with a very high probability. The source code of this 1D Markov Localization can be found here.

After the execution we can see that the localization converges to a single solution shaped like a normal distribution, with the mean at position x=40.
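The surviving-hypotheses idea can be sketched as follows. This is a simplified, noiseless version; the helper name and the exact move distances are illustrative, not taken from the project code:

```cpp
#include <vector>
#include <algorithm>

// Hypothetical helper: given the 1D landmark map, an initial "next to a
// landmark" belief, and a sequence of moves after each of which the car
// again observes a landmark, return the positions that remain plausible.
std::vector<int> surviving_positions(const std::vector<int>& landmarks,
                                     const std::vector<int>& moves) {
  // Start with one hypothesis per landmark (uniform belief).
  std::vector<int> hypotheses = landmarks;
  for (int move : moves) {
    std::vector<int> next;
    for (int pos : hypotheses) {
      int moved = pos + move;  // prediction step: shift the hypothesis
      // update step: keep it only if the car is again next to a landmark
      if (std::find(landmarks.begin(), landmarks.end(), moved) !=
          landmarks.end()) {
        next.push_back(moved);
      }
    }
    hypotheses = next;
  }
  return hypotheses;
}
```

With the map [9, 15, 25, 31, 59, 77], a single move of 6 meters leaves two surviving hypotheses (15 and 31), and a further move of 28 meters leaves only one (59), mirroring how the beliefs collapse in the example above.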


This document explains Markov Localization in more detail, including the mathematical formulas.

### Particle Filter Localization implemented in 3DOF (x,y,yaw) and C++

Particle filters implement the prediction-update transitions of the filtering equation directly, using a genetic-type mutation-selection particle algorithm. The samples from the distribution are represented by a set of particles; each particle has a likelihood weight assigned to it that represents the probability of that particle being sampled from the probability density function.

The flowchart below represents the steps of the particle filter algorithm as well as its inputs. The pseudo-code steps below correspond to the steps in the flowchart: initialization, prediction, particle weight updates, and re-sampling.

Line 1: At the initialization step we estimate our position from the GPS input. The subsequent steps in the process will refine this estimate to localize our vehicle.

Line 3: During the prediction step we apply the control input (yaw rate & velocity) to all particles.

Lines 4-5: During the update step, we update our particle weights using map landmark positions and feature measurements.

Lines 7-10: During re-sampling we draw M new particles (M is the number of particles) with replacement, choosing each particle i (i is the particle index) with probability proportional to its weight. This is implemented as a re-sampling wheel.

Line 11: The new set of particles represents the Bayes filter posterior probability. We now have a refined estimate of the vehicle's position based on the input evidence.
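The re-sampling wheel mentioned in Lines 7-10 can be sketched as follows (the `Particle` struct and function signature are illustrative, not the project's exact code):

```cpp
#include <vector>
#include <random>
#include <algorithm>

struct Particle { double x, y, theta, weight; };

// Re-sampling wheel: draw M particles with probability proportional to
// their weights, in O(M) expected time. Each spin advances a pointer
// ("beta") by a random amount up to 2 * max_weight and walks past
// lighter particles until it lands on the one to keep.
std::vector<Particle> resample(const std::vector<Particle>& particles,
                               std::mt19937& gen) {
  const std::size_t M = particles.size();
  double max_w = 0.0;
  for (const auto& p : particles) max_w = std::max(max_w, p.weight);
  std::uniform_int_distribution<std::size_t> uni_index(0, M - 1);
  std::uniform_real_distribution<double> uni_beta(0.0, 2.0 * max_w);
  std::vector<Particle> resampled;
  resampled.reserve(M);
  std::size_t index = uni_index(gen);   // random starting spoke
  double beta = 0.0;
  for (std::size_t i = 0; i < M; ++i) {
    beta += uni_beta(gen);                  // advance by a random step
    while (beta > particles[index].weight) {
      beta -= particles[index].weight;      // spin past lighter particles
      index = (index + 1) % M;
    }
    resampled.push_back(particles[index]);
  }
  return resampled;
}
```

Heavier particles occupy a wider arc of the wheel, so they are selected more often, which is exactly the "proportional to its weight" behavior the algorithm requires.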

#### Initialization

The most practical way to initialize our particles and generate real-time output is to make an initial estimate using the GPS input. As with all sensor-based operations, this step is affected by noise. Particles are created by sampling a Gaussian distribution around the initial GPS position and heading estimates, taking the Gaussian sensor noise into account.
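A sketch of this initialization using `std::normal_distribution` (the `Particle` struct, function name, and noise values are assumptions for illustration):

```cpp
#include <vector>
#include <random>

struct Particle { double x, y, theta, weight; };

// Initialize num_particles particles around the first GPS fix
// (gps_x, gps_y, gps_theta), sampling from Gaussians whose standard
// deviations std_dev[0..2] model the GPS noise in x, y, and heading.
std::vector<Particle> init_particles(double gps_x, double gps_y,
                                     double gps_theta,
                                     const double std_dev[3],
                                     int num_particles,
                                     std::mt19937& gen) {
  std::normal_distribution<double> dist_x(gps_x, std_dev[0]);
  std::normal_distribution<double> dist_y(gps_y, std_dev[1]);
  std::normal_distribution<double> dist_theta(gps_theta, std_dev[2]);
  std::vector<Particle> particles(num_particles);
  for (auto& p : particles) {
    p.x = dist_x(gen);
    p.y = dist_y(gen);
    p.theta = dist_theta(gen);
    p.weight = 1.0;  // all particles start out equally likely
  }
  return particles;
}
```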

#### Prediction Step

Now that we have initialized our particles, it's time to predict the vehicle's position. Here we use what we learned in the motion models lesson to predict where the vehicle will be at the next time step, updating each particle based on yaw rate and velocity while accounting for Gaussian sensor noise.
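A sketch of the constant-yaw-rate motion model used in this prediction step (noise omitted here; in the filter each predicted pose is additionally perturbed with Gaussian noise):

```cpp
#include <cmath>

struct State { double x, y, theta; };

// Bicycle motion model with constant velocity v and yaw rate over dt.
// The straight-line branch avoids dividing by a near-zero yaw rate.
State predict(const State& s, double v, double yaw_rate, double dt) {
  State out = s;
  if (std::fabs(yaw_rate) < 1e-6) {        // driving (almost) straight
    out.x += v * dt * std::cos(s.theta);
    out.y += v * dt * std::sin(s.theta);
  } else {                                 // turning along a circular arc
    out.x += v / yaw_rate *
             (std::sin(s.theta + yaw_rate * dt) - std::sin(s.theta));
    out.y += v / yaw_rate *
             (std::cos(s.theta) - std::cos(s.theta + yaw_rate * dt));
    out.theta += yaw_rate * dt;
  }
  return out;
}
```

For example, a particle at the origin heading along +x, driving at v = π m/s with yaw rate π/2 rad/s for one second, traces a quarter circle of radius 2 and ends up at (2, 2) heading along +y.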

#### Data Association: Nearest Neighbor

The data association problem is the problem of matching landmark measurements to objects in the real world, such as map landmarks. One technique for choosing the right association is the Nearest Neighbor algorithm, shown below. Nearest Neighbor data association has pros and cons:

Pros:

• easy to understand
• easy to implement
• works well in many situations

Cons:

• not robust to high density of measurements or map landmarks
• not robust to sensor noise
• not robust to errors in position estimates
• inefficient to calculate
• does not take different sensor uncertainties into account
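A minimal sketch of the Nearest Neighbor association described above (the `Landmark` struct and function name are illustrative, not the project's actual code):

```cpp
#include <vector>
#include <limits>

struct Landmark { int id; double x, y; };

// Nearest-neighbor data association: for each observation (already
// transformed into map coordinates), return the id of the closest
// map landmark. This is the O(n*m) brute-force variant, which is why
// the technique is listed as inefficient for dense maps.
std::vector<int> associate(const std::vector<Landmark>& map_landmarks,
                           const std::vector<Landmark>& observations) {
  std::vector<int> ids;
  ids.reserve(observations.size());
  for (const auto& obs : observations) {
    double best = std::numeric_limits<double>::max();
    int best_id = -1;
    for (const auto& lm : map_landmarks) {
      const double dx = obs.x - lm.x;
      const double dy = obs.y - lm.y;
      const double d2 = dx * dx + dy * dy;  // squared distance suffices
      if (d2 < best) { best = d2; best_id = lm.id; }
    }
    ids.push_back(best_id);
  }
  return ids;
}
```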

#### Update Step

The x and y errors are depicted from the point of view of the map (x is horizontal, y is vertical) rather than from the point of view of the car, where x points in the direction of the car's heading (i.e., where the car is facing) and y is orthogonal to x, pointing out of the left side of the car.
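Because of these two frames, each observation measured in the car frame has to be transformed into map coordinates (using the particle's pose) before it can be compared to map landmarks. A sketch of this homogeneous transformation (rotation plus translation; the function name and signature are my own):

```cpp
#include <cmath>

// Transform an observation (x_obs, y_obs) from the car frame
// (x forward, y out of the left side) into the map frame, given the
// particle's pose (xp, yp) and heading theta in the map.
void car_to_map(double xp, double yp, double theta,
                double x_obs, double y_obs,
                double& x_map, double& y_map) {
  x_map = xp + std::cos(theta) * x_obs - std::sin(theta) * y_obs;
  y_map = yp + std::sin(theta) * x_obs + std::cos(theta) * y_obs;
}
```

For instance, a particle at (4, 5) facing -y in the map (theta = -π/2) that observes a landmark at (2, 2) in its own frame places that landmark at (6, 3) in map coordinates.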

#### Calculating Error

Two errors are computed, one for position and one for rotation, as the root squared error between the best particle (x_p, y_p, θ_p) and the ground truth (x_g, y_g, θ_g):

• Position RMSE = √((x_p − x_g)² + (y_p − y_g)²)
• Theta RMSE = √((θ_p − θ_g)²)
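These errors can be computed as follows (the normalization of the angular error to [0, π] is my own addition, to keep heading errors well defined across the ±π wrap-around):

```cpp
#include <cmath>

// Euclidean position error between the best particle (xp, yp)
// and the ground truth (xg, yg).
double position_error(double xp, double yp, double xg, double yg) {
  return std::sqrt((xp - xg) * (xp - xg) + (yp - yg) * (yp - yg));
}

// Heading error between particle heading tp and ground truth tg,
// wrapped so the result always lies in [0, pi].
double theta_error(double tp, double tg) {
  double d = std::fabs(tp - tg);
  if (d > M_PI) d = 2.0 * M_PI - d;
  return d;
}
```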

#### Particle Weights

The particle's final weight is calculated as the product of each measurement's Multivariate-Gaussian probability.

The Multivariate-Gaussian probability has two dimensions, x and y. The mean of the Multivariate-Gaussian is the measurement's associated landmark position, and its standard deviation is described by our initial uncertainty in the x and y ranges. The Multivariate-Gaussian is evaluated at the position of the transformed measurement. Its formula is:

P(x, y) = 1 / (2π σ_x σ_y) · exp(−((x − μ_x)² / (2σ_x²) + (y − μ_y)² / (2σ_y²)))
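A sketch of this weight term in C++, assuming a diagonal covariance with standard deviations sig_x and sig_y (the function name is illustrative):

```cpp
#include <cmath>

// Weight contribution of one measurement: a 2D Gaussian with diagonal
// covariance, mean at the associated landmark (mu_x, mu_y), evaluated
// at the transformed observation (x, y). The particle's final weight
// is the product of this term over all measurements.
double multiv_gauss(double x, double y, double mu_x, double mu_y,
                    double sig_x, double sig_y) {
  const double norm = 1.0 / (2.0 * M_PI * sig_x * sig_y);
  const double exponent =
      (x - mu_x) * (x - mu_x) / (2.0 * sig_x * sig_x) +
      (y - mu_y) * (y - mu_y) / (2.0 * sig_y * sig_y);
  return norm * std::exp(-exponent);
}
```

At the landmark itself (zero error) the term reaches its maximum of 1/(2π σ_x σ_y), and it decays as the transformed observation drifts away from the landmark, so well-matched particles accumulate larger weights.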

The project source code can be found here.

### Conclusions:

Particle filters are very robust approaches to localization which can simultaneously incorporate measurements from very different types of sensors, including LIDAR, stereo cameras, radar, or even neural networks.
Using a robust cost function, they deal with outliers (wrong landmarks) introduced by sensor noise or by changes in the environment such as dynamic objects or illumination changes.
The first step is to build a map which includes the landmarks. After the map is built, a particle filter localization system spreads particles (possible solutions) and chooses the particle with the smallest error cost.
They are applicable to almost any localization problem, from 2DOF (x, y position) and 3DOF (x, y, yaw) to full 6DOF (x, y, z, roll, pitch, yaw).
Compared to other approaches they can be slower, because many possible solutions are tested in parallel at each update.

### Sources:

https://bitbucket.org/coldvisionio/udacity-carnd-class/src/8fa3ba86c3f3ac3531bff7a2344d67b41edd0064/CardND-Markov-Localization/

https://bitbucket.org/coldvisionio/udacity-carnd-class/src/8fa3ba86c3f3ac3531bff7a2344d67b41edd0064/CarND-Kidnapped-Vehicle-Project-master/

### Resources:

https://en.wikipedia.org/wiki/Monte_Carlo_localization

https://en.wikipedia.org/wiki/Recursive_Bayesian_estimation

http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/node3.html

http://www.coldvision.io/wp-content/uploads/2017/06/markov-localization-reference-1.pdf

https://en.wikipedia.org/wiki/Nearest_neighbor_search

http://ais.informatik.uni-freiburg.de/teaching/ws13/mapping/pdf/slam11-particle-filter-4.pdf