In the 8th project of the Self-Driving Car Engineer program designed by Udacity, we implemented a 2D particle filter in C++ with three degrees of freedom (x, y, yaw) to localize our vehicle in a known map.

Our vehicle starts in an unknown location, and we need to determine where it is using the particle filter approach. The particle filter is given a map and some initial localization information (analogous to what a GPS would provide), and at each time step the filter also receives observation and control data.

This post includes the following:

- Bayes Filter and Markov Localization
- Simple 1D Markov Localization example implemented in C++
- Particle Filter Localization implemented in 3DOF (x,y,yaw) and C++

### Bayes Filter and Markov Localization

Markov Localization, also known as the Bayes Filter for localization, is a generalized filter for localization: all other localization approaches are realizations of the Bayes filter, including Kalman filters and particle filters.

The Bayes filter is a general framework for recursive state estimation. It allows us to use the previous state (state at t-1) to estimate a new state (state at t) using only the current observations and controls (observations and control at t), rather than the entire data history (data from 0:t).

The motion model describes the prediction step of the filter, while the observation model describes the update step. State estimation with the Bayes filter is driven by the interaction between these two steps: predict with the motion model, then update with the observation model.
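In standard Bayes filter notation, this recursion can be written in a single equation, where $m$ is the map, $z_t$ the observation, $u_t$ the control, and $\eta$ a normalization constant:

```latex
\[
\underbrace{bel(x_t)}_{\text{posterior}}
= \eta \,
\underbrace{p(z_t \mid x_t, m)}_{\text{observation model}}
\underbrace{\int p(x_t \mid x_{t-1}, u_t)\, bel(x_{t-1})\, dx_{t-1}}_{\text{motion model (prediction)}}
\]
```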

Generally, we can think of our vehicle's location as a probability distribution. Each time we move, the distribution becomes more diffuse (wider). At each time step we pass our variables (map data, observation data, and control data) into the filter to concentrate (narrow) this distribution. The distribution before applying the filter is our prior, and the narrowed distribution is our Bayes posterior.

### Simple 1D Markov Localization example implemented in C++

The following is a simple implementation of 1D Markov localization.

At the very beginning, as shown in the image below, the car starts next to a tree landmark. This means the car could be next to any of the tree landmarks with the same probability. In the example below there are 6 solutions with the same probability.

As the car moves, the beliefs are updated and some of the solutions are dropped. In the example below, the car has moved 5 meters to the right, in front of the 2nd tree, and only 2 solutions survive, each with the same but higher probability. When the car is in front of the 4th tree, there is only one solution left, with a very high probability.

The source code of this 1D Markov Localization can be found here.

After the execution we can see that the localization converges to a single solution with the shape of a normal distribution, with its mean at position x=40.

```
...
bel_x=37: 0.08470 ground truth: 0.08470
bel_x=38: 0.09907 ground truth: 0.09907
bel_x=39: 0.10754 ground truth: 0.10754
bel_x=40: 0.10834 ground truth: 0.10834
bel_x=41: 0.10130 ground truth: 0.10130
bel_x=42: 0.08790 ground truth: 0.08790
bel_x=43: 0.07079 ground truth: 0.07079
...
```

Full output:

```
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
-->motion model for state x !
...................................................
...............----> Results <----.................
...................................................
bel_x=0: 0.00000 ground truth: 0.00000
bel_x=1: 0.00000 ground truth: 0.00000
bel_x=2: 0.00000 ground truth: 0.00000
bel_x=3: 0.00000 ground truth: 0.00000
bel_x=4: 0.00000 ground truth: 0.00000
bel_x=5: 0.00000 ground truth: 0.00000
bel_x=6: 0.00000 ground truth: 0.00000
bel_x=7: 0.00000 ground truth: 0.00000
bel_x=8: 0.00000 ground truth: 0.00000
bel_x=9: 0.00000 ground truth: 0.00000
bel_x=10: 0.00000 ground truth: 0.00000
bel_x=11: 0.00000 ground truth: 0.00000
bel_x=12: 0.00000 ground truth: 0.00000
bel_x=13: 0.00000 ground truth: 0.00000
bel_x=14: 0.00000 ground truth: 0.00000
bel_x=15: 0.00000 ground truth: 0.00000
bel_x=16: 0.00000 ground truth: 0.00000
bel_x=17: 0.00000 ground truth: 0.00000
bel_x=18: 0.00000 ground truth: 0.00000
bel_x=19: 0.00000 ground truth: 0.00000
bel_x=20: 0.00000 ground truth: 0.00000
bel_x=21: 0.00000 ground truth: 0.00000
bel_x=22: 0.00000 ground truth: 0.00000
bel_x=23: 0.00000 ground truth: 0.00000
bel_x=24: 0.00001 ground truth: 0.00001
bel_x=25: 0.00004 ground truth: 0.00004
bel_x=26: 0.00011 ground truth: 0.00011
bel_x=27: 0.00029 ground truth: 0.00029
bel_x=28: 0.00072 ground truth: 0.00072
bel_x=29: 0.00165 ground truth: 0.00165
bel_x=30: 0.00350 ground truth: 0.00350
bel_x=31: 0.00690 ground truth: 0.00690
bel_x=32: 0.01263 ground truth: 0.01263
bel_x=33: 0.02146 ground truth: 0.02146
bel_x=34: 0.03383 ground truth: 0.03383
bel_x=35: 0.04950 ground truth: 0.04950
bel_x=36: 0.06721 ground truth: 0.06721
bel_x=37: 0.08470 ground truth: 0.08470
bel_x=38: 0.09907 ground truth: 0.09907
bel_x=39: 0.10754 ground truth: 0.10754
bel_x=40: 0.10834 ground truth: 0.10834
bel_x=41: 0.10130 ground truth: 0.10130
bel_x=42: 0.08790 ground truth: 0.08790
bel_x=43: 0.07079 ground truth: 0.07079
bel_x=44: 0.05290 ground truth: 0.05290
bel_x=45: 0.03670 ground truth: 0.03670
bel_x=46: 0.02362 ground truth: 0.02362
bel_x=47: 0.01411 ground truth: 0.01411
bel_x=48: 0.00782 ground truth: 0.00782
bel_x=49: 0.00403 ground truth: 0.00403
bel_x=50: 0.00192 ground truth: 0.00192
bel_x=51: 0.00085 ground truth: 0.00085
bel_x=52: 0.00035 ground truth: 0.00035
bel_x=53: 0.00013 ground truth: 0.00013
bel_x=54: 0.00005 ground truth: 0.00005
bel_x=55: 0.00002 ground truth: 0.00002
bel_x=56: 0.00000 ground truth: 0.00000
bel_x=57: 0.00000 ground truth: 0.00000
bel_x=58: 0.00000 ground truth: 0.00000
bel_x=59: 0.00000 ground truth: 0.00000
bel_x=60: 0.00000 ground truth: 0.00000
bel_x=61: 0.00000 ground truth: 0.00000
bel_x=62: 0.00000 ground truth: 0.00000
bel_x=63: 0.00000 ground truth: 0.00000
bel_x=64: 0.00000 ground truth: 0.00000
bel_x=65: 0.00000 ground truth: 0.00000
bel_x=66: 0.00000 ground truth: 0.00000
bel_x=67: 0.00000 ground truth: 0.00000
bel_x=68: 0.00000 ground truth: 0.00000
bel_x=69: 0.00000 ground truth: 0.00000
bel_x=70: 0.00000 ground truth: 0.00000
bel_x=71: 0.00000 ground truth: 0.00000
bel_x=72: 0.00000 ground truth: 0.00000
bel_x=73: 0.00000 ground truth: 0.00000
bel_x=74: 0.00000 ground truth: 0.00000
bel_x=75: 0.00000 ground truth: 0.00000
bel_x=76: 0.00000 ground truth: 0.00000
bel_x=77: 0.00000 ground truth: 0.00000
bel_x=78: 0.00000 ground truth: 0.00000
bel_x=79: 0.00000 ground truth: 0.00000
bel_x=80: 0.00000 ground truth: 0.00000
bel_x=81: 0.00000 ground truth: 0.00000
bel_x=82: 0.00000 ground truth: 0.00000
bel_x=83: 0.00000 ground truth: 0.00000
bel_x=84: 0.00000 ground truth: 0.00000
bel_x=85: 0.00000 ground truth: 0.00000
bel_x=86: 0.00000 ground truth: 0.00000
bel_x=87: 0.00000 ground truth: 0.00000
bel_x=88: 0.00000 ground truth: 0.00000
bel_x=89: 0.00000 ground truth: 0.00000
bel_x=90: 0.00000 ground truth: 0.00000
bel_x=91: 0.00000 ground truth: 0.00000
bel_x=92: 0.00000 ground truth: 0.00000
bel_x=93: 0.00000 ground truth: 0.00000
bel_x=94: 0.00000 ground truth: 0.00000
bel_x=95: 0.00000 ground truth: 0.00000
bel_x=96: 0.00000 ground truth: 0.00000
bel_x=97: 0.00000 ground truth: 0.00000
bel_x=98: 0.00000 ground truth: 0.00000
bel_x=99: 0.00000 ground truth: 0.00000
...................................................
sum bel: 1.00000
...................................................
rse : 0.00000
...................................................
...................................................
```

This document explains Markov Localization in more detail, including the mathematical formulas.

### Particle Filter Localization implemented in 3DOF (x,y,yaw) and C++

Particle filters implement the **prediction-update** transitions of the filtering equation directly by using a genetic-type **mutation-selection** particle algorithm. The samples from the distribution are represented by a set of particles; each particle is assigned a likelihood weight that represents the probability of that particle being sampled from the probability density function.

The flowchart below represents the steps of the particle filter algorithm as well as its inputs.

The pseudocode steps below correspond to the steps in the algorithm flowchart: initialization, prediction, particle weight updates, and resampling.

Line 1: At the initialization step we estimate our position from GPS input. The subsequent steps in the process will refine this estimate to localize our vehicle.

Line 3: During the prediction step we apply the control input (yaw rate & velocity) to all particles.

Lines 4-5: During the update step, we update our particle weights using map landmark positions and feature measurements.

Lines 7-10: During resampling we draw M new particles (where M is the length of the particle array), each particle i (i is the particle index) drawn with probability proportional to its weight. This is implemented as a resampling wheel.

Line 11: The new set of particles represents the Bayes filter posterior probability. We now have a refined estimate of the vehicle's position based on the input evidence.

**Initialization**

The most practical way to initialize our particles and generate real-time output is to make an initial estimate using GPS input. As with all sensor-based operations, this step is affected by noise. Particles are initialized by sampling from a Gaussian distribution centered on the initial GPS position and heading estimates, accounting for Gaussian sensor noise.

```cpp
/*
 * Print out to the terminal 3 samples from a normal distribution with
 * mean equal to the GPS position and IMU heading measurements and
 * standard deviation of 2 m for the x and y position and 0.05 radians
 * for the heading of the car.
 */
#include <random> // Need this for sampling from distributions
#include <iostream>

using namespace std;

// @param gps_x GPS provided x position
// @param gps_y GPS provided y position
// @param psi   GPS provided yaw
void printSamples(double gps_x, double gps_y, double psi) {
  default_random_engine gen;
  double std_x, std_y, std_psi; // Standard deviations for x, y, and psi

  // Set standard deviations for x, y, and psi
  std_x = 2;
  std_y = 2;
  std_psi = 0.05;

  // Create normal (Gaussian) distributions for x, y, and psi
  normal_distribution<double> dist_x(gps_x, std_x);
  normal_distribution<double> dist_y(gps_y, std_y);
  normal_distribution<double> dist_psi(psi, std_psi);

  for (int i = 0; i < 3; ++i) {
    // Sample from these normal distributions,
    // where "gen" is the random engine initialized earlier.
    double sample_x = dist_x(gen);
    double sample_y = dist_y(gen);
    double sample_psi = dist_psi(gen);

    // Print the samples to the terminal.
    cout << "Sample " << i + 1 << " " << sample_x << " " << sample_y
         << " " << sample_psi << endl;
  }
}

int main() {
  // Set GPS provided state of the car.
  double gps_x = 4983;
  double gps_y = 5029;
  double psi = 1.201;

  // Sample from the GPS provided position.
  printSamples(gps_x, gps_y, psi);

  return 0;
}
```

**Prediction Step**

Now that we have initialized our particles it’s time to predict the vehicle’s position. Here we will use what we learned in the motion models lesson to predict where the vehicle will be at the next time step, by updating based on yaw rate and velocity, while accounting for Gaussian sensor noise.

**Data Association: Nearest Neighbor**

The data association problem is the problem of matching landmark measurements to objects in the real world, such as map landmarks. One technique for finding the right association is the Nearest Neighbor algorithm, shown below, which matches each measurement to the closest landmark.

The Nearest Neighbor Data Association has pros and cons:

Pros:

- easy to understand
- easy to implement
- works well in many situations

Cons:

- not robust to high density of measurements or map landmarks
- not robust to sensor noise
- not robust to errors in position estimates
- computationally inefficient
- does not take different sensor uncertainties into account

**Update Step**

The x and y errors are depicted from the point of view of the map (x is horizontal, y is vertical) rather than the point of view of the car, where x points in the direction of the car's heading (i.e., where the car is facing) and y is orthogonal (90 degrees) to x, pointing out of the left side of the car.

**Calculating Error**

Two errors are computed, one for position and one for rotation, as the root squared error between the best particle and the ground truth:

- Position RMSE = $\sqrt{(x_p - x_g)^2 + (y_p - y_g)^2}$
- Theta RMSE = $\sqrt{(\theta_p - \theta_g)^2}$

**Particle Weights**

The particle's final weight is calculated as the product of each measurement's Multivariate-Gaussian probability.

The Multivariate-Gaussian probability has two dimensions, x and y. The mean of the Multivariate-Gaussian is the measurement’s associated landmark position and the Multivariate-Gaussian’s standard deviation is described by our initial uncertainty in the x and y ranges. The Multivariate-Gaussian is evaluated at the point of the transformed measurement’s position. The formula for the Multivariate-Gaussian can be seen below.
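For uncorrelated x and y noise, the Multivariate-Gaussian used for each measurement takes the standard form:

```latex
\[
P(x, y) = \frac{1}{2\pi\sigma_x\sigma_y}
\exp\!\left(-\left(\frac{(x-\mu_x)^2}{2\sigma_x^2}
+ \frac{(y-\mu_y)^2}{2\sigma_y^2}\right)\right)
\]
```

where $(x, y)$ is the transformed observation, $(\mu_x, \mu_y)$ is the associated landmark position, and $\sigma_x, \sigma_y$ are the landmark measurement uncertainties.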

The project source code can be found here.

### Video result:

### Conclusions:

### Resources:

- https://en.wikipedia.org/wiki/Monte_Carlo_localization
- https://en.wikipedia.org/wiki/Recursive_Bayesian_estimation
- http://www.cs.cmu.edu/afs/cs/project/jair/pub/volume11/fox99a-html/node3.html
- http://www.coldvision.io/wp-content/uploads/2017/06/markov-localization-reference-1.pdf
- https://en.wikipedia.org/wiki/Nearest_neighbor_search
- http://ais.informatik.uni-freiburg.de/teaching/ws13/mapping/pdf/slam11-particle-filter-4.pdf