Implementing algorithms on topics such as scene understanding, object recognition, or the cognitive neuroscience of visual object recognition.
Implementing algorithms on topics such as machine learning, deep learning, or reinforcement learning.
Achieved using CUDA parallelization and highly optimized data structures and algorithms.
Using C++, GPU/CUDA, Python, OpenCV, Robot Operating System (ROS), Linux, Eclipse, CMake, Git, Qt, design patterns, OOP, UML, TDD.
Robotic automation has transformed the manufacturing industry and has the potential to change many other aspects of our lives. However, robotics has made relatively little progress in other important industries, which have complex and time-variant landscapes. Vision is the missing capability that currently prevents robots from performing useful tasks in the complex, unstructured, and dynamically changing environments in which we live and work.
Seeing is much more than just processing images. It is a complex process, tightly coupled with memory, that enables the scene understanding required to robustly perform tasks involving objects and places. Both are tightly coupled with action, providing rapid and continuous feedback for control.
For these reasons, combining image processing with machine learning is highly advisable. All algorithms are developed in a manner that enables real-time execution on commodity embedded systems.
#1 Software Development
Java, C++, CUDA, Linux, Eclipse, Qt, Git, Agile
#2 Robotic Vision
GPU, CUDA, OpenCV
#3 Machine Learning
Object Recognition, Deep Learning, Reinforcement Learning
#4 Embedded Systems
ROS, NVIDIA Jetson
In the 10th project from the Self-Driving Car Engineer program designed by Udacity, we implemented a Model Predictive Controller (MPC) to drive the car around the track. This time, however, we are not given the cross-track error; we have to calculate it ourselves. Additionally, there is a 100 millisecond latency between actuation commands on top of the connection[…]
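A common way to handle that latency is to propagate the vehicle state forward by the delay before handing it to the solver. Below is a minimal Python sketch, assuming the standard kinematic bicycle model and the Lf = 2.67 m wheelbase-to-CoG value used with the Udacity simulator; the function and parameter names are illustrative, not the project's actual API:

```python
import math

def predict_state(x, y, psi, v, steering, throttle, dt=0.1, Lf=2.67):
    """Propagate a kinematic bicycle model forward by dt seconds.

    Used to compensate for actuation latency: the state fed to the
    MPC solver is where the car *will be* after the 100 ms delay,
    not where it is when the telemetry arrives.
    """
    x_new = x + v * math.cos(psi) * dt
    y_new = y + v * math.sin(psi) * dt
    psi_new = psi + v / Lf * steering * dt
    v_new = v + throttle * dt
    return x_new, y_new, psi_new, v_new
```

The MPC then optimizes its trajectory starting from this predicted state, so the first actuation it returns is already valid at the moment it takes effect.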
In the 8th project from the Self-Driving Car Engineer program designed by Udacity, we implemented a 2-dimensional, 3-DOF particle filter in C++ to localize our vehicle on a known map. Our vehicle starts at an unknown location, and using the particle filter approach we need to determine where it is. The particle filter[…]
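The core of the approach is a predict–weight–resample cycle. Here is a simplified 1D Python sketch of one cycle (the real project works in 2D with a heading angle, and all names and noise values here are illustrative):

```python
import random, math

def particle_filter_step(particles, control, landmark, measurement,
                         motion_noise=0.1, sense_noise=0.5):
    """One predict-weight-resample cycle of a 1D particle filter.

    particles: list of candidate positions; control: commanded
    displacement; measurement: noisy measured distance to a known
    landmark on the map.
    """
    # Predict: move each particle, adding motion noise.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = []
    for p in moved:
        expected = abs(landmark - p)
        w = math.exp(-0.5 * ((measurement - expected) / sense_noise) ** 2)
        weights.append(w)
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles proportionally to their weights.
    return random.choices(moved, weights=weights, k=len(moved))
```

Iterating this step concentrates the particle cloud around the true vehicle position, and the weighted mean of the particles serves as the pose estimate.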
In the 6th project from the Self-Driving Car Engineer program designed by Udacity, we utilize an Extended Kalman Filter to estimate the state of a moving object of interest from noisy LIDAR and radar measurements. This post builds up from a very simple Kalman filter implementation for 1D motion smoothing to a complex[…]
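That 1D starting point can be sketched in a few lines of Python; the parameter names and default noise values below are illustrative:

```python
def kalman_1d(z, x0=0.0, p0=1000.0, q=0.01, r=1.0):
    """Minimal 1D Kalman filter: smooth noisy position measurements z.

    x0/p0: initial state estimate and its variance; q: process
    noise; r: measurement noise. Returns the filtered estimates.
    """
    x, p = x0, p0
    estimates = []
    for meas in z:
        # Predict (static motion model; uncertainty grows by q).
        p += q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x += k * (meas - x)
        p *= (1.0 - k)
        estimates.append(x)
    return estimates
```

The Extended Kalman Filter generalizes exactly this predict/update loop to multidimensional state and to the nonlinear radar measurement model, linearized with a Jacobian.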
In this 5th project from the Self-Driving Car Engineer program designed by Udacity, our goals are the following: perform a Histogram of Oriented Gradients (HOG) feature extraction on a labeled training set of images and train a linear SVM classifier; optionally, you can also apply a color transform and append binned color features, as[…]
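The heart of HOG is a per-cell histogram of gradient orientations. A minimal Python sketch of that step, assuming unsigned 0–180° orientations and 9 bins as in the classic HOG setup (function name and layout are illustrative; real pipelines use a library implementation):

```python
import math

def hog_cell_histogram(cell, bins=9):
    """Gradient-orientation histogram for one HOG cell (list of rows).

    Computes central-difference gradients on the cell interior and
    accumulates gradient magnitude into unsigned orientation bins
    (0-180 degrees) - the core step of HOG feature extraction.
    """
    h = [0.0] * bins
    rows, cols = len(cell), len(cell[0])
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = cell[i][j + 1] - cell[i][j - 1]
            gy = cell[i + 1][j] - cell[i - 1][j]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            h[int(ang / 180.0 * bins) % bins] += mag
    return h
```

Concatenating these histograms over all cells (with block normalization) yields the feature vector that the linear SVM is trained on.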
In this 4th project from the Self-Driving Car Engineer program designed by Udacity, our goal is to write a software pipeline to identify the lane boundaries in a video from a front-facing camera on a car. The camera calibration images, test road images, and project videos are available in this repository. The goals / steps of[…]
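A central step of such a pipeline is fitting a second-order polynomial to the detected lane pixels. A minimal Python sketch that solves the least-squares normal equations directly (names are illustrative; a real pipeline would typically call a library fit):

```python
def fit_lane_poly(ys, xs):
    """Fit x = a*y^2 + b*y + c to lane-pixel coordinates.

    x is modeled as a function of y because lane lines are
    near-vertical in the image. Solves the 3x3 normal equations
    of ordinary least squares by Gaussian elimination.
    """
    # Normal equations A^T A w = A^T x for rows A = [y^2, y, 1].
    s = [sum(y ** k for y in ys) for k in range(5)]        # sums of y^0..y^4
    t = [sum(x * y ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[4], s[3], s[2]], [s[3], s[2], s[1]], [s[2], s[1], s[0]]]
    b = [t[2], t[1], t[0]]
    # Forward elimination to upper-triangular form.
    for i in range(3):
        for j in range(i + 1, 3):
            f = m[j][i] / m[i][i]
            for k in range(3):
                m[j][k] -= f * m[i][k]
            b[j] -= f * b[i]
    # Back substitution.
    w = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        w[i] = (b[i] - sum(m[i][k] * w[k] for k in range(i + 1, 3))) / m[i][i]
    return w  # (a, b, c)
```

The fitted coefficients are then used to draw the lane boundary and to estimate the curvature radius and the vehicle's offset from the lane center.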
The Self-Driving Car Engineer program designed by Udacity is currently the only machine learning program focused entirely on autonomous driving. The program offers a world-class training staff and prominent partners such as NVIDIA and Mercedes-Benz. Besides interesting lessons and exercises, the program expects students to prove their deep learning skills in real-world projects.[…]
This prototype tests different implementations of image classification with deep learning, Convolutional Neural Networks (CNN), Caffe, OpenCV 3.x, and CUDA. Topics covered: image classification; neural networks vs. deep learning; CNN vs. R-CNN; cuDNN; Caffe; ImageNet and its challenges; testing OpenCV's DNN CPU classification using GoogLeNet, a trained network from the Caffe model zoo; testing of[…]
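After the network's forward pass, the raw class scores are converted into probabilities and the most likely labels reported. A minimal Python sketch of that post-processing step (names are illustrative; in the prototype this runs on GoogLeNet's 1000 ImageNet scores):

```python
import math

def top_k_predictions(scores, labels, k=5):
    """Softmax over raw network scores, then return the top-k labels.

    Subtracting the max score before exponentiating keeps the
    softmax numerically stable for large score magnitudes.
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(zip(labels, probs), key=lambda t: t[1], reverse=True)
    return ranked[:k]
```

Reporting the top-5 labels rather than only the argmax is the convention used by the ImageNet challenge itself.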
This prototype tests different implementations of the real-time feature-based object detection with SURF, KNN, FLANN, OpenCV 3.X and CUDA. Object detection is the process of finding instances of real-world objects such as faces, bicycles, and buildings in images or videos. Object detection algorithms typically use extracted features and learning algorithms to recognize instances of an object[…]
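A typical filtering step after k-nearest-neighbor descriptor matching is Lowe's ratio test: keep a match only when the best candidate is clearly closer than the runner-up. A minimal brute-force Python sketch (names are illustrative; the prototype does this with FLANN on SURF descriptors):

```python
def ratio_test_matches(query_desc, train_desc, ratio=0.75):
    """Brute-force 2-NN matching with Lowe's ratio test.

    For each query descriptor, find its two nearest train
    descriptors (Euclidean distance) and keep the match only if
    the best distance is well below the second best, rejecting
    ambiguous correspondences.
    """
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    good = []
    for qi, q in enumerate(query_desc):
        d = sorted((dist2(q, t), ti) for ti, t in enumerate(train_desc))
        best, second = d[0], d[1]
        if best[0] < (ratio ** 2) * second[0]:   # squared distances
            good.append((qi, best[1]))
    return good
```

The surviving matches are then passed to a geometric verification step (e.g. a homography fit with RANSAC) to locate the object in the scene.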
This prototype tests implementations of the pencil sketch operation for images and videos using C++, CUDA, OpenCV 3.X. Sketching is a natural way of expressing some types of ideas. It conveys information that can be really hard to explain using text, and at the same time it does not require a tremendous amount of effort. It[…]
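The classic pencil-sketch recipe is: convert to grayscale, invert, blur the inverted copy, then color-dodge blend it with the original. A minimal pure-Python sketch on small images (names are illustrative; the actual prototype performs these stages with OpenCV and CUDA):

```python
def pencil_sketch(gray, blur_radius=1):
    """Pencil-sketch effect on a 2D grayscale image (values 0-255).

    Flat regions dodge to white, while edges keep their dark
    strokes - which is what makes the result look hand-drawn.
    """
    h, w = len(gray), len(gray[0])
    inverted = [[255 - v for v in row] for row in gray]
    # Simple box blur with clamped borders.
    r = blur_radius
    blurred = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [inverted[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                    for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            blurred[i][j] = sum(vals) // len(vals)
    # Color dodge: result = base * 255 / (255 - blend), clipped at 255.
    return [[min(255, gray[i][j] * 255 // max(1, 255 - blurred[i][j]))
             for j in range(w)] for i in range(h)]
```

In the GPU version each of these stages is embarrassingly parallel, which is why the effect maps so well onto CUDA.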
This prototype tests image segmentation with several watershed-based algorithms: the marker-controlled variant provided by OpenCV 3.X, the graph-based variant Power Watershed implemented in C++, a unified version of the waterfall, standard, and P algorithms implemented in C++, and a CUDA implementation of the standard algorithm. In computer vision, image segmentation is the[…]
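The idea behind the marker-controlled variant can be sketched as priority flooding from the seed markers: treat the grayscale image as a topographic relief and always flood the lowest unlabeled pixel next. A simplified Python sketch that assigns every pixel to a basin without marking separate watershed-line pixels (names are illustrative):

```python
import heapq

def watershed(image, markers):
    """Marker-controlled watershed via priority flooding.

    image: 2D grayscale relief; markers: 2D labels (>0 are seeds,
    0 is unknown). Labels grow outward from the seeds, flooding
    pixels in order of increasing gray value.
    """
    h, w = len(image), len(image[0])
    labels = [row[:] for row in markers]
    heap = []
    for i in range(h):
        for j in range(w):
            if labels[i][j] > 0:
                heapq.heappush(heap, (image[i][j], i, j))
    while heap:
        _, i, j = heapq.heappop(heap)
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] == 0:
                labels[ni][nj] = labels[i][j]
                heapq.heappush(heap, (image[ni][nj], ni, nj))
    return labels
```

Basins from different seeds meet at the ridge lines of the relief, which is exactly where object boundaries are expected.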