• Johnny Kohlbeck

End of Semester Autonomous Controls Update - 5/14

Our autonomous controls and electrical subteam has been hard at work the past month on all stages of our autonomous system “brain”. The “brain” is composed of different stages that take in sensor input, compare it to previous observations, and then tell the car how to behave appropriately. Our team has also been active in supporting our combustion and electric teams as they gear up for their respective competitions. See below for pictures, videos, and a summary of what we’ve been up to!

While we are not competing until Summer 2020 and so don’t have a full vehicle to show off yet, on April 27th we attended the unveiling event for our combustion and electric teams to support them in displaying their vehicles to sponsors, parents, university faculty, and others interested in our Wisconsin Racing program. It was fantastic to see the cars put together after working side-by-side with the other teams throughout the year.

Unveiling was also a great opportunity for us to demonstrate the technology we’ve been working with and explain it to event attendees. We had our LiDAR on display and were able to show how some of our object recognition and path planning systems work. Shown here is our MRS-6124 LiDAR unit sponsored by SICK and our YOLOv3 image classifier model being applied to a video made public by a previous competitor.

We feel it is very important for others to understand how autonomous vehicles work so people can get a better idea of what to expect from future autonomous cars and make more educated decisions about using them in their own lives. For example, we explained how our car will be Level 4 autonomy, since it is fully self-driving but operates within an enclosed environment, and then elaborated on some of the differences between the levels of autonomy on the road today and where we see industry trends heading.

Object Recognition Algorithm Improvement

We began this year using a Haar Cascade Classifier with OpenCV because it was easy to implement, but we ran into limitations in its classification speed and accuracy. While it was a good starting point, we needed something faster and more accurate, so we investigated the performance of existing methods.

We compared single stage (YOLO, SSD, etc.) and two stage detectors (Faster-RCNN, R-FCN, etc.) among other existing techniques. With computer vision, there is no universally “best” algorithm, as weighing detection speed against accuracy is highly application dependent. Since we are building an autonomous race car, we wanted an algorithm that was very fast. Additionally, since we know our course will only have blue and yellow cones on it, we are willing to sacrifice some accuracy; we believe even a lower-accuracy classifier will easily tell the two apart.

This is a generalization of our findings on the performance trade-offs between different image classifiers. The latest YOLO version (v3) offers a significant accuracy and speed bump compared to previous YOLO versions due to the recent switch from DarkNet-19 to DarkNet-53 (19 to 53 convolutional layers). So, after researching multiple existing methods of computer vision and factoring in our application, we decided to pursue YOLOv3.
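To make the pipeline concrete, here is a minimal sketch (the function name and two-class setup are ours for illustration) of how a single raw YOLOv3 output row is decoded into a class and a pixel bounding box. The network forward pass itself, e.g. via OpenCV's `cv2.dnn.readNetFromDarknet`, is omitted:

```python
import numpy as np

def decode_yolo_row(row, img_w, img_h, conf_threshold=0.5):
    """Decode one raw YOLOv3 output row into (class_id, confidence, box), or
    None if below the confidence threshold.

    Each row is [cx, cy, w, h, objectness, class scores...], with the box
    center and size normalized to the image dimensions.
    """
    scores = row[5:]
    class_id = int(np.argmax(scores))
    confidence = float(scores[class_id])
    if confidence <= conf_threshold:
        return None
    cx, cy = row[0] * img_w, row[1] * img_h
    bw, bh = row[2] * img_w, row[3] * img_h
    # Convert center/size to a top-left (x, y, width, height) pixel box.
    return class_id, confidence, (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
```

In practice this decoding would run over every row of each YOLO output layer, followed by non-maximum suppression to merge overlapping boxes.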

Training Data

In order to start training our image classifier, we needed lots of pictures of cones. We started taking our own and reached out to the FSOCO (Formula Student Objects in Context) database. FSOCO is an inter-team database of pre-labeled cone images that asks each team for a 600-picture contribution to the data set. This saves time for all teams involved by making it easier and quicker to get reliable training data.

To label the cones we use labelImg, a tool for labeling objects in training images. After loading an image, a user simply selects “Create RectBox” and drags a rectangle around the entire object to be labeled (even if some of the object is blocked by another object in front of it). This box is then used by the YOLOv3 algorithm for training and, ultimately, for detecting an object. In the image above we have already labeled the front blue and yellow cones with classes of one and zero, respectively, and are in the process of labeling the middle yellow cone.
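For reference, YOLO-format labels store each box as a class index plus a center and size normalized to the image dimensions. A minimal sketch of that conversion (the function name is ours, not part of labelImg, which can export this format directly):

```python
def rect_to_yolo_label(class_id, x, y, w, h, img_w, img_h):
    """Convert a pixel-space rectangle (top-left x, y, width, height) into a
    YOLO-format label line: class index, then box center and size, each
    normalized to the image dimensions."""
    cx = (x + w / 2) / img_w
    cy = (y + h / 2) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"
```

One such line per labeled object is written to a .txt file alongside each training image.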

MRS-6124 LiDAR (sponsored by SICK) Testing

We’ve been working to characterize the performance of our MRS-6124 LiDAR unit, sponsored to our team by SICK. The unit has 24 scan planes with 120° horizontal and 15° vertical fields of view. To test its accuracy and assist with object detection, we set up a cone in front of a wall at a known distance from the LiDAR. We then visualized the output in ROS and measured the distances to the known true wall and cone locations. After measuring the spacing between scan planes to be ~0.06m, we measured error due to noise of ~0.12m in either direction (roughly two scan planes in front of and behind the true locations). This matches the published systematic error of +/-0.125m in the product’s user manual.

Testing setup.

ROS output, red star and line mark where the wall is truly measured to be.

ROS output, red star marks where the cone is truly measured to be.

We will be continuing to test the performance of the unit for cones at different distances and angles, under different environmental conditions, and with different forms of motion applied to the unit itself (e.g. on a moving platform). We are also in the process of analyzing the data to detect cones within the overall point cloud. We will likely be implementing ground-plane removal techniques as well to assist with detection methods.
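As a starting point, ground-plane removal can be as simple as a height cut. Below is a minimal sketch under the assumption of a flat ground and a known sensor mounting height (both values are placeholders); a RANSAC plane fit would replace this on uneven terrain:

```python
import numpy as np

def remove_ground(points, sensor_height=0.3, tolerance=0.05):
    """Drop near-ground returns from an (N, 3) point cloud.

    Assumes z points up in the sensor frame and the LiDAR sits sensor_height
    metres above locally flat ground, so ground returns cluster near
    z = -sensor_height. tolerance absorbs measurement noise.
    """
    z = points[:, 2]
    keep = z > (-sensor_height + tolerance)
    return points[keep]
```

Whatever remains after this cut (ideally just cones and walls) can then be clustered into candidate objects.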

LD-MRS LiDAR (sponsored by SICK) Testing

SICK sponsored us with a second LiDAR unit, the LD-MRS. Compared to the MRS-6124, this unit has a smaller field of view (110° horizontal, 6.5° vertical) but faster processing and a longer range (300m vs. 200m). Shown is sample output visualized through ROS (via rviz).

The LD-MRS also comes with built-in object recognition, as the video below shows. This feature puts a bounding box and motion vector on regions the unit believes are connected. We are now testing how close together objects (specifically cones) can be before they are lumped together. The results of that testing will help us decide which LiDAR unit we will ultimately use on our car.

State Estimation/SLAM

Determining where the cones are relative to the car, and where the car itself is as it moves through space, is a complex task, but it can be framed as a SLAM problem.

SLAM attempts to estimate the true state of a system using measurement data and known equations modeling the system. Measurements include distances to cones as well as vehicle motion, such as from the Ellipse-2D GPS/IMU that was sponsored for us by SBG Systems. Combining this data allows us to describe both the system itself and the things around it, hence Simultaneous Localization and Mapping (SLAM). Throughout the year we’ve been researching existing techniques such as variations of Kalman Filters (standard, EKF, UKF), FastSLAM (a Rao-Blackwell particle filter approach), and GraphSLAM (a least squares optimization approach), among others.

To summarize our findings, Kalman filters are relatively well known compared to other methods and among the least complex algorithms as far as SLAM is concerned. The Kalman Filter on its own struggles with non-linear systems, so the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) were developed to handle those cases. The EKF came first, and the UKF was formed later as a more accurate, faster solution. FastSLAM seems like it will be a better long-term option due to its increased accuracy and speed, a result of its lower space complexity compared to Kalman Filters (O(NM) or O(NlogM) depending on implementation, vs. O(N^2) for a Kalman Filter). However, FastSLAM uses an EKF/UKF hidden inside its particle samples, so we would need a working EKF/UKF first. GraphSLAM seems best for larger maps, but as it is relatively new and our environment will be small (by SLAM standards), we think it may be more work to implement than it would be worth for the time being.

After weighing the pros and cons of each method, along with the level of existing documentation and the feasibility of implementation, we’ve decided to start with a UKF. It seems to be the easiest to implement while still being able to handle nonlinear systems (which we have, with position derived from acceleration through kinematic constraints), and it does so more accurately than an EKF.

The main difference between the EKF and UKF is in how they handle a transformed distribution. When a non-linear transformation is applied, the EKF linearizes the function about the current mean and propagates the distribution through that linear approximation to arrive at an estimate. The UKF instead selects a small set of points (sigma points), passes them directly through the non-linear transformation, and then re-evaluates the mean and variance based on those transformed points. The images below help to illustrate the resulting differences.
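The sigma-point step at the heart of the UKF is the unscented transform. Here is a compact sketch of the standard 2n+1-point version (the scaling parameters alpha, beta, kappa are conventional knobs; the values shown are illustrative defaults, not our tuned settings):

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f using
    2n+1 sigma points; returns the transformed mean and covariance."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    # Sigma points: the mean, plus/minus scaled columns of the covariance root.
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)
    sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
    # Weights for the mean (wm) and covariance (wc) reconstructions.
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    # Pass each sigma point directly through the nonlinearity.
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    diff = y - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov
```

For a linear function the transform recovers the input mean and covariance exactly, which is a handy sanity check; the payoff comes on nonlinear functions, where it avoids the EKF's linearization error.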

Resulting mean estimate from the EKF. While a better estimate of the nonlinear transform than the standalone Kalman Filter's, the predicted mean (red triangle) is still far from the true mean (blue star).

Resulting mean estimate from UKF. The red dots show the sigma points chosen to represent the distribution through the nonlinear transformation, and the improved estimation accuracy compared to the EKF is clear.

We found these images in a GitHub repository that does an excellent job explaining how the UKF works, as well as how Kalman Filters work more generally.

From our research, the UKF is a very accurate yet not overly difficult solution to the SLAM problem, and it appears to be a great starting point that can be scaled up to FastSLAM later if needed. With the many tutorials and papers written on Kalman Filters, we feel we understand them best and so expect to be best able to build an implementation for our needs.

We were able to build a simple Kalman Filter that accurately predicts true x and y velocity from noisy data, and we are now working on upgrading it to a UKF.
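A scalar version of that idea looks like the sketch below, filtering one noisy velocity channel (our actual filter tracks x and y together; the noise variances here are placeholder values):

```python
import numpy as np

def kalman_filter_velocity(measurements, q=0.01, r=0.25):
    """Scalar Kalman filter estimating a slowly varying velocity from noisy
    measurements. q is the process noise variance, r the measurement noise
    variance."""
    x, p = measurements[0], 1.0  # initial state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: random-walk model, velocity assumed roughly constant.
        p = p + q
        # Update: blend the prediction with the measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)
```

The gain k settles to a steady value that balances trust in the model against trust in the sensor; the UKF upgrade replaces the scalar predict step with sigma-point propagation through the vehicle's nonlinear motion model.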

MPC-based Path Planning

Since the beginning of the year we’ve been implementing and testing different path planning techniques to optimize the trade-off between shortest distance and fastest time through a course. It is a trade-off because at times it is advantageous to take a longer racing line in order to allow the car to maintain a higher speed. There are many ways of characterizing a “best” racing line depending on the type of turn and conditions, so we generally focus on being tangent to the inside track boundary at the apex of the turn.

Once we were able to do this we wanted to improve on how we were finding the desired velocity at each point throughout the track. To assist with the vehicle dynamics needed for this we implemented an MPC-based algorithm.

The algorithm searches ahead over a set horizon and, using information about the stretch of track behind it, finds an optimal trajectory to get the vehicle through the course as fast as possible. With this technique we are able to find the fastest path and its associated velocity profile. This information will then be fed into a PID controller to create steering and acceleration requests for the rest of our car.
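The PID stage at the end of that chain is straightforward; a minimal sketch (gains and time step here are placeholders, not our tuned values) of a discrete controller that turns trajectory error into a steering or acceleration request:

```python
class PID:
    """Discrete PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        """Advance one time step with the current error and return the command."""
        self.integral += error * self.dt
        # No derivative term on the very first sample (no previous error yet).
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

One such controller per actuated quantity (e.g. one for cross-track error driving steering, one for speed error driving throttle/brake) would close the loop on the MPC's planned trajectory.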
