Autonomous Controls Update - 2/15/2019
Since acquiring our object recognition sensors, we've been calibrating them and setting them up with ROS, the software framework we'll use to link the sensor outputs to our sensor fusion, object detection filtering, SLAM, path planning, and path following algorithms. We're currently testing our PID control on an RC car while building the computational nodes for an EKF and SLAM.
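To give a feel for what the EKF node does, here's a minimal sketch of the predict/update cycle, reduced to a 1-D Kalman filter tracking position from noisy range measurements. All the noise values and the motion model are illustrative placeholders, not our actual vehicle model.

```python
# Minimal 1-D Kalman filter illustrating the predict/update cycle an
# EKF node performs. All numbers here are illustrative, not our model.

class KalmanFilter1D:
    def __init__(self, x0, p0, q, r):
        self.x = x0   # state estimate (position, m)
        self.p = p0   # estimate variance
        self.q = q    # process noise variance
        self.r = r    # measurement noise variance

    def predict(self, velocity, dt):
        # Constant-velocity motion model: x' = x + v*dt
        self.x += velocity * dt
        self.p += self.q

    def update(self, z):
        # Fuse a noisy position measurement z
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = KalmanFilter1D(x0=0.0, p0=1.0, q=0.01, r=0.25)
kf.predict(velocity=1.0, dt=0.1)
est = kf.update(z=0.12)  # estimate lands between prediction and measurement
```

The full EKF generalizes this to the car's multi-dimensional state (pose, heading, velocity) with Jacobians in place of the scalar gain, but the predict/update structure is the same.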
Turning on and getting test output from one of our LiDAR units. We’re currently working on converting the raw hex data from the unit to distance/angle coordinate pairs.
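The hex-to-coordinates conversion is essentially unpacking fixed-size binary records. The byte layout below is hypothetical (each unit's datasheet defines its own packet format); it just shows the shape of the decoding step, assuming repeated 4-byte records of a little-endian angle in hundredths of a degree followed by a distance in millimeters.

```python
import struct

# Sketch of decoding a raw LiDAR packet into (angle, distance) pairs.
# The byte layout is hypothetical -- check the unit's datasheet. We
# assume repeated 4-byte records: a little-endian uint16 angle in
# hundredths of a degree, then a uint16 distance in millimeters.

def decode_packet(raw: bytes):
    pairs = []
    for angle_raw, dist_mm in struct.iter_unpack('<HH', raw):
        angle_deg = angle_raw / 100.0
        dist_m = dist_mm / 1000.0
        pairs.append((angle_deg, dist_m))
    return pairs

# Example: two measurements, 90.00 deg at 1.500 m and 180.00 deg at 0.750 m
raw = struct.pack('<HHHH', 9000, 1500, 18000, 750)
print(decode_packet(raw))  # [(90.0, 1.5), (180.0, 0.75)]
```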
Experimenting with our stereo camera. We’re currently calibrating our device for different lighting conditions and developing our object recognition pipeline to extract distance data more efficiently.
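The distance extraction rests on one relationship: for a rectified stereo pair, depth is inversely proportional to disparity, Z = f·B/d. A sketch with placeholder calibration values (not our camera's actual focal length or baseline):

```python
# Sketch of how stereo disparity maps to distance. The focal length
# and baseline are placeholder calibration values, not our camera's.

FOCAL_PX = 700.0    # focal length in pixels (from calibration)
BASELINE_M = 0.12   # separation between the two lenses, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Distance Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px

# A cone matched 42 px apart in the two images sits at 2.0 m
print(depth_from_disparity(42.0))  # 2.0
```

This is also why lighting matters: poor lighting degrades the pixel matching that produces the disparity in the first place.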
Optimizing our path planning algorithms. The first image is a top-down view showing the fastest time/distance path through a sample course. The second shows the velocity profile along that path, per rough vehicle dynamics calculations. We're working to increase algorithm speed and characterize the speed profiles more accurately.
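The rough velocity-profile calculation can be sketched as: cap the speed at each waypoint by lateral grip (v = √(a_lat,max / κ)), then sweep forward and backward along the path so acceleration and braking limits are respected. The limits below are illustrative numbers, not our car's measured capabilities.

```python
import math

# Sketch of a velocity profile from path curvature. Cap each point by
# lateral grip, then forward/backward passes enforce accel/brake
# limits (v2^2 <= v1^2 + 2*a*ds). Limits here are illustrative.

A_LAT_MAX = 8.0    # max lateral acceleration, m/s^2
A_LONG_MAX = 5.0   # max longitudinal accel/braking, m/s^2
V_MAX = 25.0       # straight-line speed cap, m/s

def velocity_profile(curvatures, ds):
    # 1) lateral-grip limit at each waypoint
    v = [min(V_MAX, math.sqrt(A_LAT_MAX / k)) if k > 1e-6 else V_MAX
         for k in curvatures]
    # 2) forward pass: limit acceleration out of corners
    for i in range(1, len(v)):
        v[i] = min(v[i], math.sqrt(v[i - 1] ** 2 + 2 * A_LONG_MAX * ds))
    # 3) backward pass: limit braking into corners
    for i in range(len(v) - 2, -1, -1):
        v[i] = min(v[i], math.sqrt(v[i + 1] ** 2 + 2 * A_LONG_MAX * ds))
    return v

# Straight, tight corner (radius 2 m), straight; waypoints 1 m apart
profile = velocity_profile([0.0, 0.5, 0.0], ds=1.0)
```

The corner point is grip-limited to √(8.0/0.5) = 4 m/s, and the backward pass pulls down the speed on the approach so the car can brake in time.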
Our current top-level block diagram for the autonomous data pipeline. Sensors pick up cone and car locations before feeding them into the NVIDIA. The output is desired steering, velocity, and acceleration, which the ECU converts to actuation via PID control.
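The loop the ECU closes can be sketched as a standard PID step: it takes the desired velocity from the planner and the measured velocity, and outputs an actuation command. The gains below are placeholders we'd tune on the RC car, not values we've settled on.

```python
# Minimal PID sketch of the loop the ECU closes: desired velocity in,
# actuation command out. Gains are placeholders, not tuned values.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

pid = PID(kp=0.8, ki=0.1, kd=0.05)
# Car at 3 m/s, planner wants 5 m/s, 50 Hz control loop
cmd = pid.step(setpoint=5.0, measured=3.0, dt=0.02)  # positive -> throttle
```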
Our stereo camera feeding its IMU's directional data to a ROS topic, visualized using rviz within ROS.
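The orientation on that topic arrives as a quaternion (the convention for ROS IMU messages). A small sketch of converting it to roll/pitch/yaw, which is a handy way to sanity-check what rviz is drawing:

```python
import math

# Convert an orientation quaternion (x, y, z, w), as carried in ROS
# IMU messages, to roll/pitch/yaw in radians.

def quaternion_to_euler(x, y, z, w):
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - z * x))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

# Identity quaternion -> no rotation
print(quaternion_to_euler(0.0, 0.0, 0.0, 1.0))  # (0.0, 0.0, 0.0)
```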