In this project we are going to create a simple perception stack for self-driving cars (SDCs). For simplicity, our only data source will be video streams from cameras. We are mainly going to analyze the road ahead: detecting the lane lines, detecting other cars/agents on the road, and estimating some useful information that may help other SDC stacks.
After cloning the repository and adding test images/videos to their directories:
- Thresholding is applied to the region of interest, then the image is transformed to a bird's-eye view (a minimal sketch of this step follows the list)
- The binary image is warped, and a histogram of the white pixels across the image columns locates the base of each lane line
- Using the sliding-window technique, the white pixels in the warped image are detected, and a best-fit polynomial is drawn through each line's pixels (see the second sketch below)
- The lane lines are overlaid in red and blue, and the lane itself in green
- As a last step, the center offset and the radius of road curvature are calculated (see the third sketch below)
- A new directory called 'output_images' is created, and the output image is saved there automatically
- Finally, the process can be repeated with any other image in the test_images directory; to quit, simply type 'q'
- To process a video, run script.sh and select video instead
- The output is a video with the lane lines overlaid in red and blue and the lane itself in green
- The output video is saved in the output_videos directory
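The sketches below illustrate how these steps could be implemented; they are minimal sketches in OpenCV/NumPy, not the project's exact code. First, thresholding and the bird's-eye-view transform; the threshold value and the source/destination points are placeholder assumptions:

```python
import cv2
import numpy as np

def threshold_and_warp(image):
    """Threshold the road region, then warp it to a top-down (bird's-eye) view."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Keep only bright pixels; 180 is a placeholder threshold.
    _, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)

    h, w = binary.shape
    # Trapezoid over the road ahead (placeholder coordinates)...
    src = np.float32([[0.45 * w, 0.63 * h], [0.55 * w, 0.63 * h],
                      [0.90 * w, 0.95 * h], [0.10 * w, 0.95 * h]])
    # ...mapped to a rectangle in the warped image.
    dst = np.float32([[0.20 * w, 0], [0.80 * w, 0],
                      [0.80 * w, h], [0.20 * w, h]])

    M = cv2.getPerspectiveTransform(src, dst)
    Minv = cv2.getPerspectiveTransform(dst, src)  # used later to unwarp
    warped = cv2.warpPerspective(binary, M, (w, h))
    return warped, M, Minv
```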
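Next, the histogram and sliding-window search with a second-order polynomial fit through each line's pixels; `nwindows`, `margin`, and `minpix` are assumed tuning parameters:

```python
import numpy as np

def fit_lane_lines(warped, nwindows=9, margin=100, minpix=50):
    """Find lane pixels with sliding windows and fit x = Ay^2 + By + C per line."""
    # Histogram of white pixels in the lower image half; its two peaks
    # mark the most likely base x positions of the left and right lines.
    histogram = np.sum(warped[warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx_current = np.argmax(histogram[:midpoint])
    rightx_current = np.argmax(histogram[midpoint:]) + midpoint

    nonzeroy, nonzerox = warped.nonzero()
    window_height = warped.shape[0] // nwindows
    left_inds, right_inds = [], []

    for window in range(nwindows):
        y_low = warped.shape[0] - (window + 1) * window_height
        y_high = warped.shape[0] - window * window_height
        in_rows = (nonzeroy >= y_low) & (nonzeroy < y_high)
        good_left = (in_rows & (np.abs(nonzerox - leftx_current) < margin)).nonzero()[0]
        good_right = (in_rows & (np.abs(nonzerox - rightx_current) < margin)).nonzero()[0]
        left_inds.append(good_left)
        right_inds.append(good_right)
        # Re-center the next window on the mean x of the pixels just found.
        if len(good_left) > minpix:
            leftx_current = int(nonzerox[good_left].mean())
        if len(good_right) > minpix:
            rightx_current = int(nonzerox[good_right].mean())

    left_inds = np.concatenate(left_inds)
    right_inds = np.concatenate(right_inds)
    left_fit = np.polyfit(nonzeroy[left_inds], nonzerox[left_inds], 2)
    right_fit = np.polyfit(nonzeroy[right_inds], nonzerox[right_inds], 2)
    return left_fit, right_fit
```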
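Finally, the overlay and the curvature/offset measurements. Here `Minv` is the inverse perspective matrix returned by the first sketch, and the meters-per-pixel constants are assumed calibration values:

```python
import cv2
import numpy as np

def draw_and_measure(image, warped, left_fit, right_fit, Minv,
                     ym_per_pix=30 / 720, xm_per_pix=3.7 / 700):
    """Overlay the lane on the road image; return it with curvature and offset."""
    h, w = warped.shape
    ploty = np.linspace(0, h - 1, h)
    left_x = np.polyval(left_fit, ploty)
    right_x = np.polyval(right_fit, ploty)

    # Lane area in green, left line in red, right line in blue (BGR colors).
    overlay = np.zeros((h, w, 3), dtype=np.uint8)
    left_pts = np.column_stack((left_x, ploty)).astype(np.int32)
    right_pts = np.column_stack((right_x, ploty)).astype(np.int32)
    lane_poly = np.vstack((left_pts, right_pts[::-1]))
    cv2.fillPoly(overlay, [lane_poly], (0, 255, 0))
    cv2.polylines(overlay, [left_pts], False, (0, 0, 255), 15)
    cv2.polylines(overlay, [right_pts], False, (255, 0, 0), 15)

    # Unwarp the overlay back to the camera perspective and blend it in.
    unwarped = cv2.warpPerspective(overlay, Minv, (image.shape[1], image.shape[0]))
    result = cv2.addWeighted(image, 1.0, unwarped, 0.3, 0)

    # Curvature radius of the left line, R = (1 + (2Ay + B)^2)^(3/2) / |2A|,
    # with the fit redone in meters and evaluated at the bottom of the image.
    fit_m = np.polyfit(ploty * ym_per_pix, left_x * xm_per_pix, 2)
    y_eval = (h - 1) * ym_per_pix
    curvature = (1 + (2 * fit_m[0] * y_eval + fit_m[1]) ** 2) ** 1.5 / abs(2 * fit_m[0])

    # Center offset: camera center (image midpoint) vs. lane midpoint, in meters.
    lane_center = (left_x[-1] + right_x[-1]) / 2
    offset = (w / 2 - lane_center) * xm_per_pix
    return result, curvature, offset
```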
Link to output video: https://drive.google.com/drive/folders/17ZdWBvtGSpbW7l-AJb6H5U5nit9g6d_m?usp=sharing
Before running this code, you will need to download yolov3.cfg, yolov3.weights, and coco.names, then put them in the 'part1' directory (a loading sketch follows below).
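A minimal sketch of how these files could be loaded with OpenCV's DNN module; the paths, input size, and confidence threshold are assumptions, and the project's actual detection code may differ:

```python
import cv2

# Load the network and class names from the 'part1' directory.
net = cv2.dnn.readNetFromDarknet("part1/yolov3.cfg", "part1/yolov3.weights")
with open("part1/coco.names") as f:
    classes = [line.strip() for line in f]

def detect_agents(image, conf_threshold=0.5):
    """Run one YOLOv3 forward pass; return (class name, confidence, box) triples."""
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    h, w = image.shape[:2]
    detections = []
    for output in outputs:
        for row in output:
            scores = row[5:]
            class_id = int(scores.argmax())
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                # YOLO boxes are (center x, center y, width, height), normalized.
                cx, cy, bw, bh = row[:4] * (w, h, w, h)
                box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
                detections.append((classes[class_id], confidence, box))
    return detections
```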
- Select 1 for image input or 2 for video input
- Choose an image from the directory
- Original image:
- Lane-detected image:
- Final image:
- The result image is stored in the output_images directory
- Choose a video from the directory
- The result video is stored in the output_videos directory
Link to output video: https://drive.google.com/drive/folders/17ZdWBvtGSpbW7l-AJb6H5U5nit9g6d_m?usp=sharing