Training YOLO
Having created your image set and annotated it, the next step is to train YOLO. This page guides you through the process.
From your VOTT export you should have the following files:
- train.txt
- test.txt
- obj.data
- obj.names
- yolo-obj.cfg*
*For yolo-obj.cfg we recommend discarding the file generated by VOTT and instead modifying the yolo-obj.cfg file that is supplied as part of the YOLO installation, found in the /darknet/cfg folder.
- Verify that YOLOv3 is installed on the machine you will be using for training. The YOLO installation and configuration instructions can be found on the YOLO site.
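A quick sanity check: running the darknet binary with no arguments from your /darknet folder should print a short usage message if the build succeeded:
./darknet
usage: ./darknet <function>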
- After you have YOLO installed, verify that the train.txt and test.txt files were generated properly. Open the .txt files and check that each image is on its own line and that there are no blank lines at the top, at the end, or between the file names:
data/obj/kinect_1-demo-20180928-0-output_frame_7.jpg
data/obj/kinect_1-demo-20180928-0-output_frame_10.jpg
data/obj/kinect_1-demo-20180928-0-output_frame_13.jpg
data/obj/kinect_1-demo-20180928-0-output_frame_14.jpg
data/obj/kinect_1-demo-20180928-0-output_frame_22.jpg
Note: The file paths need to match where the referenced images are currently saved on your machine.
Note 2: The test.txt file represents a percentage of your overall annotated images which can be used to verify your training; these images are not used as part of the training process.
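If you have many images, a small shell loop can flag any entry in train.txt that does not resolve to a file (run it from the darknet folder, and adjust the path if your train.txt lives elsewhere):
while read -r img; do
  [ -f "$img" ] || echo "missing: $img"
done < data/train.txt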
- After you have verified the test and train files, open the obj.data file, which should look similar to this:
classes = 6
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/
Note: ‘backup’ denotes where your training weights will be saved.
Note 2: ‘classes’ denotes how many unique objects you annotated in VOTT (e.g. how many objects you will be tracking in OPT).
3a. The ‘train’, ‘valid’, and ‘names’ entries should all contain the correct file paths for where the referenced files are currently stored on your machine. For example, if you moved your train.txt and test.txt files into your documents folder, the obj.data file would need to look like this:
classes = 6
train = documents/train.txt
valid = documents/test.txt
names = data/obj.names
backup = backup/
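For reference, the obj.names file is simply one class name per line, in the same order as the class IDs assigned during annotation. The six names below are hypothetical placeholders:
cat data/obj.names
ball
cup
chair
laptop
phone
book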
- After you have your obj.data file correctly set up, the yolo-obj.cfg file also needs to be configured. There are two lines that MUST be updated:
classes=6
The classes number needs to match the classes referenced in your obj.data file. For example, if your obj.data file has ‘classes=10’ then line 244 of your yolo-obj.cfg must be ‘classes=10’.
filters=55
The number of filters you need is dependent on the number of classes you have in your obj.data file. For YOLOv3 the number of classes is determined by:
(classes+5)*9
So, in our example that has 6 classes this would look like:
(6+5)*9
filters=99
If you are using YOLOv2, the filter equation is:
(classes+5)*5
Using the example of 6 classes again, for YOLOv2 this would look like:
(6+5)*5
filters=55
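To avoid arithmetic slips, you can let the shell compute the filter count for you; this sketch uses the YOLOv3 formula above with classes=6:
classes=6
echo $(( (classes + 5) * 9 ))   # prints 99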
Lastly, at the top of the yolo-obj.cfg file, check lines 2-3:
batch=64
subdivisions=8
These specifications can be changed depending on your GPU, but we have found these parameters to be reliable.
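If training aborts with a CUDA out-of-memory error, a common workaround is to raise subdivisions (which reduces how many images are processed per step) while leaving batch unchanged, for example:
batch=64
subdivisions=16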
- Once you have confirmed that all of your configuration files are correct and you have all of your annotated images and their associated .txt files, you can start the training process. First, navigate to your darknet folder in your terminal. Then start the YOLO training process with this command:
./darknet detector train cfg/obj.data cfg/yolo-obj.cfg darknet19_448.conv.23
If you have multiple GPUs you can utilize them with a GPU flag:
./darknet detector train cfg/obj.data cfg/yolo-obj.cfg darknet19_448.conv.23 -gpus 0,1
If everything is working properly, you should see output that looks similar to this:
Region Avg IOU: 0.375718, Class: 0.753696, Obj: 0.342988, No Obj: 0.005908, Avg Recall: 0.348485, count: 66
Region Avg IOU: 0.343607, Class: 0.693750, Obj: 0.322397, No Obj: 0.006142, Avg Recall: 0.258065, count: 62
Region Avg IOU: 0.298240, Class: 0.689890, Obj: 0.287116, No Obj: 0.004695, Avg Recall: 0.216667, count: 60
Region Avg IOU: 0.331550, Class: 0.700530, Obj: 0.380471, No Obj: 0.005534, Avg Recall: 0.288136, count: 59
Region Avg IOU: 0.353906, Class: 0.696325, Obj: 0.351220, No Obj: 0.006410, Avg Recall: 0.264706, count: 68
Region Avg IOU: 0.310978, Class: 0.614763, Obj: 0.302438, No Obj: 0.005876, Avg Recall: 0.244444, count: 45
Region Avg IOU: 0.249523, Class: 0.605193, Obj: 0.239432, No Obj: 0.004275, Avg Recall: 0.222222, count: 63
Region Avg IOU: 0.269377, Class: 0.585005, Obj: 0.280750, No Obj: 0.005019, Avg Recall: 0.203704, count: 54
Region Avg IOU: 0.287806, Class: 0.633498, Obj: 0.243028, No Obj: 0.004850, Avg Recall: 0.264706, count: 68
Region Avg IOU: 0.309383, Class: 0.621976, Obj: 0.273004, No Obj: 0.005589, Avg Recall: 0.283333, count: 60
Syncing... Done!
1320: 8.373757, 8.212132 avg, 0.002000 rate, 2.172083 seconds, 158400 images
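Darknet also saves intermediate weights to your backup folder as training progresses, so if training is interrupted you can resume it by pointing the same train command at the most recent checkpoint (the file name below is an example; use whatever is newest in your backup folder):
./darknet detector train cfg/obj.data cfg/yolo-obj.cfg backup/yolo-obj_1000.weights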
It normally takes a day or more for the training to complete (the second number on each status line, e.g. ‘8.212132 avg’, is the average loss, which should decrease as training progresses). Once training is complete, you will find a file named ‘yolo-obj_final.weights’ in your backup folder.
To issue the same commands on multiple machines simultaneously, you can use csshX or terminator.
Files Required
- yolo-obj.cfg
- obj.data
- obj.names
- yolo-obj_final.weights
Note: All of the files need to share the same base name. For example, when preparing to upload the files to OpenPTrack, the files could look like this:
- OPT-YOLO-example-test.cfg*
- OPT-YOLO-example.data
- OPT-YOLO-example.names
- OPT-YOLO-example.weights
*The .cfg file requires the suffix -test
Then, the files need to be placed in these directories:
darknet_opt/cfg/OPT-YOLO-example-test.cfg
darknet_opt/data/OPT-YOLO-example.names
darknet_opt/cfg/OPT-YOLO-example.data
darknet_opt/OPT-YOLO-example.weights
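One way to get the files onto each node is scp; the host name below is a placeholder for your own machines, and the destination paths assume darknet_opt sits in the remote home directory:
scp OPT-YOLO-example-test.cfg node1:~/darknet_opt/cfg/
scp OPT-YOLO-example.data node1:~/darknet_opt/cfg/
scp OPT-YOLO-example.names node1:~/darknet_opt/data/
scp OPT-YOLO-example.weights node1:~/darknet_opt/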
Once all of the files are named properly and placed in the correct directories on each node, run the command:
export OPT_OBJECT_TRAINING="<YOLO File Names>"
In the example above, this command would be:
export OPT_OBJECT_TRAINING="OPT-YOLO-example"
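Note that export only affects the current shell session. If you want the setting to persist across logins, one option (assuming bash) is to append it to your ~/.bashrc:
echo 'export OPT_OBJECT_TRAINING="OPT-YOLO-example"' >> ~/.bashrc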
To use the default YOLO weights (COCO), run:
unset OPT_OBJECT_TRAINING
To see the current training set (it will be blank for the default), run:
export | grep OPT_OBJECT_TRAINING
With your YOLO weights loaded into OpenPTrack, you can test your object training on a new video that was not used for training but includes the objects you trained YOLO on, using this command:
./darknet detector demo cfg/obj.data cfg/yolo-obj.cfg backup/yolo-obj_final.weights data/<video you created for testing> -thresh 0.6
The ‘-thresh 0.6’ flag can be made higher or lower depending on your use case.
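If you would rather spot-check a single image than a full video, darknet’s detector test mode takes the same configuration files (the image path here is a placeholder):
./darknet detector test cfg/obj.data cfg/yolo-obj.cfg backup/yolo-obj_final.weights data/obj/some_test_image.jpg -thresh 0.6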
- Start OpenPTrack object tracking
- Next, run the real-time parameter reconfiguration:
rosrun rqt_reconfigure rqt_reconfigure
- Then, once the real-time configuration screen appears, we suggest you test these settings (they have worked well for us in our testing):
- For each detection node, these parameters should be written to file and updated on GitHub:
ObjectThresh: .7
ObjectIdentifier_Thresh: .5
ObjectMedian_Factor: .1
motion_weight: .1
- For the object tracker, these parameters should be written to file and updated on GitHub:
Acceleration_variance: 100.0
Position_variance_weight: 30
Detector_likelihood: true
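If you prefer to script these values rather than set them by hand in the GUI, ROS’s dynamic_reconfigure command-line tool can set the same parameters; the node and parameter names below are illustrative, so match them to your actual tracker node:
rosrun dynamic_reconfigure dynparam set /tracker acceleration_variance 100.0
rosrun dynamic_reconfigure dynparam set /tracker position_variance_weight 30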
These settings may need to be adjusted for your specific OpenPTrack environment, installation, or objects.
Note: Remember, any changes you make in the real-time parameter reconfiguration screen will NOT be saved to file. You need to MANUALLY write these changes to their corresponding files.