Important note: Since the images I used as a dataset were provided to me by the company, I cannot share them publicly.
BeeVision.ipynb: The notebook that performs all the steps. Please do not run every cell from start to finish: some cells add optional functionality independent of the main pipeline, and others conflict with one another. To be able to use Google's GPUs during the training phase, I worked in Google Colab rather than locally.
validation_random_coco.json: JSON file containing the validation/test dataset's labels in COCO format.
model_weights_14epoch_626_0172.pth: The trained model's parameters. You can load them into the model with the cells provided in the notebook; no training is required, and the model can be evaluated directly (see the loading sketch below).
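
The README does not name the model architecture, so the following is only a minimal loading sketch, assuming a torchvision Faster R-CNN detector whose box head is resized to the 23 custom classes (plus the background class torchvision detectors expect); the class count and `map_location` choice are assumptions, not confirmed details:

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Build the detector skeleton; 23 custom classes plus the implicit
# background class (assumed here, since the README does not specify).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=24)

# Load the saved parameters and switch to evaluation mode for testing.
state_dict = torch.load("model_weights_14epoch_626_0172.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```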
Using freely available online tools, I manually labeled 626 of the photos in the dataset, defining a total of 23 classes. I ran into some problems during training because I had never built a custom dataset before and was inexperienced with labeling, so the dataset is probably not constructed optimally. I exported the labels from the labeling tool in COCO format and loaded the photos and labels into the dataloader in the code, roughly as sketched below.
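
As a hedged illustration of that loading step: torchvision ships a `CocoDetection` dataset that reads COCO-format annotation files directly (it requires `pycocotools`). The `images/` directory name and the batch size are placeholders, and the raw COCO annotations typically still need converting into the tensor format a detection model expects; the actual notebook may wire this differently:

```python
import torch
from torchvision import transforms
from torchvision.datasets import CocoDetection

def collate_fn(batch):
    # Detection targets vary in size per image, so keep samples as tuples
    # instead of letting the default collate try to stack them.
    return tuple(zip(*batch))

# "images/" is a placeholder for wherever the (private) photos live.
dataset = CocoDetection(
    root="images/",
    annFile="validation_random_coco.json",
    transform=transforms.ToTensor(),
)
loader = torch.utils.data.DataLoader(
    dataset, batch_size=2, shuffle=True, collate_fn=collate_fn
)
```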
After setting hyperparameters such as batch size, number of epochs, and momentum, training can begin. During each epoch, the running average training loss is computed after every batch and printed. At the end of each epoch, the validation loss is computed on the validation dataset and printed, which makes overfitting easy to detect. A sketch of this loop follows.
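
Below is a minimal sketch of such a loop, assuming a torchvision-style detector that returns a dict of losses in train mode, plus the `model` and dataloaders from the sketches above (with targets already converted to the dict-of-tensors format those models expect). The learning rate, momentum, and weight decay values are illustrative, and the epoch count simply mirrors the "14epoch" in the saved-weights filename:

```python
import torch

# model, train_loader, val_loader: see the sketches above.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.005, momentum=0.9, weight_decay=0.0005
)
num_epochs = 14  # illustrative; matches the saved-weights filename

for epoch in range(num_epochs):
    model.train()
    running = 0.0
    for i, (images, targets) in enumerate(train_loader, start=1):
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)  # dict of component losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running += loss.item()
        print(f"epoch {epoch} batch {i}: avg train loss {running / i:.4f}")

    # torchvision detection models only return losses in train mode,
    # so stay in train() but disable gradients for the validation pass.
    with torch.no_grad():
        val_loss = 0.0
        for images, targets in val_loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
            loss_dict = model(images, targets)
            val_loss += sum(loss_dict.values()).item()
        print(f"epoch {epoch}: validation loss {val_loss / len(val_loader):.4f}")
```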
Before training, the model can be initialized with previously saved parameters so that a pretrained model can be retrained. After training, the model parameters can be saved for later use, for example:
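
This sketch reuses `model` and `device` from above; the retrained output filename is hypothetical:

```python
import torch

# Resume from the previously saved parameters before further training.
model.load_state_dict(
    torch.load("model_weights_14epoch_626_0172.pth", map_location=device)
)

# After training, persist the updated parameters for later use.
# "model_weights_retrained.pth" is a hypothetical output name.
torch.save(model.state_dict(), "model_weights_retrained.pth")
```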