Added object detection using COCO dataset with YOLOv5 advanced deep learning model #1192


32 changes: 32 additions & 0 deletions Object Detection using COCO dataset with YOLO_v5 model/README.md
@@ -0,0 +1,32 @@
# Project Title: Object Detection with YOLOv5

Project Description:
--------------------
This project implements an object detection system using the YOLOv5 (You Only Look Once) model. YOLOv5 is a state-of-the-art, real-time object detection algorithm that is both fast and accurate. This system can detect multiple objects in images or video streams and can be further fine-tuned for custom datasets. It includes training the YOLOv5 model, evaluating it on a test dataset, and running real-time inference.

Features:
---------
- Real-time object detection on images and video streams.
- Training the YOLOv5 model on custom datasets.
- Evaluation using key metrics such as Precision, Recall, Intersection over Union (IoU), and Mean Average Precision (mAP).
- Deployment for detecting objects in images and video streams (a GPU is strongly recommended for this use case).
- Model robustness testing with image augmentations.
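
The IoU metric listed above is the core overlap measure behind mAP. A minimal, dependency-free sketch of how it is computed for two boxes in `(x1, y1, x2, y2)` pixel format (this is an illustration, not the project's own evaluation code):

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes in (x1, y1, x2, y2) format."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap at all.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # → 0.142857... (25 / 175)
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.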

Requirements:
--------------
- Python 3.7+
- PyTorch 1.7+
- YOLOv5 (via the ultralytics/yolov5 repository)
- Common libraries:
  - numpy
  - opencv-python
  - torch
  - pillow
  - matplotlib
  - albumentations
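
With the requirements installed, a minimal inference sketch using the PyTorch Hub entry point published by the ultralytics/yolov5 repository (the image URL is a placeholder; the first call downloads the `yolov5s` weights, so it needs network access and benefits from a GPU):

```python
import torch

# Load the small pretrained YOLOv5 model from PyTorch Hub.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

img = 'https://ultralytics.com/images/zidane.jpg'  # path, URL, PIL image, or numpy array

results = model(img)        # run inference
results.print()             # summary of detections (classes, confidences)
results.save()              # save annotated image(s) to runs/detect/
df = results.pandas().xyxy[0]  # detections as a pandas DataFrame
```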

Result Demo:
-------------

![result-demo-obj-det](https://github.com/user-attachments/assets/0c5020e1-d3e7-4789-a16a-16b44f1b85af)
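
Robustness testing with augmentations also requires transforming the labels, not just the pixels (albumentations handles this via `bbox_params`). A hypothetical pure-Python illustration of the bookkeeping involved: a horizontal flip must mirror the box x-coordinates across the image width:

```python
def hflip_box(box, img_w):
    """Mirror an (x1, y1, x2, y2) box horizontally in an image of width img_w."""
    x1, y1, x2, y2 = box
    # After the flip, the old right edge becomes the new left edge.
    return (img_w - x2, y1, img_w - x1, y2)

print(hflip_box((10, 20, 50, 80), 100))  # → (50, 20, 90, 80)
```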

@@ -0,0 +1,38 @@
__pycache__/
*.py[cod]
*.so

.ipynb_checkpoints/

# Python virtual environments
env/
venv/

# YOLOv5 Weights
*.pt
runs/
weights/


*.log


*.o
*.a
*.out
*.exe

# Data files
data/
*.csv
*.json

# System files
.DS_Store
Thumbs.db

# PyCharm/IDEA
.idea/

# Visual Studio Code
.vscode/
@@ -0,0 +1,16 @@
torch>=1.7.0
torchvision>=0.8.0
matplotlib>=3.2.2
numpy>=1.18.5
opencv-python>=4.1.2
pillow>=7.1.2
PyYAML>=5.3.1
tqdm>=4.64.0
scipy>=1.4.1
tensorboard>=2.4.1
seaborn>=0.11.0
pandas>=1.1.4
albumentations>=0.5.2
ipython>=7.16.1
jupyterlab>=2.1.5
requests>=2.23.0
22 changes: 22 additions & 0 deletions app (3).py
@@ -0,0 +1,22 @@
from flask import Flask, render_template, request, jsonify
from chat import chatbot

app = Flask(__name__)


@app.route("/")
def hello():
    return render_template('chat.html')


@app.route("/ask", methods=['POST'])
def ask():
    message = str(request.form['messageText'])
    bot_response = chatbot(message)
    return jsonify({'status': 'OK', 'answer': bot_response})


if __name__ == "__main__":
    app.run()
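
The `/ask` route can be exercised without starting a server by using Flask's built-in test client. A self-contained sketch with a hypothetical `fake_chatbot` stub standing in for the GPU-backed `chat.chatbot` import:

```python
from flask import Flask, request, jsonify


def fake_chatbot(message):
    # Stand-in for chat.chatbot, which needs a GPU and model weights.
    return "echo: " + message


app = Flask(__name__)


@app.route("/ask", methods=["POST"])
def ask():
    message = str(request.form["messageText"])
    return jsonify({"status": "OK", "answer": fake_chatbot(message)})


# Post form data the same way the chat page would.
client = app.test_client()
resp = client.post("/ask", data={"messageText": "hello"})
print(resp.get_json())
```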
26 changes: 26 additions & 0 deletions chat (1).py
@@ -0,0 +1,26 @@
from peft import AutoPeftModelForCausalLM
from transformers import GenerationConfig
from transformers import AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("Vasanth/mistral-finetuned-alpaca")

model = AutoPeftModelForCausalLM.from_pretrained(
    "Vasanth/mistral-finetuned-alpaca",
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="cuda",
)

generation_config = GenerationConfig(
    do_sample=True,
    top_k=1,
    temperature=0.1,
    max_new_tokens=100,
    pad_token_id=tokenizer.eos_token_id,
)


def chatbot(message):
    input_str = "###Human: " + message + " ###Assistant: "
    inputs = tokenizer(input_str, return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, generation_config=generation_config)
    return tokenizer.decode(outputs[0], skip_special_tokens=True).replace(input_str, '')
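
The prompt handling above relies on a simple convention: the `###Human: ... ###Assistant: ` prefix is prepended before generation, the decoded output echoes it back, and `.replace()` strips it. A model-free sketch of just that string logic (the simulated completion is hypothetical):

```python
def build_prompt(message):
    # Same prompt convention as chatbot() in chat.py.
    return "###Human: " + message + " ###Assistant: "


def strip_prompt(decoded, input_str):
    # Decoded output begins with the echoed prompt; remove it.
    return decoded.replace(input_str, "")


prompt = build_prompt("What is YOLO?")
decoded = prompt + "YOLO is a real-time object detector."  # simulated model echo
print(strip_prompt(decoded, prompt))  # → YOLO is a real-time object detector.
```

One caveat of this approach: `.replace()` removes every occurrence of the prefix, so a completion that happens to repeat the prompt string would also be altered.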
2 changes: 2 additions & 0 deletions config (1).py
@@ -0,0 +1,2 @@
import os

## OpenAI API configuration -- read the key from the environment; never commit a real secret to source control
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")