From b649a10b061a4cc8adb03c1a55b3e67d9f4ec8dc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Jan=20=C4=8Cuhel?=
<79118988+HonzaCuhel@users.noreply.github.com>
Date: Thu, 19 Dec 2024 16:21:37 +0100
Subject: [PATCH] Feat/add dependabot (#123)
* Improve README.md & add dependabot
* Change the target branch
---
.github/dependabot.yml | 10 ++++++++
README.md | 54 ++++++++++++++++++++++++++++++++++++++----
2 files changed, 59 insertions(+), 5 deletions(-)
create mode 100644 .github/dependabot.yml
diff --git a/.github/dependabot.yml b/.github/dependabot.yml
new file mode 100644
index 0000000..66aa242
--- /dev/null
+++ b/.github/dependabot.yml
@@ -0,0 +1,10 @@
+version: 2
+updates:
+ - package-ecosystem: "pip"
+ directory: "/"
+ schedule:
+ interval: "weekly"
+ target-branch: "main"
+ # Labels on pull requests for version updates only
+ labels:
+ - "pip dependencies"
\ No newline at end of file
diff --git a/README.md b/README.md
index 8d865c1..09a8967 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,18 @@ This is a command-line tool that simplifies the conversion process of YOLO model
> [!WARNING]
> Please note that for the moment, we support conversion of YOLOv9 weights only from [Ultralytics](https://docs.ultralytics.com/models/yolov9/#performance-on-ms-coco-dataset).
-## Running
+## 📜 Table of contents
+
+- [💻 How to run](#how-to-run)
+- [⚙️ Arguments](#arguments)
+- [🧰 Supported models](#supported-models)
+- [📝 Credits](#credits)
+- [📄 License](#license)
+- [🤝 Contributing](#contributing)
+
+
+
+## 💻 How to run
You can export a model stored either remotely (e.g. on S3) or locally. The toolkit can be installed through pip or run with Docker; both options are described in the sections below.
@@ -54,21 +65,54 @@ docker compose run tools_cli shared_with_container/models/yolov6nr4.pt
The output files will be placed in the `shared-component/output` folder.
-### Arguments
+
+
+## ⚙️ Arguments
- `model: str` = Path to the model.
- `imgsz: str` = Image input shape in the format `width height` or `width`. Default value `"416 416"`.
-- `version: Optional[str]` =
+- `version: Optional[str]` = Version of the YOLO model. Default value `None`. If not specified, the version will be detected automatically. Supported versions: `yolov5`, `yolov6r1`, `yolov6r3`, `yolov6r4`, `yolov7`, `yolov8`, `yolov9`, `yolov10`, `yolov11`, `goldyolo`.
- `use_rvc2: bool` = Whether to export for RVC2 or RVC3 devices. Default value `True`.
- `class_names: Optional[str]` = Optional list of class names separated by commas, e.g. `"person, dog, cat"`.
- `output_remote_url: Optional[str]` = Optional remote URL for the output `.onnx` model.
- `config_path: Optional[str]` = Optional path to a configuration file.
- `put_file_plugin: Optional[str]` = Optional name of the plugin to use for storing the output file.
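As an illustration of the `imgsz` format described above, a minimal parsing sketch might look like this (a hypothetical helper for illustration only, not part of the toolkit's API):

```python
def parse_imgsz(imgsz: str) -> tuple[int, int]:
    """Parse an image-size string of the form "width height" or "width".

    A single value is used for both dimensions, mirroring the
    default "416 416" described above.
    """
    parts = imgsz.split()
    if len(parts) == 1:
        # Single value: square input, e.g. "416" -> (416, 416)
        return int(parts[0]), int(parts[0])
    width, height = parts
    return int(width), int(height)


print(parse_imgsz("416 416"))  # (416, 416)
print(parse_imgsz("320"))      # (320, 320)
```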
-## Credits
+
+
+## 🧰 Supported models
+
+Currently, the following models are supported:
+
+| Model version | Supported models |
+| ------------- | ---------------- |
+| `yolov5` | YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, YOLOv5x, YOLOv5n6, YOLOv5s6, YOLOv5m6, YOLOv5l6 |
+| `yolov6r1` | **v1.0 release:** YOLOv6n, YOLOv6t, YOLOv6s |
+| `yolov6r3` | **v2.0 release:** YOLOv6n, YOLOv6t, YOLOv6s, YOLOv6m, YOLOv6l <br> **v2.1 release:** YOLOv6n, YOLOv6s, YOLOv6m, YOLOv6l <br> **v3.0 release:** YOLOv6n, YOLOv6s, YOLOv6m, YOLOv6l |
+| `yolov6r4` | **v4.0 release:** YOLOv6n, YOLOv6s, YOLOv6m, YOLOv6l |
+| `yolov7` | YOLOv7-tiny, YOLOv7, YOLOv7x |
+| `yolov8` | **Detection:** YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x, YOLOv3-tinyu, YOLOv5nu, YOLOv5n6u, YOLOv5s6u, YOLOv5su, YOLOv5m6u, YOLOv5mu, YOLOv5l6u, YOLOv5lu <br> **Instance Segmentation, Pose, Oriented Detection, Classification:** YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x |
+| `yolov9` | YOLOv9t, YOLOv9s, YOLOv9m, YOLOv9c |
+| `yolov10` | YOLOv10n, YOLOv10s, YOLOv10m, YOLOv10b, YOLOv10l, YOLOv10x |
+| `yolov11` | **Detection, Instance Segmentation, Pose, Oriented Detection, Classification:** YOLO11n, YOLO11s, YOLO11m, YOLO11l, YOLO11x |
+| `goldyolo` | Gold-YOLO-N, Gold-YOLO-S, Gold-YOLO-M, Gold-YOLO-L |
+
+If your model is not in the list, conversion may still be possible, but it is not guaranteed.
+
+
+
+## 📝 Credits
This application uses source code from the following repositories: [YOLOv5](https://github.com/ultralytics/yolov5), [YOLOv6](https://github.com/meituan/YOLOv6), [GoldYOLO](https://github.com/huawei-noah/Efficient-Computing), [YOLOv7](https://github.com/WongKinYiu/yolov7), and [Ultralytics](https://github.com/ultralytics/ultralytics) (see each of them for more information).
-## License
+
+
+## 📄 License
This application is available under the **AGPL-3.0** license (see the [LICENSE](https://github.com/luxonis/tools/blob/master/LICENSE) file for details).
+
+
+
+## 🤝 Contributing
+
+We welcome contributions! Whether it's reporting bugs, adding features, or improving documentation, your help is much appreciated. Please create a pull request ([here's how](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)) and assign anyone from the Luxonis team to review the suggested changes. Cheers!