Add visualization methods to all message types #141
Conversation
…m/luxonis/depthai-nodes into feat/detection_visualization_node
In the Keypoints message there is a bug with the Visualizer, where it does not show points; for the time being I changed it to lines instead.
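The workaround described above can be sketched roughly as follows: since the Visualizer fails to render point primitives, each keypoint is emitted as a short horizontal line segment centered on the point. This is an illustrative stand-in, not the actual depthai-nodes API; the function name and `eps` parameter are hypothetical.

```python
def points_to_line_segments(points, eps=0.004):
    """Workaround sketch: turn (x, y) keypoints (normalized coordinates)
    into tiny line segments so a renderer that skips points still draws them.
    """
    segments = []
    for x, y in points:
        # A horizontal segment of width 2*eps centered on the keypoint.
        segments.append(((x - eps, y), (x + eps, y)))
    return segments


segs = points_to_line_segments([(0.5, 0.5), (0.2, 0.8)])
```

Once the Visualizer bug is fixed, the message can switch back to emitting real point annotations without changing callers.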
Codecov Report
Attention: Patch coverage is …
✅ All tests successful. No failed tests found. Additional details and impacted files:

@@            Coverage Diff             @@
##             main     #141      +/-   ##
==========================================
+ Coverage   35.93%   36.10%   +0.17%
==========================================
  Files          69       70       +1
  Lines        3835     3991     +156
==========================================
+ Hits         1378     1441      +63
- Misses       2457     2550      +93

☔ View full report in Codecov by Sentry.
Generally LGTM, added some clarifying questions.
Generally LGTM. Adding a few comments.
LGTM, 2 small comments/questions
This PR adds default `getVisualizationMessage` methods to all messages. These methods allow DepthAI to automatically annotate the outputs of all models that use these message types. These are the "default" visualizations; if users want custom or different annotations, they would have to override these functions or create a new AnnotationNode. The annotations were tested using RVC4 and RVC2 (where applicable) on the following models:
YuNet, YOLOv6, YOLOv8, text detection, M-LSD, Fast SAM, Selfie segmentation, PP-LiteSeg, FaceLandmarker, OCR, Emotion classification, Depthanything, Midas, ESRGAN, Ultra Fast Lane Detection
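The override pattern from the description above can be sketched as follows. Only the method name `getVisualizationMessage` comes from this PR; the message class, its fields, and the annotation tuples are hypothetical stand-ins for illustration, not the real depthai-nodes API.

```python
class Keypoints:
    """Illustrative stand-in for a message carrying normalized 2D keypoints."""

    def __init__(self, points):
        self.points = points  # list of (x, y) tuples in [0, 1]

    def getVisualizationMessage(self):
        # Default annotation: one marker per keypoint.
        return [("point", x, y) for (x, y) in self.points]


class SkeletonKeypoints(Keypoints):
    """User subclass replacing the default annotations with custom ones."""

    def getVisualizationMessage(self):
        # Custom annotation: connect consecutive keypoints with lines instead
        # of emitting individual point markers.
        pts = self.points
        return [("line", pts[i], pts[i + 1]) for i in range(len(pts) - 1)]


msg = SkeletonKeypoints([(0.1, 0.2), (0.4, 0.5), (0.8, 0.3)])
annotations = msg.getVisualizationMessage()
```

Because the default lives on the message itself, the pipeline can annotate any model's output without extra wiring, while a subclass or a dedicated AnnotationNode swaps in custom drawing where needed.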