Y-Vision is a robust, real-time crowd-tracking proof-of-concept framework that combines RGB+D data from multiple Kinects. A short demo is available here:
- RGB- and depth-specific feature extraction, detection, and fusion.
- HOG features and blobs from connected component labeling are used as features (see the labeling sketch after this list).
- Bayesian model for pedestrian detection.
- Tracking using fast branch-and-bound association.
- Ground tracking using simple plane estimation (requires calibration; see the plane-fit sketch after this list).
- TCP interface with simple client API.
- Data set collection tool.
- Efficient Kinect data capture.
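As an illustration of the blob stage, the sketch below labels connected components in a binarized foreground mask (e.g. a thresholded depth image) with a 4-connected flood fill; candidate blobs extracted this way can then be described with HOG features. All class and method names here are hypothetical and are not taken from the Y-Vision code base.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical illustration of connected component labeling on a binary
// foreground mask (e.g. a thresholded depth image). Not Y-Vision code.
public static class BlobLabeling
{
    // Returns a label map (0 = background, 1..n = blob ids) using a
    // 4-connected flood fill.
    public static int[,] Label(bool[,] mask)
    {
        int h = mask.GetLength(0), w = mask.GetLength(1);
        var labels = new int[h, w];
        int next = 0;
        var stack = new Stack<(int y, int x)>();

        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (!mask[y, x] || labels[y, x] != 0) continue;
            next++;
            labels[y, x] = next;
            stack.Push((y, x));

            while (stack.Count > 0)
            {
                var (cy, cx) = stack.Pop();
                foreach (var (ny, nx) in new[] { (cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1) })
                {
                    if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                    if (!mask[ny, nx] || labels[ny, nx] != 0) continue;
                    labels[ny, nx] = next;
                    stack.Push((ny, nx));
                }
            }
        }
        return labels;
    }
}
```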
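Similarly, the ground-plane step is only sketched here: the snippet below fits a plane z = a·x + b·y + c to calibration points by solving the 3×3 normal equations, which is one simple way to estimate a ground plane. The names and the solving strategy are assumptions, not the actual calibration code.

```csharp
using System;

// Hypothetical least-squares ground-plane fit (z = a*x + b*y + c) from
// calibration points; not the actual Y-Vision calibration code.
public static class GroundPlaneFit
{
    public static (double a, double b, double c) Fit((double x, double y, double z)[] pts)
    {
        // Accumulate the 3x3 normal equations A * [a b c]^T = r.
        double sxx = 0, sxy = 0, sx = 0, syy = 0, sy = 0, n = pts.Length;
        double sxz = 0, syz = 0, sz = 0;
        foreach (var p in pts)
        {
            sxx += p.x * p.x; sxy += p.x * p.y; sx += p.x;
            syy += p.y * p.y; sy += p.y;
            sxz += p.x * p.z; syz += p.y * p.z; sz += p.z;
        }

        double[,] A = { { sxx, sxy, sx }, { sxy, syy, sy }, { sx, sy, n } };
        double[] r = { sxz, syz, sz };

        // Solve with Cramer's rule (fine for a fixed 3x3 system).
        double det = Det3(A);
        if (Math.Abs(det) < 1e-12) throw new InvalidOperationException("Degenerate point set.");
        return (Det3(ReplaceColumn(A, 0, r)) / det,
                Det3(ReplaceColumn(A, 1, r)) / det,
                Det3(ReplaceColumn(A, 2, r)) / det);
    }

    static double Det3(double[,] m) =>
        m[0, 0] * (m[1, 1] * m[2, 2] - m[1, 2] * m[2, 1])
      - m[0, 1] * (m[1, 0] * m[2, 2] - m[1, 2] * m[2, 0])
      + m[0, 2] * (m[1, 0] * m[2, 1] - m[1, 1] * m[2, 0]);

    static double[,] ReplaceColumn(double[,] m, int col, double[] r)
    {
        var c = (double[,])m.Clone();
        for (int i = 0; i < 3; i++) c[i, col] = r[i];
        return c;
    }
}
```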
- Y-Vision is a C# pipeline for pedestrian detection and tracking.
- CollectionTool allows large-scale data collection in the wild using a Kinect, which is useful for machine learning.
- Y-TcpServer is a TCP server that streams information about detected pedestrians in real time (a minimal client sketch follows this list).
- Y-TcpClient and Y-API are self-contained libraries that allow easy integration into other applications (e.g. Unity).
- Y-TestApi2d listens for and displays pedestrian detection events.
- Y-CalibrationBoard is a GUI calibration environment to help set up the Kinect rig.
- Y-Emulator is a development tool that simulates pedestrians when no Kinect is available.
- Y-MachineLearning contains the Bayesian model for pedestrian detection (an illustrative scoring sketch follows this list).
- Y-UnitTests is the test package.
- Y-Visualization is a set of helpers for displaying results and debugging.
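The wire format used by Y-TcpServer is not documented in this README, so the sketch below only shows the general shape of a thin client that connects over TCP and reads newline-delimited text messages. The host, port, and message format are assumptions; the real Y-TcpClient/Y-API libraries should be used for actual integration.

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

// Hypothetical stand-alone client: connects to a pedestrian-tracking TCP
// server and prints newline-delimited messages as they arrive.
// Host, port, and message format are assumptions, not the Y-Vision protocol.
class PedestrianStreamClient
{
    static void Main()
    {
        using var client = new TcpClient("localhost", 9000); // assumed port
        using var reader = new StreamReader(client.GetStream(), Encoding.UTF8);

        string? line;
        while ((line = reader.ReadLine()) != null)
        {
            // Each line is assumed to describe one detection/track update.
            Console.WriteLine($"Received: {line}");
        }
    }
}
```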
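Likewise, the exact structure of the Bayesian model in Y-MachineLearning is not spelled out here; purely to illustrate the general idea, the sketch below scores a candidate blob with a naive-Bayes-style combination of independent per-feature log-likelihoods. The features, distributions, and parameters are made up for the example.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical naive-Bayes-style scoring of a candidate blob. The features
// and likelihoods are illustrative only and do not reflect the actual
// Y-MachineLearning model.
public static class PedestrianClassifier
{
    // Per-feature log-likelihoods given "pedestrian" and "background".
    static readonly List<(Func<double[], double> logLikPed, Func<double[], double> logLikBg)> Features = new()
    {
        // Feature 0: blob height in metres (assumed Gaussian parameters).
        (f => LogGaussian(f[0], mean: 1.7, std: 0.2),
         f => LogGaussian(f[0], mean: 1.0, std: 0.8)),
        // Feature 1: HOG-based detector score (assumed Gaussian parameters).
        (f => LogGaussian(f[1], mean: 0.8, std: 0.3),
         f => LogGaussian(f[1], mean: 0.1, std: 0.3)),
    };

    public static double PedestrianPosterior(double[] featureVector, double prior = 0.3)
    {
        double logPed = Math.Log(prior), logBg = Math.Log(1 - prior);
        foreach (var (ped, bg) in Features)
        {
            logPed += ped(featureVector);
            logBg += bg(featureVector);
        }
        // Normalize the two log scores into a posterior probability.
        double m = Math.Max(logPed, logBg);
        double pPed = Math.Exp(logPed - m);
        double pBg = Math.Exp(logBg - m);
        return pPed / (pPed + pBg);
    }

    static double LogGaussian(double x, double mean, double std) =>
        -0.5 * Math.Pow((x - mean) / std, 2) - Math.Log(std * Math.Sqrt(2 * Math.PI));
}
```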
Y-Vision is licensed under the GNU General Public License v3.0.