diff --git a/_quarto.yml b/_quarto.yml index 4ce30d30..ad272d91 100644 --- a/_quarto.yml +++ b/_quarto.yml @@ -122,6 +122,10 @@ book: - text: "---" # LABS - part: LABS + # getting started + chapters: + - contents/labs/labs.qmd + - contents/labs/getting_started.qmd # nicla vision - part: contents/labs/arduino/nicla_vision/nicla_vision.qmd chapters: diff --git a/contents/labs/arduino/nicla_vision/nicla_vision.qmd b/contents/labs/arduino/nicla_vision/nicla_vision.qmd index 646e26a3..28f34e1e 100644 --- a/contents/labs/arduino/nicla_vision/nicla_vision.qmd +++ b/contents/labs/arduino/nicla_vision/nicla_vision.qmd @@ -2,8 +2,7 @@ These labs provide a unique opportunity to gain practical experience with machine learning (ML) systems. Unlike working with large models requiring data center-scale resources, these exercises allow you to directly interact with hardware and software using TinyML. This hands-on approach gives you a tangible understanding of the challenges and opportunities in deploying AI, albeit at a tiny scale. However, the principles are largely the same as what you would encounter when working with larger systems. -![Nicla Vision. Credit: Arduino](./images/jpg/nicla_vision_quarter.jpeg){#fig-nicla_vision height=3in} - +![Nicla Vision. Credit: Arduino](./images/jpg/nicla_vision_quarter.jpeg) ## Pre-requisites diff --git a/contents/labs/getting_started.qmd b/contents/labs/getting_started.qmd new file mode 100644 index 00000000..a36af807 --- /dev/null +++ b/contents/labs/getting_started.qmd @@ -0,0 +1,55 @@ +# Getting Started {.unnumbered} + +Welcome to the exciting world of embedded machine learning and TinyML! In this hands-on lab series, you'll explore various projects that demonstrate the power of running machine learning models on resource-constrained devices. Before diving into the projects, let's ensure you have the necessary hardware and software set up. 
+ +## Hardware Requirements + +To follow along with the hands-on labs, you'll need the following hardware: + +1. **Arduino Nicla Vision board** + - The Arduino Nicla Vision is a powerful, compact board designed for professional-grade computer vision and audio applications. It features a high-quality camera module, a digital microphone, and an IMU, making it suitable for demanding projects in industries such as robotics, automation, and surveillance. + - [Arduino Nicla Vision specifications](https://docs.arduino.cc/hardware/nicla-vision) + - [Arduino Nicla Vision pinout diagram](https://docs.arduino.cc/resources/pinouts/ABX00051-full-pinout.pdf) + +2. **XIAO ESP32S3 Sense board** + - The Seeed Studio XIAO ESP32S3 Sense is a tiny, feature-packed board designed for makers, hobbyists, and students interested in exploring edge AI applications. It comes with a camera, microphone, and IMU, making it easy to get started with projects like image classification, keyword spotting, and motion detection. + - [XIAO ESP32S3 Sense specifications](https://wiki.seeedstudio.com/xiao_esp32s3_getting_started/#specification) + - [XIAO ESP32S3 Sense pinout diagram](https://wiki.seeedstudio.com/xiao_esp32s3_getting_started/#hardware-overview) + +3. **Additional accessories** + - USB cable for programming and powering the boards (USB-C for the XIAO ESP32S3 Sense; the Nicla Vision uses a micro-USB connector) + - Breadboard and jumper wires (optional, for connecting additional sensors) + +The Arduino Nicla Vision is tailored for professional-grade applications, offering advanced features and performance suitable for demanding industrial projects. On the other hand, the Seeed Studio XIAO ESP32S3 Sense is geared towards makers, hobbyists, and students who want to explore edge AI applications in a more accessible and beginner-friendly format. Both boards have their strengths and target audiences, allowing users to choose the one that best fits their needs and skill level.
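To make "resource-constrained" concrete, it helps to sanity-check whether a model will even fit on a board before you try to deploy it. The sketch below is plain Python you would run on your computer, not on the boards; the flash/RAM figures are rough approximations of the two boards' budgets, and the 25% headroom margin and the `model_fits` helper are illustrative assumptions, so confirm the real numbers against the specification pages linked above.

```python
# Back-of-the-envelope memory check for a quantized TinyML model.
# The budgets below are approximations -- verify them against the
# vendor specification pages before relying on them.
BOARDS = {
    "Nicla Vision": {"flash_kb": 2048, "ram_kb": 1024},  # ~2 MB flash, ~1 MB SRAM
    "XIAO ESP32S3": {"flash_kb": 8192, "ram_kb": 512},   # ~8 MB flash, ~512 KB on-chip SRAM
}

def model_fits(board: str, model_flash_kb: int, peak_ram_kb: int) -> bool:
    """True if the model binary and its tensor arena leave headroom.

    Reserves 25% of each budget for the application, drivers, and stack.
    """
    b = BOARDS[board]
    return (model_flash_kb <= 0.75 * b["flash_kb"]
            and peak_ram_kb <= 0.75 * b["ram_kb"])

# A ~300 KB int8 image classifier with a ~150 KB tensor arena
# fits comfortably on both boards:
for name in BOARDS:
    print(name, model_fits(name, model_flash_kb=300, peak_ram_kb=150))
```

A model that blows either budget (say, a 9 MB network) fails the check, which is exactly the kind of constraint these labs teach you to work around with quantization and smaller architectures.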
+ +## Software Requirements + +To program the boards and develop embedded machine learning projects, you'll need the following software: + +1. **Arduino IDE** + - Download and install the [Arduino IDE](https://www.arduino.cc/en/software) for your operating system. + - Follow the [installation guide](https://docs.arduino.cc/software/ide-v1/tutorials/Windows) for your specific OS. + - Configure the Arduino IDE for the [Arduino Nicla Vision](https://docs.arduino.cc/software/ide-v1/tutorials/getting-started/cores/arduino-mbed_nicla) and [XIAO ESP32S3 Sense](https://wiki.seeedstudio.com/xiao_esp32s3_getting_started/#software-setup) boards. + +2. **OpenMV IDE (optional)** + - Download and install the [OpenMV IDE](https://openmv.io/pages/download) for your operating system. + - Configure the OpenMV IDE for the [Arduino Nicla Vision](https://docs.arduino.cc/tutorials/nicla-vision/getting-started/). + +3. **Edge Impulse Studio** + - Sign up for a free account on the [Edge Impulse Studio](https://studio.edgeimpulse.com/login). + - Follow the guides to connect your [Arduino Nicla Vision](https://docs.edgeimpulse.com/docs/edge-ai-hardware/mcu/arduino-nicla-vision) and [XIAO ESP32S3 Sense](https://docs.edgeimpulse.com/docs/edge-ai-hardware/mcu/seeed-xiao-esp32s3-sense) boards to Edge Impulse Studio. + +## Network Connectivity + +Some projects may require internet connectivity for data collection or model deployment. Ensure that your development environment has a stable internet connection, either through Wi-Fi or Ethernet. + +- For the Arduino Nicla Vision, you can use the onboard Wi-Fi module to connect to a wireless network. +- For the XIAO ESP32S3 Sense, you can use the onboard Wi-Fi module or connect an external Wi-Fi or Ethernet module using the available pins. + +## Conclusion + +With your hardware and software set up, you're now ready to embark on your embedded machine learning journey.
The hands-on labs will guide you through various projects, covering topics such as image classification, object detection, keyword spotting, and motion classification. + +If you encounter any issues or have questions, don't hesitate to consult the troubleshooting guides and forums, or reach out to the community for support. + +Let's dive in and unlock the potential of ML on real (tiny) systems! \ No newline at end of file diff --git a/contents/labs/labs.qmd b/contents/labs/labs.qmd index 21a74203..47c6acb4 100644 --- a/contents/labs/labs.qmd +++ b/contents/labs/labs.qmd @@ -1,24 +1,65 @@ # Overview {.unnumbered} -The following labs offer a unique chance to gain hands-on experience with machine learning (ML) systems by deploying TinyML models onto real embedded devices. Instead of working with large models that need data center-scale resources, you'll interact directly with both hardware and software. These exercises cover different sensor modalities, giving you exposure to a variety of applications. This approach helps you understand the real-world challenges and opportunities in deploying AI on real systems. +Welcome to the hands-on labs section, where you'll explore deploying ML models onto real embedded devices, offering a practical introduction to ML systems. Unlike traditional approaches with large-scale models, these labs focus on interacting directly with both hardware and software. They showcase various sensor modalities across different application use cases. This approach provides valuable insights into the challenges and opportunities of deploying AI on real physical systems. + +## Learning Objectives + +By completing these labs, we hope learners will: + +:::{.callout-tip} + +* Gain proficiency in setting up and deploying ML models on supported devices, enabling you to tackle real-world ML deployment scenarios with confidence.
+ +* Understand the steps involved in adapting and experimenting with ML models for different applications, allowing you to optimize performance and efficiency. + +* Learn troubleshooting techniques specific to embedded ML deployments, equipping you with the skills to overcome common pitfalls and challenges. + +* Acquire practical experience in deploying TinyML models on embedded devices, bridging the gap between theory and practice. + +* Explore various sensor modalities and their applications, expanding your understanding of how ML can be leveraged in diverse domains. + +* Foster an understanding of the real-world implications and challenges associated with ML system deployments, preparing you for future projects. + +::: + +## Target Audience + +These labs are designed for: + +* **Beginners** in the field of machine learning who have a keen interest in exploring the intersection of ML and embedded systems. + +* **Developers and engineers** looking to apply ML models to real-world applications using low-power, resource-constrained devices. + +* **Enthusiasts and researchers** who want to gain practical experience in deploying AI on edge devices and understand the unique challenges involved.
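One idea that recurs in every lab, whatever the sensor: a deployed classifier (for keyword spotting, image classification, and so on) returns raw quantized scores, and it is your application code that turns those into a label and a confidence. The plain-Python sketch below illustrates that post-processing step; the function names, labels, and quantization parameters are invented for illustration and don't come from any particular lab.

```python
import math

def dequantize(q: int, scale: float, zero_point: int) -> float:
    # Affine int8 dequantization commonly used by quantized TinyML models.
    return scale * (q - zero_point)

def classify(int8_scores, scale, zero_point, labels):
    """Dequantize raw int8 model outputs, softmax them, pick the top label."""
    real = [dequantize(q, scale, zero_point) for q in int8_scores]
    m = max(real)
    exps = [math.exp(r - m) for r in real]  # subtract the max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return labels[best], probs[best]

# Hypothetical raw outputs from a keyword-spotting model:
label, conf = classify([-128, 60, 120], scale=1 / 256, zero_point=-128,
                       labels=["silence", "noise", "yes"])
print(label, round(conf, 2))
```

In the labs, the equivalent logic runs in the generated firmware or Edge Impulse SDK, but the shape of the computation is the same.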
## Supported Devices -| Device/Board | Installaion & Setup | Keyword Spotting (KWS) | Image Classification | Object Detection | Motion Detection | -| --------------------------------- | ------------------------------- | --------------------------------------------------------------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------- | -| [Nicla Vision](./arduino/nicla_vision/nicla_vision.qmd) | [Link](./arduino/nicla_vision/setup/setup.qmd) | [Link](./arduino/nicla_vision/kws/kws.qmd) | [Link](./arduino/nicla_vision/image_classification/image_classification.qmd) | [Link](./arduino/nicla_vision/object_detection/object_detection.qmd) | [Link](./arduino/nicla_vision/motion_classification/motion_classification.qmd) | -| [XIAO ESP32S3](./seeed/xiao_esp32s3/xiao_esp32s3.qmd) | [Link](./seeed/xiao_esp32s3/setup/setup.qmd) | [Link](./seeed/xiao_esp32s3/kws/kws.qmd) | [Link](./seeed/xiao_esp32s3/image_classification/image_classification.qmd) | Coming soon. | [Link](./seeed/xiao_esp32s3/motion_classification/motion_classification.qmd) | +| Exercise | [Nicla Vision](https://store.arduino.cc/products/nicla-vision) | [XIAO ESP32S3](https://wiki.seeedstudio.com/xiao_esp32s3_getting_started/) | +| --------------------------------- | ------------------------------- | ------------------------------- | +| Installation & Setup | ✅ | ✅ | +| Keyword Spotting (KWS) | ✅ | ✅ | +| Image Classification | ✅ | ✅ | +| Object Detection | ✅ | ✅ | +| Motion Detection | ✅ | ✅ | ## Lab Structure -Each lab follows a similar structure: +Each lab follows a structured approach: -#. Introduction to the application and its real-world significance -#. Step-by-step instructions to set up the hardware and software environment -#. Detailed guidance on deploying the pre-trained TinyML model -#. 
Exercises to modify and experiment with the model and its parameters -#. Discussion on the results and potential improvements +1. **Introduction**: Explore the application and its significance in real-world scenarios. +2. **Setup**: Step-by-step instructions to configure the hardware and software environment. + +3. **Deployment**: Guidance on deploying the pre-trained ML models on supported devices. + +4. **Exercises**: Hands-on tasks to modify and experiment with model parameters. + +5. **Discussion**: Analysis of results, potential improvements, and practical insights. + ## Troubleshooting and Support -If you encounter any issues during the labs, please refer to the troubleshooting guides and FAQs provided with each lab. If you cannot find a solution, feel free to reach out to our support team or the community forums for assistance. \ No newline at end of file +If you encounter any issues during the labs, consult the troubleshooting comments or check the FAQs within each lab. For further assistance, feel free to reach out to our support team or engage with the community forums. + +## Credits + +Special credit and thanks to [Prof. Marcelo Rovai](https://github.com/Mjrovai) for his valuable contributions to the development and continuous refinement of these labs. \ No newline at end of file diff --git a/contents/labs/seeed/xiao_esp32s3/xiao_esp32s3.qmd b/contents/labs/seeed/xiao_esp32s3/xiao_esp32s3.qmd index 66a08ed3..461b829c 100644 --- a/contents/labs/seeed/xiao_esp32s3/xiao_esp32s3.qmd +++ b/contents/labs/seeed/xiao_esp32s3/xiao_esp32s3.qmd @@ -2,7 +2,7 @@ These labs provide a unique opportunity to gain practical experience with machine learning (ML) systems. Unlike working with large models requiring data center-scale resources, these exercises allow you to directly interact with hardware and software using TinyML.
This hands-on approach gives you a tangible understanding of the challenges and opportunities in deploying AI, albeit at a tiny scale. However, the principles are largely the same as what you would encounter when working with larger systems. -![XIAO ESP32S3 Sense. Credit: SEEED Studio](./images/jpeg/xiao_esp32s3_decked.jpeg){#fig-xiao_esp32s3 height=3in} +![XIAO ESP32S3 Sense. Credit: SEEED Studio](./images/jpeg/xiao_esp32s3_decked.jpeg) ## Pre-requisites