Commit

Merge branch 'pr/89-marcelo_exercises'
# Conflicts:
#	dl_primer.qmd
#	frameworks.qmd
profvjreddi committed Dec 7, 2023
2 parents 7a236c1 + b948d49 commit 45a7d71
Showing 18 changed files with 772 additions and 35 deletions.
406 changes: 406 additions & 0 deletions Motion_Classif_Anomaly_Detect.qmd

Large diffs are not rendered by default.

18 changes: 17 additions & 1 deletion dl_primer.qmd
@@ -219,4 +219,20 @@ Furthermore, we delved into an examination of the limitations of deep learning.

In this primer, we have equipped you with the knowledge to make informed choices between deploying traditional machine learning or deep learning techniques, depending on the unique demands and constraints of a specific problem.

As we conclude this chapter, we hope you are now well-equipped with the basic "language" of deep learning, prepared to delve deeper into the subsequent chapters with a solid understanding and critical perspective. The journey ahead is filled with exciting opportunities and challenges in embedding AI within systems.
As we conclude this chapter, we hope you are now well-equipped with the basic "language" of deep learning and prepared to delve deeper into the subsequent chapters with a solid understanding and critical perspective. The journey ahead is filled with exciting opportunities and challenges in embedding AI within systems.

## Exercises

Now would be an excellent time to try some deep learning models:

::: callout-tip
### Deep Learning Basic Models

- **MLP (DNN) -- Regression**
- Boston Housing [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Mjrovai/UNIFEI-IESTI01-TinyML-2022.1/blob/main/00_Curse_Folder/1_Fundamentals/Class_07/TF_Boston_Housing_Regression.ipynb)
- **MLP (DNN) -- Classification**
- MNIST [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Mjrovai/UNIFEI-IESTI01-TinyML-2022.1/blob/main/00_Curse_Folder/1_Fundamentals/Class_09/TF_MNIST_Classification_v2.ipynb)
- Breast Cancer [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Mjrovai/UNIFEI-IESTI01-TinyML-2022.1/blob/main/00_Curse_Folder/1_Fundamentals/Class_13/docs/WDBC_Project/Breast_Cancer_Classification.ipynb)
- **CNN -- Classification**
- Cifar-10 [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Mjrovai/UNIFEI-IESTI01-TinyML-2022.1/blob/main/00_Curse_Folder/1_Fundamentals/Class_11/CNN_Cifar_10.ipynb)
:::
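Before opening the notebooks, it may help to see the smallest possible MLP in action. The sketch below trains a tiny two-hidden-unit... rather, two-layer network on the classic XOR problem with plain NumPy and hand-written backpropagation. It is an illustrative toy with made-up hyperparameters (8 hidden units, learning rate 0.5), unrelated to the code in the linked Colab notebooks.

```python
import numpy as np

rng = np.random.default_rng(0)
# XOR: the classic dataset a single-layer perceptron cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

losses = []
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                 # forward: hidden activations
    p = sigmoid(h @ W2 + b2)                 # forward: output probability
    losses.append(float(np.mean((p - y) ** 2)))
    # Backpropagation of the mean-squared-error loss
    dp = 2 * (p - y) / len(X) * p * (1 - p)  # gradient at output pre-activation
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)            # gradient through tanh
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.5 * G                         # plain gradient-descent step
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The notebooks above do the same thing at larger scale with TensorFlow, which handles the backward pass automatically.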
266 changes: 266 additions & 0 deletions dl_primer.qmd.orig

Large diffs are not rendered by default.

4 changes: 3 additions & 1 deletion dsp_spectral_features_block.qmd
@@ -1,8 +1,10 @@
# DSP - Spectral Features {.unnumbered}

![*DALL·E 3 Prompt: 1950s style cartoon illustration of a Latin male and female scientist in a vibration research room. The man is using a calculus ruler to examine ancient circuitry. The woman is at a computer with complex vibration graphs. The wooden table has boards with sensors, prominently an accelerometer. A classic, rounded-back computer shows the Arduino IDE with code for LED pin assignments and machine learning algorithms for movement detection. The Serial Monitor displays FFT, classification, wavelets, and DSPs. Vintage lamps, tools, and charts with FFT and Wavelets graphs complete the scene.*](images/dsp_ini.jpg){fig-align="center" width="6.5in"}

## Introduction

TinyML projects related to motion (or vibration) involve data from IMUs (usually **accelerometers** and **gyroscopes**). These time-series type datasets should be preprocessed before inputting them into a Machine Learning model training, which is a challenging area for embedded machine learning. Still, Edge Impulse helps overcome this complexity with its digital signal processing (DSP) preprocessing step and, more specifically, the [Spectral Features Block](https://docs.edgeimpulse.com/docs/edge-impulse-studio/processing-blocks/spectral-features) for Inertial sensors.
TinyML projects related to motion (or vibration) involve data from IMUs (usually **accelerometers** and **Gyroscopes**). These time-series type datasets should be preprocessed before inputting them into a Machine Learning model training, which is a challenging area for embedded machine learning. Still, Edge Impulse helps overcome this complexity with its digital signal processing (DSP) preprocessing step and, more specifically, the [Spectral Features Block](https://docs.edgeimpulse.com/docs/edge-impulse-studio/processing-blocks/spectral-features) for Inertial sensors.

But how does it work under the hood? Let's dig into it.
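As a rough preview of what a spectral-features step computes, here is a plain-NumPy sketch that extracts the RMS level and the dominant FFT peak from one accelerometer axis. This is a simplified approximation for intuition only, not Edge Impulse's actual implementation; the 62.5 Hz sampling rate and the synthetic 5 Hz vibration are made-up example values.

```python
import numpy as np

def spectral_features(signal, fs):
    """Toy spectral features for one IMU axis: RMS plus the
    dominant FFT frequency and its magnitude (DC bin excluded)."""
    signal = signal - np.mean(signal)              # remove DC offset
    rms = np.sqrt(np.mean(signal ** 2))            # time-domain energy
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    peak = np.argmax(spectrum[1:]) + 1             # skip the DC bin
    return rms, freqs[peak], spectrum[peak]

# Assumed setup: 62.5 Hz sampling, a 2 s window, a 5 Hz vibration + noise
fs = 62.5
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(42)
axis = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(len(t))

rms, f_peak, mag = spectral_features(axis, fs)
print(f"RMS={rms:.2f}, peak at {f_peak:.1f} Hz")
```

Edge Impulse's block adds low-pass/high-pass filtering and several statistics per axis on top of this basic FFT idea.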

2 changes: 2 additions & 0 deletions image_classification.qmd
@@ -1,5 +1,7 @@
# CV on Nicla Vision {.unnumbered}

![*DALL·E 3 Prompt: Cartoon in a 1950s style featuring a compact electronic device with a camera module placed on a wooden table. The screen displays blue robots on one side and green periquitos on the other. LED lights on the device indicate classifications, while characters in retro clothing observe with interest.*](images/img_class_ini.jpg){fig-align="center" width="6.5in"}

## Introduction

As we initiate our studies into embedded machine learning or tinyML, it's impossible to overlook the transformative impact of Computer Vision (CV) and Artificial Intelligence (AI) in our lives. These two intertwined disciplines redefine what machines can perceive and accomplish, from autonomous vehicles and robotics to healthcare and surveillance.
Binary file added images/dsp_ini.jpg
Binary file added images/img_class_ini.jpg
Binary file added images/kws_under_the_hood_ini.jpg
Binary file added images/melbanks.png
Binary file added images/movement_anomaly_ini.jpg
Binary file added images/nicla-kws.jpg
Binary file added images/nicla_sys_ini.jpg
Binary file added images/obj_det_ini.jpg
2 changes: 2 additions & 0 deletions kws_feature_eng.qmd
@@ -1,5 +1,7 @@
# Audio Feature Engineering {.unnumbered}

![*DALL·E 3 Prompt: 1950s style cartoon scene set in an audio research room. Two scientists, one holding a magnifying glass and the other taking notes, examine large charts pinned to the wall. These charts depict FFT graphs and time curves related to audio data analysis. The room has a retro ambiance, with wooden tables, vintage lamps, and classic audio analysis tools.*](images/kws_under_the_hood_ini.jpg){fig-align="center" width="6.5in"}

## Introduction

In this hands-on tutorial, the emphasis is on the critical role that feature engineering plays in optimizing the performance of machine learning models applied to audio classification tasks, such as speech recognition. It is essential to be aware that the performance of any machine learning model relies heavily on the quality of features used, and we will deal with "under-the-hood" mechanics of feature extraction, mainly focusing on Mel-frequency Cepstral Coefficients (MFCCs), a cornerstone in the field of audio signal processing.
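To make the "under-the-hood" pipeline concrete, the sketch below computes MFCCs for a single frame in NumPy/SciPy: window, power spectrum, triangular mel filterbank, log compression, then a DCT. It is a didactic approximation with assumed parameters (16 kHz audio, 512-point FFT, 40 mel bands, 13 coefficients), not the code used in this tutorial or by any production library.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(signal, fs=16000, n_fft=512, n_mels=40, n_mfcc=13):
    """Single-frame MFCC sketch (no pre-emphasis, no liftering)."""
    frame = signal[:n_fft] * np.hamming(n_fft)        # window one frame
    power = np.abs(np.fft.rfft(frame)) ** 2 / n_fft   # power spectrum
    # Triangular filters spaced evenly on the mel scale, 0 Hz to Nyquist
    mels = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    log_mel = np.log(fbank @ power + 1e-10)           # compress dynamic range
    return dct(log_mel, norm='ortho')[:n_mfcc]        # decorrelate, keep low terms

t = np.arange(0, 1, 1 / 16000)
coeffs = mfcc_frame(np.sin(2 * np.pi * 440 * t))      # a pure 440 Hz tone
print(coeffs.shape)
```

The rest of the tutorial walks through why each of these stages exists and how the parameters trade accuracy against compute on embedded targets.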
4 changes: 2 additions & 2 deletions kws_nicla.qmd
@@ -1,5 +1,7 @@
# Keyword Spotting (KWS) {.unnumbered}

![*DALL·E 3 Prompt: 1950s style cartoon scene set in a vintage audio research room. Two Afro-American female scientists are at the center. One holds a magnifying glass, closely examining ancient circuitry, while the other takes notes. On their wooden table, there are multiple boards with sensors, notably featuring a microphone. Behind these boards, a computer with a large, rounded back displays the Arduino IDE. The IDE showcases code for LED pin assignments and machine learning inference for voice command detection. A distinct window in the IDE, the Serial Monitor, reveals outputs indicating the spoken commands 'yes' and 'no'. The room ambiance is nostalgic with vintage lamps, classic audio analysis tools, and charts depicting FFT graphs and time-domain curves.*](images/nicla-kws.jpg){fig-align="center" width="6.5in"}

## Introduction

Having already explored the Nicla Vision board in the *Image Classification* and *Object Detection* applications, we are now shifting our focus to voice-activated applications with a project on Keyword Spotting (KWS).
@@ -98,8 +100,6 @@ As we learned in the chapter *Setup Nicla Vision*, EIS officially supports the N

- Put the NiclaV in Boot Mode by pressing the reset button twice.

![](https://84771188-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FGEgcCk4PkS5Pa6uBabld%2Fuploads%2Fgit-blob-111b26f413cd411b29594c377868bba901863233%2Fnicla_bootloader.gif?alt=media){fig-align="center" width="6.5in"}

- Upload the binary *arduino-nicla-vision.bin* to your board by running the batch code corresponding to your OS.

Go to your project on EIS, and on the `Data Acquisition tab`, select `WebUSB`. A window will pop up; choose the option that shows that the `Nicla is paired` and press `[Connect]`.
99 changes: 69 additions & 30 deletions learning_resources.qmd
@@ -1,51 +1,90 @@
---
editor:
markdown:
wrap: 72
---

# Resources

Embarking on your TinyML journey has never been easier with the curated resources to pave your path to expertise. There are coding platforms and communities where you can immerse yourself in hands-on TinyML projects, sharing or seeking advice on GitHub and Stack Overflow. Meanwhile, there are gateways to structured learning, featuring courses that provide a comprehensive education in the field.
Embarking on your TinyML journey has never been easier with the curated
resources to pave your path to expertise. There are coding platforms and
communities where you can immerse yourself in hands-on TinyML projects,
sharing or seeking advice on GitHub and Stack Overflow. Meanwhile, there
are gateways to structured learning featuring courses that provide a
comprehensive education in the field.

While this page serves as a solid starting point, stay tuned as we continually expand our resource pool, with the aim to foster a rich learning and collaborative environment for TinyML enthusiasts of all levels.
While this page serves as a solid starting point, stay tuned as we
continually expand our resource pool, with the aim to foster a rich
learning and collaborative environment for TinyML enthusiasts of all
levels.

## Books

Here is a list of recommended books for learning about TinyML or embedded AI:
Here is a list of recommended books for learning about TinyML or
embedded AI:

1. [TinyML: Machine Learning with TensorFlow Lite on Arduino and
Ultra-Low-Power
Microcontrollers](https://www.amazon.com/TinyML-Learning-TensorFlow-Ultra-Low-Power-Microcontrollers/dp/1492052043)
by Pete Warden and Daniel Situnayake

1. [TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers](https://www.amazon.com/TinyML-Learning-TensorFlow-Ultra-Low-Power-Microcontrollers/dp/1492052043) by Pete Warden and Daniel Situnayake

2. [AI at the Edge: Solving Real-World Problems with Embedded Machine Learning](https://www.oreilly.com/library/view/ai-at-the/9781098120191/) by Daniel Situnayake
2. [AI at the Edge: Solving Real-World Problems with Embedded Machine
Learning](https://www.oreilly.com/library/view/ai-at-the/9781098120191/)
by Daniel Situnayake and Jenny Plunkett

3. [TinyML Cookbook: Combine artificial intelligence and ultra-low-power embedded devices to make the world smarter](https://www.amazon.com/TinyML-Cookbook-artificial-intelligence-ultra-low-power/dp/180181497X) by Gian Marco Iodice
3. [TinyML Cookbook: Combine artificial intelligence and
ultra-low-power embedded devices to make the world
smarter](https://www.amazon.com/TinyML-Cookbook-artificial-intelligence-ultra-low-power/dp/180181497X)
by Gian Marco Iodice

4. [Deep Learning on Microcontrollers: Learn how to develop embedded AI applications using TinyML](https://www.amazon.com/Deep-Learning-Microcontrollers-Applications-Embedded/dp/1803234378/) by Ashish Vaswani
4. [Deep Learning on Microcontrollers: Learn how to develop embedded AI
applications using
TinyML](https://www.amazon.com/Deep-Learning-Microcontrollers-Applications-Embedded/dp/1803234378/)
by Ashish Vaswani

5. [Introduction to TinyML](https://www.amazon.com/Introduction-TinyML-Rohit-Sharma/dp/B0B5Q281L9) by Rohit Sharma
5. [Introduction to
TinyML](https://www.amazon.com/Introduction-TinyML-Rohit-Sharma/dp/B0B5Q281L9)
by Rohit Sharma

These books cover a range of topics related to TinyML and embedded AI, including:
These books cover a range of topics related to TinyML and embedded AI,
including:

* The fundamentals of machine learning and TinyML
* How to choose the right hardware and software for your project
* How to train and deploy TinyML models on embedded devices
* Real-world examples of TinyML applications
- The fundamentals of machine learning and TinyML
- How to choose the right hardware and software for your project
- How to train and deploy TinyML models on embedded devices
- Real-world examples of TinyML applications

In addition to the above books, there are a number of other resources available for learning about TinyML and embedded AI, including online courses, tutorials, and blog posts. Some of these are listed below. Another great way to learn is join the [community](./community.qmd) of embedded AI developers.
In addition to the above books, there are a number of other resources
available for learning about TinyML and embedded AI, including online
courses, tutorials, and blog posts. Some of these are listed below.
Another great way to learn is by joining the
[community](./community.qmd) of embedded AI developers.

## Tutorials

## Frameworks

1. **GitHub**
Description: There are various GitHub repositories dedicated to TinyML where you can contribute or learn from existing projects. Some popular organizations/repos to check out are:
- TensorFlow Lite Micro: [GitHub Repository](https://github.com/tensorflow/tflite-micro)
- TinyML4D: [GitHub Repository](https://github.com/tinyML4D/tinyML4D)

2. **Stack Overflow**
Tags: [tinyml](https://stackoverflow.com/questions/tagged/tinyml)
Description: Use the "tinyml" tag on Stack Overflow to ask technical questions and find answers from the community.
1. **GitHub** Description: There are various GitHub repositories
dedicated to TinyML where you can contribute or learn from existing
projects. Some popular organizations/repos to check out are:
- TensorFlow Lite Micro: [GitHub
Repository](https://github.com/tensorflow/tflite-micro)
- TinyML4D: [GitHub
Repository](https://tinyml.seas.harvard.edu/4D/)
- Edge Impulse Expert Network:
[Repository](https://docs.edgeimpulse.com/experts/)
2. **Stack Overflow** Tags:
[tinyml](https://stackoverflow.com/questions/tagged/tinyml)
Description: Use the "tinyml" tag on Stack Overflow to ask technical
questions and find answers from the community.

## Courses and Learning Platforms

1. **Coursera**
Course: [Introduction to Embedded Machine Learning](https://www.coursera.org/learn/introduction-to-embedded-machine-learning)
Description: A dedicated course on Coursera to learn the basics and advances of TinyML.

2. **EdX**
Course: [Intro to TinyML](https://www.edx.org/professional-certificate/harvardx-tiny-machine-learning)
Description: Learn about TinyML with this HarvardX course.
1. **Coursera** Course: [Introduction to Embedded Machine
Learning](https://www.coursera.org/learn/introduction-to-embedded-machine-learning)
Description: A dedicated course on Coursera to learn the basics and
advances of TinyML.

2. **EdX** Course: [Intro to
TinyML](https://www.edx.org/professional-certificate/harvardx-tiny-machine-learning)
Description: Learn about TinyML with this HarvardX course.
4 changes: 3 additions & 1 deletion niclav_sys.qmd
@@ -1,5 +1,7 @@
# Setup Nicla Vision {.unnumbered}

![*DALL·E 3 Prompt: Illustration reminiscent of a 1950s cartoon where the Arduino NICLA VISION board, equipped with a variety of sensors including a camera, is the focal point on an old-fashioned desk. In the background, a computer screen with rounded edges displays the Arduino IDE. The code seen is related to LED configurations and machine learning voice command detection. Outputs on the Serial Monitor explicitly display the words 'yes' and 'no'.*](images/nicla_sys_ini.jpg){fig-align="center" width="6.5in"}

## Introduction

The [Arduino Nicla Vision](https://docs.arduino.cc/hardware/nicla-vision) (sometimes called *NiclaV*) is a development board that includes two processors that can run tasks in parallel. It is part of a family of development boards with the same form factor but designed for specific tasks, such as the [Nicla Sense ME](https://www.bosch-sensortec.com/software-tools/tools/arduino-nicla-sense-me/) and the [Nicla Voice](https://store-usa.arduino.cc/products/nicla-voice?_gl=1*l3abc6*_ga*MTQ3NzE4Mjk4Mi4xNjQwMDIwOTk5*_ga_NEXN8H46L5*MTY5NjM0Mzk1My4xMDIuMS4xNjk2MzQ0MjQ1LjAuMC4w). The *Niclas* can efficiently run processes created with TensorFlow™ Lite. For example, one of the cores of the NiclaV runs a computer vision algorithm on the fly (inference), while the other executes low-level operations like controlling a motor and communicating or acting as a user interface. The onboard wireless module allows the management of WiFi and Bluetooth Low Energy (BLE) connectivity simultaneously.
@@ -104,7 +106,7 @@ Any messages sent through a serial connection (using print() or error messages)

OpenMV IDE is the premier integrated development environment with OpenMV Cameras and the Arduino Pro boards. It features a powerful text editor, debug terminal, and frame buffer viewer with a histogram display. We will use MicroPython to program the Nicla Vision.

> Before connecting the Nicla to the OpenMV IDE, ensure you have the latest bootloader version. Go to your Arduino IDE, select the Nicla board, and open the sketch on `Examples > STM_32H747_System STM_32H747_updateBootloader`. Upload the code to your board. The Serial Monitor will guide you.
> Before connecting the Nicla to the OpenMV IDE, ensure you have the latest bootloader version. Go to your Arduino IDE, select the Nicla board, and open the sketch on `Examples > STM_32H747_System STM32H747_manageBootloader`. Upload the code to your board. The Serial Monitor will guide you.
After updating the bootloader, put the Nicla Vision in bootloader mode by double-pressing the reset button on the board. The built-in green LED will start fading in and out. Now return to the OpenMV IDE and click on the connect icon (Left ToolBar):

2 changes: 2 additions & 0 deletions object_detection_fomo.qmd
@@ -1,5 +1,7 @@
# Object Detection {.unnumbered}

![*DALL·E 3 Prompt: Cartoon in the style of the 1940s or 1950s showcasing a spacious industrial warehouse interior. A conveyor belt is prominently featured, carrying a mixture of toy wheels and boxes. The wheels are distinguishable with their bright yellow centers and black tires. The boxes are white cubes painted with alternating black and white patterns. At the end of the moving conveyor stands a retro-styled robot, equipped with tools and sensors, diligently classifying and counting the arriving wheels and boxes. The overall aesthetic is reminiscent of mid-century animation with bold lines and a classic color palette.*](images/obj_det_ini.jpg){fig-align="center" width="6.5in"}

## Introduction

This is a continuation of **CV on Nicla Vision**, now exploring **Object Detection** on microcontrollers.
