463 2 notes and 2 warnings #464

Merged (6 commits, Sep 20, 2024)
4 changes: 2 additions & 2 deletions contents/dl_primer/dl_primer.qmd
@@ -251,10 +251,10 @@ Deep learning extends traditional machine learning by utilizing neural networks
| Maintenance | Easier (simple to update and maintain) | Complex (requires more effort in maintenance and updates) |
+-------------------------------+-----------------------------------------------------------+--------------------------------------------------------------+

![Comparing Machine Learning and Deep Learning. Source: [Medium](https://aoyilmaz.medium.com/understanding-the-differences-between-deep-learning-and-machine-learning-eb41d64f1732)](images/png/mlvsdl.png){#fig-ml-dl}

: Comparison of traditional machine learning and deep learning. {#tbl-mlvsdl .striped .hover}

![Comparing Machine Learning and Deep Learning. Source: [Medium](https://aoyilmaz.medium.com/understanding-the-differences-between-deep-learning-and-machine-learning-eb41d64f1732)](images/png/mlvsdl.png){#fig-ml-dl}

### Choosing Traditional ML vs. DL

#### Data Availability and Volume
@@ -762,13 +762,13 @@ if __name__ == '__main__':
```
</div>

4. Run this script:
3. Run this script:

```bash
python3 get_img_data.py
```

3. Access the web interface:
4. Access the web interface:

- On the Raspberry Pi itself (if you have a GUI): Open a web browser and go to `http://localhost:5000`
- From another device on the same network: Open a web browser and go to `http://<raspberry_pi_ip>:5000` (Replace `<raspberry_pi_ip>` with your Raspberry Pi's IP address). For example: `http://192.168.4.210:5000/`
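
A capture page like this is typically just a small Flask app bound to all network interfaces on port 5000, which is why it is reachable both at `localhost:5000` and at the Pi's IP address. The sketch below is illustrative only (it is not the lab's `get_img_data.py`) and assumes the `picamera2` package is available:

```python
# Minimal, illustrative capture server (not the script from the lab).
from flask import Flask
from picamera2 import Picamera2

app = Flask(__name__)
picam2 = Picamera2()
picam2.start()

@app.route("/")
def index():
    return "<a href='/capture'>Capture image</a>"

@app.route("/capture")
def capture():
    picam2.capture_file("image.jpg")  # save the current frame to disk
    return "Image saved"

if __name__ == "__main__":
    # host='0.0.0.0' is what makes the page reachable from other devices
    # on the network at http://<raspberry_pi_ip>:5000
    app.run(host="0.0.0.0", port=5000)
```
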
@@ -943,7 +943,7 @@ The final dense layer of our model will have 0 neurons with a 10% dropout for ov

![](images/png/result-train.png)

The result is excellent, with a reasonable 35ms of latency (for a Rasp-4), which should result in around 30 fps (frames per second) during inference. A Raspi-Zero should be slower, and the Rasp-5, faster.
The result is excellent, with a reasonable 35ms of latency (for a Raspi-4), which should result in around 30 fps (frames per second) during inference. A Raspi-Zero should be slower, and the Raspi-5, faster.
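
> As a quick sanity check on that figure: 1 s ÷ 0.035 s ≈ 28.6 inferences per second, which is where the "around 30 fps" estimate comes from.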

### Trading off: Accuracy versus speed

@@ -1048,7 +1048,7 @@ input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
```

One important difference to note is that the `dtype` of the input details of the model is now `int8`, which means that the input values go from -128 to +127, while each pixel of our image goes from 0 to 256. This means that we should pre-process the image to match it. We can check here:
One important difference to note is that the `dtype` of the input details of the model is now `int8`, which means that the input values go from -128 to +127, while each pixel of our image goes from 0 to 255. This means that we should pre-process the image to match it. We can check here:

```python
input_dtype = input_details[0]['dtype']
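# Illustrative continuation (not from the original notebook): one common way to
# map uint8 pixels in [0, 255] into the int8 range the quantized model expects,
# using the input tensor's quantization parameters. `img_resized` (an HxWx3
# uint8 array at the model's input size) and NumPy imported as `np` are assumed.
scale, zero_point = input_details[0]['quantization']
img = img_resized.astype(np.float32) / 255.0  # normalize to [0, 1]
img_q = np.clip(np.round(img / scale + zero_point), -128, 127).astype(np.int8)
interpreter.set_tensor(input_details[0]['index'], np.expand_dims(img_q, axis=0))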
@@ -1209,11 +1209,11 @@ And test it with different images and the int8 quantized model (**160x160 alpha

![](images/png/infer-int8-160.png)

Let's download a smaller model, such as the one trained for the [Nicla Vision Lab](https://studio.edgeimpulse.com/public/353482/live) (int8 quantized model (96x96 alpha = 0.1), as a test. We can use the same function:
Let's download a smaller model, such as the one trained for the [Nicla Vision Lab](https://studio.edgeimpulse.com/public/353482/live) (int8 quantized model, 96x96, alpha = 0.1), as a test. We can use the same function:

![](images/png/infer-int8-96.png)

The model lost some accuracy, but it is still OK once our model does not look for many details. Regarding latency, we are around **ten times faster** on the Rasp-Zero.
The model lost some accuracy, but it is still OK since our model does not need to capture fine details. Regarding latency, we are around **ten times faster** on the Raspi-Zero.

## Live Image Classification

@@ -1505,7 +1505,7 @@ The code creates a web application for real-time image classification using a Ra

## Conclusion:

Image classification has emerged as a powerful and versatile application of machine learning, with significant implications for various fields, from healthcare to environmental monitoring. This chapter has demonstrated how to implement a robust image classification system on edge devices like the Raspi-Zero and Rasp-5, showcasing the potential for real-time, on-device intelligence.
Image classification has emerged as a powerful and versatile application of machine learning, with significant implications for various fields, from healthcare to environmental monitoring. This chapter has demonstrated how to implement a robust image classification system on edge devices like the Raspi-Zero and Raspi-5, showcasing the potential for real-time, on-device intelligence.

We've explored the entire pipeline of an image classification project, from data collection and model training using Edge Impulse Studio to deploying and running inferences on a Raspi. The process highlighted several key points:

2 changes: 1 addition & 1 deletion contents/labs/raspi/raspi.qmd
@@ -6,7 +6,7 @@ These labs offer invaluable hands-on experience with machine learning systems, l

## Pre-requisites

- **Raspberry Pi **: Ensure you have at least one of the boards: the Raspberry Pi Zero 2W, Raspberry Pi 4 or 5.
- **Raspberry Pi**: Ensure you have at least one of the boards: the Raspberry Pi Zero 2W, Raspberry Pi 4 or 5.
- **Power Adapter**: To power the boards.
- Raspberry Pi Zero 2-W: 2.5W with a Micro-USB adapter
- Raspberry Pi 4 or 5: 3.5W with a USB-C adapter
12 changes: 6 additions & 6 deletions contents/labs/raspi/setup/setup.qmd
@@ -100,7 +100,7 @@ The Raspberry Pi runs a specialized version of Linux designed for embedded syste

> The latest version of Raspberry Pi OS is based on [Debian Bookworm](https://www.raspberrypi.com/news/bookworm-the-new-version-of-raspberry-pi-os/).

**Key feature**s:
**Key features**:

1. Lightweight: Tailored to run efficiently on the Pi's hardware.
2. Versatile: Supports a wide range of applications and programming languages.
@@ -112,7 +112,7 @@ Embedded Linux on the Raspberry Pi provides a full-featured operating system in

### Installation

To use the Raspberry Pi, we will need an operating system. By default, Raspberry Pis checks for an operating system on any SD card inserted in the slot, so we should install an operating system using [Raspberry Pi Imager.](https://www.raspberrypi.com/software/)
To use the Raspberry Pi, we will need an operating system. By default, Raspberry Pi checks for an operating system on any SD card inserted in the slot, so we should install an operating system using [Raspberry Pi Imager.](https://www.raspberrypi.com/software/)

*Raspberry Pi Imager* is a tool for downloading and writing images on *macOS*, *Windows*, and *Linux*. It includes many popular operating system images for Raspberry Pi. We will also use the Imager to preconfigure credentials and remote access settings.

@@ -326,7 +326,7 @@ When your device is rebooted (you should enter with the SSH again), you will rea

The Raspi is an excellent device for computer vision applications, but a camera is needed. We can install a standard USB webcam on the micro-USB port using a USB OTG adapter (Raspi-Zero and Raspi-5) or a camera module connected to the Raspi CSI (Camera Serial Interface) port.

> USB Webcams generally have inferior quality to the camera modules that connect to the CSI port. They can also not be controlled using the `raspistill` and `rasivid` commands in the terminal or the `picamera` recording package in Python. Nevertheless, there may be reasons why you want to connect a USB camera to your Raspberry Pi, such as because of the benefit that it is much easier to set up multiple cameras with a single Raspberry Pi, long cables, or simply because you have such a camera on hand.
> USB Webcams generally have inferior quality to the camera modules that connect to the CSI port. They can also not be controlled using the `raspistill` and `raspivid` commands in the terminal or the `picamera` recording package in Python. Nevertheless, there may be reasons why you want to connect a USB camera to your Raspberry Pi, such as because of the benefit that it is much easier to set up multiple cameras with a single Raspberry Pi, long cables, or simply because you have such a camera on hand.

### Installing a USB WebCam

@@ -435,7 +435,7 @@ http://<raspberry_pi_ip_address>:8080/?action=stream

### Installing a Camera Module on the CSI port

There are now several Raspberry Pi camera modules. The original 5-megapixel model was [released ](https://www.raspberrypi.com/news/camera-board-available-for-sale/)in 2013, followed by an [8-megapixel Camera Module 2,](https://www.raspberrypi.com/products/camera-module-v2/) released in 2016. The latest camera model is the [12-megapixel Camera Module 3,](https://www.raspberrypi.com/documentation/accessories/camera.html#:~:text=the 12-megapixel-,Camera Module 3,-which was released) released in 2023.
There are now several Raspberry Pi camera modules. The original 5-megapixel model was [released](https://www.raspberrypi.com/news/camera-board-available-for-sale/) in 2013, followed by an [8-megapixel Camera Module 2](https://www.raspberrypi.com/products/camera-module-v2/) that was later released in 2016. The latest camera model is the [12-megapixel Camera Module 3](https://www.raspberrypi.com/documentation/accessories/camera.html), released in 2023.

The original 5MP camera (**Arducam OV5647**) is no longer available from Raspberry Pi but can be found from several alternative suppliers. Below is an example of such a camera on a Raspi-Zero.

@@ -445,7 +445,7 @@ Here is another example of a v2 Camera Module, which has a **Sony IMX219** 8-meg

![](images/png/raspi-5-cam.png)

Any camera module will work on the Raspis, but for that, the `onfiguration.txt` file must be updated:
Any camera module will work on the Raspberry Pis, but for that, the `configuration.txt` file must be updated:

```bash
sudo nano /boot/firmware/config.txt
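# (Illustrative note, not part of the original snippet.) Inside config.txt, the
# usual change is to disable camera auto-detection and name the sensor overlay
# explicitly, for example:
#   camera_auto_detect=0
#   dtoverlay=ov5647    # 5 MP module; use imx219 for the v2 camera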
@@ -479,7 +479,7 @@ libcamera-hello --list-cameras

![](images/png/list_cams_raspi-5.png)

> [libcamera ](https://www.raspberrypi.com/documentation/computers/camera_software.html#libcamera)is an open-source software library that supports camera systems directly from the Linux operating system on Arm processors. It minimizes proprietary code running on the Broadcom GPU.
> [libcamera](https://www.raspberrypi.com/documentation/computers/camera_software.html#libcamera) is an open-source software library that supports camera systems directly from the Linux operating system on Arm processors. It minimizes proprietary code running on the Broadcom GPU.

Let's capture a jpeg image with a resolution of 640 x 480 for testing and save it to a file named `test_cli_camera.jpg`
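
> With the libcamera command-line apps, such a capture might look like `libcamera-jpeg -o test_cli_camera.jpg --width 640 --height 480`; on newer Raspberry Pi OS releases the same tool is named `rpicam-jpeg`. The command shown here is an illustration, so check `--help` for the exact flags on your image.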

4 changes: 2 additions & 2 deletions contents/labs/seeed/xiao_esp32s3/kws/kws.qmd
@@ -168,7 +168,7 @@ Turn the PSRAM function of the ESP-32 chip on (Arduino IDE): Tools>PSRAM: "OPI P

![](https://hackster.imgix.net/uploads/attachments/1594639/image_Zo8usTd0A2.png?auto=compress%2Cformat&w=740&h=555&fit=max)

- Download the sketch [Wav_Record_dataset](https://github.com/Mjrovai/XIAO-ESP32S3-Sense/tree/main/Wav_Record_dataset),[](https://github.com/Mjrovai/XIAO-ESP32S3-Sense/tree/main/Wav_Record_dataset)which you can find on the project's GitHub.
- Download the sketch [Wav_Record_dataset](https://github.com/Mjrovai/XIAO-ESP32S3-Sense/tree/main/Wav_Record_dataset),[](https://github.com/Mjrovai/XIAO-ESP32S3-Sense/tree/main/Wav_Record_dataset) which you can find on the project's GitHub.

This code records audio using the I2S interface of the Seeed XIAO ESP32S3 Sense board, saves the recording as a .wav file on an SD card, and allows for control of the recording process through commands sent from the serial monitor. The name of the audio file is customizable (it should be one of the class labels to be used in training), and multiple recordings can be made, each saved in a new file. The code also includes functionality to increase the volume of the recordings.

@@ -466,7 +466,7 @@ Testing the model with the data put apart before training (Test Data), we got an

![](https://hackster.imgix.net/uploads/attachments/1595225/pasted_graphic_58_TmPGA8iljK.png?auto=compress%2Cformat&w=740&h=555&fit=max)

Inspecting the F1 score, we can see that for YES. We got 0.95, an excellent result once we used this keyword to "trigger" our postprocessing stage (turn on the built-in LED). Even for NO, we got 0.90. The worst result is for unknown, what is OK.
Inspecting the F1 score, we can see that for YES, we got 0.95, an excellent result, since we use this keyword to "trigger" our postprocessing stage (turn on the built-in LED). Even for NO, we got 0.90. The worst result is for unknown, which is OK.

We can proceed with the project, but it is possible to perform Live Classification using a smartphone before deployment on our device. Go to the Live Classification section and click on Connect a Development board:

@@ -61,7 +61,7 @@ Download the sketch [MPU6050_Acc_Data_Acquisition.in](https://github.com/Mjrovai
#define INTERVAL_MS (1000 / (FREQUENCY_HZ + 1))
#define ACC_RANGE 1 // 0: -/+2G; 1: +/-4G

// convert factor g to m/s2 ==> [-32768, +32767] ==> [-2g, +2g]
// convert factor g to m/s^2^ ==> [-32768, +32767] ==> [-2g, +2g]
#define CONVERT_G_TO_MS2 (9.81/(16384.0/(1.+ACC_RANGE)))

static unsigned long last_interval_ms = 0;
@@ -114,16 +114,16 @@ void loop() {
// read raw accel/gyro measurements from device
imu.getAcceleration(&ax, &ay, &az);

// converting to m/s2
// converting to m/s^2^
float ax_m_s2 = ax * CONVERT_G_TO_MS2;
float ay_m_s2 = ay * CONVERT_G_TO_MS2;
float az_m_s2 = az * CONVERT_G_TO_MS2;

Serial.print(ax_m_s2);
Serial.print("\t");
Serial.print(ay_m_s2);
Serial.print("\t");
Serial.println(az_m_s2);
}
}
```
@@ -132,15 +132,15 @@ void loop() {

Note that the values generated by the accelerometer and gyroscope have a range of [-32768, +32767]; for example, if the default accelerometer range is used, the range in Gs is [-2g, +2g], so "1G" corresponds to 16384.

For conversion to m/s2, for example, you can define the following:
For conversion to m/s^2^, for example, you can define the following:

```
#define CONVERT_G_TO_MS2 (9.81/16384.0)
```

In the code, I left an option (ACC_RANGE) to be set to 0 (+/-2G) or 1 (+/-4G). We will use +/-4G, which should be enough in this case.
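
> As a sanity check: with `ACC_RANGE` set to 1, `CONVERT_G_TO_MS2` evaluates to 9.81 / (16384 / 2) = 9.81 / 8192 ≈ 0.0012 m/s^2^ per LSB, so a raw reading of 8192 corresponds to 1 g ≈ 9.81 m/s^2^.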

We will capture the accelerometer data on a frequency of 50Hz, and the acceleration data will be sent to the Serial Port as meters per squared second (m/s2).
We will capture the accelerometer data at a frequency of 50 Hz, and the acceleration data will be sent to the Serial Port as meters per second squared (m/s^2^).

When you run the code with the IMU resting on your table, the accelerometer data shown on the Serial Monitor should be around 0.00, 0.00, and 9.81. If the values differ significantly, you should calibrate the IMU.

@@ -152,7 +152,7 @@ Run the code. The following will be displayed on the Serial Monitor:

Send any character (in the above example, "x"), and the calibration should start.

> Note that A message MPU6050 connection failed. Ignore this message. For some reason, imu.testConnection() is not returning a correct result.
> Note that the message MPU6050 connection failed may appear. Ignore it; for some reason, imu.testConnection() is not returning a correct result.

In the end, you will receive the offset values to be used on all your sketches:

@@ -205,7 +205,7 @@ For data collection, we should first connect our device to the Edge Impulse Stud

> Follow the instructions [here](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-installation) to install [Node.js](https://nodejs.org/en/) and the Edge Impulse CLI on your computer.

Once the XIAO ESP32S3 is not a fully supported development board by Edge Impulse, we should, for example, use the [CLI Data Forwarder t](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-data-forwarder)o capture data from our sensor and send it to the Studio, as shown in this diagram:
Since the XIAO ESP32S3 is not a fully supported Edge Impulse development board, we should, for example, use the [CLI Data Forwarder](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-data-forwarder) to capture data from our sensor and send it to the Studio, as shown in this diagram:
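
> In practice, this means running the `edge-impulse-data-forwarder` command on the computer the XIAO is connected to and pointing it at your Studio project; the exact prompts and options may vary with the CLI version.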

![](https://hackster.imgix.net/uploads/attachments/1590537/image_PHK0GELEYh.png?auto=compress%2Cformat&w=740&h=555&fit=max)

@@ -247,7 +247,7 @@ Now imagine your container is on a boat, facing an angry ocean, on a truck, etc.
- **Terrestrial** (pallets in a truck or train)
- Move the XIAO over a horizontal line.

- **Lift** (Palettes being handled by
- **Lift** (pallets being handled by a forklift)
- Move the XIAO over a vertical line.

- **Idle** (pallets in storage houses)
@@ -361,7 +361,7 @@ You should also use your device (which is still connected to the Studio) and per

## Deploy

Now it is time for magic˜! The Studio will package all the needed libraries, preprocessing functions, and trained models, downloading them to your computer. You should select the option Arduino Library, and at the bottom, choose Quantized (Int8) and Build. A Zip file will be created and downloaded to your computer.
Now it is time for magic! The Studio will package all the needed libraries, preprocessing functions, and trained models, downloading them to your computer. You should select the option Arduino Library, and at the bottom, choose Quantized (Int8) and Build. A Zip file will be created and downloaded to your computer.
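
> The downloaded Zip file is then added to the Arduino IDE in the usual way (Sketch > Include Library > Add .ZIP Library); the exact menu wording can vary between IDE versions.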

![](https://hackster.imgix.net/uploads/attachments/1590716/image_d5jrYgBErG.png?auto=compress%2Cformat&w=740&h=555&fit=max)

@@ -443,7 +443,7 @@ buffer[ix + 1] = ay;
buffer[ix + 2] = az;
```

You should change the order of the following two blocks of code. First, you make the conversion to raw data to "Meters per squared second (ms2)", followed by the test regarding the maximum acceptance range (that here is in ms2, but on Arduino, was in Gs):
You should change the order of the following two blocks of code: first convert the raw data to meters per second squared (m/s^2^), and only then apply the test against the maximum accepted range (which here is in m/s^2^, but in the Arduino sketch was in Gs):

```
buffer[ix + 0] *= CONVERT_G_TO_MS2;
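// (Illustrative sketch, not from the collapsed file: the rest of the conversion
//  block and the range-check block that should now follow it, roughly as in the
//  standard Edge Impulse accelerometer example. Names such as MAX_ACCEPTED_RANGE
//  and ei_get_sign are assumptions.)
// buffer[ix + 1] *= CONVERT_G_TO_MS2;
// buffer[ix + 2] *= CONVERT_G_TO_MS2;
//
// for (int i = 0; i < 3; i++) {
//     if (fabs(buffer[ix + i]) > MAX_ACCEPTED_RANGE) {
//         buffer[ix + i] = ei_get_sign(buffer[ix + i]) * MAX_ACCEPTED_RANGE;
//     }
// }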