This example uses TensorFlow Lite with Python on a Raspberry Pi to perform real-time image classification using images streamed from the camera.
At the end of this page, there are extra steps to run the example with the Coral USB Accelerator for faster inference.
Before you begin, you need to set up your Raspberry Pi with Raspberry Pi OS (preferably updated to Buster).
If you use the Pi Camera, you also need to connect and configure it; the code also works with a USB camera connected to the Raspberry Pi.
To see the results from the camera, you need a monitor connected to the Raspberry Pi. It's fine to access the Pi shell over SSH (you don't need a keyboard connected to the Pi); the monitor is only needed to see the camera stream.
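Before moving on, you can verify that the camera works with a short smoke test. This is a hypothetical check, not part of the example code; it assumes OpenCV is installed and the camera is exposed as video device 0:

```python
# Hypothetical smoke test: confirm that frames can be read from the
# camera (Pi Camera via the V4L2 driver, or a USB camera) as device 0.
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Camera not found -- check the connection and drivers.")

ret, frame = cap.read()
if ret:
    print("Got a frame of shape:", frame.shape)  # e.g. (480, 640, 3)
cap.release()
```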
In this project, all you need from the TensorFlow Lite API is the `Interpreter` class. So instead of installing the large `tensorflow` package, we're using the much smaller `tflite_runtime` package.
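As a quick illustration, here is a minimal sketch of loading and running a model with the `Interpreter` class from `tflite_runtime`. The model filename is an assumption for illustration; any image classification `.tflite` file works the same way:

```python
# Minimal sketch: the Interpreter class from the lightweight
# tflite_runtime package. (With the full TensorFlow package, the
# equivalent class is tf.lite.Interpreter.)
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="efficientnet_lite0.tflite")  # assumed filename
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Run one inference on a dummy input just to exercise the pipeline.
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
print("Output shape:", interpreter.get_tensor(output_details["index"]).shape)
```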
To install this on your Raspberry Pi, follow the instructions in the Python quickstart. Alternatively, the setup.sh script used in the steps below installs the TFLite runtime for you.
First, clone this Git repo onto your Raspberry Pi:

```sh
git clone https://github.com/ywanglab/tflite-pi.git
```
Then use our script to install a couple of Python packages and download the TFLite models:

```sh
cd tflite-pi
# Run this script to install the required dependencies and download the TFLite models.
sh setup.sh
```
Run the example:

```sh
python3 classify.py
```
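For a sense of what happens inside such a script, here is a simplified, hypothetical sketch of a real-time classification loop; the actual classify.py may be structured differently (for example, it may read label names from the model's metadata, which is omitted here):

```python
# Simplified, hypothetical sketch of a real-time classification loop:
# read camera frames, run the TFLite model, overlay the top result.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="efficientnet_lite0.tflite")  # assumed filename
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
_, height, width, _ = input_details["shape"]

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # Convert BGR -> RGB and resize to the model's expected input size.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (width, height))
    input_data = np.expand_dims(resized, axis=0).astype(input_details["dtype"])

    interpreter.set_tensor(input_details["index"], input_data)
    interpreter.invoke()
    scores = np.squeeze(interpreter.get_tensor(output_details["index"]))

    # Show the index of the top class; mapping indices to label strings
    # requires the model's label file, which is omitted in this sketch.
    top = int(np.argmax(scores))
    cv2.putText(frame, f"class {top}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("classification", frame)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```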
- You can optionally specify the `model` parameter to set the TensorFlow Lite model to be used:
    - Default value: `efficientnet_lite0.tflite`
    - TensorFlow Lite image classification models with metadata (including models from TensorFlow Hub or models trained with TensorFlow Lite Model Maker) are supported.
- You can optionally specify the `maxResults` parameter to limit the list of classification results:
    - Supported value: a positive integer.
    - Default value: `3`
- Example usage:

```sh
python3 classify.py \
  --model efficientnet_lite0.tflite \
  --maxResults 5
```
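If you are curious how these flags are typically wired up, the following is a hypothetical sketch using argparse; the actual argument handling in classify.py may differ:

```python
# Hypothetical sketch of how classify.py's command-line flags could be
# parsed; the real script's argument handling may differ.
import argparse

parser = argparse.ArgumentParser(description="TFLite image classification demo")
parser.add_argument("--model", default="efficientnet_lite0.tflite",
                    help="Path of the TFLite classification model.")
parser.add_argument("--maxResults", type=int, default=3,
                    help="Maximum number of classification results to show.")
parser.add_argument("--enableEdgeTPU", action="store_true",
                    help="Run the model on an attached Coral Edge TPU.")
args = parser.parse_args()
print(args.model, args.maxResults, args.enableEdgeTPU)
```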
If you want to significantly speed up the inference time, you can attach an ML accelerator such as the Coral USB Accelerator—a USB accessory that adds the Edge TPU ML accelerator to any Linux-based system.
If you have a Coral USB Accelerator, you can run the sample with it enabled:
- First, be sure you have completed the USB Accelerator setup instructions.
- Then run the image classification script using the Edge TPU TFLite model with the Edge TPU option enabled:

```sh
python3 classify.py \
  --model efficientnet_lite0_edgetpu.tflite \
  --enableEdgeTPU
```
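Under the hood, enabling the Edge TPU generally means attaching the Edge TPU delegate to the interpreter. Here is a minimal sketch using `tflite_runtime`, assuming the libedgetpu runtime from the USB Accelerator setup is installed and the accelerator is attached:

```python
# Minimal sketch of running an Edge TPU-compiled TFLite model via the
# Edge TPU delegate; assumes the libedgetpu runtime is installed.
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="efficientnet_lite0_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
# From here, set_tensor/invoke/get_tensor work exactly as on the CPU.
```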
You should see significantly faster inference speeds.
For more information about creating and running TensorFlow Lite models with Coral devices, read TensorFlow models on the Edge TPU.
For more information about executing inferences with TensorFlow Lite, read TensorFlow Lite inference.