Image Classification on Raspberry Pi Zero with TensorFlow Lite

In this project I will show you how to train an image classification model with TensorFlow and deploy it on a Raspberry Pi Zero. The model can count how many fingers you show to the camera. You can use this as a base for further projects, for example to adjust the volume of your speakers or the lighting in a room based on inputs from 0 to 5.

The TensorFlow model is trained on a different, more powerful machine, then saved as a TensorFlow Lite FlatBuffer file and finally deployed on a Raspberry Pi Zero.

The high-level steps are:

  1. Train the image recognition model
  2. Save the model as a .tflite file
  3. Install TensorFlow on the Raspberry Pi Zero
  4. Capture images with picamera and process them with numpy
  5. Score images with the TensorFlow Lite interpreter

You can see how the model is trained in my notebook on Kaggle:

https://www.kaggle.com/iomili/fingers-dataset-cnn

I use the “Fingers” dataset, found here:

https://www.kaggle.com/koryakinp/fingers

I train a model with Keras and TensorFlow that counts fingers, based on the Fingers dataset. I use a relatively shallow network with two convolutional layers and a pooling layer. If you use a Kaggle kernel, you get free access to NVIDIA GPUs, which can significantly speed up training for such image recognition models.
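
For reference, a minimal sketch of such a network in Keras might look like this (the filter counts and dense layer size are my own illustrative assumptions; the exact architecture is in the Kaggle notebook):

import tensorflow as tf
from tensorflow.keras import layers, models

# Shallow CNN for 128x128 greyscale finger images; filter counts and
# dense size are illustrative assumptions, not the exact notebook values.
# Six output classes, one per finger count from 0 to 5.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 1)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(6, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])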

Once you are happy with the model, you can save it as a FlatBuffer file and copy that file to the Raspberry Pi.
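
A minimal conversion sketch, assuming the trained Keras model is held in a variable named model (the output file name is my own choice):

import tensorflow as tf

# Convert the trained Keras model to a TensorFlow Lite FlatBuffer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the FlatBuffer to disk; copy this file to the Raspberry Pi
with open('fingers_model.tflite', 'wb') as f:
    f.write(tflite_model)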

Now we are ready to run inference on the Pi. To use the TensorFlow Lite interpreter on the Raspberry Pi Zero, we first need to install TensorFlow:

pip3 install tensorflow 

To load and use the FlatBuffer file, I follow the guide here:

https://www.tensorflow.org/lite/guide/inference
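
Following that guide, loading the model and preparing it for inference looks roughly like this (the file name matches the one used in the conversion sketch above):

import tensorflow as tf

# Load the FlatBuffer and allocate memory for the model's tensors
interpreter = tf.lite.Interpreter(model_path='fingers_model.tflite')
interpreter.allocate_tensors()

# Input/output details give us the tensor indices and expected shapes
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()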

Then capture and process the image with picamera and numpy. Depending on the angle at which you take the picture, you may need to adjust the rotation so it matches the images in the training dataset. You also need a dark grey background for the images you take with the camera. Convert the image to greyscale, then reshape and scale it to match the input of the original Keras model. This is the snippet I use:

import time

import numpy as np
import picamera

camera = picamera.PiCamera()
camera.resolution = (128, 128)
camera.framerate = 24
time.sleep(.5)  # give the camera time to adjust

# Capture one RGB frame into a numpy array
output = np.empty((128, 128, 3), dtype=np.uint8)
camera.capture(output, 'rgb')
camera.capture('original_image.jpg')  # save a JPEG copy for reference

# Change image to greyscale to match training data
output_grey = np.dot(output, [0.299, 0.587, 0.114])
# Rotate if necessary to match training data
output_grey = np.rot90(output_grey, 2)
# Reshape and scale to match the input of the original Keras model
output_grey_rs = output_grey.reshape((-1, 128, 128, 1)) / 255.0
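
With the model loaded and the image preprocessed, scoring is a matter of setting the input tensor, invoking the interpreter and reading the output. A minimal sketch, reusing interpreter, input_details and output_details from the loading step above (TensorFlow Lite expects float32 input):

# Feed the preprocessed image to the interpreter
interpreter.set_tensor(input_details[0]['index'],
                       output_grey_rs.astype(np.float32))
interpreter.invoke()

# The output is one probability per class (0 to 5 fingers)
predictions = interpreter.get_tensor(output_details[0]['index'])
finger_count = np.argmax(predictions)
print('Fingers detected:', finger_count)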

You can find the full code for deploying the model here:

https://github.com/ionutpi/picamera-tflite.git
