Face recognition on the Orange Pi with OpenCV and Python

Install OpenCV on the Orange Pi

In this project I will show you how to capture images from a webcam, detect faces in those images, train a face recognition model, and then try it out on a live video stream from the webcam. The code here can be the basis for many other projects that include an element of personal authentication.


For hardware, you will need a webcam and, of course, an Orange Pi. I used the Orange Pi Plus 2E. I am fairly sure the instructions below also work on a Raspberry Pi, but I haven't tested that. As far as software goes, we will use OpenCV, a real-time computer vision and machine learning library. It will let us capture images from the webcam, manipulate them, and apply face recognition models.

So follow the steps from the OpenCV installation tutorial and install a compiler and the necessary libraries:

sudo apt-get install build-essential
sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev

Next, get OpenCV and the extra modules that contain the face recognition libraries. The opencv_contrib checkout should match the OpenCV version you are building, so check out the 3.3.0 tag:

wget -O opencv-3.3.0.zip https://github.com/opencv/opencv/archive/3.3.0.zip
git clone https://github.com/opencv/opencv_contrib.git
cd opencv_contrib
git checkout 3.3.0
cd ..
unzip opencv-3.3.0.zip
cd opencv-3.3.0

Create a build directory:

mkdir build
cd build

The command I used for the build is adapted from one I found in this guide for the Raspberry Pi, with an added option to include the extra modules (which contain the face recognition functions):

cmake -D WITH_FFMPEG=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=<path to opencv_contrib/modules/> -D BUILD_NEW_PYTHON_SUPPORT=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON ..

Then compile with the command below.

make -j4

This step took around an hour on my device. If you get an error from one of the other extra modules, just remove that module's folder from opencv_contrib/modules/ and run cmake and make again; for this project you only need the face module. Finally, install OpenCV:

sudo make install

Now you have OpenCV in Python on the Orange Pi with the face recognition libraries. You can test it by launching Python and importing cv2.

Capture live stream from the webcam and apply face recognition

First we will write a Python script to read and store images for the model, the faces of the people we want to recognize. In the second step we will put the model to the test and see if it correctly recognizes the right person.

To read the webcam stream and store faces corresponding to a person, use the code below. Make sure you have an XML face detection cascade file. These XML cascade files are usually located in opencv/data/haarcascades/. Either copy the required file into the same folder as the script, or pass the whole path to the cv2.CascadeClassifier() function.

import os
import numpy as np
import cv2

# create a folder to save pictures of a person
name = raw_input('Enter your name: ')
path = 'faces/' + name
if not os.path.exists(path):
    os.makedirs(path)
    print path + ' folder created!'

# import face detection cascade
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# create capture object
cap = cv2.VideoCapture(0)

# take 10 pictures
for i in range(1,11):
    print ('Take picture '+str(i) + ' and press space') 
    while(True):
        # capture frame-by-frame
        ret, img = cap.read()

        # convert image to grayscale
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # detect face
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)

        # for each face draw a rectangle around and copy the face
        for (x,y,w,h) in faces:
            cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
            roi_color = img[y:y+h, x:x+w]
            roi_gray = gray[y:y+h, x:x+w]

        # display the resulting frame
        cv2.imshow('frame',img)   

        # when pressing space, resize the face image and save it
        # (only if a face was actually detected in this frame,
        # otherwise roi_gray would be undefined)
        if cv2.waitKey(1) & 0xFF == ord(' ') and len(faces) > 0:
            roi_gray = cv2.resize(roi_gray, (100, 100))
            cv2.imwrite(path + '/' + str(i) + '.jpg', roi_gray)
            break
    
# when everything done, release the capture
cap.release()
cv2.destroyAllWindows()

Save the code above in a Python script; I called it “input.py”. Connect your webcam to the Orange Pi and start the script. Type in your name and look at the camera, pressing space each time a rectangle shows up around your face. You have to take 10 pictures, which will be saved under the faces/ folder.

To test the model, save the code below in another script; let’s call it “output.py”. The code creates a list of faces (image objects) and labels (integers), which is the input the training function requires. This is done by going through the folders and files in the faces/ directory created by the previous script. Then we train the model and predict directly on the video stream, adding a label on top of each recognized face.
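To see the face/label bookkeeping in isolation, here is a minimal sketch that uses a throwaway directory of empty files in place of real face images (the names and counts are made up). Labels start at 1, so names[label - 1] maps a predicted label back to a person:

```python
import os
import tempfile

# build a fake faces/ tree: two people, two "images" each
root = tempfile.mkdtemp()
for person in ('alice', 'bob'):
    os.makedirs(os.path.join(root, person))
    for i in (1, 2):
        open(os.path.join(root, person, str(i) + '.jpg'), 'w').close()

# sorting keeps the folder order deterministic,
# so each label reliably maps back to the same name
names = sorted(os.listdir(root))
labels = []
for i, person in enumerate(names):
    for f in os.listdir(os.path.join(root, person)):
        labels.append(i + 1)

print(names)   # ['alice', 'bob']
print(labels)  # [1, 1, 2, 2]
```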

import cv2
import numpy as np
import os

faces = []
labels = []

path_data = 'faces/'

# sort the person folders so the label order is deterministic
# (os.walk and os.listdir do not guarantee matching orders)
names = sorted(os.listdir(path_data))

# read each image in the faces folder;
# the recognizer needs an array of images and corresponding integer labels
for i, name in enumerate(names):
    folder = path_data + name
    for f in os.listdir(folder):
        faces.append(cv2.imread(folder + '/' + f, cv2.IMREAD_GRAYSCALE))
        labels.append(i + 1)

# create our LBPH face recognizer 
face_recognizer = cv2.face.LBPHFaceRecognizer_create()

# import face detection cascade
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# train model
face_recognizer.train(faces, np.array(labels))

cap = cv2.VideoCapture(0)
while(True):
    # capture frame-by-frame
    ret, img = cap.read()

    # put a rectangle and a label for each recognized face
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x,y,w,h) in faces:
        cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
        roi_color = img[y:y+h, x:x+w]
        roi_gray = gray[y:y+h, x:x+w]
        label, confidence = face_recognizer.predict(roi_gray)
        cv2.putText(img, names[label-1]+' conf: '+str(round(confidence, 1)), (x, y), cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 255, 0), 2)
    # display the resulting frame
    cv2.imshow('frame',img)   
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break 
    
# when everything done, release the capture
cap.release()
cv2.destroyAllWindows()

To close the stream press the q key.
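Note that predict() also returns a confidence value, and for LBPH a lower value means a closer match. The recognizer always returns the nearest trained label, so if you want strangers rejected rather than mislabeled, a post-processing step like the sketch below can help. The cutoff of 80 is an assumption you would tune against your own data:

```python
# assumed cutoff - tune it by checking the confidence values your
# own faces produce versus those of people not in the training set
UNKNOWN_CUTOFF = 80.0

def display_name(names, label, confidence, cutoff=UNKNOWN_CUTOFF):
    # LBPH confidence is a distance, so lower = better match;
    # labels are 1-based in the script above, hence the - 1
    if confidence > cutoff:
        return 'unknown'
    return names[label - 1]

print(display_name(['alice', 'bob'], 2, 45.2))   # bob
print(display_name(['alice', 'bob'], 1, 120.0))  # unknown
```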


Comments

  1. Hi. Thank you, input.py works fine. But i am having this error in output.py:

    sudo python output.py
    Traceback (most recent call last):
      File "output.py", line 22, in <module>
        face_recognizer = cv2.face.LBPHFaceRecognizer_create()
    AttributeError: 'module' object has no attribute 'face'

    Thank you anyway.

  2. I followed the steps. I saw “Next, get OpenCV and the extra modules that contain the face recognition libraries:” Are these extra modules not included in the step by step?

    Thank you for the response.

    • You download the extra modules when you run this command:
      git clone https://github.com/opencv/opencv_contrib.git

      Then you have to add the modules folder inside opencv_contrib to the OPENCV_EXTRA_MODULES_PATH flag. For example, if you downloaded the extra modules in /home/orangepi/, then you put OPENCV_EXTRA_MODULES_PATH=/home/orangepi/opencv_contrib/modules

  3. cmake -D WITH_FFMPEG=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH= -D BUILD_NEW_PYTHON_SUPPORT=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON ..

  4. cmake -D WITH_FFMPEG=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=>/home/orangepi/opencv-3.3.0/modules/< -D BUILD_NEW_PYTHON_SUPPORT=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON ..

  5. No. I don’t know what I am doing wrong… I will repeat the whole process in a clean system. Thank you.
