Embedded Neural Network Inference with DeepDetect

DeepDetect has optimizations for running on ARM boards and computers. The instructions below are for running on Raspberry Pi 3.

DeepDetect relies on the embedded inference library NCNN, which is optimized for a variety of boards, phones and embedded devices. Jolibrain maintains a fork of NCNN with additional features, and it is the one used by DeepDetect.

In this tutorial, we set up DeepDetect for Raspberry Pi with the NCNN backend, and we use LiveDetect from the DeepDetect ecosystem to stream live video from a camera and produce an overlay of bounding boxes for detected objects in real-time.

[Image: example of a live feed with bounding-box overlay]

Convert a model to run on Raspberry Pi efficiently

If you don’t have your own model, skip this step and proceed with the sections below using the existing pre-trained models.

Existing models need to be converted in order to yield the best performance on the Pi, and on ARM boards in general.

Before converting a model, typically an embedded model trained with the DeepDetect Platform, you first need:

  • The trained Caffe model files, i.e. both the deploy.prototxt and model_xxx.caffemodel files.
  • caffe2ncnn, a small program that converts Caffe weights into the NCNN format. Build it with the instructions below:
git clone https://github.com/jolibrain/ncnn.git
cd ncnn
mkdir build
cd build
cmake ..
make

Now, set up a new directory for the embedded model, e.g. embed_model.

To convert a model, follow the instructions below:

# go to your NCNN build from the previous step
cd ncnn/build/tools/caffe/
./caffe2ncnn 0 /path/to/deploy.prototxt /path/to/model_xxx.caffemodel /path/to/embed_model/ncnn.bin /path/to/embed_model/ncnn.params

where ncnn.bin and ncnn.params are the converted model files.

  • Copy the corresp.txt file, which maps class indices to labels, to the embed_model directory.
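
At this point, the embed_model directory should contain the converted weights and the label file:

ls embed_model/
# ncnn.bin  ncnn.params  corresp.txt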

That’s it: the model, now optimized for NCNN and ARM, is ready to be used.

Setting up the DeepDetect Server for Raspberry Pi

We assume a Docker installation on the Raspberry Pi, as it works very well with very little overhead.

  • Install Docker:

# install Docker via the convenience script
curl -fsSL get.docker.com -o get-docker.sh && sh get-docker.sh
# allow running docker without sudo (log out and back in for this to take effect)
sudo groupadd docker
sudo usermod -aG docker $USER
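
You can check that the installation works with Docker's standard hello-world image:

docker run hello-world
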
  • Get & run the Raspberry Pi Docker image for DeepDetect:

docker pull jolibrain/deepdetect_ncnn_pi3
docker run -d -p 8080:8080 -v $HOME/models:/opt/models jolibrain/deepdetect_ncnn_pi3
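
Once the container is running, you can check that the DeepDetect server answers on its standard /info endpoint:

curl http://localhost:8080/info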

Option 1: Running live image detection with LiveDetect

LiveDetect is part of the DeepDetect ecosystem. It acquires video from a live camera and processes it in real-time, including on Raspberry Pi. It is written in Go, and is easy to set up and use.

  • Set LiveDetect up

You can download a LiveDetect release or build LiveDetect from sources.

Here we download the LiveDetect release for RPI3 and save it to the Pi, as sketched below.
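
For instance, assuming a prebuilt RPI3 binary from the LiveDetect releases page (the asset name and version below are placeholders; check the releases page for the actual ones):

# hypothetical asset name; see https://github.com/jolibrain/livedetect/releases
wget https://github.com/jolibrain/livedetect/releases/download/<version>/livedetect-rpi3
chmod +x livedetect-rpi3 && mv livedetect-rpi3 livedetect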

  • Run a live video stream processed with a Neural Network

./livedetect \
    --port 8080 \
    --host 127.0.0.1 \
    --mllib ncnn \
    --width 300 --height 300 \
    --detection \
    --create --repository /opt/models/voc/ \
    --init "https://deepdetect.com/models/init/embedded/images/detection/squeezenet_ssd_voc_ncnn.tar.gz" \
    --confidence 0.3 \
    --device-id 0 \
    -v INFO \
    -P "0.0.0.0:8888" \
    --service voc \
    --nclasses 21 \
    --create-service

This automatically downloads the squeezenet_ssd_voc_ncnn model, which detects 20 types of objects including cars and persons, sets it up, and streams the processed video frames in real-time. Reach http://<your_raspberry_ip>:8888 with a Web browser, where <your_raspberry_ip> can be obtained with ifconfig from the Raspberry Pi terminal.

To use your own, previously converted model, remove the --init option and replace the value of --repository with the path to your own model. Don’t forget to adapt the number of classes with --nclasses, and --width and --height as needed.
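
For example, with a model converted earlier into embed_model and copied under $HOME/models on the Pi (hence visible as /opt/models/embed_model inside the container), the call could look as follows, assuming a 300x300 input, 21 classes and a hypothetical service name mymodel:

./livedetect \
    --port 8080 \
    --host 127.0.0.1 \
    --mllib ncnn \
    --width 300 --height 300 \
    --detection \
    --create --repository /opt/models/embed_model/ \
    --confidence 0.3 \
    --device-id 0 \
    -v INFO \
    -P "0.0.0.0:8888" \
    --service mymodel \
    --nclasses 21 \
    --create-service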

Option 2: Setting up a pre-trained model and using the REST API

DeepDetect provides embedded pre-trained models for NCNN.

Go to the squeezenet_ssd_voc_ncnn model page and follow the usage instructions. This model detects 20 types of objects, including cars and persons.
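
Once the model archive is unpacked on the Pi, e.g. under $HOME/models/voc (visible as /opt/models/voc inside the container), the service can be created and queried over the DeepDetect REST API. The sketch below follows the standard DeepDetect API; the service name, paths and test image are examples:

# create an NCNN detection service
curl -X PUT "http://localhost:8080/services/voc" -d '{
  "mllib": "ncnn",
  "description": "object detection on the Pi",
  "type": "supervised",
  "parameters": {
    "input": {"connector": "image", "width": 300, "height": 300},
    "mllib": {"nclasses": 21}
  },
  "model": {"repository": "/opt/models/voc"}
}'

# detect objects in an image, keeping boxes above 0.3 confidence
curl -X POST "http://localhost:8080/predict" -d '{
  "service": "voc",
  "parameters": {
    "output": {"bbox": true, "confidence_threshold": 0.3}
  },
  "data": ["/opt/models/voc/example.jpg"]
}'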
