Training an image classifier service

In another tutorial we showed how to set up an image classifier from an existing (i.e. pre-trained) neural network model. Here we show how to train such a model with DeepDetect, which makes for a useful example of how to train your own image classification models.

Setup of the cats & dogs dataset

The first step is to acquire and set up the dataset. We use the well-known cats & dogs dataset (e.g. as distributed for the Kaggle Dogs vs. Cats competition).

Setup the directory for model and data:

mkdir models
mkdir models/cats_dogs

Then unzip the data into models/cats_dogs.
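Assuming the downloaded archive is named dogscats.zip (the actual filename may differ depending on where you obtained the dataset), this step looks like:

    # extract the dataset into the model repository created above
    unzip -q dogscats.zip -d models/cats_dogs
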

Copy the pre-trained model

We will use transfer learning, i.e. take a model pre-trained on ImageNet and specialize it to the cats vs. dogs task. This eases training, makes it converge much faster, and yields near-perfect accuracy within a few thousand iterations.

To install the pre-trained model for our architecture of reference (see se_resnet_50 below):

cd models/cats_dogs

Creating the service

First, assuming DeepDetect runs in a Docker container, start the server with:

docker run -d -p 8080:8080 -v /path/to/models:/opt/models/ jolibrain/deepdetect_gpu
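Before creating the service, you can check that the server is up by querying its info resource, which returns the server version and the list of existing services:

    curl http://localhost:8080/info
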

Then create the service with:

curl -X PUT "http://localhost:8080/services/catsdogs" -d '{
       "mllib": "caffe",
       "description": "image classification service",
       "type": "supervised",
       "parameters": {
         "input": {"connector": "image", "width": 224, "height": 224, "db": true},
         "mllib": {"template": "se_resnet_50", "nclasses": 2, "finetuning": true, "weights": "SE-ResNet-50.caffemodel"}
       },
       "model": {"repository": "/opt/models/cats_dogs"}
     }'

In the call above, we define a state-of-the-art image classification network called Squeeze-and-Excitation ResNet-50 (the se_resnet_50 template) and set it up for fine-tuning on our two classes.

Training the classifier

The training phase is a complex one. Luckily, it is fully automated from within DeepDetect. Basically, the data flows into an image data connector, which prepares it for the neural net and the deep learning library. The neural net is then trained and tested regularly until completion. At that point, the machine learning service has a model it can use to classify images automatically. More details on each of the hidden steps:

  • building of training and testing image databases: the image dataset built above is turned into two databases, one for training, the other for validating the net regularly along the training process. The rationale for building a database is that each image is passed to the net thousands of times, and reading and re-reading from the hard drive is too slow. The database is much more efficient for this repeated, non-sequential access.

  • training of the net: batches of random images are passed to the net for training, the process is repeated until the requested number of iterations has been reached. The training job can be stopped at any time through the API.

  • transfer learning: we will use a pre-trained model and specialize it onto the cats vs dogs task. This will give us near perfect accuracy in a few thousand training iterations.
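As noted above, a running training job can be monitored or stopped through the /train resource. A sketch, assuming the service name catsdogs and job id 1 used in this tutorial:

    # query the status of training job 1
    curl -X GET "http://localhost:8080/train?service=catsdogs&job=1"

    # stop the running training job
    curl -X DELETE "http://localhost:8080/train?service=catsdogs&job=1"
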

Below is a training call for the model:

curl -X POST "http://localhost:8080/train" -d '{
       "service": "catsdogs",
       "async": true,
       "parameters": {
         "input": {
           "test_split": 0.1,
           "shuffle": true,
           "db": true,
           "noise": {"all_effects": true, "prob": 0.001},
           "distort": {"all_effects": true, "prob": 0.01}
         },
         "mllib": {
           "gpu": true,
           "net": {"batch_size": 32},
           "solver": {"iterations": 5000, "test_interval": 500, "base_lr": 0.001}
         },
         "output": {"measure": ["acc", "mcll", "f1"]}
       },
       "data": ["/opt/models/cats_dogs/train"]
     }'

The main options are explained as follows:

  • batch_size: the number of training images sent at once for training
  • iterations: the total number of times a batch of images is sent in for training
  • base_lr: the initial learning rate
  • test_split: the part of the training set used for testing, e.g. 0.1 means 10% is used for testing
  • shuffle: whether to shuffle the dataset before training (recommended)
  • measure: the list of measures to be computed and returned as status by the server
  • noise and distort: automated data augmentation to make the model more robust

For more details, see the API documentation.

Upon the start of the training, the server will output some image file processing information:

INFO - Processed 1000 files.
INFO - Processed 2000 files.
INFO - Processed 3000 files.
INFO - Processed 4000 files.

The bash script below polls the training status every 20 seconds. It should take around 5000 iterations to reach 98% accuracy or so.

while true; do
    out=$(curl -s -X GET "http://localhost:8080/train?service=catsdogs&job=1&timeout=20")
    echo $out
    if [[ $out != *"running"* ]]; then
        break
    fi
done

Testing the classifier

Once training has completed, the service is immediately available for prediction. A simple prediction call looks like this:

curl -X POST "http://localhost:8080/predict" -d '{
       "service": "catsdogs",
       "parameters": {"output": {"best": 1}},
       "data": ["/path/to/image.jpg"]
     }'

Note that the trained model is saved on disk, so the service can be safely destroyed while keeping the model. Simply create a new, identical service: it will load the existing model and be immediately ready for prediction.
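For instance (a sketch: the DELETE call below removes the service definition but leaves the trained model files in the repository, provided no clear parameter is passed):

    # destroy the service; the model files remain in models/cats_dogs
    curl -X DELETE "http://localhost:8080/services/catsdogs"

Re-issuing the same PUT call used to create the service will then reload the trained model from the repository.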