A Set of Deep Neural Network Models for Classification

Below is a range of deep neural network models that are free, even for commercial use, in your applications. These models have been trained on images from a range of domains, so they should accommodate a variety of applications, from fashion item recognition to sports and gender classification.

This page maintains a growing list of available models, along with information on how to use them and how they were built.

General Information

The primary intention behind these models is to ease the setup, deployment and testing of applications built around deep neural networks.


  • The models are very good for building and testing an application pipeline that includes one or more deep neural networks. However, they should not be considered suited for high-accuracy production tasks: most models are rough, in the sense that accuracy can be low on some types of images and classes.

  • The models are free, even for commercial use. The training sets cannot be shared at this stage since the images are copyrighted. However, as some training sets originate from Imagenet, they can be reconstructed from category names without much work…

  • These models are primarily intended for use with DeepDetect, which relies on Caffe, but they are not tied to it. First, they can easily be used with Caffe alone. Second, if you’re using TensorFlow, see how to convert Caffe models to TensorFlow; similar conversion tools exist for Torch.

  • The model classes are exclusive. In other words, there is no control over the attention of a model: with a furniture recognition model, in an image containing both a chair and a table, one or the other will be recognized best, most often not both. For other types of models, contact us.

  • Not finding what you need, or need assistance? Let us know or report difficulties; our pipeline is automated, and some models can be easily built.

What applications are these models good for?

These models are good for classification and recommendation, for instance. They are especially useful for building and testing an application pipeline. Typically:

  1. Build up an application that uses one or more deep models
  2. Test the application on your production data
  3. Make the application more accurate by finetuning the deep model or building a new, more accurate one. You can do this yourself or ask us for assistance, as needed.

As an example of applications, see how easy it is to build an image search engine with ElasticSearch.


The models work best on GPU but run fine on multi-core CPUs as well. DeepDetect is supported on Ubuntu 14.04 LTS but builds on other Linux flavors.

Model Usage

Below are instructions for setting up a classification service for a given model from command line and from python client. Importantly:

  • The number of classes nclasses needs to be specified at service creation. This is model-dependent; the number of classes can be obtained from the list below or from the model.json file included in the model tarball.

  • The service name is up to you; the examples below use the clothing model and name the service clothing.

  • A batch of multiple images can be passed over at once to the server for classification.
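For instance, the predict request body for a batch can be sketched as follows — a minimal Python sketch assuming the standard request layout (a service name, parameters, and a data array); the image URLs are placeholders:

```python
import json

# A batch is simply more entries in the "data" list of a single predict call.
# The URLs below are placeholders, not real images.
payload = {
    "service": "clothing",
    "parameters": {"output": {"best": 5}},
    "data": [
        "http://example.com/image1.jpg",  # placeholder URL
        "http://example.com/image2.jpg",  # placeholder URL
    ],
}

# The JSON body that would be posted to /predict:
body = json.dumps(payload)
```

The server returns one prediction per entry of the data array, so batching amortizes the HTTP round-trip over many images.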

Steps for setting up a model service:

  1. Select & download a model tarball
  2. Uncompress in the repository of your choice, e.g. /home/me/models/clothing
  3. Build and run dede
  4. Use the code samples below to build your classification pipeline
  5. See the API for more details on the various parameters and options
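Step 2 and the nclasses lookup can be sketched in Python — a hedged example: tarfile handles the uncompression, and the `nclasses` key read from model.json is an assumption about that file's layout, to be checked against an actual tarball:

```python
import json
import tarfile
from pathlib import Path

def extract_model(tarball_path, dest):
    """Uncompress a model tarball into the repository of your choice (step 2)."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    with tarfile.open(tarball_path) as tar:
        tar.extractall(dest)
    return Path(dest)

def read_nclasses(model_repo):
    """Look up the number of classes from model.json in the model repository.
    Assumption: the file carries an 'nclasses' entry; adjust to the actual layout."""
    with open(Path(model_repo) / "model.json") as f:
        meta = json.load(f)
    return meta["nclasses"]
```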


Service creation:

curl -X PUT "http://localhost:8080/services/clothing" -d '{
       "description":"clothes classification",
       "mllib":"caffe",
       "type":"supervised",
       "parameters":{
         "input":{"connector":"image","width":224,"height":224},
         "mllib":{"nclasses":304}
       },
       "model":{"repository":"/home/me/models/clothing"}
     }'

Classification of a single image:

curl -X POST "http://localhost:8080/predict" -d '{
       "service":"clothing",
       "parameters":{"output":{"best":5}},
       "data":["http://4.bp.blogspot.com/-uwu7SmTbBXI/VD_NNJc4Y-I/AAAAAAAAK1I/rt9de3mWXJo/s1600/faux-fur-coat-winter-2014-big-trend-10.jpg"]
     }'


Service creation:

from dd_client import DD

model_repo = '/home/me/models/clothing'
height = width = 224
nclasses = 304

# setting up DD client
host = 'localhost'
sname = 'clothing'
description = 'clothes classification'
mllib = 'caffe'
dd = DD(host)

# creating ML service
model = {'repository':model_repo}
parameters_input = {'connector':'image','width':width,'height':height}
parameters_mllib = {'nclasses':nclasses}
parameters_output = {}
dd.put_service(sname,model,description,mllib,
               parameters_input,parameters_mllib,parameters_output)

Classifying a single image:

parameters_input = {}
parameters_mllib = {}
parameters_output = {'best':5}
data = ['http://4.bp.blogspot.com/-uwu7SmTbBXI/VD_NNJc4Y-I/AAAAAAAAK1I/rt9de3mWXJo/s1600/faux-fur-coat-winter-2014-big-trend-10.jpg']
classif = dd.post_predict(sname,data,parameters_input,parameters_mllib,parameters_output)
print(classif)


Visualizing the result from the sample above:

Fur Output

This naturally comes in JSON form; an excerpt of the response:

        {"prob":0.6452152132987976,"cat":"fur coat"},
        {"prob":0.3132794201374054,"cat":"overgarment, outer garment"},
        {"prob":0.009311402216553688,"cat":"sheepskin coat, afghan"},
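For programmatic use, the response can be parsed directly — a minimal sketch assuming DeepDetect's usual response layout (predictions nested under body), with the probabilities copied from the excerpt above:

```python
# Hedged sketch: assumes the response follows DeepDetect's usual layout,
# with per-image results under body -> predictions -> classes.
response = {
    "body": {
        "predictions": [{
            "uri": "...",  # the input image URI
            "classes": [
                {"prob": 0.6452152132987976, "cat": "fur coat"},
                {"prob": 0.3132794201374054, "cat": "overgarment, outer garment"},
                {"prob": 0.009311402216553688, "cat": "sheepskin coat, afghan"},
            ],
        }]
    }
}

def top_category(resp):
    """Return the highest-probability class of the first prediction."""
    classes = resp["body"]["predictions"][0]["classes"]
    return max(classes, key=lambda c: c["prob"])["cat"]
```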

DeepDetect supports turning the JSON output into any custom format through output templates. See the example on how to push results into ElasticSearch without glue code.

Image Classification Models

Model Training

All models below have been trained as follows:

  • Dataset is split as 90% training and 10% testing
  • A GoogleNet is initialized with ILSVRC12 weights (i.e. the 1000 classes from Imagenet)
  • The network is trained for between 50,000 and 300,000 iterations, with step decay of the learning rate and SGD or Nesterov as the optimizer
  • Data augmentation varies from mirroring to mirroring + rotations
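As a hedged illustration, the recipe above roughly corresponds to training-call parameters like the following. The exact key names (mirror, rotate, test_split, solver settings) follow one reading of the DeepDetect API and should be checked against the API docs and the model.json training history before reuse:

```python
# Hedged sketch of training-call parameters mirroring the recipe above.
# Key names are assumptions; verify against the DeepDetect API docs.
parameters_input = {
    "connector": "image",
    "width": 224, "height": 224,
    "mirror": True,      # data augmentation: mirroring
    "rotate": True,      # optionally add rotations
    "test_split": 0.1,   # 90% training / 10% testing
}
parameters_mllib = {
    "nclasses": 304,
    "solver": {
        "iterations": 100000,       # somewhere between 50,000 and 300,000 steps
        "lr_policy": "step",        # step control of the learning rate
        "solver_type": "NESTEROV",  # or plain SGD
    },
}
```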

The model tarballs include the network training files:

  • googlenet.prototxt is the neural network definition file for training
  • googlenet_solver.prototxt contains the default parameters for training
  • model.json contains the history of training calls and parameters. This is where you want to start looking in order to re-train or finetune a model.
  • deploy.prototxt is the neural network definition file for prediction

If you have difficulties finetuning the models, contact us.

List of Image Classification Models