DeepDetect Official CPU & GPU EC2 AMI Documentation

Overview

Note: if you are using a GPU instance, see the Fixes section below; GPU instances currently require a script to behave properly. This is a temporary issue.

The AMI runs the latest DeepDetect server with Caffe, Tensorflow 0.11 and XGBoost built in.

The AMI comes with over twenty pre-trained models for a range of applications from generic image classification to text and sentiment analysis.

All models and functionalities (e.g. training & prediction) are available through the server API.

The recommended API clients are:

Fixes

The current GPU instances appear to have a defect. To fix it, do the following:

  • ssh to the AMI
ssh -i <path to your pem file> ubuntu@machine_ip_or_name
  • get the fixing script:
wget https://deepdetect.com/amis/fix_gpu.sh
  • execute it:
chmod +x fix_gpu.sh
./fix_gpu.sh

The DeepDetect server should become available from your AMI a few seconds after the script completes.
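
To confirm that the server is back up, you can hit the info endpoint from your own machine (replace <yourpublicip> with the instance's public IP); this is the same call used in the Quickstart below:

curl -X GET 'http://<yourpublicip>:8080/info'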

Version

Current AMIs run from commit f8f6d2646221172beffa50cb388e8f7c4e908670.

Note that due to Amazon processing time between submissions, AMI versions may slightly lag behind the most recent version of the software.

Specifications

  • Ubuntu 16.04
  • CUDA 7.5 with cuDNN 5.1
  • OpenBLAS
  • Tensorflow 0.11
  • Caffe with custom improvements
  • XGBoost latest
  • DeepDetect latest

Quickstart

More information:

Check that DeepDetect is running correctly

  • Try an info call:
curl -X GET 'http://<yourpublicip>:8080/info'

Output should look like:

{"status":{"code":200,"msg":"OK"},"head":{"method":"/info","version":"0.1","branch":"master","commit":"1b8cdd3bbc8a597f61e15efa5a0a83150017428e","services":[]}}
  • Check the server logs; they are explained below

Note: the commit hash may differ

Server Logs

Server logs are accessible at /var/log/deepdetect.log.

Typical log at AMI startup should look like:

DeepDetect [ commit f7d27d73005db2832ef445153e42b5641104ff4f ]
Running DeepDetect HTTP server on <your public ip>:8080
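
To watch the log in real time, e.g. while creating services or debugging, a standard tail over SSH works (the path is the one given above):

tail -f /var/log/deepdetect.log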

In case of difficulties, please report the server logs along with your request.

SSH Access to AMI

To get started, launch an AWS instance using this AMI from the EC2 Console. If you are not familiar with this process, please review the AWS documentation provided here:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html

Accessing the instance via SSH:

ssh -i <path to your pem file> ubuntu@{ EC2 Instance Public IP }

Built-in Models

The AMI comes with over twenty pre-trained models for a variety of tasks.

Generic Image Models

These models are available on the GPU AMI from the following directories:

  • Caffe models: /opt/deepdetect/models/base/caffe
  • Tensorflow models: /opt/deepdetect/models/base/tf

model name               | Caffe            | Tensorflow       | Source        | Top-1 Accuracy (ImageNet) | Image size
Inception v1 / googlenet | Y (1000 classes) | Y (1000 classes) | BVLC / Google | 67.9%                     | 224x224
inception_v2             | N                | Y (1001 classes) | Google        | 72.2%                     | 224x224
inception_v3             | N                | Y (1001 classes) | Google        | 76.9%                     | 299x299
resnet_50                | Y (1000 classes) | Y (1000 classes) | MSR           | 75.3%                     | 299x299
resnet_101               | Y (1000 classes) | Y (1000 classes) | MSR           | 76.4%                     | 299x299
resnet_152               | Y (1000 classes) | Y (1000 classes) | MSR           | 77%                       | 299x299
inception_resnet_v2      | N                | Y (1001 classes) | Google        | 79.79%                    | 299x299
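
The repository directory names on disk correspond to the model names in the table. Over SSH you can list the directories above to check which repositories are installed:

ls /opt/deepdetect/models/base/caffe
ls /opt/deepdetect/models/base/tf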

To create a service based on one of the models above:

curl -X PUT "http://<yourpublicip>:8080/services/imgserv" -d '{
       "mllib":"caffe",
       "description":"image classification",
       "type":"supervised",
       "parameters":{
         "input":{
           "connector":"image",
           "height":224,
           "width":224
         },
         "mllib":{
           "nclasses":1000
         }
       },
       "model":{
         "repository":"/opt/deepdetect/models/base/caffe/googlenet"
       }
     }'

Replace the height, width and repository with the values adapted from the table (e.g. replace googlenet with resnet_50).
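
For example, a service based on resnet_50 would, following the table above, use a 299x299 input and the resnet_50 repository (the directory name is an assumption; confirm it with the ls commands above):

curl -X PUT "http://<yourpublicip>:8080/services/imgserv" -d '{
       "mllib":"caffe",
       "description":"image classification",
       "type":"supervised",
       "parameters":{
         "input":{
           "connector":"image",
           "height":299,
           "width":299
         },
         "mllib":{
           "nclasses":1000
         }
       },
       "model":{
         "repository":"/opt/deepdetect/models/base/caffe/resnet_50"
       }
     }'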

Classification of a single image:

curl -X POST "http://<yourpublicip>:8080/predict" -d '{
       "service":"imgserv",
       "parameters":{
         "output":{
           "best":5
         }
       },
       "data":["https://deepdetect.com/img/ambulance.jpg"]
     }'
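
The data field is an array, so several images can be classified in one call; for instance, mixing the sample image with a hypothetical URL of your own (as also done in the predict example of the training section below):

curl -X POST "http://<yourpublicip>:8080/predict" -d '{
       "service":"imgserv",
       "parameters":{
         "output":{
           "best":5
         }
       },
       "data":["https://deepdetect.com/img/ambulance.jpg","http://yourdomain.com/img.jpg"]
     }'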

Applied Image Models

The models below were originally released by DeepDetect, see https://deepdetect.com/applications/model/ for more information. They are useful for testing and for finetuning your own models on a variety of targeted tasks.

Note: by default the models below are only available on the GPU AMI, because some CPU EC2 instances may not have the required amount of disk space. If you are using a CPU AMI, get the models you need directly from https://deepdetect.com/applications/model/ and see how to use a custom model.

These models are available from /opt/deepdetect/models/apps/caffe on the GPU AMI.

Model                    | model name | Backend | Image size | #classes | Description
Age detection            | age_model  | Caffe   | 227x227    | 8        | Estimates a person's age as one of eight buckets
Gender classification    | gender     | Caffe   | 224x224    | 2        | Estimates a person's gender
Clothing classification  | clothing   | Caffe   | 224x224    | 304      | Clothing categories
Fabric classification    | fabric     | Caffe   | 224x224    | 233      | Fabric categories
Buildings classification | buildings  | Caffe   | 224x224    | 185      | Building categories
Bags classification      | bags       | Caffe   | 224x224    | 37       | Bag categories
Footwear classification  | footwear   | Caffe   | 224x224    | 51       | Footwear categories
Sports classification    | sports     | Caffe   | 224x224    | 143      | Sports categories
Furniture classification | furnitures | Caffe   | 224x224    | 179      | Furniture categories

Service creation:

curl -X PUT "http://<yourpublicip>:8080/services/clothing" -d '{
       "mllib":"caffe",
       "description":"image classification",
       "type":"supervised",
       "parameters":{
         "input":{
           "connector":"image",
           "height":224,
           "width":224
         },
         "mllib":{
           "nclasses":304
         }
       },
       "model":{
         "repository":"/opt/deepdetect/models/apps/caffe/clothing"
       }
     }'
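
Prediction then works exactly as for the generic image services; for instance, with a hypothetical image URL of your own classified against the clothing service created above:

curl -X POST "http://<yourpublicip>:8080/predict" -d '{
       "service":"clothing",
       "parameters":{
         "output":{
           "best":5
         }
       },
       "data":["http://yourdomain.com/img.jpg"]
     }'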

If you cannot find what you need in the list above, contact us.

Applied Text Models

Our text models are character-based, i.e. they are robust to spelling mistakes; see the dedicated page https://deepdetect.com/applications/text_model/

Model                    | Backend | Alphabet size                                                         | #classes | Languages
Sentiment analysis       | Caffe   | See sequence size at https://deepdetect.com/applications/text_model/ | 2        | English, Arabic, Czech, Spanish, Finnish, French, Indonesian, Italian, Japanese, Korean, Portuguese, Russian, Thai and Turkish
Movies reviews sentiment | Caffe   | 1014                                                                  | 2        | English
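
As a sketch, creating a service from one of these text models follows the same PUT pattern with the txt connector instead of the image connector. The repository path, the sentiment directory name and the sequence value of 140 below are assumptions: check the actual directory names under /opt/deepdetect/models/apps/caffe and the sequence size given on the text model page.

curl -X PUT "http://<yourpublicip>:8080/services/sentiment" -d '{
       "mllib":"caffe",
       "description":"sentiment analysis",
       "type":"supervised",
       "parameters":{
         "input":{
           "connector":"txt",
           "characters":true,
           "sequence":140
         },
         "mllib":{
           "nclasses":2
         }
       },
       "model":{
         "repository":"/opt/deepdetect/models/apps/caffe/sentiment"
       }
     }'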

Other models

If you have your own pre-trained model, proceed as follows:

  • ssh to the AMI and create a model directory, e.g. at /opt/deepdetect/models/mymodel
  • scp the pre-trained model to the model directory
  • proceed with service creation as for the other models (a sketch of these steps is shown below)
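
A minimal sketch of these steps, assuming a hypothetical image classification model with 10 classes uploaded to /opt/deepdetect/models/mymodel (adjust the input size and nclasses to match your model):

# copy the pre-trained model files into the directory created on the AMI
scp -r -i <path to your pem file> mymodel/ ubuntu@{ EC2 Instance Public IP }:/opt/deepdetect/models/

# create a service pointing at that repository
curl -X PUT "http://<yourpublicip>:8080/services/mymodel" -d '{
       "mllib":"caffe",
       "description":"custom model",
       "type":"supervised",
       "parameters":{
         "input":{
           "connector":"image",
           "height":224,
           "width":224
         },
         "mllib":{
           "nclasses":10
         }
       },
       "model":{
         "repository":"/opt/deepdetect/models/mymodel"
       }
     }'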

Contact us if you have any issue.

Custom models

Custom models can be trained either from scratch or finetuned from one of the available pre-trained models. The latter often yields the best accuracy on most tasks.

Training a model

In addition to the instructions below, see this set of examples.

Training an image model takes just a few steps:

  • Data preparation: a directory with one sub-directory per class (e.g. cats, dogs, …), each containing the relevant images
  • Upload your data to the AMI, e.g.
scp -r -i <path to your pem file> yourdata/ ubuntu@{ EC2 Instance Public IP }:/opt/deepdetect/path/to/models
  • Create a deep learning service:
curl -X PUT "http://<yourpublicip>:8080/services/imageserv" -d '{
       "mllib":"caffe",
       "description":"image classification service",
       "type":"supervised",
       "parameters":{
         "input":{
           "connector":"image",
           "width":224,
           "height":224
         },
         "mllib":{
           "template":"googlenet",
           "nclasses":5
         }
       },
       "model":{
         "templates":"../templates/caffe/",
         "repository":"/opt/deepdetect/path/to/models/yourdata/"
       }
     }'
  • Train your model:
curl -X POST "http://<yourpublicip>:8080/train" -d '{
       "service":"imageserv",
       "async":true,
       "parameters":{
         "mllib":{
           "gpu":true,
           "net":{
             "batch_size":32
           },
           "solver":{
             "test_interval":500,
             "iterations":30000,
             "base_lr":0.001,
             "stepsize":1000,
             "gamma":0.9
           }
         },
         "input":{
           "connector":"image",
           "test_split":0.1,
           "shuffle":true,
           "width":224,
           "height":224
         },
         "output":{
           "measure":["acc","mcll","f1"]
         }
       },
       "data":["ilsvrc12"]
     }'
  • Your model is now available to use through predict calls:
curl -X POST "http://<yourpublicip>:8080/predict" -d '{
       "service":"imageserv",
       "parameters":{
         "input":{
           "width":224,
           "height":224
         },
         "output":{
           "best":3
         }
       },
       "data":["/path/to/img.jpg","http://yourdomain.com/img.jpg"]
     }'

Note: the training parameters above are ad hoc; adapt them, as well as the data argument, to your own dataset and needs.
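
Since the training call above is asynchronous ("async":true), its status can be polled through the server API; a sketch, where the job id is 1 for the first training job started on the service:

curl -X GET "http://<yourpublicip>:8080/train?service=imageserv&job=1"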

Finetuning

Finetuning starts training from a pre-trained model. This allows the neural network to benefit from the patterns already captured.

Finetuning proceeds like training from scratch, the only difference being the "finetuning":true parameter at service creation time, along with specifying the existing model weights:

  • Service creation:
curl -X PUT "http://<yourpublicip>:8080/services/imageserv" -d '{
       "mllib":"caffe",
       "description":"image classification service",
       "type":"supervised",
       "parameters":{
         "input":{
           "connector":"image",
           "width":224,
           "height":224
         },
         "mllib":{
           "template":"googlenet",
           "nclasses":5,
           "finetuning":true,
           "weights":"bvlc_googlenet.caffemodel"
         }
       },
       "model":{
         "templates":"../templates/caffe/",
         "repository":"/opt/deepdetect/path/to/models/yourdata/"
       }
     }'

That’s it!

Issues

It is recommended to also look at the list of currently known issues. If nothing is relevant, you can try to search the closed issues as well at https://github.com/beniz/deepdetect.

In any case, for any issue you can contact support.

Server Crash?

The DeepDetect server is designed to be robust to errors. Being Open Source, it has been tested under heavy load by us and by customers alike.

Some situations remain from which the server cannot recover, typically:

  • when the machine runs out of memory (e.g. the neural net is too large for RAM or GPU VRAM)
  • when the underlying deep learning library (e.g. Caffe or Tensorflow) cannot itself recover from a memory or compute error

Note: the server automatically restarts after any unrecoverable failure.

In all cases, if you experience what you believe is a server crash, always contact support.

Free Trial

The AMIs do not offer a free trial since our Docker builds are available for free for both CPU and GPU. Note that the Docker builds do not come with pre-trained models and do not have Tensorflow support built in. See the specific Docker instructions.

Another way to test the product is to build it from source; see https://github.com/beniz/deepdetect.

Support

Contact

Email your requests to ami@deepdetect.com

Please allow 24hrs, or use the Gitter live chat for a faster response.


DeepDetect documentation