export DD_PLATFORM=$HOME/deepdetect
git clone https://github.com/jolibrain/dd_platform_docker.git ${DD_PLATFORM}
cd ${DD_PLATFORM}/code/gpu/
CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose up -d
cd ${DD_PLATFORM}/code/cpu/
CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose up -d
Once the Docker containers are started, the platform UI is available at http://localhost:1912
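To confirm the UI is actually answering, a quick reachability check can be scripted. This is a sketch, assuming the default port 1912; `check_ui` is a hypothetical helper, not part of the platform.

```shell
# Hedged sketch: print the HTTP status of the platform UI.
# Anything other than 200 means the containers are not ready yet.
check_ui() {
  curl -s -o /dev/null -w '%{http_code}' "${1:-http://localhost:1912}"
}
# check_ui    # prints 200 once the platform is up
```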
export DD_PLATFORM=$HOME/deepdetect
cd ${DD_PLATFORM}/code/gpu/
CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose stop
cd ${DD_PLATFORM}/code/cpu/
CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose stop
cd ${DD_PLATFORM}/code/gpu/
CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose up -d
cd ${DD_PLATFORM}/code/cpu/
CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose up -d
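The stop/start sequences above can be wrapped in one call. This is a hypothetical convenience helper, not part of the repository; it assumes the directory layout shown above.

```shell
# Hedged sketch: restart the CPU or GPU stack in one call.
dd_restart() {
  local flavor=${1:-cpu}                      # "cpu" or "gpu"
  cd "${DD_PLATFORM}/code/${flavor}" || return 1
  CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose stop
  CURRENT_UID=$(id -u):$(id -g) MUID=$(id -u) docker-compose up -d
}
# dd_restart gpu
```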
# Environment
export DD_PLATFORM=$HOME/deepdetect
export CURRENT_UID=$(id -u):$(id -g)
export MUID=$(id -u)
# Go to directory
cd ${DD_PLATFORM}/code/cpu
# Update platform
bash update.sh
# Environment
export DD_PLATFORM=$HOME/deepdetect
export CURRENT_UID=$(id -u):$(id -g)
export MUID=$(id -u)
# Go to directory
cd ${DD_PLATFORM}/code/gpu
# Update platform
bash update.sh
cd ${DD_PLATFORM}/code/cpu
cd ${DD_PLATFORM}/code/gpu
docker-compose rm -f -s -v
cd && rm -rf ${DD_PLATFORM}
The AMI runs the latest DeepDetect Platform with many pre-trained models ready for use.
Launch the GPU AMI (forthcoming product URL)
The DeepDetect platform is ready to be used from a Web browser at this address: http://<yourpublicip>:1912
You should see the following page:
More information:
From a Web browser, go to http://yourpublicip:1912
Check that pre-trained models are available by navigating to the Predict tab; the page should look like:
Try an info call:
From outside your AMI:
curl -X GET 'http://yourpublicip:8080/info'
Output should look like:
{
"status": {
"code": 200,
"msg": "OK"
},
"head": {
"method": "/info",
"version": "0.1",
"branch": "master",
"commit":"c8556f0b3e7d970bcd9861b910f9eae87cfd4b0c",
"services": []
}
}
Note: the commit may differ
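Every DeepDetect response carries its status under `status.code`, so scripts can check it without installing jq. This is a sketch; `dd_status` is a hypothetical helper and assumes python3 is on the PATH.

```shell
# Hedged sketch: extract the status code from any DeepDetect JSON response.
dd_status() {
  python3 -c 'import json,sys; print(json.load(sys.stdin)["status"]["code"])'
}
# Against a live server:
#   curl -s 'http://localhost:8080/info' | dd_status    # 200 when healthy
echo '{"status":{"code":200,"msg":"OK"}}' | dd_status
```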
Here is how to do a simple image classification service and prediction test:
curl -X PUT 'http://localhost:8080/services/ilsvrc_googlenet' -d '{
"description": "image classification service",
"mllib": "caffe",
"model": {
"init": "https://deepdetect.com/models/init/desktop/images/classification/ilsvrc_googlenet.tar.gz",
"repository": "/opt/model/ilsvrc_googlenet"
},
"parameters": {
"input": {
"connector": "image"
}
},
"type": "supervised"
}'
should yield:
{
"status":{
"code":201,
"msg":"Created"
}
}
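Since the DeepDetect API also answers GET on a service resource, creation can be verified from a script. This is a sketch; `dd_service_exists` is a hypothetical helper, and the service name matches the PUT above.

```shell
# Hedged sketch: print the HTTP status for a named service
# (200 once created; a non-200 value means it is missing).
dd_service_exists() {
  curl -s -o /dev/null -w '%{http_code}' "http://localhost:8080/services/$1"
}
# dd_service_exists ilsvrc_googlenet
```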
curl -X POST "http://localhost:8080/predict" -d '{
"service":"ilsvrc_googlenet",
"parameters":{
"input":{},
"output":{
"best":3
},
"mllib":{
"gpu":true
}
},
"data":[
"http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg"
]
}'
should yield:
{
"status":{
"code":200,
"msg":"OK"
},
"head":{
"method":"/predict",
"time":852.0,
"service":"ilsvrc_googlenet"
},
"body":{
"predictions":{
"uri":"http://i.ytimg.com/vi/0vxOhd4qlnA/maxresdefault.jpg",
"classes":[
{
"prob":0.2255125343799591,
"cat":"n03868863 oxygen mask"
},
{
"prob":0.20917612314224244,
"cat":"n03127747 crash helmet"
},
{
"last":true,
"prob":0.07399296760559082,
"cat":"n03379051 football helmet"
}
]
}
}
}
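The best class can be pulled out of a /predict response directly from the shell. This is a sketch; `top_class` is a hypothetical helper, assumes python3, and relies on the classes being returned best-first as in the example above.

```shell
# Hedged sketch: print the top class and its probability.
top_class() {
  python3 -c 'import json,sys; c=json.load(sys.stdin)["body"]["predictions"]["classes"][0]; print(c["cat"], c["prob"])'
}
# Against a live server:
#   curl -s -X POST 'http://localhost:8080/predict' -d @query.json | top_class
```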
The recommended API clients are:
curl calls from the command line, see the examples below and in the general documentation and examples.
Since DeepDetect AMI version 1.4 (latest), the DeepDetect Server is updated automatically at startup.
Server logs are accessible at /var/log/deepdetect.log.
Typical log at AMI startup should look like:
DeepDetect [ commit f7d27d73005db2832ef445153e42b5641104ff4f ]
Running DeepDetect HTTP server on :8080
In case of difficulties, please report the server logs along with your request.
To get started, launch an AWS instance using this AMI from the EC2 Console. If you are not familiar with this process, please review the AWS documentation provided here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html
Accessing the instance via SSH:
ssh -i <your-key.pem> ubuntu@{ EC2 Instance Public IP }
From there you can reach the server on localhost:8080, for instance with an info call:
curl -X GET 'http://localhost:8080/info'
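If port 8080 is not open in the instance's security group, an SSH tunnel gives the same access from your own machine. The key file and IP below are placeholders.

```shell
# Placeholders; substitute your key file and instance public IP.
KEY=your-key.pem
HOST=ubuntu@203.0.113.10
# Forward local port 8080 to the instance's DeepDetect server:
#   ssh -i "$KEY" -N -L 8080:localhost:8080 "$HOST"
# Then, locally:
#   curl -X GET 'http://localhost:8080/info'
```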
It is recommended to also look at the list of currently known issues. If nothing is relevant, you can also search the closed issues at https://github.com/jolibrain/deepdetect.
In any case, for any issue, you can contact support.
Known issues
After a reboot, the DeepDetect server is not coming back up? The auto-update may take some time, along with the Ubuntu security updates. Wait at least five to ten minutes. If the DeepDetect server is still not back up, ssh into the AMI and run sudo docker ps. If nothing shows, run top and see whether some docker processes are among the top ones, meaning the update is still under way. If it is, wait until it has finished.
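The wait-and-check advice above can be scripted as a poll loop. This is a sketch; `wait_for_dd` is a hypothetical helper, assumes curl, and its default of 60 tries at 10 s intervals matches the five-to-ten-minute wait suggested above.

```shell
# Hedged sketch: poll the server until it answers or a timeout is reached.
wait_for_dd() {
  url=${1:-http://localhost:8080/info}; tries=${2:-60}
  for _ in $(seq "$tries"); do
    curl -fs "$url" >/dev/null 2>&1 && { echo up; return 0; }
    sleep 10
  done
  echo timeout; return 1
}
# wait_for_dd
```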
After a reboot, the server is still not coming back up? This is most likely due to Ubuntu auto-updates installing a new kernel without the NVidia driver required by the EC2 GPU instance. One known solution is to log onto your instance with ssh and do:
nvidia-smi
This should tell you that the current kernel does not have the required driver. Remove the kernel with:
sudo aptitude remove linux-image-4.4.0-97-generic
(change the kernel version according to the nvidia-smi output).
g2.2xlarge EC2 GPU instances do not appear to have enough GPU memory for using resnet_50 and above. Try p2.xlarge instead.
Server crash? The DeepDetect server is robust to errors. Since it is Open Source, it has been tested under heavy load by us and customers alike.
Some situations remain from which the server cannot recover.
Note: the server automatically restarts after any unrecoverable failure.
In all cases, if you experience what you believe is a server crash, always contact support.
The AMI does not offer a free trial, since our Docker builds are available for free for both CPU and GPU.
Another way to test the product is to build it from sources, see https://github.com/jolibrain/deepdetect.
Email your requests to ami@deepdetect.com. Please allow 24 hours, or use the Gitter live chat for a faster response.