
Building high performance C++ Deep Learning applications with DeepDetect

11 June 2021

DeepDetect Server comes with a REST API that makes it easy to build Deep Learning web applications and other services.

By Deep Learning application, we refer here to either or both of the following:

  • Training: Training deep neural networks from images, videos, text, CSV or other data, including training from scratch, re-training and fine-tuning (a.k.a. transfer learning)

  • Inference: Running one or more deep neural networks, then collecting and using the results. This includes running the so-called DeepDetect chains of models, i.e. multiple models in conditional sequences, which are very useful when building complex applications.

A common use-case for a REST API is when the GPUs live in a remote computing facility. In that case, a REST API comes in handy since the compute can easily be offloaded and the data exchanged in JSON and other common formats.

In other real-world applications, however, timing is especially critical, typically in real-time settings. An Augmented Reality (AR) application, for instance, requires a delay of 15 ms at most so as not to ruin the experience. That’s the cost of pleasing and fooling our visual cortex! In yet other applications the data cannot be transferred at all, or the required bandwidth and latency cannot be met; think automotive, railways, and related industrial contexts.

In these cases, it becomes beneficial to build a high performance application directly on top of the DeepDetect server’s C++ capabilities. This means no more over-the-network exchanges: everything stays within the scope of a single process and memory space. In practice this can lead to an order of magnitude increase in performance.

Luckily, the DeepDetect Server is fully written in C++, which means it is already geared towards the best low-level performance. See our blog post on why C++ is a good choice for Deep Learning applications for more details.

Steps for building a C++ application on top of DeepDetect

The overall process basically comes down to linking your own C++ code directly against DeepDetect as a library. The steps are as follows, with a possible project layout sketched right after the list:

  1. Build DeepDetect with the options required for your application.
  2. Configure your project to link against the DeepDetect C++ library.
  3. Write your application.
  4. Build your application.
  5. Use the familiar DeepDetect JSON API, but from within your C++ code.
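
For orientation, here is one possible on-disk layout for these steps; the directory and file names are illustrative, not mandated by DeepDetect:

deepdetect/          # DeepDetect sources, cloned from GitHub
  build/             # DeepDetect build directory (step 1)
my_app/
  CMakeLists.txt     # project configuration (step 2)
  main.cc            # application code (step 3)
  build/             # application build directory (step 4)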

Building DeepDetect

First, we need to build DeepDetect:

git clone https://github.com/jolibrain/deepdetect.git
cd deepdetect
mkdir build
cd build
cmake .. -DUSE_CAFFE=ON -DUSE_TENSORRT_OSS=ON -DBUILD_TESTS=OFF -DUSE_SIMSEARCH=OFF
make

Above is an example command for building the DeepDetect C++ library with support for both Caffe and TensorRT models. See the documentation for the many other available options, e.g. for the Torch, NCNN and other Deep Learning backends.
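
For instance, a build targeting the Torch backend instead might look like the sketch below; the USE_TORCH option name follows the DeepDetect CMake files, but check the build documentation for your DeepDetect version:

cmake .. -DUSE_TORCH=ON -DBUILD_TESTS=OFF -DUSE_SIMSEARCH=OFF
make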

Configuring the Deep Learning C++ application

Now, from a separate directory, the target application can be configured. Using cmake as the build tool, this is as simple as writing a CMakeLists.txt for your project; a template is shown below:

# CMakeLists.txt
# target_link_directories() requires CMake >= 3.13
cmake_minimum_required(VERSION 3.13 FATAL_ERROR)
project(MyProject)

add_executable(my_app
    main.cc)

# locate the DeepDetect library built in the previous step
find_package(DeepDetect CONFIG REQUIRED)

include_directories(${DeepDetect_INCLUDE_DIRS})
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${DeepDetect_CXX_FLAGS}")
target_compile_definitions(my_app PRIVATE ${DeepDetect_DEFINITIONS})
target_link_directories(my_app PRIVATE ${DeepDetect_LIBRARY_DIRS})
target_link_libraries(my_app ${DeepDetect_LIBRARIES})

Here main.cc, and possibly other files to be added, contain the end application’s code.

Writing your C++ Deep Learning application

So let’s see what a very basic main.cc should look like:

#include "jsonapi.h"

using namespace dd;

int main(int argc, char *argv[])
{
  JsonAPI api;           // the DeepDetect JSON API entry point
  api.boot(argc, argv);  // start the API with the command-line arguments
  return 0;
}

Here, jsonapi.h comes from the DeepDetect C++ library. It pulls in everything needed to train and run inference on deep neural networks via the DeepDetect JSON API.

Building the application

Building your own custom application is then made simple with cmake, as shown below:

# from the build directory of your application, not from dd build dir!
cmake [myapp_location] -DCMAKE_PREFIX_PATH="/path/to/dd/build/directory"
make

That’s it, your ./my_app application should be ready to run.

Running and building complex applications

The JSON API takes JSON as input; see the available methods in the code, which correspond to the different API calls.
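
As an illustration, here is a minimal sketch that creates a service and runs a prediction directly through the in-process API. The method names (service_create, service_predict, jrender) follow those in the library's jsonapi.h, and the service definition (a Caffe image classifier with placeholder model repository and image paths) is only an assumption for the example; adapt both to your DeepDetect version and models.

#include "jsonapi.h"

#include <iostream>
#include <string>

using namespace dd;

int main()
{
  JsonAPI api;

  // service creation, same JSON body as the REST PUT /services/<name> call
  // (placeholder model repository path, adapt to your setup)
  std::string service_def =
    "{\"mllib\":\"caffe\",\"description\":\"image classifier\",\"type\":\"supervised\","
    "\"parameters\":{\"input\":{\"connector\":\"image\"},\"mllib\":{\"nclasses\":1000}},"
    "\"model\":{\"repository\":\"/path/to/model\"}}";
  std::cout << api.jrender(api.service_create("imgserv", service_def)) << std::endl;

  // prediction, same JSON body as the REST POST /predict call
  std::string predict_call =
    "{\"service\":\"imgserv\",\"parameters\":{\"output\":{\"best\":3}},"
    "\"data\":[\"/path/to/image.jpg\"]}";
  std::cout << api.jrender(api.service_predict(predict_call)) << std::endl;
  return 0;
}

Each call returns a JSON document that can be rendered to a string, much as the REST API would return over HTTP, but without ever leaving the process.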

Additionally, you can even write your own custom C++ REST API server on top of the library, if that’s what is needed. This can be done with the same OatPP C++ framework that DeepDetect Server uses; the DeepDetect HTTP layer provides useful examples, e.g. in https://github.com/jolibrain/deepdetect/tree/master/src/http

Alternatively, hit us up on Gitter with any questions or difficulties.