Deep Learning with C++

Why Deep Learning with C++?

22 January 2021

Why Deep Learning with C++ At Jolibrain, we’ve been building deep learning systems for over five years now. Some of us spent over a decade in academia working on AI Planning for robotics. There, C++ is a natural fit, because the need for automation meets embedded performance constraints. So in 2015 C++ felt natural to us for Deep Learning, and DeepDetect is thus written in C++. This short post shares our experience, good & bad, with C++ for Deep Learning, as of early 2021.
DeepDetect v0.12.0

DeepDetect v0.12.0

14 January 2021

DeepDetect release v0.12.0 DeepDetect v0.12.0 was released recently. Here we briefly review the main new features and important release elements. In summary:

- Vision Transformers support with two new ViT light architectures
- Torchvision image classification models
- Improved NCNN inference for image models
- State-of-the-art time-series forecasting with N-BEATS
- New local high-throughput REST API server with OATPP
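As an illustration of how such a release might be used, here is a sketch of a service-creation request body for the DeepDetect REST API (sent via `PUT /services/<name>`). The repository path, class count, and the `vit` template name are assumptions for illustration; check the DeepDetect documentation for the exact template names shipped in v0.12.0.

```json
{
  "description": "image classification with a ViT architecture",
  "mllib": "torch",
  "type": "supervised",
  "parameters": {
    "input": { "connector": "image", "width": 224, "height": 224 },
    "mllib": { "template": "vit", "nclasses": 2, "gpu": true }
  },
  "model": { "repository": "/opt/models/vit" }
}
```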
Docker images status

DeepDetect Docker Images, Releases & CI/CD

8 January 2021

[Docker build status table: CPU, GPU, GPU+TORCH and GPU+TENSORRT images × STABLE, DEVEL and SOURCE build types]

DeepDetect Docker for GPU & CPU DeepDetect Docker images are available for CPU and GPU with a range of supported backends, from PyTorch to Caffe, TensorRT, NCNN, TensorFlow, …
Benchmarking DeepDetect models

Benchmarking Deep Neural Models

30 December 2020

Benchmarking code Deep neural models can be considered as code snippets written by machines from data. Benchmarking traditional code considers metrics such as running time and memory usage. Deep models differ from traditional code when it comes to benchmarking for at least two reasons: Constant running time: deep neural networks execute a constant number of FLOPs, i.e. most networks consume a fixed number of operations per input, whereas some hand-coded algorithms are iterative and may run an unknown (though bounded) number of operations before reaching a result.
Importing ONNX models to TensorRT

Speed-up ONNX models with TensorRT

18 December 2020

ONNX models ONNX is a great initiative to standardize the structure and storage of deep neural networks. Almost all frameworks export to ONNX in one way or another. Implementations may still vary, though they are expected to converge eventually. ONNX models are of great interest since:

- They can easily be shared along with their weights
- They can be optimized and converted for a variety of CPUs and GPUs

Both interoperability and hardware access are key to replacing or enhancing modern software stacks with deep neural networks.
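As a quick sketch of the conversion step, TensorRT ships a `trtexec` command-line tool that can build a serialized engine directly from an ONNX file; the file paths below are placeholders:

```
# Build a serialized TensorRT engine from an ONNX model,
# enabling FP16 kernels where the hardware supports them.
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

The resulting engine file is specific to the GPU and TensorRT version it was built with, so it is typically rebuilt per deployment target.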
Vision Transformer

Experimenting with Vision Transformer

11 December 2020

Transformer architectures are coming to vision tasks There’s a new breed of computer vision models in the making. This change is mostly due to the arrival of the originally NLP-oriented Transformer architectures in computer vision tasks. Recent 2020 advances in this domain include the Vision Transformer (ViT / Google) and the Visual Transformer (Berkeley / Facebook AI) for image classification, and the DETR (Facebook) and Deformable DETR (SenseTime) architectures for object detection.