
58 Tech
Dec 21, 2021 · Artificial Intelligence

dl_inference: Open‑Source Deep Learning Inference Service with TensorRT and MKL Acceleration

dl_inference is an open-source, production-grade deep learning inference platform that supports TensorFlow, PyTorch, and Caffe models. It offers both GPU and CPU deployment, TensorRT and MKL acceleration, and multi-node load balancing, and the article closes with a Q&A covering model conversion, hardware requirements, INT8 quantization, and measured performance gains.
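To make the INT8 quantization mentioned above concrete, here is a minimal, framework-free sketch of symmetric per-tensor quantization, the general idea behind TensorRT's INT8 mode. This is a generic illustration, not dl_inference's actual implementation; the function names are hypothetical.

```python
def quantize_int8(values):
    """Symmetric INT8 quantization: map floats onto the range [-127, 127]."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127.0 if max_abs else 1.0
    # Round each value to the nearest INT8 code, clamping to the valid range.
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from INT8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 3.3, -3.3]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
```

The appeal of this scheme is that matrix multiplies can run entirely in 8-bit integer arithmetic, with a single float rescale at the end; the price is the rounding error visible in `approx`.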

Tags: CPU · Deep Learning · GPU