Tag: model conversion


DaTaobao Tech
Jul 12, 2023 · Artificial Intelligence

Optimizing ChatGLM-6B Deployment with MNN: Model Conversion, Quantization, and Edge Inference

The article details a workflow that converts the PyTorch ChatGLM‑6B model to MNN, splits and compresses the embedding tables, applies int4/int8 weight quantization, supports dynamic input shapes, and uses hybrid GPU/CPU or CPU‑only loading to enable low‑memory edge inference on PCs and mobile devices at competitive tokens‑per‑second throughput.
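The int8 weight quantization mentioned in the summary can be sketched as a minimal per-tensor symmetric scheme. This is an illustrative NumPy sketch, not MNN's actual implementation; the function names `quantize_int8` and `dequantize` are hypothetical:

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor int8 quantization:
    # scale maps the largest |weight| onto 127.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from int8 codes.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # reconstruction error is at most ~scale/2 per weight
```

Storing `q` plus one `scale` per tensor cuts weight storage roughly 4x versus float32, which is the main lever behind the low-memory edge deployment the article describes.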

ChatGLM · LLM · MNN
16 min read
58 Tech
Dec 8, 2021 · Artificial Intelligence

dl_inference: A General Deep Learning Inference Service with TensorRT and Intel MKL Acceleration

The article introduces dl_inference, an open‑source deep learning inference platform that integrates TensorRT GPU acceleration, Intel MKL CPU optimization, and Caffe support, detailing its features, model conversion workflow, deployment steps, performance gains, and how developers can contribute.

Deep Learning · Docker · Inference
12 min read