58 Tech
Jan 15, 2021 · Artificial Intelligence
Exploring Text Pre‑training Models for Dialogue Classification in Information Security: From TextCNN to RoBERTa and Knowledge Distillation
This article systematically explores text pre‑training models for dialogue classification in information‑security scenarios, comparing a TextCNN baseline, an enhanced TextCNN_role, RoBERTa with domain‑adaptive pre‑training, and a distilled mini‑model, and discusses their performance, trade‑offs, and future directions.
Dialogue Modeling · Information Security · Knowledge Distillation
13 min read