
Civil Aviation QA Competition (CCL2022‑DQAB): Task Description, Data, Evaluation Metrics, and Prizes

The CCL2022‑DQAB competition, organized by Beihang University and AVIC Mobile Technology, invites participants to develop reading‑comprehension models for extracting accurate question‑answer pairs from civil aviation texts, offering detailed task definitions, evaluation criteria, dataset statistics, a prize structure, and a competition schedule.

DataFunTalk

Task Organization and Access
Organizers: Beihang University and AVIC Mobile Technology
GitHub repository: https://github.com/BDBC-KG-NLP/CCL2022-DQAB
Registration: https://aistudio.baidu.com/aistudio/competition/detail/313

Task Objective
Travel information in the civil aviation domain changes frequently, and travel-related questions are in high demand. The task is therefore to build reading-comprehension models that accurately retrieve and extract answers from large text corpora.

Sub‑tasks and Evaluation

Sub‑task 1 – Document‑level answer retrieval: scored by Top‑N accuracy (N = 1, 3, 5), i.e. whether the document containing the correct answer appears among the top N retrieved documents.

Sub‑task 2 – Paragraph‑level answer extraction: scored by Top‑N accuracy (N = 1, 3, 5), i.e. whether the paragraph containing the correct answer appears among the top N retrieved paragraphs.

Sub‑task 3 – Fine‑grained text answer extraction: precision, recall, and F1 are computed per answer; when a question has multiple answers, F1 is first averaged across its answers, and the final score is the average across questions.
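The scoring rules above can be sketched as follows. This is only an illustrative reading of the announcement: character-level overlap for F1 and a one-to-one alignment between predicted and gold answers are my assumptions, not confirmed details of the official scorer.

```python
from collections import Counter

def top_n_accuracy(ranked_ids, gold_id, n):
    """Sub-tasks 1 & 2: is the correct document/paragraph among the top N?"""
    return gold_id in ranked_ids[:n]

def span_f1(pred: str, gold: str) -> float:
    """Sub-task 3: overlap F1 between one predicted and one gold answer span.
    Character-level overlap is an assumption; the official scorer may
    tokenize differently."""
    common = Counter(pred) & Counter(gold)   # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def question_f1(preds, golds):
    """Average F1 over the (possibly multiple) answers of one question.
    Assumes predictions are aligned one-to-one with gold answers."""
    return sum(span_f1(p, g) for p, g in zip(preds, golds)) / len(golds)

def subtask3_score(per_question_preds, per_question_golds):
    """Final sub-task 3 score: mean of per-question average F1."""
    scores = [question_f1(p, g)
              for p, g in zip(per_question_preds, per_question_golds)]
    return sum(scores) / len(scores)
```

For example, a retrieval run that ranks the gold document third scores 0 for Top‑1 but 1 for Top‑3, and an exact span match yields F1 = 1.0.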

Dataset
Source: the internal Q&A community of Hanglv Zongheng. Total entries: 5,042, of which 80% is released for training and validation (3,529 train + 504 validation) and the remaining 20% is held out for testing and not released. Each entry includes a question, the relevant paragraph, and answer span(s) annotated by experts.
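For illustration, one annotated entry might look like the sketch below. The actual field names and offset conventions are defined in the competition's GitHub repository; everything here (the keys `question`, `paragraph`, `answers`, `start`, and the sample text) is a hypothetical placeholder.

```python
# Hypothetical structure of one annotated dataset entry.
# Field names and the character-offset convention are assumptions only.
example_entry = {
    "question": "How is overweight checked baggage charged?",
    "paragraph": ("Checked baggage over the free allowance is charged at "
                  "1.5% of the full economy fare per kilogram."),
    "answers": [
        {
            "text": "1.5% of the full economy fare per kilogram",
            "start": 54,  # character offset of the span in the paragraph
        }
    ],
}

# A span annotation should be recoverable from the paragraph by its offset:
ans = example_entry["answers"][0]
span = example_entry["paragraph"][ans["start"]:ans["start"] + len(ans["text"])]
```

Keeping spans recoverable by offset makes it easy to verify annotations and to train extractive models that predict start/end positions.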

Prize Structure

Sub‑task 1: 1st – ¥5,000; 2nd – ¥3,000; 3rd – ¥2,000.

Sub‑task 2: 1st – ¥10,000; 2nd – ¥5,000; 3rd – ¥3,000.

Sub‑task 3: 1st – ¥20,000; 2nd – ¥7,000; 3rd – ¥5,000.

Winning teams also receive honorary certificates from the Chinese Society of Chinese Information Processing.

Schedule
June 5 – Registration opens and training data is released. July 1 – Test set released.

Contact
A QR code for the competition's QQ discussion group is provided in the original announcement.

Tags: AI, evaluation metrics, NLP, competition, dataset, reading comprehension, civil aviation
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
