How a SaaS Client Overcame Performance Bottlenecks with Multi‑Process and Plugin Architecture
This article explains how a SaaS client for enterprise customers tackled stability, security, and latency challenges on low‑end PCs by redesigning its architecture around proactive resource monitoring, multi‑process isolation, and a plug‑in system, enabling scalable, customizable performance improvements.
Background
Qidian, a SaaS client targeting B2B customers, often faces demanding requirements for high stability, security, capacity, and low latency, especially from large enterprises. Many client machines run on low‑end hardware or outdated systems such as Windows XP, and financial traders may open over 300 chat windows simultaneously, stressing the client’s performance and delivery pipeline.
Methodology Evolution
Earlier optimizations were reactive, fixing issues only after user or test feedback, which led to short‑term fixes. The new approach embeds performance, system‑resource, and network monitoring into the client’s core architecture. When a slowdown is detected, the architecture can pause or stop resource‑intensive tasks. It also monitors the OS UI message queue, reporting and halting blocked tasks to prevent UI freezes.
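The proactive approach described above can be sketched as a scheduler that consults a latency probe on every pass and halts background work when the client slows down. This is a minimal illustration, not Qidian's actual API; the class name, the "background"/"critical" priorities, and the 50 ms budget are all assumptions.

```python
class MonitoredScheduler:
    """Sketch of monitoring embedded in the core architecture: tasks
    register with a priority, and low-priority work is paused whenever
    a probe reports that the client is under load (illustrative names)."""

    LATENCY_BUDGET_MS = 50  # assumed threshold for "slowdown"

    def __init__(self, probe):
        self._probe = probe   # callable returning current UI latency in ms
        self._tasks = []      # (name, priority, callback)
        self._paused = set()

    def register(self, name, priority, callback):
        self._tasks.append((name, priority, callback))

    def tick(self):
        """Run one scheduling pass; pause background tasks under load."""
        under_load = self._probe() > self.LATENCY_BUDGET_MS
        for name, priority, callback in self._tasks:
            if under_load and priority == "background":
                self._paused.add(name)  # halt resource-intensive work
                continue
            self._paused.discard(name)
            callback()
        return sorted(self._paused)
```

In the same spirit, a probe over the OS UI message queue would report blocked tasks so the scheduler can halt them before the UI freezes.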
Technical Architecture Evolution
3.1 Multi‑Process Architecture
The UI message queue is isolated by moving UI‑related components into a separate process. Web‑embedded pages, custom plugins, and native extensions are also migrated to independent processes, reducing interference with the UI queue and improving stability.
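The stability benefit of process isolation can be demonstrated with a toy example: a component run in its own process can crash without taking down the parent. This is a one-shot subprocess sketch under assumed names; the real client uses long-lived processes with IPC.

```python
import subprocess
import sys

# Stand-in source for an isolated web-view/plugin process: it serves one
# request and may crash without affecting the UI process (illustrative).
CHILD_SRC = r"""
import sys
line = sys.stdin.readline().strip()
if line == "crash":
    sys.exit(1)          # fault stays inside this process
print("handled:" + line)
"""

def run_in_isolated_process(message):
    """Send one message to a freshly spawned helper process and return
    its exit code and output; the calling (UI) process survives either way."""
    proc = subprocess.run(
        [sys.executable, "-c", CHILD_SRC],
        input=message + "\n", capture_output=True, text=True,
    )
    return proc.returncode, proc.stdout.strip()
```

A fault in the embedded page or plugin then shows up as a nonzero exit code in the UI process rather than a frozen message queue.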
3.2 Plugin Architecture
Core functionalities are packaged as a core system, while custom features become plug‑in components. Both can be released independently, and a Web/native SDK is provided for customers to develop their own plug‑ins, lowering Qidian's maintenance burden.
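The core/plug‑in split can be pictured as a small registration contract: the core exposes a registry and an event dispatcher, and plug‑ins ship with their own versions, decoupled from core releases. The class and method names below are hypothetical, not the actual SDK surface.

```python
class PluginSDK:
    """Hypothetical sketch of the plug-in contract: the core system
    exposes registration and event dispatch; customer plug-ins are
    versioned and released independently of the core."""

    def __init__(self):
        self._plugins = {}

    def register(self, name, version, on_event):
        # A plug-in declares its own version, independent of the core release.
        self._plugins[name] = {"version": version, "on_event": on_event}

    def dispatch(self, event):
        # Fan an event out to every installed plug-in.
        return {name: p["on_event"](event) for name, p in self._plugins.items()}
```

Under this contract, updating a customer's plug‑in only replaces its registered entry; the core system stays untouched.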
3.3 Resource Detection System
A resource‑sensitive system monitors CPU, memory, and network usage. When high usage is detected, it dynamically pauses non‑essential tasks such as status updates, full‑text search, unread‑message alerts, or local database writes. Memory‑heavy UI elements like chat windows, auto‑complete suggestions, and embedded pages are also throttled.
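A minimal sketch of the detection logic: normalized resource readings are compared against thresholds, and crossing any of them yields the list of non‑essential features to pause. The threshold values and feature names are assumptions for illustration.

```python
# Assumed thresholds for normalized (0..1) resource readings.
THRESHOLDS = {"cpu": 0.85, "memory": 0.80, "network": 0.90}

# Non-essential features named in the article, paused first under load.
NON_ESSENTIAL = ["status_updates", "full_text_search",
                 "unread_alerts", "db_writes"]

def features_to_pause(readings):
    """Return the non-essential features to pause given current readings.
    Any single resource over its threshold triggers throttling (sketch)."""
    over = any(readings.get(res, 0.0) > limit
               for res, limit in THRESHOLDS.items())
    return list(NON_ESSENTIAL) if over else []
```

Memory‑heavy UI elements (chat windows, auto‑complete, embedded pages) would sit in a second, less aggressive tier of the same mechanism.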
3.4 Solution Comparison
Visual comparisons of the previous single‑process model versus the new multi‑process, plug‑in‑based design are provided.
Main Improvements
4.1 Multi‑Window Performance Optimization
Opening 300+ chat windows previously created a separate ChatView for each, wasting memory. A virtual window mechanism now creates only lightweight ChatNode objects for invisible windows, saving memory. Additionally, a multi‑group feature lets users organize windows, reducing on‑screen clutter.
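The virtual‑window idea can be sketched as a two‑tier hierarchy: a hidden chat holds only a lightweight ChatNode, which is promoted to a full ChatView when it first becomes visible. The field names and manager class below are assumptions; only ChatView and ChatNode come from the article.

```python
class ChatNode:
    """Lightweight stand-in for a hidden chat window: identity only,
    no rendering resources."""
    def __init__(self, chat_id):
        self.chat_id = chat_id

class ChatView(ChatNode):
    """Full window, created only when the chat becomes visible."""
    def __init__(self, chat_id):
        super().__init__(chat_id)
        self.render_buffer = bytearray(1024)  # stand-in for heavy UI state

class WindowManager:
    """Hypothetical manager that keeps invisible chats as ChatNodes."""
    def __init__(self):
        self.windows = {}

    def open_chat(self, chat_id, visible=False):
        self.windows[chat_id] = ChatView(chat_id) if visible else ChatNode(chat_id)

    def show(self, chat_id):
        if not isinstance(self.windows[chat_id], ChatView):
            self.windows[chat_id] = ChatView(chat_id)  # promote on demand
```

With 300+ open chats, only the visible handful carry the heavy ChatView state.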
4.2 Global Embedded Page Optimization
Each chat window previously loaded an embedded page, consuming up to 2.5 GB of memory on low‑end PCs. Lazy loading now creates embedded pages only when the window becomes visible. An activation‑based creation mechanism replaces inactive embedded pages with screenshots, and a template‑method pattern eliminates full reloads when switching windows.
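Lazy creation and activation‑based swapping can be combined in one small host: a page is built only when its window is first activated, and the previously active page is frozen to a screenshot. Class names, the freeze mechanism, and the screenshot stand‑in are illustrative assumptions.

```python
class EmbeddedPage:
    def __init__(self, url):
        self.url = url
        self.live = True       # holds a live web view while active

    def freeze(self):
        """Replace the live page with a screenshot (stand-in string)."""
        self.live = False
        return "screenshot:" + self.url

class PageHost:
    """Sketch of lazy loading plus activation-based creation: pages are
    created on first activation, and the inactive page is frozen."""
    def __init__(self):
        self.pages = {}
        self.active = None

    def activate(self, window_id, url):
        if self.active is not None and self.active != window_id:
            self.pages[self.active].freeze()       # swap out inactive page
        if window_id not in self.pages:
            self.pages[window_id] = EmbeddedPage(url)  # lazy create
        self.pages[window_id].live = True
        self.active = window_id
```

A template‑method pattern, as the article notes, would then reuse the page shell across windows so switching does not trigger a full reload.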
4.3 Overall Optimization Effect
Performance testing shows significant reductions in CPU, memory, and UI latency across typical usage scenarios.
Future Plans
Leverage Tencent Cloud for big‑data analysis of user behavior and dynamically enable/disable features based on device capabilities.
Implement a shared chat‑window mechanism to reuse a single OS window for multiple conversations.
Introduce intelligent virtual windows that close unused chats and reopen them on demand.
Further streamline the core system to suit low‑spec devices.
Expand the plug‑in architecture and monitoring tools to a cross‑platform solution.
Optimize network task scheduling to maintain essential functionality under weak network conditions.
Tencent Qidian Tech Team
Official account of Tencent Qidian R&D team, dedicated to sharing and discussing technology for enterprise SaaS scenarios.