
Frontend Performance Optimization for Transaction Backend

This article is a case study of a transaction backend built on a micro‑frontend architecture, where code splitting, CDN delivery, caching, lazy loading, and idle‑time loading reduced first‑screen JavaScript size. The work cut FCP below one second and LCP below two seconds, improved the first‑screen open rate by 15%, and halved TTI.

DeWu Technology

This article presents a practical case study of performance optimization for a transaction backend built with a micro‑frontend architecture.

1. Introduction – The author emphasizes the importance of user experience for the front‑end team and states the goal of achieving sub‑second first‑screen loading.

2. System Overview – The transaction backend handles product, bidding, merchant and order domains, serving over 100k daily page views. It is based on the qiankun micro‑frontend framework, loading many JS bundles, styles, images, menus and third‑party SDKs on the first screen.
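In qiankun, each domain above would typically be registered as a sub‑application keyed by a URL rule. The sketch below models that registration as plain data plus a resolver; the app names match the domains in the article, but the entry URLs and the `resolveApp` helper are illustrative assumptions (qiankun performs this matching internally via `registerMicroApps`).

```typescript
// Hypothetical micro-app registry; entry URLs are illustrative, not from the article.
interface MicroApp {
  name: string;
  entry: string;      // sub-app bundle URL, served from a CDN
  activeRule: string; // URL prefix that activates this sub-app
}

const apps: MicroApp[] = [
  { name: 'product',  entry: '//cdn.example.com/product/',  activeRule: '/product' },
  { name: 'bidding',  entry: '//cdn.example.com/bidding/',  activeRule: '/bidding' },
  { name: 'merchant', entry: '//cdn.example.com/merchant/', activeRule: '/merchant' },
  { name: 'order',    entry: '//cdn.example.com/order/',    activeRule: '/order' },
];

// Resolve which sub-app a given path activates.
function resolveApp(path: string): MicroApp | undefined {
  return apps.find(app => path.startsWith(app.activeRule));
}
```

Because every route change resolves to one sub‑application bundle, anything loaded eagerly by the shell (menus, SDKs, styles) competes directly with the active sub‑app for first‑screen bandwidth, which motivates the optimizations that follow.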

3. Performance Status

• Dashboard analysis shows long‑running tasks (red zones) that block the main thread.

• Task 1 occurs before DCL, caused by large entry scripts (__webpack_require__, Compile Code).

• Task 2 occurs after FCP, caused by heavy component rendering (componentUpdateFn).
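Long tasks like the two above can also be surfaced in the field. The sketch below uses the Long Tasks API's 50 ms threshold (that number is from the spec); the `findLongTasks` helper and the simplified entry shape are ours for illustration, not the article's tooling.

```typescript
// The Long Tasks API defines a long task as one blocking the main thread > 50 ms.
const LONG_TASK_MS = 50;

// Simplified shape of a PerformanceEntry for illustration.
interface TaskEntry { name: string; duration: number; }

function findLongTasks(entries: TaskEntry[]): TaskEntry[] {
  return entries.filter(e => e.duration > LONG_TASK_MS);
}

// In the browser, entries would come from a PerformanceObserver:
// new PerformanceObserver(list => report(findLongTasks(list.getEntries())))
//   .observe({ entryTypes: ['longtask'] });
```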

Key metrics before optimization:

First Contentful Paint (FCP) > 2 s

Largest Contentful Paint (LCP) > 4 s

4. Optimization Plan

4.1 First‑screen resource optimization

– Reduce entry JS size via code splitting, CDN static assets, and minification.

– Cache stable APIs and defer low‑priority requests.

// Cache‑aside strategy example. `request`, `window.sendTrack`, and
// DELAY_CACHE_TIME are provided by the surrounding application.
import moment from 'moment';

async function cacheRequest(params: any) {
  const startTime = Date.now();
  const key = params.key;
  const cache = JSON.parse(localStorage.getItem(key) || '{}');
  // A cached copy stays fresh for the rest of the calendar day it was written
  const canCache = cache.cacheTime && moment(cache.cacheTime).isSame(moment(startTime), 'day');
  window.sendTrack?.({
    event: 'main_request_cache',
    tags: { eventTitle: 'main-app data cache', eventType: canCache ? 1 : 0 }
  });
  // Fetch from the network and refresh the cached copy
  const requestCache = async () => {
    const data = await request(params);
    localStorage.setItem(key, JSON.stringify({ data, cacheTime: Date.now() }));
    return data;
  };
  if (canCache) {
    // Serve the cached copy immediately; refresh it in the background after a delay
    setTimeout(requestCache, DELAY_CACHE_TIME * 1000);
    return cache.data;
  }
  return requestCache();
}
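The code‑splitting step mentioned above is typically expressed as a webpack `SplitChunksPlugin` configuration. The fragment below is a minimal sketch under that assumption; the chunk name and priority are illustrative, not the article's actual build config.

```javascript
// webpack.config.js (fragment): split vendor code out of the entry chunk
// so large, rarely-changing libraries get their own long-cacheable bundle.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          priority: 10,
        },
      },
    },
  },
};
```

Splitting vendors this way shrinks the entry script that caused the pre‑DCL long task, and the vendor chunk can then be served from the CDN with long cache lifetimes.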

4.2 Rendering priority optimization

– Lazy‑load the left‑side menu, initially showing only the first‑level items.

– Use requestIdleCallback to load non‑critical content during idle periods.

// Idle‑time loading example, with a timer fallback for browsers
// that do not implement requestIdleCallback (e.g. Safari)
if ('requestIdleCallback' in window) {
  requestIdleCallback(deadline => {
    while (deadline.timeRemaining() > 0 && !deadline.didTimeout) {
      // load one deferred menu item per iteration
    }
  });
} else {
  setTimeout(() => {
    // fallback loading for older browsers
  }, DELAY_LOAD_TIME);
}
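The menu strategy above (render first‑level items immediately, hand the rest to idle time) amounts to splitting the menu tree once up front. A minimal sketch, where the `MenuItem` shape is an assumption rather than the article's actual data model:

```typescript
interface MenuItem {
  title: string;
  children?: MenuItem[];
}

// Split the menu tree: top-level entries render on the first screen;
// their children are loaded later via requestIdleCallback.
function splitMenu(menu: MenuItem[]): { firstScreen: MenuItem[]; deferred: MenuItem[] } {
  const firstScreen = menu.map(item => ({ title: item.title })); // strip children
  const deferred = menu.flatMap(item => item.children ?? []);
  return { firstScreen, deferred };
}
```

The first‑screen render then touches only a small, flat array, and the deeper menu levels never compete with critical rendering work.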

5. Results

After the optimizations (July → September), the main performance indicators improved:

First‑screen open rate (+15%)

FMP 90th percentile reduced by 35%

TTI 90th percentile reduced by 50%

Resource size decreased by ~1 MB, total load time dropped by more than 0.5 s, FCP fell below 1 s, and LCP fell below 2 s.

6. Conclusion & Outlook

Future work includes deeper micro‑frontend framework tuning, Service‑Worker based caching, and on‑demand configuration loading to further reduce bandwidth pressure.
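For the Service‑Worker direction, the core decision is which requests to serve cache‑first. The helper below is testable on its own; the commented `fetch` handler around it is a rough cache‑first sketch, and the cache name and asset filter are assumptions, not the article's design.

```typescript
// Decide which requests a Service Worker should serve cache-first.
// Static, fingerprinted assets are safe to cache; API calls are not.
function isCacheableAsset(url: string): boolean {
  return /\.(js|css|png|jpg|svg|woff2?)$/.test(new URL(url, 'https://example.com').pathname);
}

// Inside sw.js, a cache-first handler would look roughly like:
// self.addEventListener('fetch', (event) => {
//   if (!isCacheableAsset(event.request.url)) return; // fall through to network
//   event.respondWith(
//     caches.match(event.request).then(hit =>
//       hit || fetch(event.request).then(res => {
//         const copy = res.clone();
//         caches.open('static-v1').then(c => c.put(event.request, copy));
//         return res;
//       })
//     )
//   );
// });
```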

Tags: frontend, performance optimization, web, micro‑frontend