
Understanding the Architecture and Canvas Design of No‑Code/Low‑Code Building Platforms

This article explains the core concepts, architecture, and canvas implementation techniques of modern no‑code/low‑code building platforms, covering NCLC fundamentals, UIDL specifications, IOC modular design, drag‑and‑drop mechanics, smart snapping, multi‑selection, and event handling to improve development efficiency and user experience.


What Is a Building Platform

Before introducing building platforms, we need to discuss NCLC (No Code / Low Code), the core idea behind many site‑building products such as Meego, Retool, and Notion, a space that has spawned dozens of product categories.

The most popular form is the website builder, exemplified by Dreamweaver, SaaS products such as Webflow, and various domestic platforms. Building platforms address two major enterprise pain points: development efficiency and staff transformation.

Development Efficiency

Building platforms act like software IDEs but focus on component reuse and composition rather than raw business logic. Drag‑and‑drop replaces traditional coding, enabling rapid creation of activity pages, e‑commerce promotions, and back‑office tools.

Staff Transformation

Non‑technical staff can build applications without programming experience, reducing development resource waste and improving workflow efficiency.

Derived Concepts

The platform can be expressed as:

Building Platform = Editor (Canvas + Settings) + Generator, where the data source is materials and the communication protocol is UIDL.

Editor: renders front‑end material on the canvas and produces UIDL.

Generator: consumes UIDL and, based on a template project, generates the final page.

This layered architecture isolates responsibilities and communicates via interfaces and protocols.
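As a sketch of the Generator side (the `Renderer` type and `registry` map are illustrative names, not the platform's actual API), a generator can walk the UIDL schema tree depth-first and ask a material registry to render each node:

```typescript
type MaterialId = string;

interface SchemaNode {
  id: string;
  type: MaterialId;
  name: string;
  props: Record<string, unknown>;
  children: SchemaNode[];
}

// A renderer turns one node's props plus its already-rendered children into markup.
type Renderer = (props: Record<string, unknown>, children: string) => string;

// Depth-first walk: render children first, then hand them to the parent's renderer.
function generate(node: SchemaNode, registry: Map<MaterialId, Renderer>): string {
  const render = registry.get(node.type);
  if (!render) throw new Error(`Unknown material: ${node.type}`);
  const children = node.children.map((child) => generate(child, registry)).join('');
  return render(node.props, children);
}
```

A real generator emits a full project from a template rather than a markup string, but the recursive schema walk is the same.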

Architecture Design

IOC Architecture

Modules are loosely coupled through an IOC container; each module declares its usage contract and is injected at runtime, allowing independent iteration and easy extension.
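A minimal sketch of the idea, assuming a string-token container rather than the platform's actual implementation (libraries such as InversifyJS provide a production-grade version):

```typescript
type Token = string;

// A minimal IOC container: modules register factories, consumers resolve by token.
class Container {
  private factories = new Map<Token, () => unknown>();
  private instances = new Map<Token, unknown>();

  register(token: Token, factory: () => unknown): void {
    this.factories.set(token, factory);
  }

  // Lazily instantiate on first resolve, then reuse the singleton.
  resolve<T>(token: Token): T {
    if (!this.instances.has(token)) {
      const factory = this.factories.get(token);
      if (!factory) throw new Error(`No provider registered for "${token}"`);
      this.instances.set(token, factory());
    }
    return this.instances.get(token) as T;
  }
}
```

Because consumers depend only on tokens, any registration can be swapped without touching the modules that use it, which is what enables independent iteration.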

Editor‑Render Separation

The canvas is split into a rendering layer (a mask of virtual elements) and an editing layer (real DOM elements). Users interact with the editing layer, while visual feedback occurs on the rendering layer.

Page preview can be achieved by removing the mask for instant rendering.

Interaction handles (anchors, guidelines) live only in the rendering layer, keeping the editing layer clean.

The design fully decouples editing specifications from rendering specifications.

interface ComponentAddAction {}
interface ComponentDragAction {}
...

Event System

Each functional module can subscribe to lifecycle events such as init (page load) and dragEnd (component drop), making it possible to add features without modifying existing modules.
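The subscription mechanism can be sketched as a small event bus (the event names init and dragEnd come from the text; the bus API itself is illustrative):

```typescript
type Handler = (payload?: unknown) => void;

class EventBus {
  private handlers = new Map<string, Set<Handler>>();

  // Subscribe and get back an unsubscribe function.
  on(event: string, handler: Handler): () => void {
    if (!this.handlers.has(event)) this.handlers.set(event, new Set());
    this.handlers.get(event)!.add(handler);
    return () => {
      this.handlers.get(event)!.delete(handler);
    };
  }

  emit(event: string, payload?: unknown): void {
    this.handlers.get(event)?.forEach((handler) => handler(payload));
  }
}
```

A snapping module, for instance, could subscribe to dragEnd without the drag module knowing it exists.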

Specification Design

Two key specifications are highlighted:

UIDL: a JSON‑based "User Interface Definition Language" that describes meta information, project info, page schema, and used materials.

Material Specification: defines component attributes, terminal classification (mobile, TV, etc.), form (component, plugin, action), and functionality (basic, container, gameplay).

interface ComponentProps { id: string; [key: string]: any }

interface Schema {
  id: string;
  type: MaterialId;
  name: string;
  children: Schema[];
  props: ComponentProps;
  styles: React.CSSProperties;
}

interface UIDL {
  meta: { version: string };
  project: { id: string; title: string; version: string; author: User; url: string };
  schema: Schema;
  materials: { components: ComponentMaterial[] };
}

Canvas Design

The canvas part focuses on three core actions: adding components, dragging components, and selecting components.

Adding Components

When a component is added, the platform generates a Schema node for UIDL and loads the material asynchronously to avoid loading all assets upfront.

genSchema(component: ComponentMaterial): Schema {
  const children: Schema[] = [];
  // TODO: fill props from settings
  const props = {};
  // TODO: fill default styles
  const styles: React.CSSProperties = SchemaService.defaultStyles;
  return {
    id: this.genComponentId(),
    type: component.id,
    name: component.name,
    props,
    children,
    styles
  };
}

Materials are packaged using ESM and loaded via SystemJS, with sandboxing (logic isolation via Proxy, style isolation via Shadow DOM or CSS modules) to prevent interference.
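The logic-isolation half can be sketched with a Proxy that traps writes into a per-material scope so the shared globals stay clean (a simplified model; production sandboxes also handle property deletion, teardown, and more traps):

```typescript
// Wrap a shared global object so each material's writes stay private to it.
function createSandbox(shared: Record<string, unknown>) {
  const local: Record<PropertyKey, unknown> = {};
  return new Proxy(shared, {
    get(target, key) {
      // Reads fall through to the shared scope unless shadowed locally.
      return key in local ? local[key] : (target as Record<PropertyKey, unknown>)[key];
    },
    set(_target, key, value) {
      local[key] = value; // writes never touch the shared scope
      return true;
    },
    has(target, key) {
      return key in local || key in target;
    },
  });
}
```

Two materials can then both define the same global name without interfering with each other or with the editor itself.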

Dragging Components

Drag operations are divided into three phases (MouseDown, MouseMove, MouseUp). The platform records start point, direction, distance, and updates a mirror component for visual feedback.

let componentMap = {};
let mirror = { move: (e) => {}, destroy: () => {} };
onMouseDown = (e) => {
  const schema = genSchema(e);
  loadComponent(schema);         // load the material asynchronously
  mirror = renderMirror(schema); // ghost element that follows the cursor
};
onMouseMove = (e) => { mirror.move(e); };
onMouseUp = (e) => { mirror.destroy(); };

Smart snapping includes position snapping (1‑5 px threshold), distance snapping, and size snapping, all visualized with guideline overlays.
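Position snapping reduces to finding the nearest guideline within the threshold. A sketch, where the helper name `snapToGuidelines` and the result shape are illustrative:

```typescript
const SNAP_THRESHOLD = 5; // px, the upper end of the 1-5 px range

interface SnapResult {
  value: number;      // snapped coordinate (unchanged when nothing is in range)
  snapped: boolean;
  guideline?: number; // which guideline we locked onto, for drawing the overlay
}

// Snap a dragged edge or center coordinate to the nearest candidate guideline.
function snapToGuidelines(
  value: number,
  guidelines: number[],
  threshold: number = SNAP_THRESHOLD,
): SnapResult {
  let best: number | undefined;
  let bestDistance = Infinity;
  for (const line of guidelines) {
    const distance = Math.abs(line - value);
    if (distance <= threshold && distance < bestDistance) {
      best = line;
      bestDistance = distance;
    }
  }
  return best === undefined
    ? { value, snapped: false }
    : { value: best, snapped: true, guideline: best };
}
```

Distance and size snapping follow the same pattern, with inter-component gaps and widths/heights as the candidate values.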

Selecting Components

Selection triggers an event distribution mechanism. Two approaches are discussed: per‑component event binding or a global event listener that resolves the target component. The article recommends the former for accuracy.

function withEventProvider<P extends object>(Component: React.ComponentType<P>) {
  const Wrapped: React.ComponentType<P> = (props) => (
    <div onClick={(e) => console.log(e, props)}>
      <Component {...props} />
    </div>
  );
  const name = Component.displayName ?? Component.name ?? 'Unknown';
  Wrapped.displayName = `withEventProvider(${name})`;
  return Wrapped;
}

Quick actions such as delete, copy‑paste, cut, text editing (via data‑edit="propKey"), and rotation are provided for selected components.

Multi‑Selection

Users can drag a selection rectangle; the platform computes the minimal bounding box of fully contained components and allows simultaneous move, resize, and smart snapping based on the group’s outline.

let startPoint = null;
const area = { width: 0, height: 0, x: 0, y: 0 };
const onMouseDown = (e) => { startPoint = { x: e.x, y: e.y }; };
const onMouseMove = (e) => {
  if (!startPoint) return;
  area.width = Math.abs(e.x - startPoint.x);
  area.height = Math.abs(e.y - startPoint.y);
  area.x = Math.min(e.x, startPoint.x);
  area.y = Math.min(e.y, startPoint.y);
};
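The code above only tracks the rubber-band rectangle itself; the group outline is then derived by filtering for fully contained components and taking their minimal bounding box (the `Rect` shape here is illustrative):

```typescript
interface Rect { x: number; y: number; width: number; height: number }

// A component joins the selection only when fully inside the rubber-band area.
function contains(area: Rect, rect: Rect): boolean {
  return (
    rect.x >= area.x &&
    rect.y >= area.y &&
    rect.x + rect.width <= area.x + area.width &&
    rect.y + rect.height <= area.y + area.height
  );
}

// Minimal bounding box of the selected components; the group's move/resize
// handles and snapping all work against this outline.
function boundingBox(rects: Rect[]): Rect | null {
  if (rects.length === 0) return null;
  const minX = Math.min(...rects.map((r) => r.x));
  const minY = Math.min(...rects.map((r) => r.y));
  const maxX = Math.max(...rects.map((r) => r.x + r.width));
  const maxY = Math.max(...rects.map((r) => r.y + r.height));
  return { x: minX, y: minY, width: maxX - minX, height: maxY - minY };
}
```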

Conclusion

The article outlines the technical challenges of the canvas in a no‑code/low‑code building platform, including modular IOC architecture, editor‑render separation, event systems, UIDL and material specifications, drag‑and‑drop mechanics, smart snapping, and multi‑selection.

Future articles will cover the Settings panel and Generator components.

About Us

We are the front‑end team of ByteDance’s Xigua Video product, sharing engineering practices in marketing builders, interactive features, backend stability, Node.js, and more.

Read the original article, join us, and explore challenging projects! Recruitment: https://job.toutiao.com/s/reAThAC (Beijing/Shanghai/Xiamen).

