
UI2CODE: AI‑Powered Automatic UI‑to‑Flutter Code Generation

UI2CODE, an AI‑powered tool from Xianyu's tech team, automatically turns UI screenshots into editable Flutter code. It extracts visual elements with machine vision, classifies them with deep‑learning models, generates a domain‑specific language (DSL) with a recursive neural network, and maps that DSL onto syntax‑tree templates, aiming for pixel‑level precision, 100% accuracy, and maintainable output.


Background: With the rise of mobile internet, UI developers spend a lot of time manually reproducing design drafts.

UI2CODE is a tool developed by the Xianyu tech team that uses machine‑vision and AI to convert UI screenshots into client‑side code (Flutter).

Since its 2018 feasibility study, UI2CODE has undergone three major redesigns. The project focuses on three core problems: pixel‑level precision, 100% accuracy, and maintainable code.

Workflow (four steps):

Extract GUI elements from the visual draft using machine‑vision.

Identify element types with deep‑learning classifiers.

Generate a domain‑specific language (DSL) via a recursive neural network.

Match the DSL against syntax‑tree templates to produce Flutter code.
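The four steps above can be sketched as a pipeline. The function and class names below are illustrative placeholders, not the actual UI2CODE API; each stage is stubbed out so the sketch runs end to end:

```python
from dataclasses import dataclass

@dataclass
class Element:
    """A GUI element cut from the screenshot (position plus a category)."""
    x: int
    y: int
    kind: str = "unknown"   # filled in by the classifier step

def ui2code_pipeline(screenshot):
    """Illustrative end-to-end flow matching the four steps above."""
    elements = extract_elements(screenshot)   # 1. machine-vision slicing
    for el in elements:
        el.kind = classify(el)                # 2. deep-learning classifier
    dsl_tree = build_dsl(elements)            # 3. layout analysis -> DSL
    return render_flutter(dsl_tree)           # 4. DSL -> Flutter via templates

# Stub implementations so the sketch is runnable:
def extract_elements(screenshot):
    return [Element(x, 0) for x in screenshot]

def classify(el):
    return "UI"

def build_dsl(elements):
    return {"type": "Column", "children": [e.kind for e in elements]}

def render_flutter(tree):
    return f"Column(children: [{', '.join(tree['children'])}])"
```

For example, `ui2code_pipeline([0, 40])` returns `"Column(children: [UI, UI])"`.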

Layout analysis first cuts the image horizontally, then vertically, recording the x‑y coordinates of each cut. The process yields six GUI element images with their positions, which are later classified.
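As a toy illustration of the horizontal‑then‑vertical cutting, a synthetic binary mask can stand in for a real screenshot (0 marks background, 1 marks content); the `blank_runs` helper is an assumption made for this sketch:

```python
import numpy as np

def blank_runs(mask, axis):
    """Indices of rows (axis=0) or columns (axis=1) that are entirely background."""
    return np.where(~mask.any(axis=1 - axis))[0]

# Two "elements" separated by a blank horizontal band:
mask = np.zeros((7, 5), dtype=int)
mask[0:2, 1:4] = 1   # first element occupies rows 0-1
mask[4:6, 0:3] = 1   # second element occupies rows 4-5

print(blank_runs(mask, axis=0))        # rows 2, 3, 6 are blank: cut horizontally there
print(blank_runs(mask[0:2], axis=1))   # within the first band, columns 0 and 4 are blank
```

The first call locates the horizontal cut band; repeating the search per band along the other axis yields the vertical cuts, which is the recursion the pipeline applies.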

Component recognition trains TensorFlow CNN and SSD models on collected component samples. Recognized components fall into three categories: UI (Flutter native components), CI (custom UIKIT components), and BI (business‑logic cards).
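The category a classifier assigns then determines how an element is emitted. The routing below is a simplified sketch; the emitter functions and the `XYLabel`/`ItemCard` widget names are hypothetical, not part of UI2CODE:

```python
# Hypothetical emitters, one per component category.
def emit_ui(el):
    # UI: a Flutter native widget.
    return f"Text('{el}')"

def emit_ci(el):
    # CI: a custom UIKIT component resolved from the project (name is made up).
    return f"XYLabel(data: '{el}')"

def emit_bi(el):
    # BI: a business-logic card filled from a template (name is made up).
    return f"ItemCard(model: {el}Model())"

EMITTERS = {"UI": emit_ui, "CI": emit_ci, "BI": emit_bi}

def emit(category, el):
    """Route a classified element to its code-generation strategy."""
    return EMITTERS[category](el)
```

For example, `emit("UI", "hello")` yields `"Text('hello')"`, while a BI element is routed to a card template instead.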

Attribute extraction handles shape contours, font properties, and component dimensions. The following Python functions illustrate the image‑to‑matrix conversion and column‑cut algorithm used in the pipeline:

import numpy as np
from PIL import Image

def image_to_matrix(filename):
    """Load an image, convert it to grayscale, and return its pixel matrix plus size."""
    im = Image.open(filename)
    width, height = im.size
    im = im.convert("L")  # 8-bit grayscale
    matrix = np.asarray(im)
    return matrix, width, height

def cut_by_col(cut_num, _im_mask):
    """Append to cut_num the start row of every all-zero (blank) band in the mask.

    A blank band that begins at row 0 is a top margin, not a cut, and is skipped;
    likewise, a band still open at the bottom edge is dropped as a margin.
    """
    zero_start = None
    for x in range(len(_im_mask)):
        row = _im_mask[x]
        row_is_blank = len(np.where(row == 0)[0]) == len(row)
        if row_is_blank and zero_start is None:
            zero_start = x                  # a blank band begins here
        elif not row_is_blank and zero_start is not None:
            if zero_start > 0:
                cut_num.append(zero_start)  # record the cut position
            zero_start = None               # band ended; look for the next one

Layout analysis originally used a four‑layer LSTM, but it was replaced with rule‑based methods because training data was limited. The final step expands the DSL tree through templates into Flutter code, which developers can then edit and verify.
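A minimal sketch of that final templating step, assuming a DSL tree of nested dicts; the node names and template strings are illustrative, not the actual UI2CODE DSL:

```python
# Hypothetical DSL-node -> Flutter-source templates.
TEMPLATES = {
    "Column": "Column(children: <Widget>[{children}])",
    "Row": "Row(children: <Widget>[{children}])",
    "Text": "Text('{value}')",
}

def render(node):
    """Recursively expand a DSL tree into a Flutter widget expression."""
    tpl = TEMPLATES[node["type"]]
    if "children" in node:
        body = ", ".join(render(child) for child in node["children"])
        return tpl.format(children=body)
    return tpl.format(value=node["value"])

tree = {"type": "Column", "children": [
    {"type": "Text", "value": "Hello"},
    {"type": "Row", "children": [{"type": "Text", "value": "UI2CODE"}]},
]}
print(render(tree))
# Column(children: <Widget>[Text('Hello'), Row(children: <Widget>[Text('UI2CODE')])])
```

Because the output is plain Flutter source rather than an opaque binary, developers can diff, edit, and verify it like hand‑written code.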

The system is packaged as a plugin that also searches for custom UIKIT definitions in the project, improving code reuse.
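In its simplest form, such a project search could scan Dart sources for widget class declarations. This regex‑based sketch is an assumption about the mechanism, not the plugin's actual implementation:

```python
import re

# Matches Dart declarations such as "class XYButton extends StatelessWidget".
WIDGET_DECL = re.compile(
    r"class\s+(\w+)\s+extends\s+(StatelessWidget|StatefulWidget)")

def find_uikit_widgets(dart_source):
    """Return the names of widget classes declared in one Dart source string."""
    return [m.group(1) for m in WIDGET_DECL.finditer(dart_source)]

src = """
class XYButton extends StatelessWidget {}
class XYCard extends StatefulWidget {}
"""
print(find_uikit_widgets(src))  # ['XYButton', 'XYCard']
```

Matching a recognized CI element against this index lets the generator reference an existing UIKIT widget instead of emitting a new one, which is where the code‑reuse gain comes from.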

Conclusion: UI2CODE demonstrates a hybrid approach where machine‑vision handles deterministic tasks and deep‑learning assists classification, aiming to automate UI implementation while providing editable output.

Tags: Flutter, code generation, UI2Code, deep learning, machine vision
Written by Xianyu Technology, the official account of the Xianyu technology team.