
Green Area Generation Method Based on Pix2pix Model

This paper proposes a pix2pix‑based method to automatically generate green areas for large‑scale outdoor 3D scene modeling, detailing dataset creation via OpenCV segmentation, model training, region partitioning, and experimental results showing a 93.8% acceptance rate, significantly improving efficiency over manual drawing.

Beike Product & Technology

The authors address the low efficiency of manually drawing green areas in large‑scale outdoor 3D scene reconstruction by introducing an automatic generation method based on the pix2pix generative adversarial network.

After briefly reviewing the fundamentals of GANs and conditional GANs, the paper describes the pix2pix architecture, which combines a U‑Net generator with a patch‑GAN discriminator and optimizes a loss composed of adversarial and L1 components.
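The combined objective described above can be sketched in PyTorch. This is a minimal illustration of the standard pix2pix generator loss (adversarial term plus λ-weighted L1 term), not the paper's exact code; the `discriminator` callable and tensor names are assumptions.

```python
import torch
import torch.nn as nn

def generator_loss(discriminator, real_a, real_b, fake_b, l1_weight=100.0):
    """Pix2pix-style generator objective: adversarial + L1 reconstruction.

    real_a: input map, real_b: target green-area image, fake_b: G(real_a).
    l1_weight is the usual lambda (100 in the original pix2pix paper).
    """
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()
    # The patch-GAN discriminator scores (input, output) pairs.
    pred_fake = discriminator(torch.cat([real_a, fake_b], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))  # try to fool D
    rec = l1(fake_b, real_b)                          # stay close to the target
    return adv + l1_weight * rec
```

The L1 term keeps the output aligned with the ground-truth layout, while the adversarial term pushes it toward realistic texture.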

In the proposed workflow, OpenCV is used to automatically segment map images into binary masks that separate buildable, road, and water regions from potential green‑area zones. These masks serve as inputs for constructing paired training datasets.

The dataset consists of 1,000 training, 200 validation, and 300 test image pairs, with resolutions ranging from 256 × 256 to 1,024 × 1,024 pixels, each pair containing an original binary map (A) and a manually designed green‑area target (B).
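A paired A/B dataset of this shape is commonly loaded as follows; the directory layout (`A/` and `B/` folders with matching filenames) is a hypothetical convention, not one stated in the paper:

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class PairedMapDataset(Dataset):
    """Yields (A, B) pairs: A is the binary map, B the manually
    designed green-area target with the same filename."""

    def __init__(self, root, transform=None):
        self.a_dir = os.path.join(root, "A")
        self.b_dir = os.path.join(root, "B")
        self.names = sorted(os.listdir(self.a_dir))
        self.transform = transform

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        a = Image.open(os.path.join(self.a_dir, self.names[idx]))
        b = Image.open(os.path.join(self.b_dir, self.names[idx]))
        if self.transform:
            a, b = self.transform(a), self.transform(b)
        return a, b
```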

Training is performed in Python with the PyTorch framework on a Windows 10 machine equipped with an NVIDIA GeForce RTX 2080 Ti GPU. The model is initialized with a Gaussian distribution (mean 0, std 0.02), uses a learning rate of 0.001, and is optimized with Adam; batch size is set to 1 to accommodate varying image sizes.
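The reported initialization and optimizer settings map directly onto PyTorch. A minimal sketch, with a single convolution standing in for the U-Net generator or patch-GAN discriminator:

```python
import torch
import torch.nn as nn

def init_weights(net, mean=0.0, std=0.02):
    """Gaussian initialization (mean 0, std 0.02), as reported."""
    def _init(m):
        if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
            nn.init.normal_(m.weight, mean, std)
            if m.bias is not None:
                nn.init.zeros_(m.bias)
    net.apply(_init)
    return net

# Placeholder module; in the paper this would be the full generator.
net = init_weights(nn.Conv2d(1, 1, 3))
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)
```

Batch size 1 simply means each optimizer step sees one image pair, which sidesteps the need to resize the variably sized inputs to a common resolution.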

Experimental evaluation combines objective metrics (Inception Score, FID) with a subjective user study involving 50 participants who rated the realism of generated green‑area images. The method achieved a 93.8% acceptance rate, demonstrating that the pix2pix‑generated results meet practical quality requirements.
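The subjective tally reduces to a simple fraction of "acceptable" votes over all (participant, image) ratings. A minimal sketch with illustrative numbers, not the study's actual data:

```python
def acceptance_rate(votes):
    """votes: iterable of booleans, one per (participant, image) rating.

    Returns the fraction of ratings marked acceptable.
    """
    votes = list(votes)
    return sum(votes) / len(votes)
```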

Post‑processing includes smoothing the generated masks with OpenCV, exporting the green‑area data to JSON, and converting it to FBX files for import into 3ds Max, where a plugin randomly populates trees, completing the automated reconstruction pipeline.

In conclusion, leveraging deep‑learning‑based image‑to‑image translation substantially reduces manual effort, improves consistency, and accelerates the creation of realistic green‑area textures for extensive outdoor 3D visualizations.

Tags: computer vision, deep learning, GAN, 3D modeling, green area generation, pix2pix
Written by

Beike Product & Technology

As Beike's official product and technology account, we are committed to building a platform for sharing Beike's product and technology insights with internet/O2O developers and product professionals. We publish high-quality original articles, tech salon events, and recruitment information weekly. Follow us to stay updated.
