Creating a Realistic Rain Effect with Depth Rendering in three.js
This tutorial demonstrates how to create a realistic rain effect in three.js by generating depth maps with an orthographic camera, rendering them to a texture, and using custom shaders to position and fade raindrop meshes based on depth information.
Before diving in, let's briefly cover the underlying principles.
1. What is depth?
Depth is the z‑coordinate of a point in 3D space after it has been transformed by the model‑view‑projection (MVP) matrices and remapped from normalized device coordinates to the range [0, 1], where larger values are farther from the camera.
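To make the mapping concrete, here is a plain-JavaScript sketch (the function name is illustrative, not from the tutorial) of how an NDC z value becomes the [0, 1] depth value described above:

```javascript
// After projection and perspective divide, z sits in NDC space [-1, 1].
// The depth buffer stores it remapped to [0, 1].
function ndcToDepth(ndcZ) {
  return ndcZ * 0.5 + 0.5; // [-1, 1] -> [0, 1]
}

// Near plane (NDC z = -1) maps to depth 0, far plane (NDC z = +1) to depth 1.
console.log(ndcToDepth(-1)); // 0
console.log(ndcToDepth(1));  // 1
```

This is the same remap the depth-scene vertex shader performs later with `gl_Position.z / 2.0 + 0.5`.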
2. Obtaining rendering information in WebGL
In WebGL, rendering data is stored in a FrameBufferObject (FBO). You can create and read from it, though the details are not the focus here.
3. Obtaining rendering information in three.js
three.js provides THREE.WebGLRenderTarget(), which encapsulates FBO handling and lets you capture scene renders such as depth maps.
Ready? Let's start!
1. Skip the basic scene template (assumed to be already set up)
2. Define the rain volume with a Box3 and place a plane in the middle to block the rain
box = new THREE.Box3(
new THREE.Vector3(-200, 0, -200),
new THREE.Vector3(200, 200, 200)
);
const geometry = new THREE.PlaneGeometry(100, 400)
geometry.rotateX(-Math.PI / 2)
const mesh = new THREE.Mesh(
geometry,
new THREE.MeshBasicMaterial({ side: THREE.DoubleSide })
);
mesh.position.y = 100
scene.add(mesh);

The result is a simple plane that will serve as the rain blocker.
3. Rendering the depth map
// Create render target
target = new THREE.WebGLRenderTarget(WIDTH, HEIGHT);
target.texture.format = THREE.RGBFormat; // note: RGBFormat was removed in newer three.js; use THREE.RGBAFormat there
target.texture.minFilter = THREE.NearestFilter;
target.texture.magFilter = THREE.NearestFilter;
target.texture.generateMipmaps = false;
// Create orthographic camera
orthCamera = new THREE.OrthographicCamera();
const center = new THREE.Vector3();
box.getCenter(center);
// Set orthographic parameters based on the BOX
orthCamera.left = box.min.x - center.x;
orthCamera.right = box.max.x - center.x;
orthCamera.top = box.max.z - center.z;
orthCamera.bottom = box.min.z - center.z;
orthCamera.near = .1;
orthCamera.far = box.max.y - box.min.y;
// Position the camera above the BOX
orthCamera.position.copy(center);
orthCamera.position.y += box.max.y - center.y;
orthCamera.lookAt(center);
// Update matrices
orthCamera.updateProjectionMatrix();
orthCamera.updateWorldMatrix();
// Helper to visualise the camera
const helper = new THREE.CameraHelper(orthCamera)
scene.add(helper);

The orthographic camera will be used later to render the depth scene.
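How the frustum values above derive from the box can be checked in plain JavaScript (no three.js needed; the min/max values mirror the Box3 from step 2):

```javascript
// Box extents as used in the article's Box3.
const min = { x: -200, y: 0, z: -200 };
const max = { x: 200, y: 200, z: 200 };
const center = {
  x: (min.x + max.x) / 2,
  y: (min.y + max.y) / 2,
  z: (min.z + max.z) / 2,
};

// The camera sits at the top of the box looking straight down, so the x/z
// extents become the side planes and the box height becomes `far`.
const frustum = {
  left: min.x - center.x,
  right: max.x - center.x,
  top: max.z - center.z,
  bottom: min.z - center.z,
  near: 0.1,
  far: max.y - min.y,
};
console.log(frustum);
// { left: -200, right: 200, top: 200, bottom: -200, near: 0.1, far: 200 }
```

Centering the extents on the box center keeps the camera's local frustum symmetric, so only the camera position has to move if the box moves.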
4. Create a second scene for depth rendering
// Create depth scene
depthScene = new THREE.Scene();
depthScene.overrideMaterial = new THREE.ShaderMaterial({
vertexShader: `
varying float color;
void main() {
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
color = gl_Position.z / 2.0 + 0.5;
}
`,
fragmentShader: `
varying float color;
vec4 encodeFloat2RGBA(float v)
{
vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * v;
enc = fract(enc);
enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
return enc;
}
void main() {
gl_FragColor = encodeFloat2RGBA(1.0 - color);
}
`,
});

Rendering the depth scene with the orthographic camera produces a texture where color encodes depth.
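The GLSL pack/unpack pair can be sanity-checked outside the shader. Here is a plain-JavaScript port of the same arithmetic (ignoring the 8-bit quantization the real render target applies), verifying that a depth value survives the round trip:

```javascript
// GLSL's fract(): fractional part of x.
function fract(x) { return x - Math.floor(x); }

// Port of encodeFloat2RGBA: spread a [0, 1) float across four channels,
// each channel holding progressively finer precision (factors of 255).
function encodeFloatToRGBA(v) {
  const enc = [1.0, 255.0, 65025.0, 16581375.0].map((s) => fract(s * v));
  // Subtract the "carry" that belongs to the next, finer channel.
  enc[0] -= enc[1] / 255.0;
  enc[1] -= enc[2] / 255.0;
  enc[2] -= enc[3] / 255.0;
  return enc;
}

// Port of decodeRGBA2Float: weighted sum (a dot product in GLSL).
function decodeRGBAToFloat(rgba) {
  return rgba[0] + rgba[1] / 255.0 + rgba[2] / 65025.0 + rgba[3] / 16581375.0;
}

const depth = 0.37;
const err = Math.abs(decodeRGBAToFloat(encodeFloatToRGBA(depth)) - depth);
console.log(err < 1e-6); // true
```

The packing exists because a plain RGBA8 target gives only 8 bits per channel; spreading the float across all four channels preserves enough precision for the depth comparison in the rain shader.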
// Render depth into the target
renderer.setRenderTarget(target);
depthScene.children = [mesh]; // the blocker plane created earlier
renderer.render(depthScene, orthCamera);
renderer.setRenderTarget(null);

5. With the depth map ready, create the geometry for the rain
Raindrops are created as quad meshes rather than THREE.Points, because point sprites always face the camera automatically; here the billboarding is done manually in the vertex shader, and only around the vertical axis.
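Each droplet contributes four vertices and two triangles. A small illustrative helper (hypothetical, not part of the article's code) shows the index pattern the loop below emits for droplet `i`:

```javascript
// Droplet i occupies vertices [4i .. 4i+3]; its quad is split into
// two triangles: (0,1,2) and (0,2,3) relative to that base offset.
function quadIndices(i) {
  const o = i * 4;
  return [o, o + 1, o + 2, o, o + 2, o + 3];
}

console.log(quadIndices(0)); // → [0, 1, 2, 0, 2, 3]
console.log(quadIndices(1)); // → [4, 5, 6, 4, 6, 7]
```

With 6000 droplets this yields 24000 vertices and 36000 indices, which is why the index buffer below uses a Uint32Array rather than Uint16Array.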
const geometry = new THREE.BufferGeometry();
const vertices = [];
const poses = [];
const uvs = [];
const indices = [];
for (let i = 0; i < 6000; i++) {
const pos = new THREE.Vector3();
pos.x = Math.random() * (box.max.x - box.min.x) + box.min.x;
pos.y = Math.random() * (box.max.y - box.min.y) + box.min.y;
pos.z = Math.random() * (box.max.z - box.min.z) + box.min.z;
const height = (box.max.y - box.min.y) / 15;
const width = height / 50;
vertices.push(
pos.x + width,
pos.y + height,
pos.z,
pos.x - width,
pos.y + height,
pos.z,
pos.x - width,
pos.y,
pos.z,
pos.x + width,
pos.y,
pos.z
);
poses.push(
pos.x,
pos.y,
pos.z,
pos.x,
pos.y,
pos.z,
pos.x,
pos.y,
pos.z,
pos.x,
pos.y,
pos.z
);
uvs.push(1, 1, 0, 1, 0, 0, 1, 0);
indices.push(
i * 4 + 0,
i * 4 + 1,
i * 4 + 2,
i * 4 + 0,
i * 4 + 2,
i * 4 + 3
);
}
geometry.setAttribute(
"position",
new THREE.BufferAttribute(new Float32Array(vertices), 3)
);
geometry.setAttribute(
"pos",
new THREE.BufferAttribute(new Float32Array(poses), 3)
);
geometry.setAttribute(
"uv",
new THREE.BufferAttribute(new Float32Array(uvs), 2)
);
geometry.setIndex(new THREE.BufferAttribute(new Uint32Array(indices), 1));

Material creation
The material makes each raindrop always face the camera horizontally and uses the depth texture to hide droplets that are behind other geometry.
material = new THREE.MeshBasicMaterial({
transparent: true,
opacity: 0.8,
depthWrite: false,
});
material.onBeforeCompile = function (shader, renderer) {
const getFoot = `
attribute vec3 pos;
uniform float top;
uniform float bottom;
uniform float time;
uniform mat4 cameraMatrix;
varying float depth;
varying vec2 depthUv;
#include <common>
float angle(float x, float y){
return atan(y, x);
}
// Compute offset for the updated vertex position
vec2 getFoot(vec2 camera,vec2 _n_pos,vec2 pos){
vec2 position;
float distanceLen = distance(pos, _n_pos);
float a = angle(camera.x - _n_pos.x, camera.y - _n_pos.y);
if (pos.x > _n_pos.x) { a -= 0.785; } else { a += 0.785; } // 0.785 ≈ PI/4
position.x = cos(a) * distanceLen;
position.y = sin(a) * distanceLen;
return position + _n_pos;
}
`;
const begin_vertex = `
float height = top - bottom;
vec3 _n_pos = vec3(pos.x, pos.y- height/30.,pos.z);
vec2 foot = getFoot(vec2(cameraPosition.x, cameraPosition.z), vec2(_n_pos.x, _n_pos.z), vec2(position.x, position.z));
// Simulate falling rain. Bottom -> Bottom+Height is the fall space.
float y = _n_pos.y - bottom - height * fract(time);
y += y < 0.0 ? height : 0.0;
// Depth percentage of the raindrop [0,1]
depth = (1.0 - y / height) ;
// Update vertex position
y += bottom;
y += position.y - _n_pos.y;
vec3 transformed = vec3( foot.x, y, foot.y );
// Transform to orthographic camera space
vec4 cameraDepth = cameraMatrix * vec4(transformed, 1.0);
// Sample UV
depthUv = cameraDepth.xy/2.0 + 0.5;
`;
const depth_vary = `
uniform sampler2D tDepth;
uniform float opacity;
varying float depth;
varying vec2 depthUv;
float decodeRGBA2Float(vec4 rgba)
{
return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}
`;
const depth_frag = `
// Discard fragments that are behind geometry according to the depth map
if(1.0 - depth < decodeRGBA2Float(texture2D( tDepth, depthUv ))) discard;
vec4 diffuseColor = vec4( diffuse, opacity );
`;
shader.vertexShader = shader.vertexShader.replace("#include <common>", getFoot);
shader.vertexShader = shader.vertexShader.replace("#include <begin_vertex>", begin_vertex);
shader.fragmentShader = shader.fragmentShader.replace('uniform float opacity;', depth_vary);
shader.fragmentShader = shader.fragmentShader.replace('vec4 diffuseColor = vec4( diffuse, opacity );', depth_frag);
shader.uniforms.cameraPosition = { value: new THREE.Vector3(0, 200, 0) };
shader.uniforms.top = { value: box.max.y };
shader.uniforms.bottom = { value: box.min.y };
shader.uniforms.time = { value: 0 };
shader.uniforms.cameraMatrix = { value: new THREE.Matrix4() };
shader.uniforms.tDepth = { value: target.texture };
material.uniforms = shader.uniforms;
};

Finally, perform the rendering. To update the depth map dynamically (e.g., a moving model holding an umbrella), execute the following in the animation loop.
function render() {
time = clock.getElapsedTime() / 2;
if (material.uniforms) {
material.uniforms.cameraPosition.value = camera.position;
material.uniforms.time.value = time;
material.uniforms.cameraMatrix.value = new THREE.Matrix4().multiplyMatrices(
orthCamera.projectionMatrix,
orthCamera.matrixWorldInverse
);
// Uncomment to update depth texture each frame
// renderer.setRenderTarget(target);
// renderer.render(depthScene, orthCamera);
// renderer.setRenderTarget(null);
// material.uniforms.tDepth.value = target.texture;
}
renderer.render(scene, camera);
}
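The wrap-around fall driven by `height * fract(time)` in the vertex shader can be checked in plain JavaScript (a sketch with illustrative names, mirroring the `begin_vertex` chunk above):

```javascript
// GLSL's fract(): fractional part of x.
function fract(x) { return x - Math.floor(x); }

// A droplet spawned at spawnY falls by height * fract(time); when it drops
// below the bottom of the box it wraps back up by one full box height.
function dropletY(spawnY, bottom, height, time) {
  let y = spawnY - bottom - height * fract(time);
  if (y < 0) y += height; // wrap to the top of the box
  return y + bottom;
}

// With bottom = 0, height = 200: a droplet spawned at y = 150...
console.log(dropletY(150, 0, 200, 0));   // 150 (t = 0, no fall yet)
console.log(dropletY(150, 0, 200, 0.5)); // 50  (half a cycle later)
console.log(dropletY(150, 0, 200, 0.9)); // 170 (wrapped back above its spawn point)
```

Because `fract(time)` repeats every unit of time, each droplet loops through the box forever with no per-frame state on the CPU; only the single `time` uniform changes.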