Understanding Node.js Streams: Concepts, Pipe, Pipeline, and Async Iterators
This article explains the fundamentals of Node.js streams: basic concepts, stream types, pipe and pipeline usage, and backpressure handling. It also demonstrates how async iterators can simplify stream processing, with detailed code examples.
The article introduces Node.js streams, describing why streams are needed for processing large data sets without loading everything into memory, and highlights their benefits of memory savings and improved throughput.
It outlines the four basic stream types (Writable, Readable, Duplex, and Transform), providing examples such as `fs.createWriteStream()` and `net.Socket`, and explains key stream characteristics like events, independent buffers, and character encoding.
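As a minimal sketch of how these types fit together (the sample data and the uppercase transform are illustrative; a Transform is itself a Duplex, readable on one side and writable on the other):

```javascript
import { Readable, Writable, Transform } from 'stream';
import { once } from 'events';

// Readable: a source of data, here built from an in-memory array.
const readable = Readable.from(['node', ' ', 'streams']);

// Transform: a Duplex stream that maps each input chunk to an output chunk.
const upper = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, chunk.toString().toUpperCase());
  }
});

// Writable: a sink that consumes data.
const chunks = [];
const writable = new Writable({
  write(chunk, encoding, callback) {
    chunks.push(chunk.toString());
    callback();
  }
});

readable.pipe(upper).pipe(writable);
await once(writable, 'finish');
console.log(chunks.join('')); // NODE STREAMS
```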
Readable streams operate in either flowing or paused mode; the article details how to switch modes using the `'data'` event, `stream.resume()`, `stream.pause()`, and `stream.unpipe()`, and discusses backpressure handling with the `highWaterMark` threshold.
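The mode switching can be sketched like this (the sample data and the timer delay are illustrative):

```javascript
import { Readable } from 'stream';
import { once } from 'events';

const readable = Readable.from(['one', 'two', 'three']);
const seen = [];

// Attaching a 'data' handler switches the stream into flowing mode.
readable.on('data', (chunk) => {
  seen.push(chunk);
  readable.pause();                       // back to paused mode
  setTimeout(() => readable.resume(), 5); // flowing again shortly after
});

await once(readable, 'end');
console.log(seen.join(',')); // one,two,three
```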
For combining streams, the traditional `pipe` method is presented alongside its limitations, followed by the modern `pipeline` API introduced in Node.js v10, which automatically manages errors and closes all streams in a chain.
Several code examples illustrate these concepts, including a pipe chain with compression and error handling. The transform below fails deliberately to show that `pipe` does not propagate errors, so every stream in the chain has to be cleaned up manually (the `smallFile` and `upperFile` paths are assumed; the original defines them elsewhere):

```javascript
import * as path from 'path';
import * as fs from 'fs';
import { createGzip } from 'zlib';
import { Transform } from 'stream';

// Assumed paths; the original article defines smallFile and upperFile elsewhere.
const smallFile = path.join(__dirname, '../../../temp/small.txt');
const upperFile = path.join(__dirname, '../../../temp/upper.txt');

const readableStream = fs.createReadStream(smallFile, { encoding: 'utf-8', highWaterMark: 256 });
const writeableStream = fs.createWriteStream(upperFile, { encoding: 'utf-8', highWaterMark: 10 });

// A transform that pushes uppercased data, then fails on purpose.
const upperCaseTr = new Transform({
  transform(chunk, encoding, callback) {
    this.push(chunk.toString().toUpperCase());
    callback(new Error('error'));
  }
});

readableStream.pipe(createGzip()).pipe(upperCaseTr).pipe(writeableStream);

// pipe() neither forwards the error nor closes the other streams,
// so each one must be destroyed by hand.
upperCaseTr.on('error', (err) => {
  console.log('upperCaseTr error', err);
  writeableStream.destroy();
  readableStream.destroy();
});
```

A `pipeline` example shows a simple server streaming a large file to the client:
```javascript
import * as fs from 'fs';
import { createServer } from 'http';
import * as path from 'path';
import { pipeline } from 'stream';

const server = createServer();

server.on('request', (req, res) => {
  const readable = fs.createReadStream(path.join(__dirname, '../../../temp/big.txt'));
  // pipeline forwards errors to this callback and destroys every stream in the chain.
  pipeline(readable, res, (err) => {
    if (err) {
      console.log('pipeline error', err);
    }
  });
});

server.listen(8000);
```

The article then introduces async iterators as a modern way to consume readable streams, showing how `Readable.from()` can turn iterables or generators into streams, and how `for await...of` loops simplify data processing.
```javascript
import { Readable } from 'stream';

async function* generator() {
  for (let i = 0; i < 1024; i++) {
    yield i;
  }
}

// Readable.from() turns the generator into a readable stream,
// which for await...of then consumes chunk by chunk.
const readable = Readable.from(generator());

for await (const chunk of readable) {
  console.log(chunk);
}
```

It also provides a pattern for wrapping writable streams with promises to handle backpressure and errors, using `once(stream, 'drain')` and `stream.finished`.
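That pattern can be sketched as follows, assuming Node.js 15+ for the promise-based `finished` from `stream/promises` (the article refers to the callback form, `stream.finished`); the `write` helper and the temp-file path are illustrative:

```javascript
import { createWriteStream } from 'fs';
import { once } from 'events';
import { finished } from 'stream/promises';
import { tmpdir } from 'os';
import { join } from 'path';

// Illustrative helper: write one chunk, waiting for 'drain' whenever
// write() returns false (the internal buffer hit highWaterMark).
async function write(stream, chunk) {
  if (!stream.write(chunk)) {
    await once(stream, 'drain');
  }
}

const out = createWriteStream(join(tmpdir(), 'drain-demo.txt'), { highWaterMark: 16 });

for (let i = 0; i < 100; i++) {
  await write(out, `line ${i}\n`);
}

out.end();
await finished(out); // resolves on 'finish', rejects on 'error'
console.log('all chunks flushed');
```

Because each `write` awaits `'drain'` when the buffer is full, data never accumulates in memory faster than the destination can absorb it.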
Finally, the article summarizes best practices: prefer `pipeline` over `pipe`, use `Readable.from()` for creating readable streams, wrap writable streams with promises to avoid data accumulation, and leverage async iterators for more ergonomic stream consumption.
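Putting several of these practices together, a minimal sketch (assuming Node.js 15+ for `stream/promises`; the `upperCase` stage and sample data are illustrative, not from the article):

```javascript
import { Readable } from 'stream';
import { pipeline } from 'stream/promises';

// An async generator can serve as a transform stage in pipeline()
// (generator stages are supported since Node.js 13.10).
async function* upperCase(source) {
  for await (const chunk of source) {
    yield chunk.toString().toUpperCase();
  }
}

const collected = [];

await pipeline(
  Readable.from(['hello', ' ', 'world']),
  upperCase,
  // The final stage consumes the transformed stream with for await...of.
  async function (source) {
    for await (const chunk of source) {
      collected.push(chunk);
    }
  }
);

console.log(collected.join('')); // HELLO WORLD
```

Here `pipeline` handles error propagation and cleanup, while the async-generator stages keep the transformation logic plain and composable.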
ByteDance ADFE Team
Official account of ByteDance Advertising Frontend Team