
Understanding Apache Pulsar: Cloud‑Native Messaging, Storage‑Compute Separation, and Batch‑Stream Fusion with Flink

This article explains Apache Pulsar’s cloud‑native, storage‑compute separated architecture, its data model and scalability features, and how it integrates with Flink to provide a unified platform for both real‑time streaming and batch processing in big‑data applications.


1. What is Apache Pulsar

Apache Pulsar is a cloud‑native, distributed messaging and streaming platform that became a top‑level Apache project in 2018. It adopts a storage‑compute separated architecture and uses Apache BookKeeper as a purpose‑built storage engine for messages and streams.

2. Architecture

The architecture has two layers: a stateless broker layer (the service layer) and a storage layer powered by BookKeeper. Brokers handle message routing and producer/consumer connections but keep no persistent state, while BookKeeper provides durable, high‑throughput log storage. This separation lets compute (brokers) and storage (BookKeeper nodes) scale independently.

Both layers are peer‑to‑peer: BookKeeper replicates each entry across a quorum of nodes rather than relying on a master/slave scheme, so nodes can be added easily and availability stays high.
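The quorum model described above can be sketched in a few lines. This is an illustrative simulation, not BookKeeper's actual API: an entry is sent to a write quorum of bookies and the write succeeds once an ack quorum of them confirm it, with no leader involved.

```python
# Illustrative sketch (not BookKeeper's API): quorum-based replication
# acknowledges a write once enough replicas confirm it.

def quorum_write(ensemble, entry, write_quorum, ack_quorum):
    """Send `entry` to `write_quorum` bookies from `ensemble`;
    the write succeeds once `ack_quorum` of them acknowledge."""
    targets = ensemble[:write_quorum]          # bookies chosen for this entry
    acks = sum(1 for bookie in targets if bookie(entry))
    return acks >= ack_quorum

# Simulated bookies: a healthy one stores the entry, a failed one does not.
healthy = lambda entry: True
failed = lambda entry: False

# With ensemble=3, write quorum=3, ack quorum=2, one failed bookie
# still lets the write succeed -- no master/slave roles involved.
print(quorum_write([healthy, failed, healthy], b"msg-1", 3, 2))  # True
print(quorum_write([failed, failed, healthy], b"msg-1", 3, 2))   # False
```

Because any sufficient subset of replicas can acknowledge a write, losing a single storage node neither blocks writes nor requires a failover election.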

3. Data View

Data is organized around topics, which are partitioned for scalability. Each partition is further split into time‑ or size‑based segments (shards). Segments are spread across the BookKeeper cluster, balancing storage load and enabling effectively infinite stream retention: older segments can be offloaded to secondary storage such as AWS S3, Azure Blob Storage, Google Cloud Storage, or HDFS.
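The segment mechanics can be illustrated with a small sketch. The function names and the tiny size threshold here are hypothetical, chosen only to show the idea: a partition's log rolls over into sealed segments, and every sealed segment (everything but the active tail) is a candidate for offloading to cheaper storage.

```python
# Illustrative sketch (hypothetical names, not Pulsar's API): splitting a
# partition's log into size-bounded segments and picking which sealed
# segments could be offloaded to secondary storage (e.g. S3/HDFS).

SEGMENT_MAX_BYTES = 4  # tiny threshold so the example rolls over quickly

def split_into_segments(entries, max_bytes=SEGMENT_MAX_BYTES):
    """Group log entries into segments, sealing each one at max_bytes."""
    segments, current, size = [], [], 0
    for entry in entries:
        if current and size + len(entry) > max_bytes:
            segments.append(current)           # seal the full segment
            current, size = [], 0
        current.append(entry)
        size += len(entry)
    if current:
        segments.append(current)               # last segment stays open
    return segments

def offload_candidates(segments):
    """Every sealed segment (all but the active tail) can move to cheaper
    secondary storage while remaining readable through Pulsar."""
    return segments[:-1]

segs = split_into_segments([b"ab", b"cd", b"ef", b"gh"])
print(len(segs), len(offload_candidates(segs)))  # 2 1
```

Because each segment is an independent unit, offloading old ones changes where bytes live without changing the logical view of the stream.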

Two access interfaces are offered: a real‑time consumer API for the latest data and a reader API for historical segments, allowing unified batch‑stream processing.

4. Pulsar & Flink Batch‑Stream Fusion

In Flink, streams are fundamental; Pulsar serves as the underlying data carrier. Bounded Flink jobs can read a specific range of segments (treated as a bounded stream), while unbounded jobs continuously consume the tail of a topic. This enables a single Flink program to handle both real‑time and historical data.
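A minimal sketch of this unification, with hypothetical names (this is not the actual Flink connector): the same read loop serves a bounded job, which stops at the end of a segment range, and an unbounded job, which simply never passes an end bound.

```python
# Illustrative sketch: one consumption loop covers both a bounded job
# (read a fixed range of segments, then stop) and an unbounded one
# (tail the topic). Names are hypothetical, not the Flink connector.

def read_stream(segments, start=0, end=None):
    """Yield records from `segments`; a concrete `end` makes this a
    bounded (batch) read, end=None means an unbounded tail."""
    for i, segment in enumerate(segments):
        if i < start:
            continue
        if end is not None and i >= end:
            return                         # bounded job: stop at range end
        yield from segment

topic = [["a", "b"], ["c", "d"], ["e"]]    # one partition, three segments
batch = list(read_stream(topic, start=0, end=2))   # historical range
print(batch)                                       # ['a', 'b', 'c', 'd']
```

The only difference between the two modes is whether an end bound exists, which is exactly why a single Flink program can serve both cases.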

5. Current Capabilities

Schema: Pulsar supports schema definitions (metadata and payload schemas such as Avro) that Flink can directly consume.

Source: Schemas allow Pulsar to act as a Flink source.

Sink: Flink results can be written back to Pulsar topics.

Streaming Tables: Pulsar topics can be exposed as Flink tables for SQL‑style queries.

Pulsar Catalog: The multi‑level namespace (tenant/namespace/topic) maps naturally to Flink’s catalog concept.
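The catalog mapping above amounts to a simple translation. The parsing function below is an illustration under assumed naming (it is not the Pulsar catalog implementation): the tenant/namespace pair plays the role of a database, and the topic plays the role of a table.

```python
# Illustrative sketch: mapping Pulsar's tenant/namespace/topic hierarchy
# onto a Flink-style catalog path. The parsing here is an assumption for
# illustration, not the actual Pulsar catalog code.

def topic_to_catalog_path(topic_url):
    """'persistent://tenant/namespace/topic' -> (database, table),
    where the database combines tenant and namespace."""
    _, path = topic_url.split("://", 1)
    tenant, namespace, topic = path.split("/", 2)
    return f"{tenant}/{namespace}", topic

db, table = topic_to_catalog_path("persistent://public/default/orders")
print(db, table)   # public/default orders
```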

FLIP‑27: Flink’s FLIP‑27 source interface provides a split‑enumerator/reader framework suited to batch‑stream fusion, using segment metadata to select the optimal reader for each split.

High‑Concurrency Source: Supports Key_Shared consumption, allowing many Flink parallel instances to read from the same partition while preserving per‑key ordering.
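The Key_Shared idea can be sketched as follows; the hashing scheme is simplified for illustration and is not Pulsar's actual dispatch logic. Messages from one partition fan out across many consumers, but each key hashes to exactly one consumer, so every consumer sees its keys' messages in original order.

```python
# Illustrative sketch of Key_Shared-style dispatch: many consumers share one
# partition, but each key is pinned to a single consumer, preserving per-key
# order. The hashing scheme is simplified, not Pulsar's actual logic.

from collections import defaultdict
import zlib

def key_shared_dispatch(messages, num_consumers):
    """Route (key, value) messages to consumers by a stable hash of the key."""
    consumers = defaultdict(list)
    for key, value in messages:
        slot = zlib.crc32(key.encode()) % num_consumers
        consumers[slot].append((key, value))
    return consumers

msgs = [("user-1", 1), ("user-2", 1), ("user-1", 2), ("user-2", 2)]
out = key_shared_dispatch(msgs, num_consumers=2)

# Each consumer receives its keys' messages in the original order.
for slot, batch in out.items():
    print(slot, batch)
```

This is what lets the Flink source's parallelism exceed the partition count without sacrificing the ordering guarantees keyed applications depend on.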

Automatic Reader Selection: Chooses the appropriate reader based on each segment’s location and format.
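Reader selection reduces to a dispatch on segment metadata. The metadata fields and reader names below are hypothetical, not the connector's actual classes; the sketch only shows the shape of the decision.

```python
# Illustrative sketch: picking a reader per segment from its location and
# storage format. Field names and reader names are hypothetical, not the
# connector's actual classes.

def select_reader(segment):
    """Return the reader suited to a segment's location and format."""
    if segment["location"] == "bookkeeper":
        return "BookKeeperEntryReader"       # hot data still in BookKeeper
    if segment["format"] == "parquet":
        return "ParquetSegmentReader"        # offloaded columnar data
    return "TieredStorageLogReader"          # offloaded raw log segments

plan = [select_reader(s) for s in (
    {"location": "bookkeeper", "format": "log"},
    {"location": "s3", "format": "parquet"},
    {"location": "s3", "format": "log"},
)]
print(plan)
```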

6. Recent Work

Ongoing efforts include integration with Flink 1.12, support for Pulsar 2.7 transactions and exactly‑once semantics, reading Parquet files from secondary storage, and using Pulsar as a Flink state backend.

7. Conclusion

Pulsar’s storage‑compute separation, infinite stream storage, and rich integration with Flink make it a powerful foundation for unified data processing in big‑data environments.

Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
