
Apache Flink 1.10 Release: New Features, Optimizations, and Kubernetes Integration

Apache Flink 1.10 introduces major performance and stability improvements, unified memory configuration, native Kubernetes session mode, enhanced Table API/SQL with production‑ready Hive integration, expanded Python UDF support, and a host of important bug fixes and connector updates, marking the largest community‑driven release to date.


Apache Flink 1.10.0 has been officially released, representing the largest community‑driven upgrade with contributions from over 200 developers addressing more than 1,200 issues. The release brings significant performance and stability enhancements, initial native Kubernetes integration, and major improvements to PyFlink.

Memory Management and Configuration Optimizations – Flink’s TaskExecutor memory model has been overhauled (FLIP‑49) to simplify configuration, unify managed memory (including RocksDB state backend), and allow precise control of memory usage across deployment environments such as Kubernetes, YARN, and Mesos.

Managed Memory Expansion – Managed memory now also accounts for the RocksDB state backend and is allocated entirely off‑heap, so configurations no longer need to change when switching between batch and streaming jobs.

Simplified RocksDB Configuration – Users can now adjust RocksDB memory budgets simply by changing the managed memory size, and native memory limits can be set to prevent exceeding container limits (FLINK‑7289).

Note: FLIP‑49 changes the cluster resource configuration process; upgrading from earlier versions may require configuration adjustments (see documentation).
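Under the FLIP‑49 model, total TaskExecutor memory can be pinned with a single option, which is what makes containerized deployments predictable. A minimal flink-conf.yaml sketch (the values are illustrative, not recommendations):

```yaml
# Total memory of the TaskExecutor process (JVM heap + off-heap +
# managed memory + JVM overhead); a single knob for Kubernetes/YARN.
taskmanager.memory.process.size: 4096m

# Fraction of Flink memory reserved as managed (off-heap) memory,
# shared by batch operators and the RocksDB state backend.
taskmanager.memory.managed.fraction: 0.4

# Let RocksDB draw its memory from the managed memory budget
# (see the 1.10 configuration docs for the exact defaults).
state.backend.rocksdb.memory.managed: true
```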

Unified Job Submission Logic – Job submission is abstracted to a common Executor interface (FLIP‑73) with a new ExecutorCLI (FLIP‑81) for unified configuration across targets. The JobClient API decouples result retrieval from submission.

Native Kubernetes Integration (Beta) – Flink now supports a native session mode on Kubernetes (FLINK‑9953). Users can submit jobs with a single CLI command:

./bin/flink run -d -e kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar

Additional documentation is available for configuring and deploying on Kubernetes.
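Before submitting jobs as above, a session cluster is started on Kubernetes; a command sketch (the resource values are illustrative, and flag availability should be checked against the 1.10 Kubernetes setup docs):

```
./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=<ClusterId> \
  -Dtaskmanager.memory.process.size=4096m \
  -Dkubernetes.taskmanager.cpu=2 \
  -Dtaskmanager.numberOfTaskSlots=4
```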

Table API/SQL: Production‑Ready Hive Integration – Flink 1.10 extends Hive support to all major Hive versions, adds native partition handling (INSERT OVERWRITE / PARTITION), and introduces optimizations such as projection push‑down, LIMIT push‑down, and ORC vectorization.

INSERT { INTO | OVERWRITE } TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement;

These enhancements enable partition pruning and significantly improve query performance.
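To use the Hive integration from the SQL client, a HiveCatalog is registered in sql-client-defaults.yaml; a sketch, where the catalog name, hive-conf-dir path, and Hive version are placeholders:

```yaml
catalogs:
  - name: myhive
    type: hive
    hive-conf-dir: /opt/hive-conf   # placeholder path to hive-site.xml
    hive-version: 2.3.4             # any Hive version supported in 1.10
```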

Pluggable Modules (Beta) – A generic pluggable module mechanism (FLIP‑68) is introduced, initially used for system functions and Hive integration via a pre‑implemented HiveModule. Users can develop custom modules as needed.
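Modules can likewise be declared in the SQL client's YAML configuration; a sketch under the assumption that the 1.10 sql-client-defaults.yaml accepts a modules section (names and version are placeholders):

```yaml
modules:
  - name: core        # Flink's built-in system functions
    type: core
  - name: myhive      # exposes Hive built-in functions via HiveModule
    type: hive
    hive-version: 2.3.4
```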

Other Table API/SQL Optimizations – New DDL syntax for watermarks and computed columns (FLIP‑66) and stricter function catalog handling (FLIP‑57, FLIP‑79) improve usability and reduce ambiguity.
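The FLIP‑66 DDL additions look like this (a sketch; the table name, fields, and connector properties are illustrative):

```sql
CREATE TABLE orders (
  order_id   BIGINT,
  price      DECIMAL(10, 2),
  quantity   INT,
  -- computed column derived from other fields
  total      AS price * quantity,
  order_time TIMESTAMP(3),
  -- declare event time with a 5-second out-of-orderness bound
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND
) WITH (
  ...  -- connector properties omitted
);
```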

Important Changes

Flink now supports running on Java 11 (FLINK‑10725).

The SQL client defaults to the Blink planner (FLINK‑15495).

Elasticsearch sink connector fully supports ES 7.x (FLINK‑13025).

Kafka 0.8/0.9 connectors are deprecated (FLINK‑15115).

Network credit‑based flow control is now mandatory (FLINK‑14516).

Old Web UI removed in favor of the new UI.

Release Notes & Upgrade Guidance – Detailed release notes, API compatibility information, and migration instructions are available in the official documentation.

Contributors – The release acknowledges over 200 contributors listed in the community thank‑you section.

References – A comprehensive list of linked JIRA tickets, FLIP proposals, and documentation URLs is provided for further reading.

Tags: Python, Stream Processing, SQL, Kubernetes, Apache Flink, Hive Integration
Written by

Big Data Technology Architecture

Exploring Open Source Big Data and AI Technologies
