What is Apache Beam?

Apache Beam is a flexible programming SDK for building data processing pipelines that handles batch, stream, and parallel processing with a single model. That unified model lets developers define abstract data workflows and deploy them on any of several data processing engines, such as Apache Flink, Apache Spark, and Google Cloud Dataflow, with systems like Kafka commonly serving as data sources and sinks.


Apache Beam's programming model is based on data transforms, which can be optimized and combined to create efficient and scalable workflows. That said, the benefit of abstracting stream processing through Beam is debatable: the need to run the same stream processing job on multiple frameworks is extremely rare, so the real gains of this abstraction are slim once the cost of adopting Beam as a separate framework is taken into account.

Use Cases and Examples

Beam aims to provide a framework-independent logical model for data processing and stream processing. One use case for Beam might be to specify data processing pipelines for real-time streaming analytics, although this can also be done without Beam in the processing framework of choice. Beam might be chosen by an organization seeking to standardize its data processing without requiring developers to build deep expertise in a particular framework such as Spark or Flink.

A company that runs a social media platform may choose to use Beam on top of Spark or Flink to specify the processing of real-time data streams from various sources, such as user activity logs, clickstreams, and social media feeds, often delivered via Kafka. The intention behind this choice might be to allow the company's developers to focus on their processing logic rather than platform-specific idiosyncrasies.
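To make that concrete, here is a minimal sketch of such a streaming-analytics job using the Beam Python SDK. The click events, field names, and one-minute window size are illustrative assumptions rather than details from any real deployment; in production, the bounded Create source would be replaced with an unbounded source such as Kafka.

```python
import apache_beam as beam
from apache_beam.transforms.window import FixedWindows, TimestampedValue

# Hypothetical click events; in a real deployment these would arrive from
# an unbounded source such as Kafka rather than an in-memory list.
clicks = [
    {"user": "alice", "ts": 0},
    {"user": "alice", "ts": 30},
    {"user": "bob", "ts": 45},
    {"user": "alice", "ts": 90},
]

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "CreateClicks" >> beam.Create(clicks)
        # Attach event timestamps so windowing can group by event time.
        | "Timestamp" >> beam.Map(lambda e: TimestampedValue(e, e["ts"]))
        # Count clicks per user over one-minute event-time windows.
        | "Window" >> beam.WindowInto(FixedWindows(60))
        | "KeyByUser" >> beam.Map(lambda e: (e["user"], 1))
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```

The processing logic (keying, windowing, counting) is expressed entirely in Beam's model, which is the point of the example: the same code runs unchanged whichever supported engine executes it.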

How Apache Beam Works

Apache Beam offers a unified programming model that allows developers to write batch and streaming data processing pipelines that can run on various processing engines such as Apache Flink, Apache Spark, and Google Cloud Dataflow. Apache Beam pipelines can also be deployed alongside Confluent Cloud, as discussed in this talk on using Confluent Cloud as a source.
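As a rough sketch, reading from Confluent Cloud in a Beam Python pipeline might look like the following. The bootstrap server, API key and secret, and topic name are placeholders, and note that the Python ReadFromKafka transform is a cross-language transform that requires a Java expansion service at runtime.

```python
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder Confluent Cloud bootstrap server, API key/secret, and topic.
consumer_config = {
    "bootstrap.servers": "<BOOTSTRAP_SERVER>:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "PLAIN",
    "sasl.jaas.config": (
        'org.apache.kafka.common.security.plain.PlainLoginModule required '
        'username="<API_KEY>" password="<API_SECRET>";'
    ),
}

with beam.Pipeline(options=PipelineOptions(streaming=True)) as pipeline:
    (
        pipeline
        # Read (key, value) byte pairs from the Confluent Cloud topic.
        | "ReadFromConfluentCloud" >> ReadFromKafka(
            consumer_config=consumer_config,
            topics=["orders"],
        )
        # Decode the message value and print it (placeholder downstream logic).
        | "DecodeValue" >> beam.Map(lambda kv: kv[1].decode("utf-8"))
        | "Print" >> beam.Map(print)
    )
```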

Architecture

Apache Beam is a unified programming model and SDK for building batch and streaming data processing pipelines. It provides a set of APIs that can be used to build data pipelines in a variety of programming languages, including Java, Python, and Go. Beam involves the following components (a minimal pipeline sketch follows the list):

  • Data Pipeline: A pipeline is a chain of data processing operations that together form a complete workflow. The pipeline is defined using the Beam API and can be executed on a variety of distributed processing engines, including Apache Flink, Apache Spark, and Google Cloud Dataflow.
  • Source: A source is a data input for the pipeline. Sources can be files, databases, or other data storage systems.
  • Transform: A transform is an operation that takes one or more input data elements and produces one or more output data elements. Transforms can be used to filter, aggregate, or transform data in a variety of ways.
  • Sink: A sink is a data output for the pipeline. Sinks can be files, databases, or other data storage systems.
  • Runner: A runner is the execution engine for the pipeline. The runner takes the pipeline definition and executes it on a distributed processing engine. Apache Beam supports multiple runners, including Apache Flink, Apache Spark, and Google Cloud Dataflow.
  • Dataflow Model: The dataflow model is the underlying model for Apache Beam. It is a directed acyclic graph (DAG) of data processing operations, where each node in the graph represents a processing operation, and each edge represents a data dependency between operations.
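The sketch below, assuming the Beam Python SDK and placeholder file paths, maps these components onto a small pipeline: a text-file source, two transforms, a text-file sink, and a runner selected through pipeline options.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Runner: DirectRunner executes the pipeline locally; swap for FlinkRunner,
# SparkRunner, or DataflowRunner to execute on those engines.
options = PipelineOptions(runner="DirectRunner")

# Pipeline: the chain of operations forming the dataflow DAG.
with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        # Source: read input elements (here, lines of a text file).
        | "Source" >> beam.io.ReadFromText("events.txt")
        # Transforms: filter and reshape the data.
        | "KeepErrors" >> beam.Filter(lambda line: "ERROR" in line)
        | "Uppercase" >> beam.Map(str.upper)
        # Sink: write results out (here, to text files with this prefix).
        | "Sink" >> beam.io.WriteToText("errors")
    )
```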

Why Beam? Intended Benefits and Advantages

Accessibility

The Apache Beam project hosts a fantastic tutorial and execution environments for getting started with Beam quickly and testing different aspects of data flows.

Unified Model

Apache Beam aims to provide a unified model for batch and streaming data processing. This means that you can use the same Beam code to process data that is either coming in as a stream or that has already been collected into a batch. This can save you time and effort, as you don't need to learn two different sets of APIs.
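As an illustration, the word-count logic below (a sketch in the Beam Python SDK with a placeholder input path) is written once as an ordinary function. The same function applies whether the collection it receives is bounded or unbounded, although an unbounded source would typically also need a windowing step before the aggregation.

```python
import apache_beam as beam

def count_words(lines):
    # The same processing logic, regardless of whether `lines` is a
    # bounded (batch) or unbounded (streaming) PCollection.
    return (
        lines
        | "Split" >> beam.FlatMap(str.split)
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
    )

with beam.Pipeline() as pipeline:
    # Batch: a bounded file source (placeholder path). For streaming, the
    # source would be swapped for an unbounded one (e.g. Kafka) and a
    # WindowInto step added, without changing count_words itself.
    lines = pipeline | "Read" >> beam.io.ReadFromText("access.log")
    count_words(lines) | "Print" >> beam.Map(print)
```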

Flexible Execution

Apache Beam supports a variety of execution engines, including Apache Spark, Google Cloud Dataflow, and Apache Flink. This offers you the flexibility to choose the execution engine that best meets your needs.
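In practice, switching engines amounts to changing pipeline options rather than code. The sketch below assumes the Beam Python SDK; the Flink master address, GCP project, region, and bucket are placeholders.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# The same pipeline code, pointed at different engines purely through
# options. Runner-specific settings here are placeholders.
local_opts = PipelineOptions(runner="DirectRunner")
flink_opts = PipelineOptions(runner="FlinkRunner", flink_master="localhost:8081")
dataflow_opts = PipelineOptions(
    runner="DataflowRunner",
    project="my-gcp-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

def run(options):
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "Create" >> beam.Create([1, 2, 3])
            | "Square" >> beam.Map(lambda x: x * x)
            | "Print" >> beam.Map(print)
        )

# Switching execution engines means passing a different options object.
run(local_opts)
```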

Portable Code

Theoretically, Apache Beam code can be run on any execution engine without modification. This means that you can develop your code once and then run it on any platform that supports Apache Beam. This can save you time and money, as you don't need to develop and maintain separate versions of your code for different platforms.

Scalable

Apache Beam can scale to process large amounts of data. This is because Apache Beam uses a distributed architecture that can be scaled out to multiple machines. This can help you to process data more quickly and efficiently.

Extensible

Apache Beam is extensible with a variety of plugins and libraries. This means that you can add new features and functionality to Apache Beam to meet your specific needs. For example, you can add support for new data sources or new data processing operations.
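One common extension point is a custom composite transform. The sketch below (Beam Python SDK; the JSON event format and field names are illustrative assumptions) packages parsing and filtering logic as a reusable PTransform that can be dropped into any pipeline.

```python
import json
import apache_beam as beam

class ParseAndFilterEvents(beam.PTransform):
    """Reusable composite transform: parse JSON lines, keep one event type."""

    def __init__(self, event_type):
        super().__init__()
        self.event_type = event_type

    def expand(self, raw_lines):
        event_type = self.event_type
        return (
            raw_lines
            | "Parse" >> beam.Map(json.loads)
            | "Filter" >> beam.Filter(lambda e: e.get("type") == event_type)
        )

with beam.Pipeline() as pipeline:
    (
        pipeline
        # Hypothetical raw JSON events; a real pipeline would read these
        # from a file, database, or streaming source.
        | "Create" >> beam.Create(['{"type": "click", "user": "a"}', '{"type": "view"}'])
        | "ClicksOnly" >> ParseAndFilterEvents("click")
        | "Print" >> beam.Map(print)
    )
```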

Disadvantages of Apache Beam

  • Unified Model: Beam’s abstraction layer is well thought out and enables developers to reason about data processing instead of the execution platform specifics or details about whether the data is received in a batch or streaming fashion. Apache Beam also aims to provide a unified model for batch and streaming data processing under the idea that you can use the same code to process data that is coming in as a stream or a batch. While there is a benefit to the unification of batch and stream processing, other frameworks, such as Apache Flink, also aim to do this.
  • Flexible Execution: The idea behind Beam is to decouple the logic of data processing from the data processing platform. The same job ought to be able to execute on any platform for which a Beam Runner is available. While this is an elegant idea, most organizations do not have the requirement to repurpose a processing job from one platform to another, and most jobs will require some degree of platform-specific tweaking to run optimally.
  • Portable Code: The portability benefits of Beam come into doubt when it is time to optimize a job for production, or to debug a job that performs poorly on a specific platform. A key step in both cases might be to dispense with the abstraction layer and implement directly on the platform to take advantage of that platform’s specific capabilities.
  • Scalability: There’s nothing about Beam that improves scalability of the execution platform on which the Beam job runs. While it’s nice that Beam jobs can scale, that scalability is entirely handled by the execution platform.
  • Extensibility: While Apache Beam is extensible with a variety of plugins and libraries, and you can add new features and functionality to support your needs, the execution platforms supported by Beam (Apache Flink, Apache Spark, and Google Cloud Dataflow) are all similarly extensible.

Simplify Powerful, Modern Data Streaming with Confluent

Confluent democratizes stream processing by operating Confluent Cloud, a fully managed, multi-cloud data streaming platform with 120+ pre-built integrations, including Apache Beam.

However, Apache Beam is not required for effective stream processing with Confluent. Rather than requiring developers to adhere to a Beam-specific API model, Confluent users can specify their stream processing jobs in SQL, a much simpler, more standard, and more widely adopted language.

While Beam jobs can run with Confluent, more sophisticated stream processing can also be accomplished directly with Kafka Streams, Apache Flink, or Apache Spark.