
Getting Started with Stream Processing: The Ultimate Guide

Written by Sophia Jiang

Whether it’s ordering shoes online, depositing a check through a banking app, or reserving a ride-share to the airport, customers today expect transactions to be fast and seamless. To make this possible, businesses rely on constantly evolving streams of data. Stream processing is critical to building these real-time applications by continuously capturing, storing, and processing data streams and serving queries (e.g. push and pull) against them to applications.

However, the challenge with stream processing is that the underlying architecture is typically a convoluted blend of separate solutions from different projects. That’s why Confluent built ksqlDB to simplify stream processing architecture: a stream processor that’s part of Confluent Cloud and uses a declarative approach with lightweight SQL syntax. Let’s look at how ksqlDB can help your business create the types of applications that deliver rich, real-time customer experiences.

The Many Benefits of ksqlDB

By using ksqlDB to build real-time applications, developers can unlock the value of real-time data using efficient new design techniques.

1. Processing Data in Motion
Traditionally, databases are designed to query and store data at rest. But data at rest gives answers at rest, which are immediately out of date. ksqlDB’s model handles real-time data manipulation, so development teams can adapt their designs on the fly and businesses can innovate faster.

2. Simplified Stream Processing Architecture
Stitching together multiple distributed systems for event capture, processing, or query serving can be complex and inefficient. ksqlDB’s single mental model works across your entire stack: event capture, transformations, aggregations, and serving materialized views. It’s built specifically to work with Kafka, and was designed to integrate with the data movement layer to handle both data processing and movement. All of this means less infrastructure to maintain and fewer moving parts to manage.

3. Lightweight SQL Syntax
Developers using ksqlDB benefit from its high-level declarative language, which is far simpler than Java. As a result, they can build real-time applications as easily as traditional apps on a standard database.
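To give a feel for the declarative style, here is a minimal sketch of a filtering query; the stream, topic, and column names are illustrative, not part of any real deployment:

```sql
-- Declare a stream over an existing Kafka topic (names are illustrative)
CREATE STREAM orders (order_id VARCHAR, total DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- A few lines of SQL replace what would be a longer Java/Kafka Streams topology
CREATE STREAM large_orders AS
  SELECT order_id, total
  FROM orders
  WHERE total > 100
  EMIT CHANGES;
```

The `CREATE STREAM ... AS SELECT` statement deploys a continuously running query; there is no build step or application server to manage.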

Exploring Technical Use Cases for ksqlDB

How does ksqlDB’s simplified architecture let businesses unlock real-time insights and customer experiences? Here are a few popular examples:

1. Streaming Data Pipeline
Sometimes data needs to be changed as it flows from one place to another. Personally identifiable information may need to be removed, for example. And on some occasions, data may need to be incorporated from another system. Data may also need to be preprocessed in anticipation of future usage. For instance, you might take start/stop timestamps for time on a site and preprocess that into a number of seconds. A streaming ETL pipeline, also known as a “streaming data pipeline,” is a set of software services that makes it possible to stream events between sources and sinks—and make changes to data in flight when necessary. ksqlDB helps simplify the process of writing and deploying these pipelines.
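The two transformations described above (removing PII and turning start/stop timestamps into seconds) can be sketched in a single pipeline; topic and column names here are assumptions for illustration:

```sql
-- Source stream over an existing Kafka topic (names are illustrative)
CREATE STREAM sessions_raw (session_id VARCHAR, email VARCHAR,
                            start_ts BIGINT, stop_ts BIGINT)
  WITH (KAFKA_TOPIC='sessions', VALUE_FORMAT='JSON');

-- In-flight transformation: mask PII and precompute time on site
CREATE STREAM sessions_clean WITH (KAFKA_TOPIC='sessions_clean') AS
  SELECT session_id,
         MASK(email) AS email_masked,                    -- strip PII
         (stop_ts - start_ts) / 1000 AS session_seconds  -- ms -> seconds
  FROM sessions_raw
  EMIT CHANGES;
```

Downstream consumers read the `sessions_clean` topic without ever seeing the raw email addresses.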

2. Materialized Cache
The usual way of building a materialized cache, also known as a “materialized view,” is to capture a database’s changelog and process it as a stream of events. But it can be complicated to monitor, secure, and scale multiple systems running at once—databases, Kafka clusters, connectors, stream processors, and other data stores. With Confluent’s solution, developers can reduce the architecture to only two components: data (Kafka) and compute (ksqlDB). Here’s a helpful blog on how to build a materialized cache with ksqlDB.
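A materialized cache boils down to one aggregating statement; this is a sketch with hypothetical topic and column names:

```sql
-- Stream of order events from Kafka (names are illustrative)
CREATE STREAM orders (customer_id VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- Materialized view: incrementally maintained per-customer totals
CREATE TABLE customer_totals AS
  SELECT customer_id,
         COUNT(*)    AS order_count,
         SUM(amount) AS total_spend
  FROM orders
  GROUP BY customer_id
  EMIT CHANGES;
```

ksqlDB keeps `customer_totals` up to date as new orders arrive, and applications can query it directly instead of standing up a separate cache.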

3. Event-Driven Microservices
The challenge of scaling stateful services becomes even greater when development teams have to couple a stateful service with the responsibility of triggering additional actions. Each one might have completely different needs, but teams have to manage both as if they are one. An event-driven microservice, where the outcome of an application’s code is a stream of events, can make things simpler by localizing state within each microservice. Once again, ksqlDB simplifies the process by allowing stateful stream processing to be managed on ksqlDB, while side effects run inside the stateless microservice.
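As a sketch of this split, the stateful detection logic below lives in ksqlDB, while the side effect is left to a stateless service; the payments scenario and thresholds are invented for illustration:

```sql
-- Stateful part stays in ksqlDB (names and threshold are illustrative)
CREATE STREAM payments (account_id VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='payments', VALUE_FORMAT='JSON');

-- Derived stream of events the microservice should act on
CREATE STREAM flagged_payments AS
  SELECT account_id, amount
  FROM payments
  WHERE amount > 10000
  EMIT CHANGES;

-- A stateless microservice subscribes to the flagged_payments topic
-- and performs the side effect (e.g., sending an alert email).
```

Because the microservice only consumes a topic and triggers actions, it can be scaled out or restarted without any state migration.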

What Makes ksqlDB Different?

Since it was designed to support real-time applications specifically, ksqlDB has a number of key constructs that set it apart.

    • Streams and Tables
      To represent collections of data and how they’re connected to each other, ksqlDB uses two constructs. Streams are immutable, append-only collections that can represent a series of historical facts or data in motion. Mutable collections, called tables, represent the latest version of each value per key. Read more for a complete primer on streams and tables.
    • Persistent Queries
      To process data, you need persistent queries. Persistent queries can transform, filter, aggregate, and join data collections together. By executing continuous computations over unbounded streams of events, persistent queries derive new collections or materialized views.
    • Push and Pull Queries
      Push queries emit refinements to a query’s result when new events arrive, making it possible to react quickly to new information. Pull queries fetch the current state of a materialized view and they run with predictable low latency.
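The three constructs above fit together in a few statements; this sketch uses hypothetical sensor names and topics:

```sql
-- Stream: immutable, append-only facts (names are illustrative)
CREATE STREAM readings (sensor VARCHAR, temp DOUBLE)
  WITH (KAFKA_TOPIC='readings', VALUE_FORMAT='JSON');

-- Persistent query deriving a table: latest average per sensor key
CREATE TABLE avg_temps AS
  SELECT sensor, AVG(temp) AS avg_temp
  FROM readings
  GROUP BY sensor
  EMIT CHANGES;

-- Push query: the client keeps receiving refinements as events arrive
SELECT sensor, avg_temp FROM avg_temps EMIT CHANGES;

-- Pull query: fetch the current state of the view with low latency
SELECT avg_temp FROM avg_temps WHERE sensor = 'sensor-1';
```

The only syntactic difference between the two query styles is the `EMIT CHANGES` clause, which turns a point-in-time lookup into a standing subscription.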

ksqlDB is Just One Way We’re Simplifying Stream Processing

ksqlDB is just one component of Confluent’s complete platform for harnessing data in motion. A rich pre-built connector ecosystem, broad compatibility between applications, and industry-leading reliability and security all combine to give enterprises the tools they need to build seamless customer experiences and powerful data-driven operations.

Try Confluent Cloud for Free

  • Sophia Jiang is a Group Product Marketing Manager at Confluent, where she is responsible for leading go-to-market for key marketing campaigns and product-led growth strategies. Prior to Confluent, Sophia led GTM for retail, CPG, and manufacturing verticals at MuleSoft. Sophia has a BA in Economics and International Studies from Emory University.

