Now that KSQL is available for production use as part of the Confluent Platform, it has never been easier to run the open source streaming SQL engine for Apache Kafka®. That is not to say that everything is entirely obvious to the new user: a beginning or even intermediate streaming SQL user might still need a hand, and we're here to give you one!
Maybe you’ve already been using KSQL and have fallen in love with its intuitive syntax for creating and enriching streams of real-time data. Maybe you run Confluent Platform and already love the handy KSQL user interface and Confluent Control Center’s stream monitoring, which lets you keep an eye on the performance of your KSQL queries.
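As a taste of that syntax, here is a minimal sketch of declaring a stream over a Kafka topic and deriving an enriched stream from it. The topic name, column names, and formats are assumptions chosen purely for illustration:

```sql
-- Declare a stream over an existing Kafka topic
-- (topic 'pageviews' and its columns are hypothetical).
CREATE STREAM pageviews (
    viewtime BIGINT,
    userid   VARCHAR,
    pageid   VARCHAR
  ) WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

-- Derive a filtered, lightly enriched stream with a persistent query.
CREATE STREAM pageviews_enriched AS
  SELECT userid,
         pageid,
         UCASE(pageid) AS pageid_upper
  FROM pageviews
  WHERE pageid IS NOT NULL;
```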
Or maybe not yet. Regardless, we can tell you that now is the time to level up your KSQL. Whether you are brand new to it or ready to take it to production, you can now dive deep into core KSQL concepts: streams and tables, enriching unbounded data, data aggregations, scalability and security configurations, and more. Stay tuned over the next few weeks as we release the Level Up Your KSQL video series, which will help you really understand KSQL.
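As a small preview of the streams-and-tables and aggregation material the series covers, here is a hedged sketch of a windowed aggregation that materializes a table from the hypothetical pageviews stream declared above:

```sql
-- Count pageviews per user over a one-minute tumbling window,
-- materialized as a table (stream and column names are assumed).
CREATE TABLE pageviews_per_user AS
  SELECT userid,
         COUNT(*) AS view_count
  FROM pageviews
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY userid;
```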
There are more videos besides these. We also cover:
Interested in more? Learn more about what KSQL can do: