
Confluent CSID Accelerators

Customer Solutions & Innovation Division

While many customers are realizing the potential of the Confluent Streaming Platform for their organization, some have additional requirements for their Kafka implementation. In the Customer Solutions & Innovation Division (CSID), we see these requirements as an opportunity to extend the Platform further with solutions we call Accelerators. These are projects we develop and refine, with the intent of discovering new offerings that may eventually be adopted as Confluent products.

Accelerators help Confluent customers:

  • Migrate and Upgrade from Legacy Systems
  • Secure and Protect Kafka Clusters
  • Improve Kafka Consumer Efficiency
  • Cluster Planning and Operations

Extend your Streaming Platform further with these CSID Accelerators for your organization

Migrate and Upgrade from Legacy Systems

We’ve developed custom integrations and connectors to make your migration to Confluent smooth and fast. If you need to fully integrate topic data between your legacy JMS-based systems and Kafka, the JMS 2.0 Bridge accelerator might be a solution.

  • Provide a bridge without disrupting existing applications
  • Publish and subscribe anywhere data is intended for end users
  • Shorten Kafka adoption time, giving legacy applications breathing room

Secure and Protect Kafka Clusters

The Encryption Accelerator helps you meet business security requirements by encrypting Kafka messages, or tokenizing message fields, wherever your cluster runs. Whether you plan to secure an on-premises cluster or move to the cloud, it lets you tokenize or encrypt your messages.

  • Exceed Kafka security requirements
  • Encrypt full messages on-premises or in the cloud
  • Tokenize or encrypt message fields
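As an illustration of full-message encryption, here is a minimal client-side sketch using the JDK's AES-GCM cipher. It is not the Accelerator's actual API; the class and method names are invented for this example.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Hypothetical sketch of client-side full-message encryption,
// not the Encryption Accelerator's real API.
public class MessageCrypto {
    private static final int IV_LEN = 12;    // 96-bit nonce, recommended for GCM
    private static final int TAG_BITS = 128; // authentication tag length
    private final SecretKey key;
    private final SecureRandom random = new SecureRandom();

    public MessageCrypto(SecretKey key) { this.key = key; }

    // Encrypts a serialized message payload; prepends the random IV.
    public byte[] encrypt(byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        random.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[IV_LEN + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ciphertext, 0, out, IV_LEN, ciphertext.length);
        return out;
    }

    // Reverses encrypt(): splits off the IV, then decrypts and authenticates.
    public byte[] decrypt(byte[] payload) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(TAG_BITS, Arrays.copyOfRange(payload, 0, IV_LEN)));
        return cipher.doFinal(Arrays.copyOfRange(payload, IV_LEN, payload.length));
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);
        MessageCrypto crypto = new MessageCrypto(gen.generateKey());
        byte[] roundTrip = crypto.decrypt(
                crypto.encrypt("order-123".getBytes(StandardCharsets.UTF_8)));
        System.out.println(new String(roundTrip, StandardCharsets.UTF_8));
    }
}
```

In a real deployment the key would come from a key management service rather than being generated in place, and the IV-plus-ciphertext layout would be defined by the library, not hand-rolled.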

Improve Kafka Consumer Efficiency

If you need to work around potential bottlenecks from CPU, I/O, or third-party providers so that your Kafka messages can be processed much faster, we have developed a Parallel Consumer library you can use.

  • Process Kafka messages faster without too many partitions or clients
  • Alleviate slow consumer systems, like transactional systems
  • Handle load spikes better when latency requirements are tight

Cluster Planning and Operations

The eventsizer.io tool can help you to accurately estimate cluster and Kafka component requirements, and to assess or forecast usage and sizing impacts from changing Kafka requirements.

  • Estimate a rough cluster size needed from broad requirements
  • Get a more detailed estimate with additional input
  • Understand the capacity and scalability of current clusters
  • Determine a recommended partition count for a topic

End-to-end encryption & tokenization

If you are conscious of the security of data in motion in your Kafka cluster, or of how to protect that data from an advanced persistent threat, an insider threat, or even Confluent or another third party, the Encryption Accelerator can help.

The CSID Encryption Accelerator, originally developed by Pascal Vantrepote, provides Confluent customers with an additional level of security for data at rest, to address their specific compliance and security requirements. The library enables customers to encrypt and decrypt Kafka messages, either at the full payload level, or down to the data in individual fields within a message. Customers can select from many third-party encryption, tokenization, and key management service providers, or use the methods the library provides. When combined with Schema Registry, access control protocols, and the ability to use field classifications, the library provides customers the means to encrypt and decrypt, tokenize and detokenize, and limit access to, specific fields in a message. This is extremely important to many financial services and healthcare customers.

The Encryption Accelerator is designed as a client-side library. It works with Confluent Platform v5.5 and up, and supports Kafka clients written in Java, C#/.NET, Python, Node.js, and C++. Field-level encryption and tokenization require Schema Registry. Encryption and tokenization are designed to work with all message data formats, and with message fields as strings, bytes, or byte arrays, but not with integer or boolean values.
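To illustrate how tokenization differs from encryption, here is a hypothetical sketch of deterministic field tokenization using an HMAC. The Accelerator delegates this to third-party tokenization providers or its own methods, so treat the class below purely as an assumption-laden illustration.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Hypothetical sketch of deterministic field-level tokenization;
// the class and method names are invented for this example.
public class FieldTokenizer {
    private final Mac mac;

    public FieldTokenizer(byte[] secret) throws Exception {
        mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
    }

    // Same input + same key always yields the same token, so tokenized
    // fields remain usable as join keys without exposing the raw value.
    public synchronized String tokenize(String fieldValue) {
        byte[] digest = mac.doFinal(fieldValue.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        FieldTokenizer t = new FieldTokenizer("demo-secret".getBytes(StandardCharsets.UTF_8));
        String a = t.tokenize("4111-1111-1111-1111");
        String b = t.tokenize("4111-1111-1111-1111");
        System.out.println(a.equals(b));                     // deterministic
        System.out.println(a.equals("4111-1111-1111-1111")); // raw value is hidden
    }
}
```

Unlike encryption, a token produced this way is not reversible; systems that need the original value back would use encryption or a token vault instead.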

Get more info:

Sign up to learn more
Download a trial version of the Encryption Accelerator
Learn more about Professional Services

Parallel Consumer

Kafka clusters can encounter “slow consumer” scenarios caused by CPU, I/O, or third-party limitations, which in turn can lead to an excessive number of consumer clients or topic partitions. The Parallel Consumer Accelerator addresses these situations.

This CSID Accelerator library, created by Antony Stubbs, seamlessly processes messages in parallel on a thread pool, with a choice of concurrency levels, while maintaining correct offset ordering across retries. Its client-side queue implementation enables faster partition hand-off and recovery.

This is a parallel Apache Kafka client wrapper with client-side queueing, a simpler consumer/producer API with key-level concurrency, and extendable non-blocking I/O processing. Requiring just one processing function from you, the Parallel Consumer Accelerator wrapper seamlessly handles the queueing, concurrency, and offset management behind it.
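The key-concurrency idea can be sketched in plain Java: records that share a key run in submission order, while different keys proceed in parallel on a shared thread pool. This is an illustrative model of the concept, not the Parallel Consumer's internals.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of key-level concurrency; all names are invented.
public class KeyOrderedProcessor {
    private final ExecutorService pool;
    // Tail of the work chain per key: a new record for a key runs only
    // after the previous record for that key completes.
    private final Map<String, CompletableFuture<Void>> tails = new ConcurrentHashMap<>();

    public KeyOrderedProcessor(int threads) { pool = Executors.newFixedThreadPool(threads); }

    public void submit(String key, Runnable work) {
        tails.compute(key, (k, tail) ->
                (tail == null ? CompletableFuture.<Void>completedFuture(null) : tail)
                        .thenRunAsync(work, pool));
    }

    public void shutdown() throws InterruptedException {
        CompletableFuture.allOf(tails.values().toArray(new CompletableFuture[0])).join();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> seen = new CopyOnWriteArrayList<>();
        KeyOrderedProcessor proc = new KeyOrderedProcessor(4);
        for (int i = 0; i < 3; i++) {
            final int n = i;
            proc.submit("user-A", () -> seen.add("A" + n)); // same key: strict order
            proc.submit("user-B", () -> seen.add("B" + n)); // other key: runs in parallel
        }
        proc.shutdown();
        // Per-key order is preserved even though the pool has many threads.
        System.out.println(seen.indexOf("A0") < seen.indexOf("A1")
                && seen.indexOf("A1") < seen.indexOf("A2"));
    }
}
```

The real library layers offset tracking, retries, and back-pressure on top of this kind of per-key ordering, which is why it can use far fewer partitions and clients for the same throughput.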

JMS 2.0 Bridge connector

If your enterprise faces deadlines to migrate from, or replace, legacy messaging systems, the JMS 2.0 Bridge Accelerator can give your organization additional time to make application transitions happen more smoothly.

Customers seeking to transition from legacy JMS-based systems to Confluent, without major disruptions to their existing applications, can leverage the JMS 2.0 Bridge Accelerator library to provide full integration of topic data between Kafka and the JMS Bridge. Built around a modified OSS JMS engine, this library enables organizations to:

  • Publish from JMS and consume from Confluent Kafka, and vice versa
  • Support JMS-specific message formats via Schema Registry (Avro)
  • Adopt Kafka faster, giving legacy JMS apps breathing room

The JMS-Bridge is a component that can be used to facilitate quicker migration from legacy JMS-based systems to ones built around the Confluent Platform. It is quite common for enterprise systems to use JMS as a means of integrating external applications with a central system. These applications are usually maintained not by the system owners but by external teams that may not share the goals or priorities of the system team. This creates a problem when the system team wants to migrate away from their legacy JMS vendor to the Confluent Platform, since it would require updating all of those external clients.

Easing this transition is the goal of the JMS 2.0 Bridge Accelerator. By providing a fully compliant JMS 2.0 implementation of the client jars and a server side component, it can accommodate the existing patterns of those external JMS applications. With a tight integration to Confluent, all JMS-originated topic data can be available in Kafka and all Kafka topic data can be available in JMS topics. Since the JMS 2.0 Bridge is built on top of Confluent, using it as its storage mechanism, it does not require additional disk space or SAN provisioning. If you are monitoring Kafka, you are monitoring the JMS 2.0 Bridge.

  • Bridge is built around a modified OSS JMS engine
  • Supports full JMS 2.0 spec as well as Confluent/Kafka
  • Publish and subscribe anywhere data is intended for end users
  • JMS specific message formats can be supported via Schema Registry (Avro)
  • Supported JMS message types: Map, Object, Text, Bytes, and Stream (list)
  • Existing JMS applications replace current JMS implementation jars with OSS engine implementation jars
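The jar swap in the last bullet amounts to a build-file change: the application's javax.jms code stays as-is, and only the implementation dependency changes. The Maven coordinates below are hypothetical placeholders, not the bridge's published artifacts.

```xml
<!-- Before: the application depends on the legacy vendor's JMS client -->
<dependency>
  <groupId>com.legacy-vendor</groupId>      <!-- hypothetical coordinates -->
  <artifactId>jms-client</artifactId>
  <version>9.x</version>
</dependency>

<!-- After: same javax.jms application code, now backed by the bridge's
     OSS engine implementation jars -->
<dependency>
  <groupId>example.jms-bridge</groupId>     <!-- hypothetical coordinates -->
  <artifactId>jms-bridge-client</artifactId>
  <version>1.x</version>
</dependency>
```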

Get more info:

Sign up to learn more
Learn more about Professional Services

Eventsizer online tool

Using a simple user interface, you can easily determine how large a Confluent Kafka cluster, and its individual components, need to be for your use case. The tool can then guide you toward hardware requirements for planning and forecasting purposes, and determine the impact of altering throughput, retention, and rebuild-time requirements.

This online calculator tool was designed around four modes:

  • Simple, where a rough cluster size can be calculated using broad requirements
  • Granular, where more detailed use case and corresponding specifications can be determined
  • Reverse, to estimate the capacity and scalability of a cluster
  • Partitions, to estimate the number of partitions a topic should have
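The "Simple" mode can be imagined as back-of-the-envelope arithmetic along these lines; the formula and factors below are illustrative assumptions, not eventsizer.io's actual model.

```java
import java.util.Locale;

// Illustrative cluster-sizing arithmetic; every input and factor here
// is an assumption for the example, not eventsizer.io's model.
public class ClusterSizer {
    public static void main(String[] args) {
        double writeMBps = 50;     // average producer throughput (MB/s), assumed input
        int retentionHours = 72;   // topic retention
        int replicationFactor = 3; // copies of every partition
        double brokerDiskTB = 2.0; // usable disk per broker, after OS/overhead
        double diskHeadroom = 0.4; // keep 40% free for rebuilds and spikes

        // Retained data = ingest rate x retention window x replication.
        double retainedTB = writeMBps * 3600 * retentionHours * replicationFactor / 1e6;
        int brokersForDisk = (int) Math.ceil(retainedTB / (brokerDiskTB * (1 - diskHeadroom)));
        int brokers = Math.max(brokersForDisk, replicationFactor); // never fewer brokers than RF

        System.out.printf(Locale.ROOT,
                "retained data: %.1f TB, brokers needed: %d%n", retainedTB, brokers);
    }
}
```

A real estimate would also weigh network throughput, CPU, partition counts, and rebuild time, which is why the tool's Granular and Reverse modes ask for more input.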

Get more info:

Try the tool and provide feedback
Read more about Streams capacity planning in Confluent Documentation
Learn more about Professional Services

What exactly is an Accelerator?

An Accelerator is software developed and refined by Confluent’s Innovation Division that functions alone or in concert with Confluent. Most Accelerators may be licensed under an order form with a paid engagement through Professional Services. Some Accelerators are still in active development and are therefore not ready to be shared with customers. Any support obligations under a Confluent Platform or Confluent Cloud subscription do not apply to CSID Accelerators. Any work by Confluent related to Accelerators is limited to Professional Services engagement time and resources.

We’re continuously on the lookout for innovative ideas that could become an Accelerator. If you and your organization are interested in learning more about this program, becoming a customer, or co-developing an accelerator, please contact us below.

Ready to try an accelerator out?

We'll reach out to provide the software and instructions, and point the way to success.