The Elasticsearch connector moves data from Kafka to Elasticsearch. It writes data from a topic in Kafka to an index in Elasticsearch, and all data for a topic have the same type.
Elasticsearch is often used for text queries, for analytics, and as a key-value store.
The connector covers both the analytics and key-value store use cases. For the analytics use case, each message in Kafka is treated as an event, and the connector uses topic+partition+offset as a unique identifier for events, which are then converted to unique documents in Elasticsearch. For the key-value store use case, it supports using keys from Kafka messages as document IDs in Elasticsearch and provides configurations that ensure updates to a key are written to Elasticsearch in order. For both use cases, Elasticsearch's idempotent write semantics guarantee exactly-once delivery.
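The analytics-case identifier described above can be sketched as follows. This is a hypothetical helper, not the connector's actual code, and the '+'-joined string encoding is an assumption for illustration:

```python
def event_doc_id(topic: str, partition: int, offset: int) -> str:
    # Each Kafka message is uniquely identified by its topic, partition,
    # and offset; joining the three yields a stable document ID, so
    # redelivered messages overwrite the same document (idempotent writes).
    return f"{topic}+{partition}+{offset}"

print(event_doc_id("orders", 0, 42))  # orders+0+42
```

Because the ID is derived deterministically from the message's coordinates, writing the same message twice produces the same document rather than a duplicate, which is what makes delivery effectively exactly-once.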
Mapping is the process of defining how a document, and the fields it contains, are stored and indexed. Users can explicitly define mappings for types in indices. When a mapping is not explicitly defined, Elasticsearch can determine field names and types from the data; however, some types, such as timestamp and decimal, may not be correctly inferred. To ensure that these types are correctly inferred, the connector provides a feature to infer mappings from the schemas of Kafka messages.
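As a sketch, schema-based mapping inference is controlled with the connector's `schema.ignore` setting (the property name is from the connector's configuration; the comments describe assumed behavior):

```properties
# When schema.ignore=false (the default), the connector infers the
# Elasticsearch mapping from each record's Kafka Connect schema, so
# logical types such as Timestamp and Decimal map to appropriate
# Elasticsearch field types instead of being guessed from the data.
schema.ignore=false

# When schema.ignore=true, records are written as plain JSON and
# Elasticsearch's own dynamic mapping decides the field types.
```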
Use the Confluent Hub client to install this connector with:
confluent-hub install confluentinc/kafka-connect-elasticsearch:4.1.1
Or download the ZIP file and extract it into one of the directories listed in the Connect worker's plugin.path configuration property. This must be done on every installation where Connect will be run. See here for more detailed instructions.
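For example, a Connect worker's properties file might declare the plugin directory like this (the path is an assumption for illustration):

```properties
# connect-distributed.properties (or connect-standalone.properties)
# Directory (or comma-separated list of directories) that the worker
# scans for connector plugins at startup.
plugin.path=/usr/local/share/kafka/plugins
```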
Once installed, you can create a connector configuration file with the connector's settings and deploy it to a Connect worker. See here for more detailed instructions.
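As a sketch, a minimal configuration for this connector might look like the following; the connector class name matches the version shown above, while the connector name, topic, connection URL, and type name are assumptions for illustration:

```json
{
  "name": "elasticsearch-sink",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "tasks.max": "1",
    "topics": "orders",
    "connection.url": "http://localhost:9200",
    "type.name": "kafka-connect",
    "key.ignore": "true"
  }
}
```

Such a file is typically submitted to a running Connect worker through its REST interface, e.g. `curl -X POST -H "Content-Type: application/json" --data @elasticsearch-sink.json http://localhost:8083/connectors` (8083 is Connect's default REST port).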
The source code is located in this repository.
For more information, see the documentation.