If you follow the press around Apache Kafka you’ll probably know it’s pretty good at tracking and retaining messages, but sometimes removing messages is important too. GDPR is a good example of this as, amongst other things, it includes the right to be forgotten. This raises a very obvious question: how do you delete arbitrary data from Kafka? After all, its underlying storage mechanism is an immutable log.
As it happens, Kafka is a pretty good fit for GDPR. The regulation specifies not only that users have the right to be forgotten, but also that they have the right to request a copy of their personal data. Companies are also required to keep detailed records of what data is used for, a requirement for which recording and tracking the messages that move from application to application is a boon.
How do you delete (or redact) data from Kafka?
The simplest way to remove messages from Kafka is to let them expire. By default, Kafka retains data for a week, and you can tune this retention period to be arbitrarily large or small. There is also an Admin API that lets you delete messages explicitly, removing everything in a partition before a specified offset. But businesses increasingly want to leverage Kafka’s ability to keep data for longer periods of time, say for Event Sourcing architectures or as a source of truth. In such cases it’s important to understand how to make long-lived data in Kafka GDPR compliant. For this, compacted topics are the tool of choice, as they allow messages to be explicitly deleted or replaced via their key.
Data isn’t removed from compacted topics in the same way as in a relational database. Instead, Kafka uses a mechanism closer to those used by Cassandra and HBase where records are marked for removal then later deleted when the compaction process runs. Deleting a message from a compacted topic is as simple as writing a new message to the topic with the key you want to delete and a null value. When compaction runs the message will be deleted forever.
//Create a record in a compacted topic in Kafka
producer.send(new ProducerRecord<>(CUSTOMERS_TOPIC, "Customer123", "Donald Duck"));
//Mark that record for deletion when compaction runs by sending a null value (a "tombstone")
producer.send(new ProducerRecord<>(CUSTOMERS_TOPIC, "Customer123", null));
If the key of the topic is something other than the CustomerId, then you need some process to map the two. For example, if you have a topic of Orders, you need a mapping from Customer to OrderId held somewhere. Then, to ‘forget’ a customer, simply look up their orders and either explicitly delete them from Kafka or redact any customer information they contain. You might roll this into a process of your own, or you might do it with Kafka Streams if you are so inclined.
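The mapping process can be sketched as a small in-memory index. The class and method names below are hypothetical, and in practice the index would be populated by a consumer of the Orders topic; the returned keys are the ones you would tombstone with a null-valued send:

```java
import java.util.*;

// Sketch (hypothetical types): maintain an index from customerId to the
// order keys seen on the Orders topic, so a "forget" request can be
// translated into tombstones for each of that customer's orders.
public class CustomerOrderIndex {
    private final Map<String, Set<String>> ordersByCustomer = new HashMap<>();

    // Call this for every record consumed from the Orders topic.
    public void onOrder(String orderId, String customerId) {
        ordersByCustomer.computeIfAbsent(customerId, c -> new HashSet<>()).add(orderId);
    }

    // Returns the order keys that must be tombstoned (sent with a null
    // value) to forget this customer.
    public Set<String> keysToForget(String customerId) {
        return ordersByCustomer.getOrDefault(customerId, Collections.emptySet());
    }
}
```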
There is a less common case, worth mentioning, where the key (which Kafka uses for ordering) is completely different from the key you want to delete by. Let’s say you need to key your Orders by ProductId. This choice of key won’t let you delete orders for individual customers, so the simple method above wouldn’t work. You can still achieve this by using a key that is a composite of the two: make the key [ProductId][CustomerId], then use a custom partitioner in the producer (see the producer config “partitioner.class”) that extracts the ProductId and partitions only on that value. You can then delete messages using the mechanism discussed earlier, with the [ProductId][CustomerId] pair as the key.
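The core of that partitioning trick can be sketched as follows. This is not Kafka’s Partitioner interface itself, just the logic a custom partitioner would apply, and the “ProductId|CustomerId” key format is an assumption for illustration:

```java
// Sketch: partition on only the ProductId portion of a composite
// "productId|customerId" key, so all records for a product land on the
// same partition (preserving ordering) while each record remains
// deletable per customer via its full key. In a real producer this
// logic would live in an implementation of
// org.apache.kafka.clients.producer.Partitioner, configured via
// "partitioner.class".
public class ProductIdPartitioner {
    public static int partitionFor(String compositeKey, int numPartitions) {
        // Take only the product part of the key; ignore the customer part.
        String productId = compositeKey.split("\\|", 2)[0];
        // Mask the sign bit so the result is non-negative, then take the modulus.
        return (productId.hashCode() & 0x7fffffff) % numPartitions;
    }
}
```

Because only the ProductId feeds the hash, two orders for the same product but different customers always map to the same partition.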
What about the databases that I read data from or push data to?
Quite often you’ll be in a pipeline where Kafka is moving data from one database to another using Kafka Connectors. In this case, you need to delete the record in the originating database and have that propagate through Kafka to any Connect sinks you have downstream. If you’re using CDC, this will just work: the delete will be picked up by the source connector, propagated through Kafka, and applied in the sinks. If you’re not using a CDC-enabled connector, you’ll need some custom mechanism for managing deletes.
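On the sink side, the connector also has to be told how to apply a tombstone. As an illustrative sketch (the property names below come from the Confluent JDBC sink connector; verify them against your connector’s documentation), a sink can often be configured to turn tombstones into SQL DELETEs:

```properties
# Apply null-valued (tombstone) records as deletes in the target database
delete.enabled=true
# Deleting by key requires the record key to be used as the primary key
pk.mode=record_key
```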
How long does Compaction take to delete a message?
By default, compaction will run periodically and won’t give you a clear indication of when a message will be deleted. Fortunately, you can tweak the settings for stricter guarantees. The best way to do this is to configure the compaction process to run continuously, then add a rate limit so that it doesn’t affect the rest of the system unduly:
# Ensure compaction runs continuously
log.cleaner.min.cleanable.ratio = 0.00001
# Set a limit on compaction so there is bandwidth for regular activities
log.cleaner.io.max.bytes.per.second=1048576
Setting the cleanable ratio to 0 would make compaction run continuously. A small, positive value is used here so the cleaner doesn’t execute when there is nothing to clean, but kicks in quickly as soon as there is. A sensible value for the cleaner’s I/O limit (log.cleaner.io.max.bytes.per.second) is [max I/O of disk subsystem] x 0.1 / [number of compacted partitions]. If this computes to 1MB/s, then a topic of 100GB will have its removed entries cleaned within roughly 28 hours. Obviously you can tune this value to get the desired guarantees.
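The arithmetic behind that worked example can be checked with a small helper (units assumed here: topic size in GB, cleaner limit in MB/s, with 1GB = 1024MB):

```java
// Rough estimate of how long the cleaner needs to scan a compacted
// topic at a given I/O rate: e.g. 100GB at 1MB/s is about 28 hours.
public class CompactionEstimate {
    public static double hoursToClean(double topicSizeGb, double cleanerMbPerSec) {
        double seconds = topicSizeGb * 1024 / cleanerMbPerSec; // total MB / (MB per second)
        return seconds / 3600.0;                               // convert seconds to hours
    }
}
```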
One final consideration is that partitions in Kafka are made from a set of files, called segments, and the latest segment (the one being written to) isn’t considered for compaction. This means a low-throughput topic might accumulate messages in the latest segment for quite some time before it rolls and compaction kicks in. To address this, you can force segments to roll after a defined period of time. For example, log.roll.hours=24 would force a segment to roll every day if it hasn’t already hit its size limit.
Tuning and Monitoring
There are a number of configurations for tuning the compactor (see the log.cleaner.* properties in the docs), and the compaction process publishes JMX metrics on its progress. You can also set a topic to be both compacted and have an expiry, so data is never held longer than the expiry time.
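That combination can be expressed per topic. A sketch of the topic-level settings (30 days is an arbitrary example value):

```properties
# Compact the log on key, but also expire anything older than the retention window
cleanup.policy=compact,delete
# 30 days in milliseconds
retention.ms=2592000000
```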
Kafka provides immutable topics where entries expire after a configured time, compacted topics where messages with specific keys can be flagged for deletion, and the ability to propagate deletes from database to database with CDC-enabled connectors. Despite being built on an immutable log as its fundamental abstraction, Kafka provides the tools to accommodate GDPR requirements easily and elegantly.
If you’d like to know more, here are some resources for you:
- Watch our webinar with Ovum Compliance in Motion: Aligning Data Governance Initiatives with Business Objectives in the Streaming Era
- You can download the Confluent Platform
- Read our documentation
- Confluent Professional Services offers advice and help with deployments
- Confluent Training can get your team ready for development and deployment of the Confluent Platform