In our analytics use case we have a real-time ingestion pipeline: we ingest data from Kafka topics, map each record to a particular type of object, perform the desired aggregations on the input records, and finally write the results to our database, where they become available for querying. We calculate the end-to-end ingestion latency as the time from when a record was published to Kafka until it is available for querying. The calculated latency is then populated to a dashboard for SLO and SLA calculation. This talk will dive into the details of calculating the end-to-end latency for our real-time ingestion pipeline.
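To make the measurement concrete, here is a minimal sketch of how such an end-to-end latency could be derived from the Kafka record timestamp once a record has been written and is queryable. It is an illustration under assumptions, not the talk's actual implementation: the broker address, the topic name `events`, and the `emitLatencyMetric` hook feeding the dashboard are all hypothetical.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class IngestionLatencySketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "latency-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // ... map the record to its object type, aggregate, and write to the database ...

                    // record.timestamp() holds the timestamp set when the record was
                    // published to Kafka; the difference to "now", taken once the write
                    // is queryable, approximates the end-to-end ingestion latency.
                    long queryableAtMillis = System.currentTimeMillis();
                    long endToEndLatencyMillis = queryableAtMillis - record.timestamp();

                    // Export the measurement to whatever metrics backend feeds the
                    // SLO/SLA dashboard (emitLatencyMetric is a hypothetical hook).
                    emitLatencyMetric(record.topic(), endToEndLatencyMillis);
                }
            }
        }
    }

    private static void emitLatencyMetric(String topic, long latencyMillis) {
        // Placeholder: a real pipeline would publish this to a metrics system.
        System.out.printf("topic=%s end_to_end_latency_ms=%d%n", topic, latencyMillis);
    }
}
```

One design note on this sketch: using the record's Kafka timestamp as the start of the measurement means the latency covers every stage after publish (consume, mapping, aggregation, database commit), which is what an SLO on "time until queryable" needs; measuring from consume time instead would hide any backlog sitting in the topic.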