Systems that handle massive loads of bursty, high-frequency data, from streaming ingestion and IoT to analytics and machine learning for complex use cases like real-time bidding (RTB), as well as intelligence and management applications that present highly aggregated views of global data, all face the same challenge: simplifying distribution and consistency problems in data flows.
Helena has been building large-scale distributed cloud-based systems for many years, and distributed big data systems for the last four, choosing Kafka and Scala for all of them. She will discuss simplifying big data architecture and data flows with a collaborative set of supporting technologies, through the use case of real-time data ingestion streams, analytics, and ML. Helena will target where and how Kafka can simplify distribution and data flows, and how CRDTs, CQRS, and Scala frameworks like Eventuate can help solve consistency and in-memory problems where low latency is critical. She will then walk through an example integrating Scala, Akka, Akka Streams, Eventuate, Kafka, and Cassandra, highlighting NoETL and functional programming (FP) in big data.
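To make the CRDT idea mentioned above concrete, here is a minimal sketch (not code from the talk) of a state-based grow-only counter (G-Counter), the simplest CRDT: because its merge is commutative, associative, and idempotent, replicas that increment independently always converge without coordination, which is what makes CRDTs attractive for low-latency distributed state.

```scala
// Illustrative sketch of a state-based CRDT: a grow-only counter (G-Counter).
// Each replica increments only its own entry; merge takes the per-replica max,
// so merging in any order, any number of times, converges to the same state.
final case class GCounter(counts: Map[String, Long] = Map.empty) {
  // A replica identified by `replica` records its own increments.
  def increment(replica: String, by: Long = 1L): GCounter =
    copy(counts.updated(replica, counts.getOrElse(replica, 0L) + by))

  // Merge is commutative, associative, and idempotent: per-replica maximum.
  def merge(other: GCounter): GCounter =
    GCounter((counts.keySet ++ other.counts.keySet).map { k =>
      k -> math.max(counts.getOrElse(k, 0L), other.counts.getOrElse(k, 0L))
    }.toMap)

  // The observed value is the sum over all replicas.
  def value: Long = counts.values.sum
}
```

Frameworks like Eventuate build on this convergence property (with operation-based as well as state-based variants) so that replicas can accept writes locally and reconcile later.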