
Apache Kafka for Real-Time Retail at Walmart Labs

This post is by Anil Kumar, Global eCommerce Engineer at Walmart Labs. Anil specializes in stream processing, cloud computing, ad technology, search services, information retrieval, transport and application layer protocols, machine learning algorithms, and firewall and NAT traversal mechanisms. Prior to working with the WalmartLabs team, Anil was Principal Software Architect at Adchemy, which was acquired by Walmart. This blog is re-posted with his permission from Anil's post on Medium.

At Walmart.com in the U.S. and at Walmart's 11 other websites around the world, we provide a seamless shopping experience where products are sold by:

  1. Own Merchants for Walmart.com & Walmart Stores
  2. Suppliers for Online & Stores
  3. Sellers on Walmart's marketplaces
Product sold on walmart.com – online, in Walmart Stores, and by three marketplace sellers

This process is referred to internally as "Item Setup", and visitors to the sites see product listings only after data processing for Products, Offers, Price, Inventory & Logistics. These entities are composed of data from multiple sources in different formats & schemas, and they have different characteristics around data processing:

1. Products require more data preparation, around:

  • Normalization – standardization of attributes & values, which aids search and discovery
  • Matching – matching duplicates with imperfect data, a somewhat complex problem
  • Classification – classification against categories & taxonomies
  • Content – scoring data quality on attributes like Title, Description, Specifications, etc., and finding & filling the "gaps" through entity extraction techniques
  • Images – selecting the best resolution, deriving attributes, detecting watermarks
  • Grouping – matching and grouping products based on variations, like shoes varying on colors & sizes
  • Merging – selecting the best sources and aggregating data from multiple sources
  • Reprocessing – the catalog needs to be reprocessed to pick up daily changes

2. Offers are made by multiple sellers for the same products & need to be checked for correctness (see the sketch after this list) on:

  • Identifiers
  • Price variance
  • Shipping
  • Quantity
  • Condition
  • Start & End Dates

3. Pricing & Inventory are adjusted many times a day and need to be processed with very low latency & strict time constraints

4. Logistics has a strong requirement around data correctness to optimize cost & delivery
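
To make the offer correctness checks above concrete, here is a minimal, hypothetical Java sketch of a validator along those lines. The field names, the MAX_PRICE_VARIANCE threshold, and the specific rules are illustrative assumptions, not Walmart's actual validation logic.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

// Hypothetical offer validator: field names, thresholds and rules are
// illustrative only, not Walmart's actual checks.
public class OfferValidator {

    // Allow prices within +/-50% of a reference price (assumed threshold).
    static final double MAX_PRICE_VARIANCE = 0.5;

    record Offer(String gtin, double price, double referencePrice,
                 int quantity, String condition,
                 LocalDate start, LocalDate end) {}

    static List<String> validate(Offer o) {
        List<String> errors = new ArrayList<>();
        if (o.gtin() == null || o.gtin().isBlank()) {
            errors.add("missing product identifier");
        }
        if (o.referencePrice() > 0
                && Math.abs(o.price() - o.referencePrice()) / o.referencePrice() > MAX_PRICE_VARIANCE) {
            errors.add("price outside allowed variance");
        }
        if (o.quantity() < 0) {
            errors.add("negative quantity");
        }
        if (o.start() != null && o.end() != null && o.end().isBefore(o.start())) {
            errors.add("end date before start date");
        }
        return errors;
    }

    public static void main(String[] args) {
        Offer o = new Offer("", 100.0, 20.0, -1, "NEW",
                LocalDate.of(2024, 1, 10), LocalDate.of(2024, 1, 1));
        System.out.println(validate(o)); // all four checks fail for this offer
    }
}
```

In a pipeline like the one described below, checks of this kind would typically run before an offer is accepted into the catalog.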

Products & Offers data flow (modified original, with permission from Neha Narkhede)

Architecturally, this yields a large number of decentralized, autonomous services, systems & teams which handle the data "before & after" listing on the site. As part of a redesign around 2014, we started looking into building scalable data processing systems. I was personally influenced by the famous blog post "The Log: What every software engineer should know about real-time data's unifying abstraction", which suggested that Kafka could provide a good abstraction to connect hundreds of microservices and teams, and evolve into a company-wide multi-tenant data hub. We started modeling changes as event streams recorded in Kafka before processing (a minimal producer sketch follows the list below). The data processing is performed using a variety of technologies:

  1. Stream processing using Apache Storm and Apache Spark
  2. Plain Java programs
  3. Reactive microservices
  4. Akka Streams
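
As a concrete illustration of recording changes as event streams, here is a minimal sketch of a plain Java producer that writes an item change to Kafka before any processing happens. The topic name item-changes, the key choice, and the JSON payload are assumptions for illustration, not the actual Walmart topics or schemas.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Minimal sketch: record an item change as an event on a Kafka topic.
// Topic name, payload and broker address are illustrative assumptions.
public class ItemChangeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by product id keeps all changes to one item in a single
            // partition, so downstream consumers see them in order.
            String productId = "SKU-12345";
            String event = "{\"productId\":\"SKU-12345\",\"field\":\"price\",\"newValue\":\"19.99\"}";
            producer.send(new ProducerRecord<>("item-changes", productId, event));
            producer.flush();
        }
    }
}
```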

The new data pipelines, which have been rolled out in phases since 2015, have enabled business growth: we are onboarding sellers more quickly and setting up product listings faster. Kafka is also the backbone for our new Near Real Time (NRT) Search Index, where changes are reflected on the site in seconds.
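
For a sense of how changes can reach a search index in seconds, here is a hedged sketch of a consumer that applies item-change events to an index as they arrive. The in-memory map stands in for a real search index, and the topic and group names are assumptions carried over from the producer sketch above.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical NRT indexer: consume item-change events and upsert them into
// an index. The map is a stand-in for a real search index.
public class NrtIndexer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "nrt-indexer");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        ConcurrentHashMap<String, String> index = new ConcurrentHashMap<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("item-changes"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Upsert the latest version of the document for this product.
                    index.put(record.key(), record.value());
                }
            }
        }
    }
}
```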

Message rate filtered for a single day, split hourly

The usage of Kafka continues to grow, with new topics added every day. We have many small clusters with hundreds of topics, processing billions of updates per day, mostly driven by Pricing & Inventory adjustments. We built operational tools for tracking flows, SLA metrics, and message send/receive latencies for producers and consumers, with alerting on backlogs, latency and throughput. The nice thing about capturing all the updates in Kafka is that we can use the same data for reprocessing the catalog, sharing data between environments, A/B testing, analytics & the data warehouse.
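
One of the operational checks mentioned above is alerting on consumer backlogs. Below is a minimal sketch of how per-partition consumer-group lag could be computed with Kafka's AdminClient; the group name and broker address are placeholders, and a real tool would export these numbers as metrics rather than print them.

```java
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Hypothetical backlog check: compare committed consumer-group offsets with
// the latest offsets on each partition to compute lag.
public class BacklogCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Offsets the group has committed so far (group name is a placeholder).
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("nrt-indexer")
                         .partitionsToOffsetAndMetadata().get();

            // Latest (end) offsets for the same partitions.
            Map<TopicPartition, OffsetSpec> latestSpec = committed.keySet().stream()
                    .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
            var latest = admin.listOffsets(latestSpec).all().get();

            committed.forEach((tp, meta) -> {
                long lag = latest.get(tp).offset() - meta.offset();
                System.out.printf("%s lag=%d%n", tp, lag); // alert when lag exceeds a threshold
            });
        }
    }
}
```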

The shift to Kafka enabled fast processing but has also introduced new challenges like managing many service topologies & their data dependencies, schema management for thousands of attributes, multi-DC data balancing, and shielding consumer sites from changes which may impact business.

The core tenet that drove Kafka adoption – that "Item Setup" teams in different geographical locations can operate autonomously – has definitely enabled agile development, and I have personally witnessed this over the last couple of years since its introduction. The next steps are to increase awareness of Kafka internally, both for new applications and for (re)architecting existing data processing applications, and to evaluate exciting new streaming technologies like Kafka Streams and Apache Flink. We will also engage with the Kafka open source community and the surrounding ecosystem to make contributions.
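
For readers who, like us, are starting to evaluate Kafka Streams, here is a minimal sketch of what a trivial processing step might look like as a Streams topology. The topic names and the lower-casing "normalization" are purely illustrative assumptions.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

// Hypothetical Kafka Streams topology: read item changes, apply a trivial
// transformation, and write the result to another topic.
public class NormalizeTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "item-normalizer");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> changes = builder.stream("item-changes");
        // Stand-in "normalization": lower-case the payload. A real step would
        // standardize attribute names, values and units.
        changes.mapValues(v -> v.toLowerCase()).to("item-changes-normalized");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```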
