
Data in Motion – Definition, Use Cases & Challenges

Why are leading enterprises embracing data in motion? With data in motion, businesses gain a powerful competitive edge—immediate insights, operational efficiency, and the ability to scale confidently. The Confluent data streaming platform powered by Apache Kafka® and Apache Flink® drives critical outcomes, such as real-time inventory visibility, fraud prevention, AI-powered search, faster forecasting, and seamless personalization. By replacing static data pipelines and legacy batch systems, Confluent unifies fragmented systems, improves agility, and accelerates innovation.

What Is Data in Motion?

Data is one of the most valuable assets of any organization. But unlocking its true business impact means using data in real time across organizational boundaries in a way that is secure, reliable, and scalable. This is the essence of data in motion—capturing digital information as it is streamed and processed, and using it to power real-time decision-making and seamless digital experiences.

Acting on real-time data is essential for modern businesses. Companies cannot afford to make decisions based on stale data that yields no timely insight. By embracing a data-in-motion architecture, organizations can unlock far more value from their data and meet business goals faster and more efficiently.

Technologies like Apache Kafka® and Apache Flink® are foundational to this architecture. Kafka enables organizations to capture and route high-throughput, real-time data streams, while Flink allows them to transform, enrich, and analyze that data as it is in motion.
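The capture-and-process pattern above can be sketched in a few lines. This is a minimal in-memory illustration, not real Kafka or Flink code: a deque stands in for a topic, a generator stands in for the stream processor, and the event fields (`order_id`, `amount`) are hypothetical.

```python
from collections import deque

topic = deque()  # stands in for a Kafka topic partition

def produce(event):
    """Capture: append an event to the stream, as a Kafka producer would."""
    topic.append(event)

def process(stream):
    """Transform in motion, as a Flink job would: flag large orders."""
    for event in stream:
        yield {**event, "large_order": event["amount"] > 100}

# Capture two events, then consume and enrich them as they flow.
produce({"order_id": 1, "amount": 42})
produce({"order_id": 2, "amount": 250})

enriched = list(process(topic))
print(enriched[1]["large_order"])  # True: the 250-unit order is flagged
```

In a real deployment, `produce` would be a Kafka producer writing to a broker, and `process` would be a Flink job consuming the topic; the shape of the logic is the same.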

With this kind of data streaming backbone in place, companies gain:

  • Immediate insights for faster responses
  • Improved customer experiences with up-to-date, context-aware information
  • Operational efficiency by automating real-time workflows
  • Better data governance and observability across systems

Moving to a data-in-motion strategy means adopting a centralized streaming architecture that aggregates real-time data from systems like databases, APIs, microservices, and external apps (e.g., Slack, Jira, Google Drive), and makes it instantly usable by downstream applications, dashboards, or AI models.

This approach doesn’t just modernize your stack; it transforms how your business operates. By treating data as a continuous, real-time asset instead of a static resource, companies can unlock new opportunities, improve agility, and meet their business goals.

Examples of Data in Motion

Organizations all over the world are setting their data in motion in order to move faster, deliver exceptional customer experiences, and outpace the competition. Learn how Swiggy and Notion each use data in motion to excel in the hyperlocal food delivery and digital productivity spaces.

Real-Time Food Delivery: Swiggy Boosts Customer Experience With Data in Motion

Swiggy, India’s leading food delivery platform, has delivered over three billion orders across 680 cities. With a mission to enhance urban life for 14 million+ users, the company depends on real-time data to improve its services. However, managing its open-source Apache Kafka® system became a challenge, requiring constant infrastructure oversight. To shift focus from maintenance to innovation, Swiggy adopted Confluent’s fully managed data streaming platform.

“With Confluent, we focus more on governance and data outcomes—and less on infrastructure,” said Akash Agarwal, Data Architect at Swiggy.

This move freed up resources, allowing the team to strengthen areas like data resiliency and governance. One major use case: predicting grocery delivery times. Confluent powers Swiggy’s SLA predictor by aggregating data from multiple services to deliver accurate, real-time ETAs.

During peak demand periods, such as holidays, Confluent’s elastic scalability ensures uninterrupted service. Instead of manually scaling infrastructure, Swiggy can now adjust capacity with a click.

Confluent also supports real-time insights across Swiggy’s ecosystem—customers, restaurants, and delivery partners—through Apache Flink® stream processing. As Swiggy expands across India, Confluent’s platform continues to support fast, reliable growth.

“We’ve reallocated time to strategy and growth, rather than infrastructure upkeep,” Agarwal said.

Learn more about Swiggy’s use cases


The Connected Workspace: Notion Keeps Its Data in Motion

Notion—the connected workspace for docs, notes, projects, and knowledge—empowers teams to collaborate and innovate with AI deeply integrated. As its user base surged past 100 million, the company faced challenges scaling its legacy event logging and messaging architecture.

To support rapid growth and deliver real-time product features like AI search, content generation, and analytics, Notion turned to Confluent’s fully managed data streaming platform. This shift enabled the team to move from infrastructure management to a scalable, event-driven architecture built on Apache Kafka®.

“With Confluent, we don’t worry about infrastructure—we focus on delivering product innovation,” said Engineering Lead Ekanth Sethuramalingam. Pre-built connectors for AWS, Snowflake, and PostgreSQL made data routing seamless, while stream processing and Schema Registry ensured real-time data enrichment without complexity.

Now, Notion AI can instantly reflect content changes in its vector database—vital for accurate, up-to-date AI search and generation. The platform has also helped Notion cut over $500K in annual operational costs and accelerate feature delivery.

With Confluent, Notion has built a flexible, cloud-native data backbone that supports its growing AI capabilities, streamlines real-time processing, and empowers its lean engineering team to focus on what matters most: innovation.

Learn more about Notion’s use cases


Common Use Cases

Data in motion is at the center of every kind of company today. It transcends lines of business, delivering valuable real-time insights, streamlining operations, and transforming user experiences.

  • Healthcare providers collect data from numerous sources, from electronic health records to medical imaging, to inform care decisions on a patient-by-patient basis
  • Advertisers use Apache Kafka® and Apache Flink® to dramatically improve the performance of client ads
  • Retailers use Kafka and Flink for real-time inventory replenishment at scale
  • Companies use data governance solutions to automatically detect PII for real-time cyber defense
  • Commercial real estate builders use data-driven monitoring, analytics, and controls to optimize building operations

The 3 States of Data: Data at Rest vs Data in Motion vs Data in Use

In the world of modern data architecture, it’s essential to understand how data is stored, moved, and processed. Engineers typically hold that data exists in three states—data at rest, data in motion, and data in use. Together, these concepts define the data lifecycle and highlight why moving from static, siloed systems to real-time architectures is key to unlocking greater value.

Data at Rest

Data at rest is data that's sitting idle in storage, like a database or data warehouse, on a data storage device, or in the cloud.

In a traditional data architecture model, data sits in various siloed repositories, each with its own specialized tools for access. To perform analytics and use any of that data, it must be collected from various sources and aggregated in one place — by which point it has already become stale and outdated.

Data in Motion

Sometimes referred to as data in transit or data flow, this refers to digital information that's actively being transferred from one place to another. For a deeper understanding of data in transit, topics such as data pipelines and ETL, ELT, and streaming ETL provide a thorough introduction to data movement.
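The contrast between streaming ETL and batch ETL can be sketched briefly: each record is extracted, transformed, and loaded the moment it arrives rather than collected and processed later. This is an illustrative sketch only; the sensor records and Fahrenheit-to-Celsius transform are hypothetical, not from any specific pipeline.

```python
def extract():
    """Source: raw sensor readings arriving one at a time."""
    yield from [{"sensor": "a", "temp_f": 68.0}, {"sensor": "b", "temp_f": 104.0}]

def transform(record):
    """Per-record transform: convert Fahrenheit to Celsius."""
    return {"sensor": record["sensor"],
            "temp_c": round((record["temp_f"] - 32) * 5 / 9, 1)}

sink = []  # stands in for the destination system

for record in extract():            # E: read the stream as it arrives
    sink.append(transform(record))  # T + L: transform and load immediately

print(sink)  # [{'sensor': 'a', 'temp_c': 20.0}, {'sensor': 'b', 'temp_c': 40.0}]
```

The key difference from batch ETL is in the loop: there is no waiting for a full dataset, so the destination system is never more than one event behind the source.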

Data in Use

The moment data is actively created, updated, or processed, it’s considered in use. For example, data in use could be a record being edited in an application, a transaction being validated, or sensor readings being analyzed in memory. Cloud and SaaS providers often consider data to be in use when it’s currently being processed by an application.

Challenges to Consider When Setting & Using Data in Motion

While data is the fuel that businesses run on, most organizations continue to face challenges in managing it effectively. As noted in Confluent’s 2025 Data Streaming Report, a majority of organizations report five or more challenges—with inconsistency in data sources, uncertain data quality, and data silos ranking among the top barriers to success.

Data Sources: With businesses demanding real-time data insights, they can often find themselves with a tangled data mess of point-to-point integrations across numerous systems, apps, and databases. The Confluent data streaming platform empowers businesses to leave behind batch processing to build reliable, trustworthy data solutions that can tap into continuous flows of real-time data.

Data Silos: Many businesses find themselves with silos of data spread over different projects, applications, and teams. Extracting this data from legacy systems and sources means overcoming hurdles like cost, compatibility, and resource allocation, creating a level of complexity that can derail ambitious business goals. With the Confluent data streaming platform, businesses can tap into the power of 120+ pre-built connectors that prepare and transform data by filtering, aggregating, and enriching it before sending it to the destination system.  

Data Quality: Poor data quality is a huge problem for businesses that can lead to a garbage-in, garbage-out scenario with tremendous impact downstream. System failures, inaccurate reporting, and bad decision-making are ultimately the consequences of relying on bad data. Unfortunately, data streaming solutions can often amplify data quality issues and make the problems spread far and wide across an organization. With best practices like governance and data contracts as the backbone of their data streaming architecture, businesses can ensure high data quality, discoverability, and compatibility for their real-time data streams. 
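The data-contract idea above can be sketched simply: validate each event against an agreed schema before it enters the stream, so bad data is rejected at the source instead of spreading downstream. The contract fields (`user_id`, `email`) are hypothetical; a real platform would enforce this with Schema Registry and a serialization format such as Avro, JSON Schema, or Protobuf.

```python
CONTRACT = {"user_id": int, "email": str}  # field name -> required type

def conforms(event, contract=CONTRACT):
    """Return True only if every contracted field is present with the right type."""
    return all(isinstance(event.get(field), ftype)
               for field, ftype in contract.items())

good = {"user_id": 7, "email": "a@example.com"}
bad = {"user_id": "seven"}  # wrong type, and missing email

# Only conforming events are allowed onto the stream.
accepted = [e for e in (good, bad) if conforms(e)]
print(len(accepted))  # 1: the malformed event is rejected at the source
```

Rejecting the bad event before publication is what prevents the garbage-in, garbage-out amplification the paragraph above describes.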

Michelin adopted Apache Kafka to power its real-time, cloud-based inventory management system but found that managing Kafka infrastructure in-house was complex, costly, and time consuming. To overcome these challenges, the company turned to Confluent Cloud, a fully managed Kafka service. By migrating to Confluent, Michelin significantly reduced operational overhead and improved engineering efficiency, enabling teams to focus more on innovation and less on infrastructure maintenance. The move also brought measurable business benefits, including an estimated 35% cost savings compared to on-premises operations and a highly resilient architecture with 99.99% uptime—delivering the scalability and reliability required to support Michelin’s global operations.

When the pandemic struck in 2020, Instacart experienced an unprecedented surge in demand, gaining over half a million new customers in just weeks. To meet the need for real-time inventory updates and ensure timely deliveries, the company turned to Confluent Cloud. This fully managed data streaming platform enabled Instacart to scale rapidly—supporting a decade’s worth of growth in just six weeks across 59,000 locations. With Confluent, Instacart built a scalable, resilient system that powers real-time inventory visibility, fraud detection, and other critical use cases. The platform also became a unifying data backbone across teams, all while reducing total cost of ownership compared to managing open-source Kafka internally.

Why Modern Businesses Are Turning to Data Streaming Platforms

As businesses grow and the pace of digital interaction accelerates, data flows from many different sources that all demand real-time responses. Traditional batch systems and point-to-point integrations simply can’t keep up with the demand for instant feedback and insights. Enter the modern data streaming platform—a powerful solution that lets organizations manage, process, and act on data as it’s generated in real time.

At the heart of the Confluent data streaming platform are proven technologies like Apache Kafka® and Apache Flink®. Kafka acts as the real-time data backbone, capturing and delivering high volumes of events with durability and reliability. Flink builds on this by enabling real-time stream processing, allowing teams to enrich, aggregate, and analyze data as it flows. Together, they power the shift from passive data storage to data in motion—an approach that unlocks faster insights and better decisions.
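The aggregate-as-it-flows idea above can be sketched with a tumbling window, one of the basic Flink windowing patterns: events are grouped into fixed one-minute buckets and summed per bucket. This is an illustrative pure-Python sketch, not Flink API code, and the timestamps and amounts are hypothetical.

```python
from collections import defaultdict

WINDOW_SECONDS = 60

def tumbling_window_sums(events):
    """Assign each (timestamp, amount) event to a 60 s window and sum per window."""
    sums = defaultdict(float)
    for ts, amount in events:
        window_start = (ts // WINDOW_SECONDS) * WINDOW_SECONDS
        sums[window_start] += amount
    return dict(sums)

# (timestamp_seconds, amount) pairs spanning two one-minute windows
stream = [(0, 10.0), (30, 5.0), (65, 2.5)]
print(tumbling_window_sums(stream))  # {0: 15.0, 60: 2.5}
```

In Flink, the same logic would run continuously and emit each window's result the moment the window closes, which is what turns raw event streams into live aggregates for dashboards and alerts.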

Victoria’s Secret discovered the power of this solution when it turned to Confluent to unravel a complex data infrastructure whose siloed systems delayed decision-making and could not scale quickly. With Confluent’s cloud-native data streaming platform, Victoria’s Secret’s team was able to address real-time data needs, along with multi-region and multi-cloud replication capabilities for improved data resiliency and security. The result was increased operational efficiency, data consistency, and faster business forecasting and decision-making.

Likewise, Booking.com discovered how to reduce operational complexity and focus more on delivering value to customers by migrating from a self-managed Apache Kafka® setup to Confluent Platform. Confluent’s built-in capabilities enabled Booking.com to support a broader range of use cases—from marketing and payments to personalization and core booking flows—while also enhancing support for analytical workflows and event-driven architecture. The result: faster innovation, better data-driven decisions, and more resources dedicated to optimizing the customer experience.

With the Confluent data streaming platform, you can:

  • Unify and simplify your data architecture
  • Streamline integration between operational and analytical systems
  • Drive real-time insights with Flink-powered, in-stream processing
  • Enforce quality and governance with schema registries, lineage, and access controls
  • Scale effortlessly across teams, clouds, and global operations

Getting Started With Data in Motion

Founded by the original creators of Apache Kafka®, Confluent is the only fully managed, cloud-native data streaming platform that enables you to easily access, store, and manage data as continuous, real-time streams with enterprise scalability, security, and performance.

It’s not just a fully managed Kafka service but a complete data system with automated scaling, pre-built connectors, advanced governance capabilities, and real-time stream processing. Simply put, Confluent is how you set your data in motion.

Learn more about why and how modern organizations use data streaming and data streaming platforms to solve their biggest challenges with data access and quality while achieving faster time to market, accelerated innovation, and increased revenue.

Read the Latest Data Streaming Report