Each day, thousands of companies and more than a million developers rely on Scrapinghub tools and services to extract the data they need from the web. To strengthen its position as a market leader, Scrapinghub recently launched a new product, AutoExtract, that provides customers with AI-enabled, automated web data extraction at scale. Scrapinghub built AutoExtract on Confluent Cloud running on Google Cloud Platform (GCP), with an Apache Kafka®-based event-streaming backbone for its service architecture. These technologies were chosen to shorten time to market and to ensure reliability and scalability.
Accelerate the delivery of a next-generation web scraping service, capable of handling growing customer demand with no downtime.
Use Confluent Cloud and Apache Kafka to implement a reliable, scalable event-streaming backbone that links web crawlers with AI-enabled data extraction components.
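To illustrate the pattern described above (not Scrapinghub's actual code), the crawl-to-extraction event flow can be sketched in Python with an in-memory queue standing in for a Kafka topic; the topic name, message schema, and extraction stub are all assumptions for demonstration:

```python
import json
import queue

# In-memory stand-in for a Kafka topic. In production this would be a
# Confluent Cloud topic sitting between the crawler fleet and the
# AI-enabled extraction service.
crawl_results = queue.Queue()

def publish_crawl_event(url: str, html: str) -> None:
    """Crawler side: emit a page-fetched event (hypothetical schema)."""
    event = {"url": url, "html": html}
    crawl_results.put(json.dumps(event).encode("utf-8"))

def consume_and_extract() -> dict:
    """Extraction side: consume one event and run a stubbed extractor.

    The real AutoExtract service applies AI models to return structured
    data; here we just pull the <title> tag as a placeholder.
    """
    event = json.loads(crawl_results.get(timeout=1))
    title = event["html"].split("<title>")[1].split("</title>")[0]
    return {"url": event["url"], "title": title}

publish_crawl_event("https://example.com", "<html><title>Example</title></html>")
record = consume_and_extract()
print(record["title"])  # Example
```

Decoupling the crawlers from the extractors through a durable topic is what lets each side scale independently and tolerate the other's downtime, which is the core benefit of the event-streaming backbone.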
A key advantage of Confluent Cloud in delivering AutoExtract is time to market. We didn’t have to set up a Kafka cluster ourselves or wait for our infrastructure team to do it for us. With Confluent Cloud we quickly had a state-of-the-art Kafka cluster up and running perfectly smoothly. And if we run into any issues, we have experts at Confluent to help us look into them and resolve them. That puts us in a great position as a team and as a company.