
Online Talk

Streaming Data Pipelines for Automatic Vendor Detection in 5 Steps

Available On-demand

When a business wants to understand its credit risk, it consults one of the three major credit scoring companies. When companies want to assess the security risks of the first-party and third-party systems in their data infrastructure, they turn to SecurityScorecard. Accurate, up-to-date data is the lifeblood of any modern business, and it needs to come from myriad sources. SecurityScorecard continuously monitors and rates the cyber posture of more than 12 million companies worldwide. Until a few years ago, the teams at SecurityScorecard relied on batch pipelines to push data to and from Amazon S3, along with expensive REST API-based communication to carry data between systems. They also spent significant time and resources on Kafka upgrades while running Amazon MSK.

SecurityScorecard’s Brandon Brown, who leads the company’s Pipelines team, pivoted away from antiquated batch processing toward Confluent’s streaming approach to develop a new Automatic Vendor Detection (AVD) product. His team continues to leverage the full breadth of Confluent Cloud’s connectors to deliver several new use cases, including:

  1. AVD, which onboards new data sources quickly via topics and refreshes that data every hour
  2. A source of truth built on PostgreSQL, pushing data to downstream systems via CDC connectivity
  3. A future use case using ksqlDB to consolidate observability and save on infrastructure costs
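To make the second use case concrete: capturing PostgreSQL changes into Kafka topics is typically done with a Debezium-style CDC source connector. The sketch below shows what such a connector configuration might look like; all hostnames, database names, and table names here are hypothetical placeholders, not SecurityScorecard's actual setup.

```json
{
  "name": "postgres-cdc-source",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres.example.internal",
    "database.port": "5432",
    "database.user": "cdc_user",
    "database.password": "********",
    "database.dbname": "vendors",
    "topic.prefix": "avd",
    "table.include.list": "public.vendor_scores",
    "plugin.name": "pgoutput"
  }
}
```

With a configuration along these lines, row-level inserts, updates, and deletes on the included tables are emitted as change events to Kafka topics (here prefixed `avd.`), which downstream systems can then consume instead of polling a REST API.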

Watch now to hear why SecurityScorecard switched to Confluent.

Brandon Brown

Senior Staff Software Engineer, SecurityScorecard

Brandon Brown is a Senior Staff Software Engineer leading the Pipelines team at SecurityScorecard. His team is responsible for the AVD and AVD+Enhanced Illumination products, and works with other teams to support a move toward a hybrid data processing strategy that mixes traditional batch processing with micro-batching and streaming.

With 10 years of experience in software development, Brandon has worked across the full SDLC, specializing in data pipelines for the last 7+ years. His language of choice is Scala, but he has a soft spot for SQL. He has contributed to top open source projects such as Debezium and was an early contributor to the ZIO ecosystem. He’s passionate about Kafka and hybrid data pipeline strategies. Outside of work, he’s helping raise his young son, catching live music, or talking about movies.

Bharath Chari (Bio)

Team Lead, Solutions Marketing, Confluent

Bharath Chari is the solutions marketing team lead for horizontal use cases at Confluent. In his current role, he has played an integral part in defining Confluent’s solution-centric approach and has successfully launched several end-to-end campaigns and GTM motions to generate market awareness and drive demand for solutions and use cases. Before joining Confluent, Bharath led global product marketing for IBM’s data integration and AI portfolio.

Watch Now

More Resources

cc demo
kafka microservices
