
Modernize Hybrid and Multi-Cloud Environments with Treehouse Software and Confluent

Get started with Confluent Cloud

Written by:

At Treehouse Software, when we speak with customers who are planning to modernize their enterprise mainframe systems, there’s a common theme: they are faced with decades of mission-critical and historical legacy mainframe data in disparate databases, as well as a variety of other data stores inherited through mergers, acquisitions, and other company growth scenarios. Many applications, connections, databases, and stores are located in different on-premises systems, or on multiple cloud platforms. Customers say they often find themselves in the middle of an organically formed, complex, multi-cloud environment with little historical context, and they are trying to connect and integrate these systems as best they can. Those integrations tend to be point-to-point, brittle, and unable to scale with growth.

We’ve seen a growing interest in setting up mainframe-to-Confluent data pipelines without necessarily having a final target fully thought out. At first glance, this can come across as strange – rather like building a bridge to nowhere, as Confluent is often not considered a final datastore, but simply an event streaming platform. But in the bigger, longer-term picture, an enterprise can keep its options open by propagating data to a highly reliable and scalable platform like Confluent that can be “subscribed to” by any number of current or yet-to-be-invented ETL toolsets and target datastores. Many firms have found success with this approach of using their event brokers as a central nervous system for business-critical data.

With that said, a site that is doing mainframe-to-Confluent propagation does ultimately need to be able to pull the data from Confluent and land it into a viable target datastore. So Treehouse Software’s current work—enabling the targeting of DynamoDB, Cosmos DB, Snowflake, and others as destinations—is seeing increased popularity among new and existing customers.

The most common Mainframe-to-Confluent use cases

Customers want to modernize applications on cloud and/or open systems without disrupting the existing critical work on legacy systems. They also want to bring together, view, and manage data from applications, databases, data warehouses, etc. that have been spread over many vastly different systems.

The Treehouse and Confluent Solution: Avoid replicating the same complexity to newer systems

Confluent allows customers to replace brittle point-to-point interconnections with a real-time, global data plane that connects all of the systems, applications, datastores, and environments that make up an enterprise. That’s possible regardless of whether systems are running on-prem, in the cloud, or a combination of both.

Greg DeMichillie, Vice President of Product and Solutions Marketing at Confluent, discusses transitioning to a hybrid or multicloud architecture:

For those customers looking to move mainframe data to Confluent, Treehouse Software’s tcVISION is the mainframe data connector that performs real-time synchronization of data sources to Confluent Platform or Cloud, allowing for rapid data movement to newer data sinks/target platforms on AWS, Azure, Google Cloud, and other services.

Additionally, tcVISION supports many mainframe data sources for both online and offline scenarios. Data can be replicated from IBM DB2 z/OS, DB2 z/VSE, VSAM, IMS/DB, CA IDMS, CA DATACOM, or Software AG ADABAS. tcVISION can replicate data to many targets including Confluent Platform or Cloud. To learn more, see the complete list of supported tcVISION sources and targets. Here’s a look at the architecture that’s created with Confluent and tcVISION:
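To make the replication flow concrete, the sketch below shows how a downstream consumer of such a pipeline might apply mainframe CDC events pulled from a Kafka topic to a target table. The event schema (`operation`, `key`, `after` fields) is a hypothetical illustration, not tcVISION’s actual wire format, and the in-memory dict stands in for a real target datastore:

```python
import json

# Hypothetical shape of a CDC event as it might arrive on a Kafka topic.
# The field names ("operation", "key", "after") are assumptions for
# illustration only, not tcVISION's actual wire format.
def apply_cdc_event(event_json: str, target: dict) -> None:
    """Apply a single insert/update/delete event to an in-memory 'table'.

    `target` maps primary-key values to row dicts, standing in for a real
    target datastore such as Snowflake or DynamoDB.
    """
    event = json.loads(event_json)
    op = event["operation"]
    key = event["key"]
    if op == "insert":
        target[key] = event["after"]
    elif op == "update":
        # Merge changed columns into the existing row.
        target[key] = {**target.get(key, {}), **event["after"]}
    elif op == "delete":
        target.pop(key, None)
    else:
        raise ValueError(f"unknown operation: {op}")

# Example: replay three events against an empty target table.
rows = {}
for e in [
    '{"operation": "insert", "key": "A100", "after": {"name": "ACME", "balance": 500}}',
    '{"operation": "update", "key": "A100", "after": {"balance": 750}}',
    '{"operation": "insert", "key": "B200", "after": {"name": "BETA", "balance": 10}}',
]:
    apply_cdc_event(e, rows)
print(rows)
```

In a real deployment the events would arrive via a Kafka consumer and the writes would go to the chosen sink; the point here is only that ordered change events are enough to reconstruct target state.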

Learn more in this post on enterprise change data capture (CDC) to Kafka with tcVISION and Confluent.

With tcVISION’s groundbreaking mainframe CDC connector and Confluent’s ability to serve as the multi-tenant data hub, it’s possible to aggregate data from multiple sources and have data published into various Kafka topics. End-to-end data in motion under a simplified hybrid and multicloud architecture enables enterprise customers to oversee data pipelines as well as manage policy and governance.
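When many sources publish into one hub, a consistent topic naming convention keeps the pipelines governable. The scheme below (`mainframe.<source>.<table>`) is an illustrative assumption, not something prescribed by tcVISION or Confluent:

```python
def topic_for(source_system: str, table: str) -> str:
    """Derive a Kafka topic name from the originating mainframe source and
    table. The naming scheme is an assumption for illustration: lowercase,
    dot-delimited, e.g. 'mainframe.db2.customers'."""
    def clean(part: str) -> str:
        # Kafka topic names allow [a-zA-Z0-9._-]; normalize anything else.
        return "".join(c if c.isalnum() else "_" for c in part.strip().lower())
    return f"mainframe.{clean(source_system)}.{clean(table)}"

print(topic_for("DB2", "CUSTOMERS"))       # mainframe.db2.customers
print(topic_for("ADABAS", "ORDER LINES"))  # mainframe.adabas.order_lines
```

A predictable scheme like this lets downstream teams subscribe per source or per table, and makes it straightforward to attach policy and governance rules by topic prefix.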

  • Ram Dhakne works as a solutions engineer at Confluent. He has a wide array of experience in NoSQL databases, filesystems, distributed systems, and Apache Kafka. He has supported industry verticals ranging from large financial services and retail to healthcare, telecom, and utilities companies. His current interests are in helping customers adopt event streaming using Kafka. As a part-time hobby, he has written two children’s books.

  • Joseph Brady is the director of business development and cloud alliance leader at Treehouse Software. He has been with Treehouse since 1996, and leads enterprise mainframe modernization strategies and partnerships with some of the largest cloud technology companies and systems integrators in the world.

