
What is Interoperability?

Interoperability is the ability of systems, applications, and services to communicate with each other in order to accomplish a task. While the term is used in many contexts, it matters most when disparate software and hardware systems are composed to solve a single problem. The higher the interoperability of the individual components, the less custom work needs to be done.

In every scenario where interoperability is needed, the solution is to make the systems conform to a common communication standard. Within a single company or project, the components are well known and can be built to that standard. However, when products are made at different times or by different companies, a global standard is required.

Confluent provides a global standard for interoperability by decoupling systems from rigid interfaces and automating data collection, processing, and integration, unlocking seamless, real-time data streaming and insights across your entire infrastructure.

Examples of Interoperability

An everyday example of interoperability comes from observability and business intelligence. Does the visualization platform support all the data sources I need to report from? Most support database access via ODBC/JDBC, as well as HTTP calls. But can they accept live data pushed in via something like Kafka? Can they support authentication mechanisms like OAuth? Consider a harder problem: can the visualization tool query the data source by forwarding the logged-in user’s credentials instead of a single system-defined account shared by all users? This level of interoperability is usually provided through some form of impersonation or a token mechanism that forwards the user’s credentials to downstream systems.
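
As a minimal sketch of that credential-forwarding pattern, the Java snippet below (using the standard java.net.http client) passes the logged-in user’s OAuth bearer token straight through to a downstream query endpoint. The endpoint URL, token variable, and query format are illustrative assumptions, not part of any specific product.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DelegatedQuery {

    // Hypothetical endpoint and query; the point is that the *end user's* token,
    // not a shared service account, is forwarded to the downstream data source,
    // which can then enforce that user's own permissions.
    public static String runAsUser(String userAccessToken, String sql) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://data-source.example.com/query")) // hypothetical URL
                .header("Authorization", "Bearer " + userAccessToken)     // the user's own credential
                .POST(HttpRequest.BodyPublishers.ofString(sql))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```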

Another example of interoperability is authentication in enterprise software. Does each application an employee uses require its own separate account, or can the employee authenticate via single sign-on against a central directory (e.g., Active Directory or an X.500/LDAP directory)? Can the groups of employees defined in that directory be used as roles in the applications for role-based access control (RBAC)?
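
A hedged sketch of the groups-to-roles idea follows; the directory group names and role names are hypothetical, and a real deployment would read the groups from the SSO token (a SAML assertion or OIDC claim) rather than hard-coding anything.

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class GroupRoleMapping {

    // Hypothetical mapping from directory group names (e.g. from Active Directory)
    // to roles the application understands.
    private static final Map<String, String> GROUP_TO_ROLE = Map.of(
            "finance-analysts", "REPORT_VIEWER",
            "finance-admins",   "REPORT_ADMIN",
            "it-operations",    "SYSTEM_OPERATOR");

    // Given the groups asserted by the identity provider, derive the roles
    // this application grants to the user.
    public static Set<String> rolesFor(Set<String> directoryGroups) {
        return directoryGroups.stream()
                .map(GROUP_TO_ROLE::get)
                .filter(role -> role != null)
                .collect(Collectors.toSet());
    }
}
```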

An important consideration is that interoperability of systems depends on interoperability at all levels. For a browser to display data, many pieces need to work together:

  • Data format: the data returned must conform to the HTML standard
  • Media formats: images must conform to standards such as JPEG, GIF, PNG, or SVG, and video to formats such as MP4 or WebM
  • Transport: the network must carry HTTP over TCP/IP
  • Connectivity: if the browser is on a mobile device, link-layer standards such as Wi-Fi or 5G must also be supported

Kinds of Interoperability

Interoperability is a requirement in many contexts across industries. For example:

Distributed systems

  • Client-server applications like databases and their clients (JDBC/ODBC); a minimal JDBC sketch follows this list
  • Middleware like Apache Kafka and RabbitMQ
  • Distributed Object frameworks like DCOM and CORBA
  • Request-Response frameworks like RPCs (Remote Procedure Calls), Java RMI (Remote Method Invocation), WCF (Windows Communication Foundation) and so on
  • Domain-specific solutions like FIX (Financial Information eXchange) for financial transactions, or HL7 (Health Level 7) for healthcare software solutions
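
To make the JDBC/ODBC point concrete, here is a minimal JDBC sketch; the connection URL, credentials, and table are hypothetical. The same code works unchanged against any database whose vendor ships a conforming JDBC driver, which is exactly the interoperability the standard interface buys you.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcInterop {
    public static void main(String[] args) throws Exception {
        // Only the URL (and the driver on the classpath) changes per vendor;
        // the code below is identical for PostgreSQL, MySQL, Oracle, etc.
        String url = "jdbc:postgresql://localhost:5432/sales"; // hypothetical connection details
        try (Connection conn = DriverManager.getConnection(url, "reporting_user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT region, SUM(amount) FROM orders GROUP BY region")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + " -> " + rs.getBigDecimal(2));
            }
        }
    }
}
```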

Hardware

  • Healthcare interoperability: inter-system integration between medical devices, imaging systems, and electronic health record (EHR) systems
  • IoT interoperability across sensors, receivers, and devices
  • Little-endian vs. big-endian (and broader instruction-set) differences in computer chips. This is why Apple had to create the Rosetta translation software: the original Rosetta let programs written for big-endian PowerPC Macs keep running on little-endian Intel Macs, and Rosetta 2 lets programs written for Apple’s Intel-based computers continue to run on Apple silicon.

Why is Interoperability important?

The key insight is that every system evolves. It may need to scale part of its functionality; if it is built from microservices, the appropriate service can be scaled out horizontally. It may need to support another client application; a standardized interface that constrains neither the system nor the client reduces the amount of custom glue code that has to be written and thereby improves time to market. Components also need to be swapped out or upgraded, and sometimes the change is not backwards compatible. Ensuring that each piece is interoperable against a global standard makes such changes far easier.

Challenges in Interoperability

Defining standard interfaces as events, not packets

NASA reported that the 1999 Mars Climate Orbiter was lost because two modules exchanged a number while each assumed it was in different units (pound-force seconds on one side, newton-seconds on the other). When systems interoperate, it is critical that they exchange not simply messages or packets but events. The distinction is that an event carries all the context needed, the units in this case, for the destination to process it safely.
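
As a minimal illustration in Java (the type and field names are hypothetical), an event can carry its unit of measure alongside the value so that no consumer ever has to guess what the number means:

```java
// A self-describing event rather than a bare number: the payload carries the
// unit of measure, so the consumer never has to assume one.
public record ThrustEvent(String spacecraftId, double impulse, Unit unit, long timestampMillis) {

    public enum Unit { NEWTON_SECONDS, POUND_FORCE_SECONDS }

    // Consumers can normalize explicitly instead of silently assuming a unit.
    public double impulseInNewtonSeconds() {
        return unit == Unit.NEWTON_SECONDS ? impulse : impulse * 4.44822;
    }
}
```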

Versioning

As with any computing system, it is easy to build interoperability on day one; keeping it working across multiple versions of different systems is hard. Because different components are usually upgraded on different timelines, it takes deliberate work to keep their interfaces in sync. Here again, technologies like schema registries, with an intrinsic ability to handle backward and forward compatibility of messages, are crucial to building a long-lasting interoperability solution.
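
As a sketch of what such compatibility looks like at the schema level, the snippet below uses Apache Avro’s Schema.Parser (the record and field names are illustrative) to show the classic safe evolution: adding a new field with a default value, which keeps old and new readers working against each other’s data.

```java
import org.apache.avro.Schema;

public class SchemaEvolutionCheck {
    public static void main(String[] args) {
        // Version 1 of a hypothetical order event.
        Schema v1 = new Schema.Parser().parse("""
            {"type": "record", "name": "Order", "fields": [
              {"name": "orderId", "type": "string"},
              {"name": "amount",  "type": "double"}
            ]}""");

        // Version 2 adds a field WITH a default value, so readers still on v1
        // can ignore the new field (forward compatibility) and readers on v2
        // can fill in the default when reading old v1 records (backward compatibility).
        Schema v2 = new Schema.Parser().parse("""
            {"type": "record", "name": "Order", "fields": [
              {"name": "orderId",  "type": "string"},
              {"name": "amount",   "type": "double"},
              {"name": "currency", "type": "string", "default": "USD"}
            ]}""");

        // A schema registry configured with a compatibility mode would accept v2
        // but reject an incompatible change, e.g. removing "amount" outright.
        System.out.println(v1.getFullName() + " evolved to " + v2.getFields().size() + " fields");
    }
}
```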

Real-Time Communications

A common path for systems to interoperate is to export data from one system in a well-known file format like CSV and import it into another. While ubiquitous, this approach has problems on several fronts. First, it implies a batch-processing model, which leaves information stale until the next batch is processed. Second, with multiple systems needing to communicate with each other, they all have to agree on file format, file naming conventions, file locations, and the archiving of old files, and all of that infrastructure must be created and maintained. Solutions like managed Kafka make this overhead go away and provide real-time communication on top.
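
As a minimal sketch of the streaming alternative, the snippet below uses the standard Apache Kafka Java producer (the broker address, topic name, and payload are illustrative assumptions): each business change is published the moment it happens, and downstream systems subscribe instead of polling for files.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish the event as it happens; consumers of the "orders" topic
            // see it in real time instead of waiting for a nightly CSV drop.
            producer.send(new ProducerRecord<>("orders", "order-42",
                    "{\"orderId\":\"order-42\",\"amount\":99.95,\"currency\":\"USD\"}"));
            producer.flush();
        }
    }
}
```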

Scalability with Asynchronous Middleware like Kafka

Inevitably, systems add and remove components and need to scale up as traffic increases. Systems that communicate synchronously with request-response messages have a hard time adding new components, because every existing system must learn about the new endpoints; and if a component needs to scale horizontally, all the other components need to know about each of its instances. These are solved problems when using a technology like Kafka, where publish-subscribe decoupling and consumer groups are part of the core functionality.
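
A minimal sketch of that decoupling with the standard Kafka Java consumer (broker address, group name, and topic are hypothetical): every copy of this process joins the same consumer group, and Kafka spreads the topic’s partitions across however many instances are running, so neither the producer nor any other consumer needs to know about them.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // hypothetical broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "billing-service");          // the consumer group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        // Run one or many copies of this process: Kafka rebalances the topic's
        // partitions across all members of the "billing-service" group.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("processing %s -> %s%n", record.key(), record.value());
                }
            }
        }
    }
}
```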

Empowering Real-Time Interoperability with Confluent

Interoperability should be a key deliverable in the design of any system. It facilitates scalability, increases adoption, improves maintainability, and reduces time to market for solutions built on the system.

Confluent’s complete suite of products facilitates building interoperable systems. The managed Kafka offering, built on the Kora engine, lets teams decouple their systems into event-based, asynchronous ones without the burden of administering their own clusters. The Data Streaming Platform provides Schema Registry, along with Data Portal, so teams can publish well-defined interfaces for other teams to leverage. Lastly, the managed stream-processing building blocks, Kafka Connect, Kafka Streams, and Flink, make it easy not only to create interoperable systems but also to enhance the value of existing, closed systems by refactoring them into ones that interoperate with others in the firm.

Learn more about Interoperability and Integration