Data pipelines perform much of the undifferentiated heavy lifting in any data-intensive application. Yet despite being critical to the data value stream, pipelines haven't always served us well. What can we learn from recent trends in the data platform space to help us build better pipelines?
Recent trends, from Data Mesh to the explosion of new data platform products and services that make up the modern data stack, have unlocked the ability for all parts of an organization to contribute reusable data capabilities to the data value stream. Practices that have emerged over the last twenty years in the adjacent fields of software architecture and development are now influencing the ways in which organizations manage, share, and derive value from their data using data pipelines. We've examined these trends and modern practices, and from this analysis we've derived five principles for building better pipelines, which we call Modern Data Flow: a better way of building data pipelines.
In this session we'll review the Modern Data Flow principles and discuss them in the context of trends in the data landscape and modern software engineering practices.