As an engineering leader, I've worked with data all my life. Most recently, I was in charge of the data layer of Salesforce Einstein, Salesforce’s AI platform. Even with all the data expertise in our organization, it was strikingly obvious that the engineering function in most companies my peers and I have worked for has not been able to fully leverage all of its data in a unified manner. The problem: data is scattered across disparate systems. A better data-driven approach is a must if we want to move from gut feeling and guesswork to intelligent actions that impact real business outcomes.
All other functions have great data fabrics:
- Sales teams have Salesforce. They have sales pipelines, automated data enrichment processes, revenue predictions, and SalesOps, which is now a very well-understood role.
- Marketing gurus have Segment & Google Analytics. They can track visits, attribute them to campaigns, and calculate the cost of leads to the last dollar.
- Product Managers have Amplitude. They can map customer journeys, predict churn and LTVs, and segment audiences into personas.
Engineering, on the other hand, usually has nothing similar. That is because, compared to other functions, software engineering is an artful craft, and one that is rapidly evolving. As such, tool choices are made locally, in a bottom-up fashion, which leads to massive fragmentation of data. How many CI/CD systems does your engineering organization use? How many CRMs does your Sales organization use?
Engineering leaders are often forced to cobble data together in spreadsheets in order to perform meaningful analysis. Take Lead Time for Changes, one of the four DORA metrics that research suggests is meaningful for engineering organizations to track: not only do you need to ETL data from multiple systems (commits, pull requests, builds, artifacts, deployments) to compute it, but the collected data also needs to link together properly. You need a robust data system to gracefully handle missing data and out-of-order ingestion. Most likely, you will also need to capture changesets for your deployments. A very tall order. As the old saying goes, the shoemaker's children go barefoot.
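To make the stitching problem concrete, here is a minimal sketch of the metric itself, assuming hypothetical, already-linked records: commits from the VCS and deployments (with their changesets) from the CD pipeline. The record shapes and field names are illustrative; a real system also has to cope with missing links and out-of-order ingestion.

```python
from datetime import datetime, timezone

# Hypothetical records stitched together from two systems:
# commits from version control, deployments from the CD pipeline.
commits = [
    {"sha": "a1b2c3", "authored_at": datetime(2022, 3, 1, 9, 0, tzinfo=timezone.utc)},
    {"sha": "d4e5f6", "authored_at": datetime(2022, 3, 1, 14, 30, tzinfo=timezone.utc)},
]
deployments = [
    # Each deployment carries its changeset: the commit shas it shipped.
    {"id": "deploy-42",
     "deployed_at": datetime(2022, 3, 2, 10, 0, tzinfo=timezone.utc),
     "changeset": ["a1b2c3", "d4e5f6"]},
]

def lead_times_hours(commits, deployments):
    """Hours from commit authoring to the first deployment that shipped it."""
    authored = {c["sha"]: c["authored_at"] for c in commits}
    first_deploy = {}
    # Walk deployments in time order so each sha keeps its earliest deploy time.
    for d in sorted(deployments, key=lambda d: d["deployed_at"]):
        for sha in d["changeset"]:
            first_deploy.setdefault(sha, d["deployed_at"])
    return [
        (first_deploy[sha] - ts).total_seconds() / 3600
        for sha, ts in authored.items()
        if sha in first_deploy  # commits never deployed are excluded
    ]

print(sorted(lead_times_hours(commits, deployments)))  # [19.5, 25.0]
```

The computation itself is a few lines; the hard part, and the point of the article, is getting `changeset` populated reliably across systems in the first place.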
Metrics vendors may alleviate that pain somewhat, but they are not sufficient. The metrics those tools capture and surface are fairly static, and their domain of applicability is limited. Notice that the products mentioned above all have analytics as a foundational capability: you can measure and track anything you want on your data. What you don’t know can hurt your teams - and your bottom line.
I want to make the case that engineering organizations similarly need a new data fabric, centered on EngOps: a fabric that should of course cover the main elements of the software engineering value stream (Tasks, Pull Requests, Incidents, Builds, Deployments, and more), but that can also extend to and simplify compliance, recruiting, employee satisfaction, and OKRs.
Data fabrics usually have, at a minimum, the following characteristics:
- Practical and Connected: Value comes from how well the data models the world - Lead / Opportunity / Account in Salesforce; Campaigns / Sources / Mediums in Google Analytics; User Sessions in Amplitude. Great data models have relationships that properly connect events and entities together for increased value: in Amplitude, for example, a user can be in the ‘new’, ‘current’, ‘dormant’, or ‘resurrected’ state based on their behaviors. For EngOps, that modeling, and how the different data elements connect, is especially critical given how many different systems are at play.
- Actionable and Extensible: Data can be analyzed, aggregated, and visualized any way the user sees fit. It can be used for automation purposes through APIs and exported for further processing. It can also be extended by the user: for example, custom objects and fields in Salesforce, or properties in Segment and Amplitude.
- Trusted and Intelligent: Data can be observed at the most granular level: for example, Segment, Amplitude and Google Analytics have live debuggers/feeds to introspect data as it changes or arrives in the fabric. Data is also automatically improved, through inferences on how it connects and imputations of values; those improvements are documented and remediable.
Now, here are a few concrete examples of what an engineering leader could do simply (minutes or hours, not days) with such an EngOps data fabric:
- Dive into the data to craft meaningful policies and investment objectives that impact the business - and then track the corresponding Key Results:
  - Is onboarding new engineers going better over time, or worse? Is remote work making onboarding less effective?
  - Is the lead time per integration decreasing? Where is the bottleneck? Does each integration require changing the underlying APIs, or are those durable?
  - How do meetings and interviews impact code delivery?
- Automate based on a trusted, transparent metric:
  - Automate deployments when the Change Failure Rate of the application is low enough
  - Automatically adjust the type of under-utilized cloud instances
  - Collect compliance evidence and enforce policies automatically
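The first automation example can be sketched in a few lines. This is a hypothetical illustration, not a prescribed implementation: the deployment records, the `caused_incident` flag, and the 10% threshold are all assumptions for the sake of the example.

```python
# Metric-driven automation sketch: gate automated deployments on the
# application's recent Change Failure Rate (CFR). All names and the
# threshold are illustrative.

def change_failure_rate(deployments):
    """Fraction of recent deployments that caused a failure in production."""
    if not deployments:
        return 0.0
    failed = sum(1 for d in deployments if d["caused_incident"])
    return failed / len(deployments)

def should_auto_deploy(recent_deployments, max_cfr=0.10):
    """Allow fully automated deployment only while CFR stays under the bar."""
    return change_failure_rate(recent_deployments) <= max_cfr

recent = [
    {"id": "d1", "caused_incident": False},
    {"id": "d2", "caused_incident": False},
    {"id": "d3", "caused_incident": True},
    {"id": "d4", "caused_incident": False},
]
print(change_failure_rate(recent))  # 0.25
print(should_auto_deploy(recent))   # False: CFR is above the 10% threshold
```

The gate itself is trivial; what makes it trustworthy is the fabric underneath - deployments reliably linked to the incidents they caused, with the lineage visible when someone asks why a deploy was blocked.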
Clearly, you shouldn’t be focusing on building such an EngOps data fabric. It is challenging to build and not your core business. This is what we build at Faros AI: the connected engineering operations platform. We want to unlock the power of all that EngOps data for your organization. You can check out our open-source version: https://github.com/faros-ai/faros-community-edition.
(originally posted here)