At Axelerant, we’re tackling a fundamental challenge faced by modern engineering organizations: how to consistently understand, measure, and improve delivery health across diverse, distributed project teams. The difficulty stems mainly from fragmented visibility into engineering performance, inconsistent measurement standards, and the challenge of linking technical execution to broader business outcomes.
Without a shared vocabulary around delivery metrics, teams often operate in silos: unable to proactively identify risks, reflect on progress with data, or align on what “good” looks like. This creates unnecessary delivery friction, slower decision-making, and uncertainty around where to focus improvement efforts.
To address this, we’re building a structured, scalable, and human-centered approach to engineering metrics: one that empowers teams rather than auditing them, and that provides insight into both immediate execution and long-term delivery confidence.
> This is an initiative of our Global Standards and Practices (GSP) team. This cross-functional group is responsible for creating scalable frameworks for metric tracking, interpretation standards, and cross-project observability. Their goal is to support engineering and program teams in identifying early signals, fostering proactive decision-making, and linking engineering practices to business impact.
The GSP team’s umbrella covers multiple focus areas, including metrics governance, onboarding efficiency, delivery standardization, and platform stability. Within these, the integration of DevLake serves as a critical enabler for observability and data unification.
Rather than being the centerpiece, the DevLake initiative complements broader goals by offering a scalable and flexible way to operationalize engineering metrics across tools like GitHub, Jira, and Kantata (Mavenlink).
To make these focus areas more tangible, here are some of the key metrics we're focusing on:
| Goal/Outcome | Key Metrics | Audience |
| --- | --- | --- |
| Optimised Onboarding & Project Learning | Onboarding time and its trend across months | Project Manager/Program Manager |
| Minimal Technical Debt | Results of the Website Health report and related checks | Technical Lead/Technical Program Manager |
| Effective Deployments (time/size/frequency) | Release size: average number of tickets per deployment | Technical Lead/Technical Program Manager |
| Optimised Context Switching & Consistent Way of Work | % of areas skipped in the reference baseline compared to the first milestone | Program Manager/Technical Program Manager |
| Stable Platform | Frequency and duration of platform downtime | Technical Lead/Project Manager |
To support the GSP team’s vision of structured, cross-functional metrics observability, we selected Apache DevLake as the foundation for our metrics platform. DevLake provides the flexibility we need to consolidate engineering and delivery data from diverse tools into a single, extensible system.
The DevLake initiative began with an internal proof-of-concept, where the team provisioned a dedicated instance (devlake.dev.axl8.xyz) and laid the groundwork for key integrations with GitHub, Jira, and Kantata (Mavenlink).
A critical challenge we addressed early on was the need to distinguish production vs. feature deployments within GitHub workflows. This prompted the recommendation to use the environment field in GitHub Actions, which improves deployment classification for DevLake’s data model.
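To make the classification concrete, here's a minimal sketch (not our production tooling) that uses GitHub's REST API to list deployments for a single environment, which is exactly the signal the environment field unlocks. The repository name and token handling are placeholders:

```python
import os
import requests

# List deployments for a repo, filtered to the "production" environment.
# GitHub's Deployments API exposes the `environment` value that a GitHub
# Actions workflow sets when a job declares `environment: production`.
REPO = "axelerant/example-repo"  # hypothetical repository
token = os.environ["GITHUB_TOKEN"]

resp = requests.get(
    f"https://api.github.com/repos/{REPO}/deployments",
    params={"environment": "production", "per_page": 50},
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    },
)
resp.raise_for_status()

for deployment in resp.json():
    # Each record carries the environment label that lets a consumer like
    # DevLake separate production deployments from feature/preview ones.
    print(deployment["id"], deployment["environment"], deployment["created_at"])
```

Without a consistent environment label, every workflow run looks alike; with it, production deployments become a clean, queryable subset.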
On the infrastructure side, we experimented with Docker Compose and hybrid cloud-hosted components (Grafana, MySQL), with the flexibility to move toward Kubernetes if scale demands it. We’ve also contributed upstream bug fixes and enhancements to the DevLake GitHub plugin ecosystem.
The team is validating source availability, ensuring consistency in Jira’s "Original Estimate" field, and documenting how Mavenlink’s Project Health metrics can be mapped to engineering signals.
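As an illustration of that estimate check, the sketch below queries Jira's search API for issues missing an Original Estimate. It assumes a Jira Cloud instance with basic auth; the site URL, JQL, and project key are hypothetical:

```python
import os
import requests

# Flag issues that lack an "Original Estimate" so estimates-vs-actuals
# reporting stays trustworthy. `timeoriginalestimate` is Jira's built-in
# field for the original estimate; credentials come from the environment.
JIRA_URL = "https://example.atlassian.net"  # hypothetical site
auth = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={
        "jql": "project = ABC AND statusCategory != Done",  # hypothetical JQL
        "fields": "summary,timeoriginalestimate",
        "maxResults": 100,
    },
    auth=auth,
)
resp.raise_for_status()

missing = [
    issue["key"]
    for issue in resp.json()["issues"]
    if not issue["fields"].get("timeoriginalestimate")
]
print(f"{len(missing)} issues missing an original estimate: {missing}")
```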
By anchoring part of the GSP team’s observability efforts on DevLake, we’re enabling a scalable way to visualize engineering and delivery data. However, DevLake is just one part of a broader initiative; the team is also working on establishing metric interpretation standards, onboarding practices, and change management processes, and on improving platform stability through integrations with monitoring tools like OpsGenie.
To operationalize our approach, we're actively integrating Apache DevLake into our GSP stack as our metrics platform of choice, using it to pull, visualize, and track engineering metrics from multiple tools, including GitHub, Jira, and Mavenlink (Kantata).
These efforts are part of the scalable proof-of-concept hosted internally at devlake.dev.axl8.xyz, and they reflect our long-term goal of consolidating disparate delivery data into a unified observability layer. We’ve also made contributions upstream (e.g., GitHub plugin fixes) and provisioned access for platform engineering to extend and operationalize DevLake dashboards.
We’ve chosen DevLake for its open-source flexibility, which allows custom plugin development and easy extension across data sources.
The custom Mavenlink plugin is central to this approach. It is being designed to fetch project-level metrics relevant to delivery health, including billable utilization, actuals vs. estimates (sourced from JIRA), margin and budget consumption, and project allocation data. These metrics are commonly tracked in Mavenlink’s Project Health reports and can help tie engineering efficiency directly to commercial outcomes. The team is currently validating API access, data availability, and consistency of fields like original estimates across projects to ensure seamless integration.
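The plugin itself is still being designed, but the rough Python sketch below shows the kind of extraction it would perform against Mavenlink's v1 REST API. The specific workspace fields referenced are assumptions about what a delivery-health mapping would need, not a confirmed schema:

```python
import os
import requests

# Pull project (workspace) records from Mavenlink; the DevLake plugin would
# perform a similar extraction before mapping fields into DevLake's domain
# model. Field names below are illustrative assumptions, not a verified schema.
token = os.environ["MAVENLINK_TOKEN"]

resp = requests.get(
    "https://api.mavenlink.com/api/v1/workspaces.json",
    params={"per_page": 50},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
payload = resp.json()

# Mavenlink side-loads records keyed by id under the collection name.
for workspace in payload.get("workspaces", {}).values():
    print(workspace.get("title"), workspace.get("budget_used_in_cents"))
```

In DevLake proper this logic would live in a Go plugin; the sketch is only meant to show the shape of the data flow from Mavenlink into our metrics layer.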
Our metric design and rollout are grounded in best practices like the SPACE framework, helping us maintain a healthy balance across Satisfaction, Performance, Activity, Communication, and Efficiency. From recent internal working sessions, we’ve acknowledged gaps in our coverage of these dimensions.
We are actively addressing these through broader conversations and change management practices, with a strong emphasis on interpretation guidance to avoid misapplication or over-optimization of any single metric.
As a starting point, we’ve prioritized the “Stable Platform” metric for CDM projects, with active exploration into integrating Site24x7 data with JIRA issues. A set of discussions is underway to refine definitions, data sources, and what constitutes actionable trends for platform stability.
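As one illustrative direction (a sketch under assumptions, not a settled design), a small webhook receiver could turn Site24x7 down alerts into Jira issues, so downtime frequency and duration become queryable alongside delivery data. The payload field names, project key, and issue type below are placeholders:

```python
import os

import requests
from flask import Flask, request

app = Flask(__name__)

JIRA_URL = "https://example.atlassian.net"  # hypothetical site
auth = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])


@app.post("/site24x7-webhook")
def handle_alert():
    # Payload field names are assumptions about the alert body; adjust them
    # to match the actual Site24x7 webhook configuration.
    alert = request.get_json(force=True)
    monitor = alert.get("monitor_name", "unknown monitor")
    status = alert.get("status", "DOWN")

    if status == "DOWN":
        # File a Jira issue so each outage becomes a trackable record.
        resp = requests.post(
            f"{JIRA_URL}/rest/api/2/issue",
            json={
                "fields": {
                    "project": {"key": "OPS"},  # hypothetical project key
                    "issuetype": {"name": "Incident"},  # instance-specific
                    "summary": f"Platform down: {monitor}",
                    "description": f"Site24x7 reported {monitor} as {status}.",
                }
            },
            auth=auth,
        )
        resp.raise_for_status()
    return {"ok": True}


if __name__ == "__main__":
    app.run(port=8080)
```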
Our north star is not a dashboard; it’s a culture of clarity and ownership. Through the GSP team’s efforts, we are laying the foundation for a metrics-led culture that empowers teams, informs leadership, and enables exceptional service delivery. GSP is not just a framework; it’s a mindset shift.
If you’re curious about our journey or want to explore building a similar approach in your own organization, let’s talk. We’re committed to learning in the open and growing alongside our clients and partners.