
At John Lewis Partnership, measuring developer platform value

February 4, 2026

Looking for patterns in this data led us to discover additional features that would be useful to build. Two major examples were change handling and operational readiness.

JLP’s service management processes were geared towards handling complex release processes across multiple large systems and/or teams – but we had fundamentally changed our architecture by adopting microservices. This empowered teams to release independently at will, and therefore to manage the consequences of failed changes themselves. We used the data we’d collected about change failure rates and the frequency of small releases to justify a different approach: allowing tenants to automatically raise and close changes as part of their CI/CD pipelines. After clearing this approach with our Service Management team, we developed a CLI tool that teams could use within their pipelines. This had the additional benefit of allowing us to capture useful data at the point of release, rather than scraping more awkward data sources. The automated change “carrot” was very popular and widely adopted, shifting the approval point left to the pull request rather than later in the release process. This reduced wasted time, change-set size, and the risk of collisions.

In a similar vein, with more teams operating their own services, the need for a central site-wide operations team was reduced. We could see from our metrics that teams practicing “You Build It, You Run It” had fewer incidents and were resolving them much more quickly. We used this as evidence to bring in tooling to help them respond to incidents faster, and decouple the centralized ops teams from those processes — in some cases allowing them to focus on legacy systems, and in others, removing the need for the service entirely (which resulted in significant cost savings, despite the fact that we had more individual product teams on-call). This, and supporting observability and alerting tooling, was all configured through the platform’s paved-road pipeline described in our previous article.

The DORA metrics helped us architecturally as well. Operational data shone a light on the brittleness of third-party and legacy services, driving greater investment in resilience engineering and alternative solutions, and in some cases causing us to re-evaluate our build-vs.-buy decisions.
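For readers unfamiliar with them, two of the DORA metrics mentioned here can be computed directly from the release records captured at the point of release. The record schema below is an assumption for this sketch, not JLP's actual data model.

```python
# Illustrative computation of two DORA metrics from simple release records.
# Each record is a dict with a "status" field; the schema is assumed.

def change_failure_rate(releases: list[dict]) -> float:
    """Fraction of releases whose change was closed as failed."""
    if not releases:
        return 0.0
    failed = sum(1 for r in releases if r["status"] == "failed")
    return failed / len(releases)


def deployment_frequency(releases: list[dict], window_days: int) -> float:
    """Average deployments per day over an observation window."""
    return len(releases) / window_days
```

Small, frequent releases should show up as a high deployment frequency with a low change failure rate, which is exactly the pattern used above to justify the automated-change approach.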

Choosing what to measure

It’s very important to choose wisely what to measure. Experts in the field (such as Laura Tacho) influenced us to avoid vanity metrics and to be cautious about interpreting the ones we do collect. It’s also important for metrics to be meaningful to the target audience, and presented accordingly.

As an example, we communicate with our teams about cost and vulnerabilities, but the form this takes depends on the intended audience’s role. We send new vulnerabilities or spikes in cost directly to product teams’ collaboration channels, because experience has taught us that having our engineers see these vulnerabilities results in a faster response. On the other hand, for compliance reporting or review by team leads, reports are more effective at summarising the areas that need action. Because if we know one thing, it’s that nobody wants to top the “vulnerabilities outside of policy” dashboard!

Historically, it was not unusual for us to look at measures such as the number or frequency of incidents. But in a world of highly automated response systems, this is a trap, as alerts can easily be duplicated. Focusing too much on a number can drive the wrong behavior — at worst, deliberately avoiding creating an incident at all! Instead, it’s much better to focus on the impact of the parent incident and how long it took to recover. Another example is reporting on the number of vulnerabilities. Imagine you have a package that is used extensively across many components in a distributed system. Disclosing that the package has a vulnerability can create a false sense of scale, when in fact patching the base image deals with the problem swiftly. Instead, it’s better to measure the speed of response against a pre-agreed policy based on severity. This is a much more effective and more reasonable metric for teams to act on, so we see better engagement.

It’s also important to provide as much context as possible when presenting the data so that the right conclusions can be drawn, especially where those reports are seen by decision-makers. With that in mind, we combined the raw metrics we could visualize with user opinion about them. This helped to bring that missing context: Is the team that’s suffering from a high change failure rate also struggling with its release processes and batch size? Is the team that’s not addressing vulnerabilities quickly also reporting that it’s spending too much time on feature development and not enough on operational matters? We reached for a different tool, DX, to help us bring this sort of information to bear. In our follow-up article, we’ll elaborate on how we did this and how it prompted us to expand the data we collected about our tenants. Stay tuned!

To learn more about shifting down with platform engineering on Google Cloud, start here.
