Not Every Detail Matters

I was looking at a user story for one of the teams I support. The story was about improving a very particular page. Our users do see it. But only for 5-10 minutes per week, if they've started their work early. We deploy this product weekly just before working hours. Deploying currently involves taking the whole product down. Customers can sign up for notifications so they're reminded about this downtime window.

The story was to improve the look of this downtime page. People might see it and be confused if the stars aligned and:

  • They started work early.
  • They hadn't signed up for the notification.
  • They hadn't seen the web app with just a logo on it before.
  • They didn't try it again in a few minutes.

So I asked the ticket writer, "This doesn't impact customers much (5-10 minutes per week). Is fixing this worth the effort?"

They wrote back, "I believe in 'Every detail matters.' This particular detail should take very little effort to realize, so my answer on this question is Yes."

It's possible they're right to pick this ticket up. It was a small enough effort that we might as well do it. If they're wrong, they're the one feeling the pain of explaining the ticket to the team, verifying the fix, deciding what to put in the release notes, etc. It's a safe-to-fail experiment for me as a quality coach.

But I didn't have the same mindset. I don't believe that we should fix everything we find in our app that violates my expectations. I don't think it's possible to identify one correct set of expectations and priorities that our users will share. I don't think the things that we've already fixed will stay fixed. I don't think it's possible to cover every issue with an automated test.

I think we need to let go. We need to decide what's important, and focus on that. The details of the downtime page -- the new design, the time the team spent updating it, the effort I'd spend having the conversation about it -- none of them mattered much to me. We need to notice details, and also know when to turn off the part of our brain that's bothered by them. We need to think about the risks our tests could uncover; our goal isn't 100% test coverage. In short:

Not every detail matters.

We are limited by our attention, our energy, our health, the meetings on our schedule, the time left on this earth. Software is complex enough that it's very unlikely we'll be able to solve every issue we find. The more time we spend solving the unimportant ones, the less time we have left to look for the important ones. Or to decide what is important. Or to understand our users well enough to evaluate the relative importance of such issues.

Jerry Weinberg cheekily noted the impossibility of this endeavor in his book, accurately titled Perfect Software and Other Illusions About Testing. The Black Box Software Testing Course on Test Design emphasized the need for testers to balance risk vs. coverage. Its focus on scenario testing insisted we tie our testing to a user's journey through the software, one that was:

  • coherent
  • credible
  • motivating
  • complex
  • easy to evaluate

I know this is the right approach. It will leave time to build new features and learn new skills. It's what will make it possible for us to feel fulfilled and motivated in our work.

Now I just need to figure out how to scale this mindset.

EuroSTAR Testing Voices 2021

In June of 2021, EuroSTAR ran a free online event. Having either Maaret Pyhäjärvi or Keith Klain on the program would have been enough to add this to my calendar; having both got me there.


Maaret Pyhäjärvi: Testing Becoming Harder to be Valuable

As usual, Maaret sees a bright and exciting future for her role that I have trouble reconciling with my reality. In Maaret's vision, crowdsourced testers find the obvious bugs. She's left to skim the cream off the milk, performing the interesting, thoughtful work of understanding a complex system. She and her fellow testers are not repositories for or regurgitators of information. They share testing ideas before they become results or automated tests, with the goal of making others more productive. They tell compelling stories of the unseen: bugs that never happened, testing they never performed.

Dream big, Maaret.

Panel: Different Teams, Different Testers

Veerle Verhagen hosted this panel. If you're feeling a bit exhausted, I can recommend a small dose of Veerle directly to your brain. These were my top three takeaways:

  • The best way to skill up in automation is to do it on the job.
  • You can give assignments back.
  • Raise problems even if they're outside the current scope.

Keith Klain: Test as Transformation

Keith speaks from a position of connecting testing to the business strategy, which is exactly what he recommends we all do. Talk the talk of driving innovation and managing risk to get people's attention, and connect what you're doing to the money. Writing a pile of cheap, flaky checks (or even consistently passing ones!) may give you a false sense of security that hides the bigger risks. Strive to gather enough information to soundly evaluate the risks in your products: enough to understand what would have happened if you hadn't caught an issue, and how to prevent something similar in the future.


Thanks to the EuroSTAR team for pulling this together (and not charging for it).

This Diagram Asked More Questions Than It Answered

I made a diagram that asked more questions than it answered.

As Quality Lead for the seven engineering teams in my unit, I'm tasked with getting developers to think more holistically. I'm not an expert in any of the individual parts of the product the way the teams are. I aim to have a bird's-eye view of the whole, particularly when it comes to the testing we're doing. Each team is thinking about thorough coverage of their part; I'm looking at the through-line across the products.

So after only a few weeks on the job, when a particular behavior got the software development managers asking me "Did anyone test this end-to-end?", all I could say for sure was "I haven't!" It did get me thinking, though, and asking them:

  • What do you mean when you say end-to-end?
  • Do you mean an automated test, or someone trying it at least once manually?
  • Is the one path I have in mind the same one you're picturing?
  • Is it important to have some type of coverage over every possible path, or can we decide to focus on the risky ones?

I started by drawing what I had in mind. It looked like this. The colored boxes show which team owns the code. The outlined boxes show actions a user could take.

[Diagram: A humble beginning]

I went to show it to people all around the department (developers, testers, product, UX, analytics, managers) so they could tell me where I was wrong or needed more detail. (More on how that builds credibility in this post).

Each person I showed it to added more boxes, or split existing boxes into more specific actions. Some even added more teams. I approached the teams humbly, acknowledging that though I was being asked about end-to-end testing, I didn't have a good view of what that meant right now. I acknowledged that they were the experts in their own domains. I'd reviewed roadmaps and documentation to do what I could before I spoke to them, so they only had to offer corrections instead of whole explanations. And I thanked them for correcting my ignorance and blind spots as we updated the diagram together.

To our analytics expert, I said "I get asked a lot about the end-to-end flow, but I'm not sure what that means exactly. Do you have the same problem?" A wave of common struggle and understanding washed over them.

By the time 15 people had given their perspective, the diagram had exploded into this monstrosity.

[Diagram: A completely overwhelming mess]

This diagram was hard to read. It wasn't clear (to anyone but me) where the entry and exit points were. The key was hard to reference and had too much explanation. At a glance, the main takeaway was "This is complicated." This did live up to one of my goals: getting people to see that "test everything end-to-end" is not a straightforward, single path. We wouldn't test every path or promise full coverage from the start (or ever, but that's another conversation). But we could say: "There's a lot to cover here. Let's choose the most important path to start."

In showing the diagram to our sales and UX experts, and again acknowledging that this kind of diagramming was more their expertise than mine, I got nudged in the direction of Business Process Model and Notation (BPMN). I kept my teams and user actions, which that notation didn't anticipate, but putting everything in rows and columns gave my diagram an air of professionalism it didn't have before.

[Diagram: Something bordering on approachable]

A different UX expert said they'd been too overwhelmed to try to process my overwhelming mess of a diagram, but they'd been able to read and learn from this attempt.
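If you'd rather keep a diagram like this in version control than in a drawing tool, something similar can be scripted. Here's a minimal sketch using Python's graphviz package, with one cluster per team standing in for a BPMN-style lane; the team names and user actions are hypothetical placeholders, not the real ones from my diagram.

  # A minimal sketch, not my actual diagram: teams as lanes, user actions
  # as boxes, edges as the flow. Requires the Graphviz system binaries
  # plus `pip install graphviz`. All names below are hypothetical.
  from graphviz import Digraph

  flow = Digraph("end_to_end", graph_attr={"rankdir": "LR"})

  # Graphviz only draws a box around a subgraph whose name starts
  # with "cluster"; each cluster approximates one team's lane.
  with flow.subgraph(name="cluster_signup_team") as lane:
      lane.attr(label="Signup Team (hypothetical)", style="filled", color="lightgrey")
      lane.node("register", "Register account", shape="box")
      lane.node("verify", "Verify email", shape="box")

  with flow.subgraph(name="cluster_catalog_team") as lane:
      lane.attr(label="Catalog Team (hypothetical)", style="filled", color="lightblue")
      lane.node("browse", "Browse products", shape="box")
      lane.node("order", "Place order", shape="box")

  # The edges that cross team boundaries are the candidate
  # "end-to-end" paths worth arguing about.
  flow.edge("register", "verify")
  flow.edge("verify", "browse")
  flow.edge("browse", "order")

  flow.render("end_to_end_flow", format="png", cleanup=True)  # writes end_to_end_flow.png

A script is just one option; the real requirement is that whatever tool you pick can survive fifteen rounds of feedback and corrections without you dreading the next update.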

Our software development managers and product experts were the ones asking about the state of end-to-end testing initially. Showing them the diagram got them thinking along exactly the paths I wanted to trigger:

  • Can we have one of these for a different product we're building?
  • What would this diagram look like if we only followed one user persona's journey?
  • What else might be included in end-to-end if we think outside the scope of the seven engineering teams in our unit?
  • How do people buy the product? How are they onboarded?
  • How do people learning how to use the product discover these steps you've outlined? How do they know which direction they want to go?
  • How do people make decisions at these decision points? How can we gain more insight into how they're doing that?

I think I probably could have helped perform some end-to-end testing with a collection of testers from the three teams I initially identified in my first diagram, gone back to the managers, and proclaimed "yes, we're end-to-end testing." But my job isn't to provide simple answers. It's to get people thinking about the whole, and asking the important questions for themselves. The journey of this diagram did exactly that.


Do you find yourself answering questions that you see as misguided? How can you guide people to ask better questions?