CVs From A Hiring Perspective

In my department at work, every tester is on their own team. I'm in a position where I look at tester resumes before we decide to screen them. Some of the resumes come in from people applying on the company website. We've also got a recruiter searching for people on LinkedIn. She asked me before we posted a new job: if we're not requiring experience with a particular tech stack, what should she search for?

I didn't have a great answer for her at first. It took me some time to think about what we'd value in this role, what would make a candidate's resume stand out.

I don't look for particular buzzwords on resumes, and no particular buzzword can eliminate you from the hiring process. But I do look for some of the same things I look for in a test report: an indicator of the depth and quality of your work, and an ability to connect your work to the value to the development team or the business.

A resume that regurgitates the calendar or job description of the candidate doesn't stand out to me:

  • Attended standup, refinement, sprint planning, sprint review
  • Made test plan, executed tests, reported tests

I want to know what was hard about it. I want to know who else was there and how you interacted with them. I want to know what you do that other people in your position don't do.

  • Facilitated hybrid standup across three time zones
  • Refined ideas into user stories with acceptance criteria that met our definition of ready
  • Spoke to key stakeholders to identify risks and incorporate them into a test plan
  • Shared test results verbally to spark conversation with developers about impact
  • Presented test plan at guild to inspire other testers to shift left

I was on a TestBash Careers panel about having a CV that gets you noticed. (I regret to inform you that my ability to think of an answer and unmute is not always fast enough to chime in!) I'm digging into the unanswered questions on the club thread, and can tell you that my fellow panelists and host are more shrewd about when to reveal a personal detail on a resume and when not to. I am privileged enough to lean towards the truth.

Back at work, I told the recruiter there were a couple of things she could search for: exploratory testing and integration testing. Our desire to cast as wide and inclusive a net as possible has meant an investment of her time, and of the time of the team members who speak to the candidates first. It's not cheap. I hope it's worth it.

Join us if you're interested.

Photo by Christina @ wocintechchat.com on Unsplash

Communicate Using Three Layers of Information

I've joked about writing Buzzfeed-style clickbait titles for JIRA tickets.

Partly, clickbait titles gave me an easy way to search the database, or type a few words in my browser bar, to pull up the right issue quickly. The JIRA ticket itself had steps to reproduce, an explanation of why it was important, and an annotated screenshot for people to easily understand if what I saw was happening on their machine.

Partly, clickbait titles gave us a convenient shorthand to help us remember whether it's this logout issue or some other logout that we're talking about.

Partly, clickbait titles made the developers interested to investigate and pick up the issue.

I was reading Giles Turnbull's The agile comms handbook last weekend and found the lure part of the three communication layers he describes (lure, context, and detail) reminiscent of the clickbait JIRA ticket titles I'd written. I realized I'd seen these three communication layers (title, description, details) described similarly in lots of places before.


Lure, context, detail

This whole (short and great) book is about how to communicate in ways that move as fast as the work does on an Agile team while still being easy for busy people to consume. In the book, Giles Turnbull describes creating different layers of information as lure, context, and detail. Lure grabs the attention of busy people and gets them interested in knowing more. Context gets people to the point where they know just enough. Detail is for people who need to know more and have the time to follow a link to another page, read a whole PDF, etc.

Why, how, what

Simon Sinek describes it as why, how, what in this TED Talk on "How great leaders inspire action". He gives the example of an advertisement for Apple, where starting with the why is much more inspiring and motivating than starting with the what. Getting people to buy into your vision will ensure that they follow along to the goal.

Title, executive summary, list of issues

When we worked together, Martin Hynie taught me to write weekly test reports with a title, executive summary, and list of issues, with the idea that the executive summary should be enough for the busy person to understand, but that less busy and more curious people would want the detail in the list of issues.

Intent, location, details

Maaret Pyhäjärvi describes the ensemble (mob) programming practice of communicating at the highest level of abstraction (intent) before being more specific about where we're going (location) or if necessary, mouse clicks and keystrokes (details). Giving keystrokes and mouse clicks to someone who knows how to operate the software is frustrating, but so is giving a high-level explanation to someone who's never used the software before. Expressing intent first can lead to better action, even if another member of the ensemble has a different action in mind than you do. Being able to identify when to jump between the levels is key for effective communication. Knowing when to jump to a different layer of communication is the skill I find hardest to build while learning to ensemble.

Title, lede, body

In the content management system I tested for New York Public Radio, article pages were broken down into title, lede, and body, presumably reflecting something every journalist learns on their first day at a newspaper. On the website, title and lede were displayed on the homepage and tag pages. You'd only see the body once you clicked in to read or listen to the story.


I read so many JIRA tickets and Slack messages that only contain the lowest/detail level of information. The person trying to bring everyone else up-to-speed on an issue does need to include all the detail. It makes sense that that's where their mind is. Giles Turnbull identifies why the detail layer of information is the default: the details already exist. Creating the other lure and context layers of information takes more work.

Being able to zoom out and answer the kinds of questions you'd expect in a refinement meeting ("Who will benefit from this? How does it work now and why does it need to change?") helps you prioritize the work. It helps the team understand how what they're doing fits into the bigger picture. Learning how to write the lure and the context is a separate technical skill that needs to be recognized and built.


How did you learn to break down your communication into different layers? Do the title, headers, and paragraphs of this blog post fit this model? Which of these breakdowns resonates most with you? Where are you practicing communicating this way?

Photo by Clark Van Der Beken on Unsplash

SoftTest 2019 in Dublin

I got the chance to present at SoftTest in Dublin, Ireland in 2019. It was the last time they were going to host the conference, and unlike most events, they already knew that in 2019. So I was even more delighted to be part of it and finally see a bit of Dublin (the original Boston, MA if you ask me).


Janet Gregory gave a talk version of a workshop I'd co-facilitated with Lisa Crispin a few weeks earlier about testing in DevOps. She connected the questions we ask and the work that we do as testers to the business risk. Existential questions like "Why am I here?" and "Are we delivering the right thing?" are encouraged. Visualizing our work and collaborating allows us to get feedback from small experiments.

In one experiment, Conor Fitzgerald and Rob Meaney tried team-based testing. Recurring issues like build failures, end-to-end testing failures, and ultimately rework motivated their teammates to embrace change. They started from automation completely devoid of risk analysis and testing tasks invisible on the board. They used that crisis to build a new habit of radiating information about testing, making both their own work and the testing work waiting to be picked up as visible and pick-up-able as possible.

Margaret Dineen spoke about making quality engineering visible and tangible, and connecting it to business value. The strategic intent of the organization informs what valuable software you deliver, and vice versa. Identifying the Impact, Confidence, and Ease of implementation (ICE) drives the software teams to actually deliver things the user cares about. Tracking outcomes across silos only works if you're open to measuring how badly disconnected software delivery is from the needs of your users.

Joe Joyce spoke about quality in an API's lifecycle, something I would come to know and understand better in the time since this conference. He focused on getting the specs right, since they're faster to change and roll back, and serve as a point of collaboration even before the code or test code has been written. I couldn't agree more.

Hugh McCamphill's talk was about knowing your automation. Making abstractions, while inherently leaky, should be done at the domain rather than the implementation level to cut down on maintenance costs. Focus on saving time building; invest in learning.

Jean-Paul Varwijk gave a talk I remembered as a sweeping history of software testing, but see now in my notes that he called it "Testing as Advantage". Testing must be an advantage if it's stuck around this long, right? Build up collaboration skills to remove the separation needed to progress. Supply information continuously. Collecting sensible metrics and using them for the right purpose will tell you way more than tests will.


What a high-quality collection of speakers and talks for such a local community. This was one of the last places I got to travel before travelling became infinitely harder. I'm so grateful to have been learning together in the same place at the same time as so many greats.

What You're Missing in Your Conference Abstract: Spoilers

I've had the unique honor of participating in the program committee for Agile Testing Days for 2021 and 2022. I get to see anonymized submissions for talks and workshops (not keynotes), rate the entries on a number of criteria, and provide a few sentences of feedback for the author and the conference organizers to understand how and whether the submission fits in the program. I've also helped review papers in the Ministry of Testing call for papers. Anyone can.

Almost all of them are missing one thing: spoilers. I want spoilers. I want them to tell me how they faced a problem or general trend in the market at their job and what they did about it. I want enough detail, any detail, to know what I'll be seeing beyond the first two slides.

Here's what that looks like for a talk and a workshop.


Talks

For a talk, I'm expecting a particular format:

  1. Description of a problem the author has experienced
  2. Journey or decision point that led the author to change their behavior
  3. Details about the direction going forward

Sometimes I'm missing the author's personal connection in 1. (looks like they'll summarize a book, article, someone else's work, etc.) or why a change was needed in 2.

But it's 3. that I long for. Authors tell me that they solved a problem, or had trouble solving it, but not how. "I spent three months migrating our tests from Selenium to Playwright and faced trouble." Where did you face trouble? Was it because of differences between the languages? Developers couldn't read the failing test output anymore? What happened?? Tell me about the journey of what that looked like.

Workshops

For a workshop, I want to see:

  1. Description of a skill the author was missing
  2. Journey where the author built this skill
  3. Journey participants will go on to build this skill

Workshop ideas are great with 1. and often include 2. But you know what participants will mostly care about before they get trapped in a room for hours with you? That's right, it's 3. Will this be a lecture disguised as a workshop so the author has more time to speak? Will participants be working in groups? Will this get them on the first step towards building a skill? How much will they be able to do differently at work on Monday? Give me some sense of what the activities are, how long they might take, and how much you expect people to achieve during them.


More tips on conference submissions

Anytime I submit an abstract to a conference, I find something in Rob Lambert's Blazingly Simple Guide To Submitting To Conferences that makes me reconsider what I've written. Read it if you're anywhere on the journey from "do I even have an idea for this conference?" to "you mean I'm supposed to speak on stage in front of all these people sitting before me?"

Lisi Hocke also has a soup-to-nuts overview of conference speaking that will get you feeling like you can do this.

Photo by Brandi Redd on Unsplash

Tell Your Colleagues About Your Work

At the interview for my very first job at my hometown library, the librarian recounted my work as a volunteer. In addition to reshelving the returned books, I'd put the books that were already on the shelves back in order. She asked if I had anything else to share. "No," I said. I let the librarian speak for me. My work spoke for itself.

My job at the library took no coordination, little or no communication, and was quite boring. There was always someone else at the desk to talk to patrons and handle the actual "returning the books" part. I could just listen to my iPod and shelve the books. The disappearing books and neatness of the shelves were evidence of my work that anyone could notice and evaluate at a glance.

My work in software has been quite the opposite. It's been hard to evaluate by a person who doesn't know where (or how) to look. They might notice if I'd done my job very poorly or not at all. But the difference between a job very well done and a job just, well, done has been overwhelmingly large and invisible.

I spent most days in my career thus far as a tester on a development team with a daily standup. Standup was the place I could make at least a small part of my work visible. Contributions only became clear in bigger groups if someone else championed my testing work, or if the impact of my leadership was felt more widely.

Now, I'm serving several teams. I don't have a daily standup. I've been working remotely for two years. I could put in a lot of work in my current role that no one would ever see. It's been a strange shift from feeling like everyone is wondering how I spent an hour to being much less accountable for my time. I have the freedom to waste a lot of time on something that isn't important or doesn't matter to anyone else.

But that's not how I want to spend my time. I want to use my time and effort effectively. There's always more to do. I want to share with my colleagues what I've done and what I'm planning on picking up next, so they can tell me if my effort is duplicated, wasted, or in need of a course-correction.


Where, who, what, and how I tell them

I have invitations to each of the standups for the teams I support. I go once every couple weeks to keep an ear out for how I can help them. Sometimes that's the right time to share a bit of what I've been working on too, if it's just for them.

I've also held myself accountable in three other meetings that span across the department:

  1. There's a weekly meeting for the whole department. There I share highlights of general interest for less than five minutes and add linked details in the shared agenda.
  2. I do a similar thing for the bi-weekly sync of all the engineering leads, tailored to their interests.

In these quick updates, I share why I picked up the work, who it serves, and what problem it solves. I've seen so many sprint demos that share the "what" but not the "who" or the "why" that I try to include all three pieces in my stories.

  3. I host a meeting every few weeks with the engineering and product managers that's all about my work. They're the ones who need to see the outcomes and impact, so it's worth investing a bit of time to make sure I'm focusing on the right things for them.

In the meeting focused on my role, I let the conversation be freer. We dig into more of the why's and "is this valuable?"'s of what I'm up to.

I've also got a 1-on-1 with my manager. I've got 5-10 minutes every two weeks to share and get advice on particularly triumphant or hairy situations not fit for a wider audience. The rest of our half hour 1-on-1 is my boss responding to what I shared and sharing information with me from the wider company context.


Why I tell them

Why do I do all of this? Why do I invest so much time in talking about the work?

Because the work does not speak for itself. Testing is not like shelving books in a library. Being able to explain my work helps me collaborate. It makes it clear what kinds of problems they can come to me with, and ensures that I'm top-of-mind when such problems do arise.

Sharing what I'm working on builds trust. It's not arrogant, bragging, or self-indulgent. It's a necessary part of making sure I'm doing the right thing, and getting credit for it.

Photo by Shiva Prasad Gaddameedi on Unsplash

The Five Hour Tester

I ran a series of testing skills workshops at work that Joep Schuurkes and Helena Jeret-Mäe developed. I used both their teaching materials (available at FourHourTester.net) and the schedule Joep followed when he ran them at the office (splitting the exercises up into five hours) to run the series again. The five topics are:

  1. interpretation
  2. modelling
  3. test design
  4. note taking
  5. bug reporting

What

This was what got me enthusiastic about giving these workshops: the material. It was topics, concepts, blog posts, and books that I'd been studying since I discovered the wider community of testers about ten years ago. Sharing the oldies but goodies like the "Mary had a little lamb" heuristic and the Test Heuristics Cheat Sheet with a new set of people makes me excited about what they'll discover about their software with these tools in their toolbelt.

Why

The company had grown enough in the years since Joep gave the workshops that there was demand for the series again. Testers are embedded on teams by themselves, reporting to developers. The workshops offered an opportunity for testers to grow (and affirm) their skills while building relationships with the testing-minded outside their teams.

Who

I posted an invitation to the workshops in the Slack channel for the R&D department, and was surprised to see interest in attending from elsewhere in the company. Attendance ranged from about 20 participants in the first session to a low of 5 during the school vacation week. (A learning initiative launched by the People & Culture department would streamline registration if/when I offer the workshops again.)

When

The sessions were in the hour right after lunchtime on Mondays for five consecutive weeks. That slot didn't require me to move or miss important recurring meetings. Recording the sessions made it possible to catch up on the ones people missed.

Where

I hosted the sessions remotely, which allowed for audio-visual parity for everyone participating in a way that hybrid sessions would not have. I recorded them so people could also watch them afterwards.

How

Each hour started with a short "lecture", the part where we'd name and notice the skill. Next came an exercise, giving participants hands-on practice that drew their attention to the testing skill. This part took about half of the hour. The other half of the hour was for individual reflection and debrief.

I wasn't sure if people could stay or actively participate for the whole hour, so I had people debrief individually and with the whole group instead of in pairs or small groups. Our conversations strayed from the testing skill into the natural things you'd expect during an experiential workshop: talking about the exercise itself, or talking about how this fits in with your work. It was delightful to see people making the connection from one lesson to another and reflecting on what they'd want to change in their day-to-day.


The feedback I collected from the participants indicates that everyone got something different out of it. Each lesson resonated with somebody, which felt like a success to me.

Now I'm curious: what testing skills do you teach to a wide audience at your company? What materials do you use?


Photo by J. Kelly Brito on Unsplash

Strengthen Your Code Review Skills

I spent my first two years at my current company getting my code reviewed and the following almost two years reviewing 3-10 merge requests per week. Our tech stack was Python, with pytest as our test runner, the requests library for API tests, and Selenium for browser tests, all hosted in our company's paid gitlab instance. All that experience shaped how (and whether) I offered feedback on the merge requests I reviewed for members of my own team and neighboring teams working in the same tech stack.

Define the relationship.

There are power dynamics at play in any relationship at work. Members of my team had to have a really good argument to refute one of my "suggestions": I was both their test specialist and the team lead evaluating their ability to respond to feedback and change their behavior come performance review time. No pressure!

For members of other teams, they had more power to push back. It could empower them with the knowledge I shared, but they were free to reject it.

Focus on what matters.

When I review a merge request, I start with the question "what is this code supposed to do?" If it's a merge request for my team, the JIRA ticket number in the title or the branch name would clue me in. For code from other teams, champions would use the description field to explain what the product change was and how the test code supported that. Most merge requests left me guessing a bit. I'd have to read the code contained in the tests to figure out what the test, and ultimately the product, was supposed to do.

Reading the code also got me thinking about the things I was best equipped to help the code submitter with: maintainability and "what if?" scenarios. As a tester, I could look at a list of tests that create, read, and delete a thing and ask "is update also part of the picture here?" As a code reviewer with a longer tenure at the company, I had a more informed view of whether copy and pasting would work or if a new function was needed.
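To make that concrete, here's a rough sketch of the kind of test module that would prompt that question from me. The endpoint and tests are invented for illustration, not lifted from a real merge request:

    # Hypothetical test module: create, read, and delete are covered,
    # which invites the reviewer question "is update also part of the picture?"
    import requests

    BASE_URL = "https://example.test/api/widgets"  # invented endpoint


    def test_create_widget():
        response = requests.post(BASE_URL, json={"name": "spanner"})
        assert response.status_code == 201


    def test_read_widget():
        response = requests.get(f"{BASE_URL}/123")
        assert response.status_code == 200


    def test_delete_widget():
        response = requests.delete(f"{BASE_URL}/123")
        assert response.status_code == 204

    # Reviewer comment: "Should there be a test_update_widget (PUT or PATCH) too?"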

We had two linters set up to run on every commit: flake8 covered style enforcement (indentation, blank lines, etc.) and vulture identified unused code. For issues of style that a machine couldn't decide, we had written guidelines to point to. I pointed to these three the most often:

  1. comments explain why the code is the way it is, not what it does (so code is clearer to read and update)
  2. setup and teardown should take place outside the test (so pytest reporting tells you there's an error instead of a failure when something's off)
  3. API tests assert the status code before any details about the response body (because the body's not going to have the right stuff in it anyway if the status code is wrong; there's a small sketch of this and the previous guideline right after this list)
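As a minimal sketch of what the second and third guidelines might look like in a pytest-and-requests setup like ours (the endpoint and payload here are invented for illustration):

    import pytest
    import requests

    BASE_URL = "https://example.test/api"  # invented endpoint


    @pytest.fixture
    def new_order():
        # Setup lives in a fixture, outside the test body. If creating the
        # order fails here, pytest reports an error in setup rather than a
        # test failure, which points you at the preconditions instead of
        # the behavior under test.
        response = requests.post(f"{BASE_URL}/orders", json={"item": "book"})
        assert response.status_code == 201
        order = response.json()
        yield order
        # Teardown also happens outside the test.
        requests.delete(f"{BASE_URL}/orders/{order['id']}")


    def test_order_can_be_fetched(new_order):
        response = requests.get(f"{BASE_URL}/orders/{new_order['id']}")
        # Assert the status code before anything about the body: if the
        # status is wrong, the body won't have the right stuff in it anyway.
        assert response.status_code == 200
        assert response.json()["item"] == "book"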

I would give feedback about these topics, trying to ask questions to disambiguate my observations from interpretations. I knew that the person who'd written the code had spent more time and thought steeped in the problem than I had. Questions allowed me to assume competence while gathering evidence to the contrary.

As I read other people's code, I saw lots of weird stuff: stuff I would name differently, stuff I would put in a different order, stuff that took up more or fewer lines than I would have used to write the same thing. My experience living in a non-native English-speaking culture served me well here: I let it go. Was the test name meaningful to them and their team? Did putting it across a couple more lines help them understand it better? Was it actually a problem with what the code did or just a personal opinion? Did they want to set the constant right before they used it instead of at the top? Go for it! It works, I can understand what they meant, and that should be the threshold. My review was not an opportunity for me to show off my Python skills. I was there to help the code submitter with their tests. I reserved the right to remain silent on unimportant matters.

Communicate well.

Praise the good!

Notice when people have done something well and praise them for it! Positive reinforcement is the best way to turn up the good on what's already happening in your code base.

Right level of abstraction

I reviewed merge requests that were 30% done that I mistook for 100% done; conversely, I saw merge requests at 110% done that I would have killed at 30%. A little [WIP] label in the name of the merge request, or a bullet list of which tests were still missing, helped me offer the code submitter the right kind of feedback at the right time.

Sometimes the code isn't the problem; the product is. I've seen a 500 HTTP status code returned for something the user screwed up, which should be in the 400 range. A code comment "Should this be a 400 response?" opened up a more interesting conversation about where the product was in its lifecycle, and the code submitter could lobby their team to change the product's behavior.

If having the conversation about the code isn't the right approach, I tried having the meta-conversation instead. "I'm not convinced this API spec is done. Where are you in that process?"

Code quality measurement: WTFs/minute

Right format

Tone is hard in writing. I do prefer writing, because it gives me the opportunity to have several drafts, separating my WTFs-per-minute from what the code submitter receives. I just don't always hit send. Before leaving a comment on a particular line in gitlab, I ask myself: is this the right format? Have I removed any judgy adverbs like "just", "obviously", or "actually"? Would a video call, a Slack message, or a comment on the whole merge request be more likely to be embraced?

The receiving

One of the many tough things about feedback is that the receiver determines the priority of the feedback. (For all the other tough things about feedback, read What Did You Say? The Art of Giving and Receiving Feedback.) The code submitter often completely missed what I meant the first time. Even if I thought I'd delivered my feedback as well as I could have, it wasn't always accepted. Everyone extracts different information from the same situation. The feedback that provides the most information can be the hardest to accept.

It doesn't have to be like this.

Asynchronous

You may have read my first paragraph and asked yourself "why is she doing so many async code reviews?" and you'd be right to ask! The company was scaling at a speed that forced me to optimize for "number of minutes per day without video calls" over shared understanding.

Synchronous

Did you know that working in a pair or an ensemble can do all this feedback and knowledge-sharing stuff in a better way? See more about how an ensemble made learning happen in this slide deck.

When I was able to, having a conversation helped me make sure that I was giving the right feedback at the right time to the right person. Pairing with the code submitter got not only the mistakes fixed, but also the thought processes behind those mistakes.

Don't take my word for it.

I have the benefit of learning from smart people who are also thinking through what code reviews are and what they can be. I have yet to be free at a time when the code reading club Felienne Hermans started has met, but I look forward to joining sometime in the future. Here is a collection of resources that I've already found useful:

Videos

Books

Blog posts

Quotes found on Twitter


Photo by Robin Mathlener on Unsplash

Not Every Detail Matters

I was looking at a user story for one of the teams I support. The story was about improving a very particular page. Our users do see it. But only for 5-10 minutes per week, if they've started their work early. We deploy this product weekly just before working hours. Deploying currently involves taking the whole product down. Customers can sign up for notifications so they're reminded about this downtime window.

The story was to improve the look of a page. People might see it and be confused if the stars aligned and:

  • They started work early.
  • They hadn't signed up for the notification.
  • They hadn't seen the web app with just a logo on it before.
  • They didn't try it again in a few minutes.

So I asked the ticket writer, "This doesn't impact customers much (5-10 minutes per week). Is fixing this worth the effort?"

They wrote back "I believe in 'Every detail matters.' This particular detail should take very little effort to realize, so my answer on this question is Yes."

It's possible they're right to pick this ticket up. It was a small enough effort that we might as well do it. If they're wrong, they're the one feeling the pain of explaining the ticket to the team, verifying the fix, deciding what to put in the release notes, etc. It's a safe-to-fail experiment for me as a quality coach.

But I didn't have the same mindset. I don't believe that we should fix everything we find in our app that violates my expectations. I don't think it's possible to identify one correct set of expectations and priorities that our users will share. I don't think the things that we've already fixed will stay fixed. I don't think it's possible to cover every issue with an automated test.

I think we need to let go. We need to decide what's important, and focus on that. The details of the downtime page -- the new design, and the time the team spent updating it, and the effort I'd spend having the conversation about it -- none of them mattered too much to me. We need to notice details, and also know when to turn our brains off to being bothered by them. We need to think about the risks our tests could uncover; our goal isn't 100% test coverage. In short:

Not every detail matters.

We are limited by our attention, energy, health, meetings on our schedule, time left on this earth. Software is complex enough that it's very unlikely we'll be able to solve every issue we find. The more time we spend solving the unimportant ones, the less time we have left to look for the important ones. Or decide what is important. Or understand our users better to be able to more effectively evaluate the relative importance of such issues.

Jerry Weinberg cheekily noted the impossibility of this endeavor in his book accurately titled Perfect Software and Other Illusions About Testing. The Black Box Software Testing Course on Test Design emphasized the need for testers to balance risk vs. coverage. Its focus on scenario testing insisted we tie our testing to a user's journey through the software that was:

  • coherent
  • credible
  • motivating
  • complex
  • easy to evaluate

I know this is the right approach. It will leave time to build new features and learn new skills. It's what will make it possible for us to feel fulfilled and motivated in our work.

Now I just need to figure out how to scale this mindset.

EuroSTAR Testing Voices 2021

In June of 2021, EuroSTAR ran a free online event. Having either Maaret Pyhäjärvi or Keith Klain on the program would have been enough to add this to my calendar; having both got me there.


Maaret Pyhäjärvi: Testing Becoming Harder to be Valuable

As usual, Maaret sees a bright and exciting future for her role that I have trouble reconciling with my reality. In Maaret's vision, crowdsourced testers find the obvious bugs. She's left to skim the cream off the milk, performing the interesting, thoughtful work of understanding a complex system. She and her fellow testers are not repositories for or regurgitators of information. They share testing ideas before they become results or automated tests, with the goal of making others more productive. They tell compelling stories of the unseen: bugs that never happened, testing they never performed.

Dream big, Maaret.

Panel: Different Teams, Different Testers

Veerle Verhagen hosted this panel. If you're feeling a bit exhausted, I can recommend a small dose of Veerle directly to your brain. These were my top three takeaways:

  • The best way to skill up automation is to do it on the job.
  • You can give assignments back.
  • Raise problems even if they're outside the current scope.

Keith Klain: Test as Transformation

Keith speaks from a position of connecting testing to the business strategy, which is exactly what he recommends we all do. Talk the talk of driving innovation and managing risk to get people's attention and connect what you're doing to the money. Writing a pile of cheap flaky checks (or even consistently passing ones!) may give you a false sense of security that hides the bigger risks. Strive to gather more information to soundly evaluate the risks in your products, enough to understand what would have happened if you hadn't caught it and how to prevent something similar in the future.


Thanks to the EuroSTAR team for pulling this together (and not charging for it).

This Diagram Asked More Questions Than It Answered

I made a diagram that asked more questions than it answered.

As Quality Lead for the seven engineering teams in my unit, I'm tasked with getting developers to think more holistically. I'm not an expert in any of the individual parts of the product the teams are building. I aim to have a bird's eye view of the whole, particularly when it comes to the testing we're doing. Each team is thinking about thorough coverage of their part; I'm looking at the through-line across the products.

So after only a few weeks on the job, when a particular behavior got the software development managers asking me "Did anyone test this end-to-end?" all I could say for sure was "I haven't!" It did get me thinking and asking them though:

  • What do you mean when you say end-to-end?
  • Did you mean an automated test, or someone trying it at least once manually?
  • Is the one path I have in mind the same one you're picturing?
  • Is it important to have some type of coverage over every possible path, or can we decide to focus on the risky ones?

I started by drawing what I had in mind. It looked like this. The colored boxes show which team owns the code. The outlined boxes show actions a user could take.

A humble beginning

I went to show it to people all around the department (developers, testers, product, UX, analytics, managers) so they could tell me where I was wrong or needed more detail. (More on how that builds credibility in this post).

Each person I showed it to added more boxes, or split existing boxes into more specific actions. Some even added more teams. I approached the teams humbly, acknowledging that though I was being asked about end-to-end testing, I didn't have a good view on what that meant right now. I acknowledged that they were the experts in their own domains. I'd reviewed roadmaps and documentation to do what I could before I spoke to them so they only had to offer corrections instead of whole explanations. And I thanked them for correcting my ignorance and blind spots as we updated the diagram together.

To our analytics expert, I said "I get asked a lot about the end-to-end flow, but I'm not sure what that means exactly. Do you have the same problem?" A wave of common struggle and understanding washed over them.

By the time 15 people had given their perspective, the diagram had exploded into this monstrosity.

A completely overwhelming mess

This diagram was hard to read. It wasn't clear (to anyone but me) where the entry and exit points were. The key was hard to reference and had too much explanation. At a glance, the main takeaway was "This is complicated." This did live up to one of my goals: get people to see that "test everything end-to-end" is not a straightforward, single path. We wouldn't test every path or promise full coverage from the start (or ever, but that's another conversation). But we could say: "There's a lot to cover here. Let's choose the most important path to start."

In showing the diagram to our sales and UX experts, and again acknowledging that this kind of diagramming was more their expertise than mine, I got nudged in the direction of business process modelling notation. I kept my teams and user actions in a way that notation didn't imagine, but putting everything in rows and columns gave my diagram an air of professionalism it didn't have before.

Something bordering on approachable

A different UX expert said they'd been too overwhelmed to try to process my overwhelming mess of a diagram, but they'd been able to read and learn from this attempt.

Our software development managers and product experts were the ones asking about the state of end-to-end testing initially. Showing them the diagram got them thinking along exactly the paths I wanted to trigger:

  • Can we have one of these for a different product we're building?
  • What would this diagram look like if we only followed one user persona's journey?
  • What else might be included in end-to-end if we think outside the scope of the seven engineering teams in our unit?
  • How do people buy the product? How are they onboarded?
  • How do people learning how to use the product discover these steps you've outlined? How do they know which direction they want to go?
  • How do people make decisions at these decision points? How can we gain more insight into how they're doing that?

I think I probably could have helped perform some end-to-end testing with a collection of testers from the three teams I initially identified in my first diagram, gone back to the managers, and proclaimed "yes, we're end-to-end testing." But my job isn't to provide simple answers. It's to get people thinking about the whole, and asking the important questions for themselves. The journey of this diagram did exactly that.


Do you find yourself answering questions that you see as misguided? How can you guide people to ask better questions?