Give Them the Fish, Then Teach Them to Fish

A colleague came to me with a request the other day. I didn't handle it quite how I wanted to. The request went something like this:

"I remember you were on the team for Big Scary product a couple years ago. Do you know if I can delete this List of Stuff from Big Scary product, and if I can automate that?"

I did not know. It was two years ago. Big Scary product had gotten Bigger and Scarier in the meantime.

But I knew where my team linked to our API specs from our customer-facing documentation. I applied the same principle to discover where Big Scary product API specs were. I looked at those specs and found the List of Stuff in a response body for an API call, but noticed that my colleague wouldn't have the ID the request required. So I looked at the API specs from a Bigger Scarier product. Combining a call from there would get the ID Big Scary product needed.

I was short on time, so I answered the question directly. I said it was possible, and possible to automate, and provided the links to the specs for both products. My colleague thanked me, and left the conversation able to solve their problem quickly.

I gave them the fish. What they learned from that interaction was: Elizabeth knows where to find stuff. I can come to her when I don't know how to find stuff and she will find it for me. That was the wrong lesson.

Give Them the Fish, Then Teach Them to Fish

A better lesson would have been: I know where to look for things. Elizabeth will give me the tools to know where to look, and empower me to do so. Now that I've got the access and seen it done once before, I can take a few more steps before I reach out to Elizabeth the next time.

Here's what I could have done to get this colleague there:

  1. Explain where all the API specs live: I could have explained my thought process for finding the API specs, shown how I navigate using the headers and Ctrl + F on the page, and compared the requests and responses to what's needed.
  2. Update them about who's on the team for Big Scary product now: I could have listed a few team members' names who I knew were working on Big Scary product, or pointed my colleague to the Slack channel for the whole team.
  3. Introduce my colleague to a member of the team for Big Scary product: Since this colleague was a tester, I could have started a direct message with them and the tester on the team for Big Scary product, copying the question from the DM I first received.

What If I Only Teach Them to Fish?

What would have happened if I'd skipped what I'd done, and withheld the links to the API specs?

I wouldn't have been able to guarantee that my colleague was in the learning zone. From what I knew about their situation, they were accumulating a lot of data that they wanted to delete. I didn't know what other pressures were coming from the team, but the need to automate it suggested it was a bigger problem than just a few extra entries in a database.

Giving my colleague the fish, and then teaching them to fish, relieves that pressure to deliver and opens them up to learning and growing.

Tell Them What You're Doing

Some colleagues are distracted, or dense, or not able to take away meta-information from a conversation along with the information. They may stop listening after they have the answer.

Combat this by sharing your motives. Remind them that you too are busy. Explain that your goal is to empower them. Encourage them to reach out to the team working on Big Scary product, so that those team members can also get good at knowing where to look and answering colleagues' questions. Tell them you're happy to help them again, but you'll expect more details of what they tried first. Then hold them to that.

The best lesson is: I want to take a few more steps next time I have a problem, because I know I can, and Elizabeth expects more from me.

SoCraTes UK June 2021

I had a long drought between when I was last able to just attend a conference (rather than running a session or organizing) and SoCraTes UK in June. A year I think? And what a welcome rain it was.


Heloá hosted a session on meditation. Many of the symptoms she described as her motivators for picking up the habit were ones I recognized from a talk I gave about introversion years ago. And the mindset she described (trying over succeeding, recognizing reactions without trying to impose a particular one) echoed back to a conversation on stoicism that Sanne Visser hosted at the first TestCraftCamp. I was completely convinced by the benefits she described ("People say I sound calmer. I'm breathing more deeply.") but I haven't built a habit around it yet. C'est la vie.

I went to two different sessions Maaret Pyhäjärvi hosted. (Does this make me a groupie?) The first, an ensemble testing session, reminded me that the most valuable exploratory testing bugs come when you understand enough about the business and the architecture to know what matters to some person who matters. The second session was about scaling workshops (and really, herself). I joined late after the lightning talks ended, but still helped plant the seed of what I and SoCraTes can do to bring more people into learning about good software.

I selfishly hosted a "help me out here" session in the afternoon. As I predicted, the testers extraordinaire Maaret and Lisi Hocke were exactly the people I needed to give me perspective on my current and evolving role at work, though the other attendees contributed as well. I came away with more questions than answers, which I'm still mulling over and digging into weeks later. I look forward to sharing more about the shape of things as they come to fruition.

Alexander (What is your last name?? Sorry!) held a session on habits you've developed or changed during the pandemic. How lovely it was to be in a small conversation trading notes about remote music lessons and holding remote workshops. It was exactly the kind of hallway conversation I'd be looking to fall into at a flesh-and-blood conference.

I didn't write down who gave the lightning talk about saying no, but thank you. I rarely (never?) regret saying no, but I needed that extra push and specific language to have those "If you want me to pick this up, which of these things should I be putting down then?" conversations I've had lately. I have "Saying no commands respect" in my notes, and I guess I need a throw pillow of that too.


SoCraTes reinforces for me how a welcoming, inclusive open space is done. It's through explaining what an open space is for people who haven't attended. It's who's on the organizing committee. It's reminding people to take time off from the sessions. It's about ending up in the "Rose Garden" at the same time as Eva Nanyonga, who's working to improve the dispatching of home healthcare workers in Uganda, and finding out you sparked her curiosity and delight in the exploratory testing session earlier in the day. It's about providing a subsidized ticket option to make the event accessible to more people. It's in offering advice to the hosts at the start, such as:

  • ask for help facilitating
  • kick off the discussion
  • include everyone in the conversation

Thank you for holding this space. It's got me excited to host the open space that TestCraftCamp has evolved into, Friends of Good Software (FroGS Conf) in September.

Complete the Main Quest First

Recently, I made an outline for a tester (who was still onboarding) for what kinds of things to test on a new API endpoint we added. They explored, wrote a bunch of automated tests to capture their work, and came back with a list of interesting and good catches in the error responses. My first question in our debrief was: did you try a successful response? They hadn't. I sent them back to tackle that too.

Because a successful response is the first thing our product owner is going to ask about. That's what we'd want to show off at the review meeting internally to demonstrate the new API endpoint. That's the first thing the customer is going to try. They're going to copy the request from our OpenAPI specification, paste it into Postman (or the tool of their choice, but our customers so far have been using Postman), and see if their credentials will get them the response that matches the specification. These stakeholders share a common concern, and that's the risk we should be mitigating with testing. First.

Complete the main quest first.

Complete the main quest first. Come back to the side quests.

A customer had asked for this API endpoint to be added. If we'd tested the happy path first, we would have had the option of releasing the API for the customer to use. The risk of discovering that a successful request wouldn't yield a successful response was relatively low in this case, since our developers tend to try one happy path themselves.

But what if the main quest had required a lot of setup, explanations to build knowledge and context for the onboarding tester, or yielded an issue? I'd done a risk-based analysis of what all to complete as part of our definition of done for this story. But I hadn't shared my approach to completing the main quest first, so the tester did what testers do, and went on a hunt to find weird stuff.

Note down and follow up on weird stuff; do not get distracted by it

Software will break in all sorts of ways. The more time and curiosity you have to dig into it, the more you'll discover. But are those the most important things?

In this API, the tester discovered that if you paste 10,000 characters into a field that's meant for a UUID, you get a 400 response. But did they try a regular old UUID first? What if they get a 400 response no matter what they put in that field, because the field name in the specification doesn't match what's in the code? Is trying 10,000 characters the first and biggest risk they have to face when presenting this API to a customer?

I'm not saying don't try 10,000 characters. I love that shit. But decide if it's a risk you care about first. If you don't care about the outcome, don't test it. Don't make busy-work for yourself just to fill the time.
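To make that ordering concrete, here's a minimal pytest-style sketch. The endpoint, token, and known-good ID are hypothetical placeholders; the point is only that the happy-path check comes first, and the 10,000-character probe exists only because we decided that risk was worth caring about.

```python
import requests

BASE_URL = "https://api.example.test/v1/stuff"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}    # placeholder credentials
KNOWN_GOOD_ID = "3fa85f64-5717-4562-b3fc-2c963f66afa6"  # assume this record exists


def test_happy_path_first():
    # Main quest: a regular old UUID for an existing record should succeed.
    response = requests.get(f"{BASE_URL}/{KNOWN_GOOD_ID}", headers=HEADERS)
    assert response.status_code == 200


def test_oversized_id_is_rejected():
    # Side quest: only worth writing once we've decided this risk matters.
    response = requests.get(f"{BASE_URL}/{'a' * 10_000}", headers=HEADERS)
    assert response.status_code == 400
```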

Make side quests a conscious choice

Before you start throwing 10,000 characters at your API, talk to your team. Your developer can probably tell you if they never built something to deal with that situation. Your product owner can tell you they'd rather have it to the customer sooner. Your data analyst can tell you if there's already longer stuff than that in the database, or if you should be trying Japanese instead.

Make side quests a deliberate choice. Share them to increase their value or figure out who on the team is best-suited to execute them.

Recognize when the quest is a journey, not a destination

Throwing 10,000 characters at an API may be a way to start a discussion about the speed at which responses are returned. It might be a way of showing your favorite random text generator to your fellow tester. It might be an exercise at an ensemble testing session, where everyone can practice pausing before executing an idea to describe the expected behavior first.

Quests can be valuable in ways that are not directly related to finishing the quest.

Note: I got asked recently if I use the word charter much with non-testers. I don't. Try reading this again but replacing every mention of "quest" with "charter".

Praise the Messenger

I hear no a lot. No, that's not in scope for this story. No, it's not worth fixing that now. No, that's not risky enough for you to spend time testing. Hearing no on a good day sparks my creativity and pushes me into more valuable directions. On a bad day, it makes me wonder why I should keep going.

Saying no in a good way takes practice. I've honed this skill with wisdom from some of the best. Elisabeth Hendrickson tweeted a list of some of the different ways she has of saying no. It reminded me of the sassy replies I send to recruiters and speaking engagements in which I have no interest or connection. If you're looking for a style that builds bridges rather than burns them down, I'd recommend Elisabeth Hendrickson's examples, not mine.

At Let's Test in Sweden in 2016, Fiona Charles gave a workshop on Learning to Say No. We practiced different ways to say no, complete with supporting arguments and steadfast determination tailored to the real-life situations participants brought to the workshop. (In a demonstration of having embraced the message of the workshop, we disbanded halfway through the allotted time. I spent the rest of the afternoon on a memorable bike ride along the Swedish coast.) The strongest revelation from the group was: we have more power than we think we do. There may be consequences to saying no, refusing a particular task, or turning down an opportunity, but it's often within our power to do so. Saying no effectively makes room for you to decide how to spend your time.

At Agile Testing Days in 2018, Liz Keogh gave a keynote on how to tell people they failed (and make them feel great) that culminated in a surprising conclusion: don't. Don't tell people they failed. Use positive reinforcement to encourage the behaviors you want to see, and the others will fall away.

We had a tester leave the company recently. They'd set goals with their manager that directly opposed the test strategy I helped them shape. Bugs they reported were shot down, postponed until future stories, or grouped together as issues to be fixed eventually someday. They were facing a lot of "no," and as far as I could see from a different team, not a lot of yes. How long would you last in this situation?

I also have a tester reporting to me at the moment. They've been catching tricky things, finding bugs, and preventing problems. I want them to keep doing what they're doing. I get that a developer's first reaction when they hear about a bug might not be "Oh wow, thank you so much for this pile of questions and work!" I have to give them that positive reinforcement so they keep reporting and digging into issues, so I call it out in their 1-on-1. They thank me, and say it keeps them going.

Don't shoot the messenger. Praise the messenger.

Who are you saying no to frequently? Is your no driving them away, or encouraging them? Do you need to say no?

If you were waiting for a sign - this is it.

Delivering Information vs. Delivering Meta-Information

One of the first testing skills I built was bug reporting. I practiced narrowing down the steps to reproduce. I told my developers what was happening now and what should happen instead. I learned to include where the issue was occurring, so my developers would stop closing my bugs with the Works for me resolution in Trac.

In 2013, Paul Holland taught me to tell the story of my testing. That is, meta-information about my testing. Sure, I still reported the outcomes: these things didn't work, these things did. But I also reported:

  1. how I tested the product
  2. how good that testing was

How long did it take me to test? What was difficult about my testing, what made it difficult to get started, or what slowed it down? What wasn't I able to test? The reason we tell those stories is to help uncover product risks and process risks, and to get help from those around us to solve the issues.

This is meta-information about my testing. It's what I'm seeing as I'm doing it, one level removed from the actual testing itself. (Look up testopsies [testing + autopsy] for more on gathering meta-level information about your testing.)

Building the skills around meta-information is important for doing any job well. These skills include:

  1. identifying when you're communicating at the information level, or the meta-level
  2. switching between and keeping track of whether you're communicating at the information level or meta-level
  3. naming the different levels, and bringing the people you're communicating with on this level-switching journey

These recent examples stick in my mind.


Addressing the question vs. addressing the miscommunication

I was interviewing a candidate for a Software Development Manager position. For about a half hour, every question proceeded more or less like this:

  1. I'd ask a question.
  2. The candidate would speak for a minute or a few, without answering the question.
  3. I'd rephrase the question, be more specific, or add more details to help the candidate understand the question1.
  4. The candidate would speak again, again not answering the question.
  5. I'd try once or twice more, before moving on to the next question.

I realized early on that this was happening. I consider it an essential skill to be able to answer a question directly, or identify when you're having trouble doing that, as a Software Development Manager. I decided I'd reject the candidate, but realized I'd still have to get through the rest of this hour with them.

My colleague, my co-interviewer, took a different approach. He called out the miscommunication problem to the candidate! He said "Elizabeth is asking you questions, and you're not answering them. Did you notice that? Why is that?" It was direct in a way that made me uncomfortable, and that I would have trouble doing with someone I just met. But he moved the conversation to where it needed to be: the meta-level. The candidate struggled to answer this question as well.

I jumped in to ease the awkwardness. I offered up my theory: the candidate's responses weren't answers, they were the candidate agreeing with the premise of the question. My colleague completely agreed, thanked me for my contribution, but politely redirected the burden of this problem onto the candidate. The candidate continued to struggle without acknowledging the issue or the awkwardness for the rest of the interview. It was a clear decision for both me and my colleague: we would not be hiring this person.

Difficulty scheduling a meeting vs. telling them that

I was asking for more details on the planning of a project in standup. My developer described how hard it was to find time with a particular developer on another team. They said the project was going to take weeks to scope instead of the days they thought it could take, if only they could find time with this person. The project seemed important, already had a deadline (or really, sadline2), and waiting until the next free spot on this person's calendar wasn't working.

I asked my developer "Have you given them this feedback directly?" I suspected that only the information had been communicated: the following meeting would have to wait a week. My hunch was correct. I suggested my developer try giving this person the meta-information: how hard they were to schedule, how this was cutting into the time we had to work on the project once it was scoped, and how that threatened the deadline. Imagine how differently this person could react and rearrange what they were doing, or share their relative priorities so we could adjust our expectations, when given this meta-information. This was last week, so I don't know yet how this story ends!

Adding testing details to the story vs. why I'm asking for them

There was a user story about being able to send metrics from our application to another application. I was picking up the testing on the story. My developer said they'd confirmed in the other application that our metrics were sent. They'd only sent a success, so I was planning on using the same setup as they did to look in the other application, but see what a failure looked like instead.

But that's not what I told my developer. What I said to them was: "Add enough details to the ticket so I can test it." It turned out, we had different ideas about what enough meant. First they added a URL. I followed the URL and it went to a blank page with a header. I said the same thing again, "Add enough details to the ticket so I can test it." They wrote down the first button they had to click on. I asked a third time. They added one more detail, which still didn't tell me enough. But I got tired of asking. I tried all the options in the application, Googled to figure out what SQL query I needed, executed it, triggered a variety of different failures, and confirmed that they were received.

Later, I explained to my developer the difference between my expectations and what they wrote. They explained how their expectations were also violated: they'd had to reach out to the team from the other product, figure out what to do, and poke around in the product, and the details they left me were all they'd received to eventually figure it out themselves. To them, just the URL going to a blank screen was enough3.

I realized then that I had left out a crucial piece of meta-information: the reason I was asking for the details. I wanted to skip that poking around time. I had ten other tasks this day, and expected this to only take 20 minutes instead of the few hours it ended up taking. I was hoping to benefit from the poking around work my developer had already done. I was expecting a lot of context, and my developer was expecting to only need to share a little bit of context.

Once I shared this with them, they understood where the gap was between my expectations and theirs.


The suggestion about scheduling difficulties, that was an easy conversation to have. The other two were quite difficult for me. They took patient, active listening. I had to keep asking myself if I'd been clear enough with my expectations. They definitely made me sweat.

Moving between information communication and meta-level communication is a skill. It takes time, failure, reflection, and practice to do it. Doing it well is a leadership skill. Crucial Conversations helped me identify when it's worth working on the relationship with a person and investing in these conversations. You don't have to be managing people (as I am currently) to be building or using this skill.

What meta-level conversations are you avoiding? Are there people where you find you're only able to communicate on an information level? Have you tried communicating with them on a meta-level? What would happen if you told them that's what you were trying?


  1. In low-context cultures like the United States and the Netherlands, the deliverer assumes the burden of the miscommunication, not the receiver. There's more about this in The Culture Map

  2. A deadline is when somebody dies. Most things we call deadlines at work are just sadlines, in that people are alive but just sad when they're missed. Read more about it in Liz Keogh's blog post

  3. My colleague is coming from a high-context culture. I'm coming from a low-context culture. You can read more examples of this in The Culture Map, but it's the kind of non-fiction book that could have been a blog post, so just read the rest of this blog post instead? 

The Mental Load of One Meeting

I facilitated a meeting today. It was scheduled for 45 minutes on my agenda, for me and some other tech and team leads in my unit. We covered what we needed to move forward planning an epic with a set of relatively straightforward stories, both in execution and work distribution among the teams.

Some of the people attending today showed up without preparing and didn't remember what the topic was. I filled them in. Because this is what the meeting preparation looked like from my side:

  1. I was in a planning meeting two weeks ago with a variety of people from the unit. I picked up the implication that a follow-up (a combination of solo thinking/document review followed by a meeting) was needed soon with a few tech and team leads. I checked the out-of-office schedule (which I created and keep updated as people mention when they'll be out) and noticed that two weeks was an achievable time span for "soon". I mentioned this while we were still in that planning meeting.

  2. Within the next 48 hours, two people from the planning meeting asked when I'd scheduled the follow-up. I explained that I was giving the other leads a chance to schedule it first.

  3. When I scheduled the follow-up, I made the title of the Outlook invitation "review the comments you've already added to the document" so the purpose and expectations were both clear and impossible to miss. I added the document to review to the body of the invitation. I let the two inquiring minds know that this occurred. I found a couple of 45-minute blocks when all four people present at the planning meeting, plus the one person that had been forgotten, were available simultaneously. I chose the one on Tuesday, giving the person who was out for a week Monday to catch up.

  4. I added a Slackbot reminder for Monday afternoon for the person who was out the previous week. They're on my team and typically have trouble saying no when someone tries to pull their focus. We've found that Slackbots help.

  5. The meeting time arrives. The Slackbot reminder person and I have reviewed the document. The others haven't. I check in and ask whether we should postpone the meeting until they've had a chance to review it. Silence. I move on, facilitating the discussion, taking notes, and keeping us on track.


I am not the only person capable of scheduling a meeting. I am not the person best equipped to review this document. It is not my responsibility to come up with systems for my peers to remember to do their work. I can let this fail by simply not following up, and I do that with smaller, safer-to-fail experiments. But it doesn't solve the problem.

I can't do a better job of explaining the burden of carrying the mental load than Emma does in her comic. But to summarize: a lot of work is the work of noticing what work needs doing. It's the difference between "Let me know if you need help" and finding a way to actually help. It's the difference between having 45 minutes blocked on your calendar, and everything else that had to happen to make it successful.

What I wonder now is: how much is this skill of carrying the mental load noticed? How much is it valued? How do we interview for this skill? How do I train my fellow team members to have it? How do I teach the ownership of a team and a project? How do I get people to ask each other about the next step in picking up the work instead of coming to me to be the dispatcher? How do I make myself, instead of irreplaceable, completely replaceable?

The Long Haul

I've been both the lead of my team and a tester on that team for a year now. Getting answers, adapting to change, and identifying solutions have completely different time horizons in each of these roles.


Tester experiments

Testing experiments can run quickly. A testing experiment might look like this:

  1. Call an API
  2. Inspect the output
  3. Compare that to the specification
  4. Question whether either or both need changing
  5. Talk to a developer about the changes
  6. Update the specification and/or tests

All of that happens within minutes or hours, or days if schedules are extremely incompatible and asynchronous communication is failing. I can be confident in the experiment's outcome. I can weigh the relative merits of caring vs. not caring about a particular error response code or required field, and leave my work at work. The next time I see a particular error response, I know what to look for and where changes might be needed. A failing test is evidence that something used to work in a particular way.
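As a sketch of what the first three steps of that loop can look like in practice (the endpoint, token, and field list here are hypothetical; in reality I'd copy the fields out of the actual OpenAPI spec):

```python
import requests

# Fields the (hypothetical) OpenAPI spec promises in the response body.
SPEC_FIELDS = {"id", "name", "createdAt"}
URL = "https://api.example.test/v1/stuff/3fa85f64-5717-4562-b3fc-2c963f66afa6"

# Step 1: call the API.
response = requests.get(URL, headers={"Authorization": "Bearer <token>"})
actual_fields = set(response.json().keys())

# Steps 2 and 3: inspect the output and compare it to the specification.
print("Missing from the response:", SPEC_FIELDS - actual_fields)
print("Undocumented in the spec:", actual_fields - SPEC_FIELDS)
# Steps 4 through 6 (question, talk, update) happen with people, not code.
```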

Team lead experiments

Team lead experiments take longer. A team lead experiment might look something more like this:

  1. Team members complain that it's hard to get their ideas in during refinement.
  2. I mention this to the talking-dominant team member at a 1-on-1.
  3. Talking-dominant team member dominates following refinement.
  4. I remind talking-dominant team member in Slack about our previous conversation.
  5. Talking-dominant team member spills over their allotted time during big unit meeting.
  6. I bring up both of these instances in our 1-on-1, sharing the consequences of their actions (they're the single point of failure, other team members aren't heard).
  7. Talking-dominant team member does it again.
  8. I ask team member what I can do to help them change their behavior, given that we are both adults in control of our own behavior. They agree that change is their responsibility. We agree that setting their microphone on mute at the start of the meeting would help.
  9. Talking-dominant team member dominates some of the following refinement, until I remind them to mute, after which other team members have time to think and contribute too.
  10. I ask talking-dominant team member to set up a Slackbot to send them a reminder to mute their microphone each week before the meeting.
  11. Other people are able to contribute at the following refinement.

This took place over months. We're still not at a point where we have a solution that works every time. I went in with a different hypothesis each time, not knowing when I'd hit on the right one:

2. I think the talking-dominant team member isn't aware of their behavior.
4. I think the team member has forgotten our first conversation.
6. I think the team member doesn't understand the impact of their behavior.
8. I think the team member hasn't found a tool or a trigger to change their habit.
10. I think the team member needs both a tool and a trigger to change their habit.

Any of the first four experiments taken by itself looks like a failure. The talking-dominant team member prevents other team members from contributing effectively. It takes me time as a leader to come up with a different hypothesis, try something else, and discover where to go from there. And this was a relatively straightforward issue to assess. Imagine how long it might take to find an effective response to a problem with more variables and more consequences.


I'm also thinking not just about the experiments themselves, but how they might come across to the wider team. For the testing experiment, I could present my results in standup the next day as "I tested it, everything's good" but it's more valuable for everyone if I tell a bit more of the story. In the team lead experiment, I can imagine my team member telling my boss "Elizabeth told me to be quiet" or me telling my boss "The talking-dominant team member is giving room for others to contribute." Telling a slightly longer story of the journey displays my value as a team lead in a better light.

What experiments are you running right now? Is something that looks or feels like a failure getting you closer to a solution? How long is your time horizon?

Questions from Exploratory Week on Writing Exploratory Testing Charters

The Ministry of Testing hosted a week all about exploratory testing. I had the honor and privilege to help shepherd a small group of testers on the path of writing charters for their exploration. The most interesting part for me is where people had questions. It helps me figure out what sunk in, what could use more explanation, and helps me know that I've answered at least one person's burning question. Here are some of the ones I remember from the live Q&A at the end:

Q. Do you use the word charter?

A. Basically no. I've only heard testers who've specifically dug into this topic use the word charter. Almost all of the people I collaborate with on a daily basis (developers, product owner, UX, managers, other testers) do not have this as part of their experience. Most of my colleagues are not working in their first language. As a native speaker, I need to have more than one word to describe any particular phenomenon in case the first one doesn't resonate, or isn't understandable in my accent. (Everyone has an accent.) I've called charters:

  • questions
  • missions
  • paths
  • plans
  • goals
  • investigations
  • journeys

It's less important to use the word charter than it is to get across the intent: you're going on an exploration, in a particular direction, with a specific set of tools, and you hope to come away with more information on this topic than when you set out. Sharing your charters helps you get feedback about where to look more deeply, more broadly, and where not to look at all.

Q. Where do you bring charters up?

A. Where don't I bring charters up? It bleeds into conversations I have about my work. Sharing my work and getting feedback about it is what ensures I'm providing valuable work for my team in the right direction. I tend to discover points of interest for my developers once or twice a day when development is starting, and more often when testing is at its peak, which often escalates to pairing. Here are some other moments in time where I share charters:

Standup

It's how I explain what I tested yesterday, what pieces I might have time for today, and what directions I haven't or won't have time for before we want to release the story. Sharing where I'm looking prevents me from being the one gatekeeper on quality for our product. "I've successfully called the API as an admin user and a regular user. Today I'm going to dig into what happens with the non-required fields." will solicit a completely different type of feedback than "I have an hour or two left on this story."

Refinement

Any clues I can give my team about what I'll be looking into, what kind of test data I might set up, and what tools I'll be using to test a particular feature will help them figure out the whole scope of the story. "I'm going to try names at the character limit to see how they wrap on the page." helps us all figure out that we need to talk about our expectations for a character limit, we need to talk to UX about what should happen when you try to input something too long, I need to test what happens on the API side for the same field, and we might need a frontend dev to help us with the wrapping or truncation depending on what UX decides.

Testing starts executing

This is the point in time where there's enough built that I can add test execution to the setup and planning I've already been doing on a story. It might be that the API spec is published, it might be that the application has one happy path built. The developers are still going, but there's somewhere for me to start. Depending on the size and complexity of the story, I'll reflect for myself, or share my ideas with someone else on the team. If it involves an integration with another team, I'd reach out to them too.

Q. Isn't that just a test case?

A. After almost two hours of technical difficulties and explaining things, I have to say I did not write the most elegant charter as an example during the workshop. You got me! I'm glad that this workshop participant has a good feel for what is too specific or too broad. I find this so hard to explain because so much of that depends on the context.

But it wasn't terribly important to me to get the level of detail correct. Charters are a place to reflect on your testing and spark conversation. This charter did exactly that.


You can find other questions there wasn't time for during the workshop on the Ministry of Testing Club. The slides from the workshop are on GitHub.

My First Agile Testing Days: Four Years Later

After years of getting rejected, the talk I submitted to Agile Testing Days in 2017 finally got me accepted. It was such a privilege and an honor to meet so many of the people I'd only known from the internet. Reading through my notes now, much of what I wrote down has become ingrained in how I go about my work every day. What a blessing it was to encounter the people I needed to learn from in my career at the time I needed to learn from them. I'm glad to see how routine their wisdom has become for me.


  • Lisi Hocke gave a talk about growth. I've since seen more of her talks and prepared a workshop with her; I see how she embodies this in her approach to the world.
  • Gitte Klitgaard reminded me that believing in people will allow them to be better. Stay curious about why people are doing what they do before placing judgment.
  • Kim Knup spoke about a zero bug policy (no bugs in your backlog). I was not ready to hear this message at the time, having come from places with years-old products and tens or hundreds of bugs in the backlog. But now I see exactly what she was describing: the psychological relief that comes from less time in JIRA and fewer meetings about priorities.
  • Katrina Clokie pulled up Noah Sussman's reimagining of the testing pyramid (lots of small tests, fewer large ones) upside-down as a bug filter. And suddenly, it clicked for me.
  • Alex Schladebeck and Huib Schoots challenged me to think on a meta-level about the testing I was doing, to name the skills and techniques I was using. It would help me as I spent the following years sharing exploratory testing skills with other testers.
  • Emily Webber spoke about building trust on teams. I find myself recommending the team manual she developed to somebody about once a month, plus she helped spark an idea for a future conference talk I developed.
  • Liz Keogh gave me the words I needed to build a safe-to-fail environment for my team members, where failure is an expected, inevitable part of complex systems.

I realized in compiling this blog post that I'd already written about Agile Testing Days just one month after I'd attended in 2017. At that point, I was looking specifically at regression testing advice, which is what I was in the thick of at work. What am I in the thick of now, and when will it become clear to me?

Recently Encountered Logical Fallacies

I was on a panel about critical thinking for the Ministry of Testing last week. One of my fellow panelists, the commendable ranter Maaike Brinkoff, brought up ad hominem (personal) attacks as one example of a failure of critical thinking. It's one of many logical fallacies worth exploring further.

Equipping yourself with the name for a thing helps you recognize it when it appears. (Lara Hogan wrote recently about applying the skill of being able to name the problem in the room to defuse tense meetings.) These are some of the fallacies I've come across recently when I've been debriefing testing sessions, facilitating refinement sessions, and reviewing conference submissions.

Affirming the consequent

Affirming the consequent is applying a conditional without the conditionality, or assuming something happened because you see a result.

  1. If P (I run the pipeline) then Q (the latest build will be available on the test environment)
  2. Q (the latest build is available on the test environment)
  3. Therefore P (I ran the pipeline)

We can't assume the converse: if Q, then P. Just because the latest build is on the test environment doesn't mean I ran the pipeline. Maybe someone else ran the pipeline, or put the build there manually. Maybe there haven't been any changes since yesterday, and the build from yesterday is still the latest one.
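Truth tables get a shoutout at the end of this post, so here's a tiny sketch of the counterexample that makes this inference invalid: an assignment of P and Q where both premises hold and the conclusion doesn't.

```python
from itertools import product

# Enumerate every truth assignment for P and Q and look for a counterexample:
# a row where the premises ("if P then Q", and Q) hold but the conclusion (P) fails.
for p, q in product([True, False], repeat=2):
    premises = (not p or q) and q   # "if P then Q" is equivalent to (not P or Q)
    if premises and not p:
        print(f"Counterexample: P={p}, Q={q}")

# Prints "Counterexample: P=False, Q=True" — the build can be on the test
# environment (Q) without me having run the pipeline (P).
```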

Fallacy of composition

This assumes that something that applies to one member of a class applies to them all.

  1. Y is part of X (Stephanie is an admin user)
  2. Y has property P (Stephanie can see this page)
  3. X has property P (any admin user can see this page)

We can't assume that what's true for one member of a class applies to all of them. What happens if Stephanie can be assigned more than one role, a more restrictive/regular user role in addition to the admin role? Can she still see it? What if Stephanie being able to see the page has nothing to do with her status as an admin user?

Post hoc ergo propter hoc (correlation without causation)

This one is easiest to see when others are debriefing their testing to me, but I've learned to catch it in my own testing too.

  1. Event A occurred (I clicked the button)
  2. Then event B occurred (a whole bunch of log messages appeared)
  3. Therefore A caused B (clicking the button caused a whole bunch of log messages)

We can't assume that events that occur in a particular sequence in time are necessarily causal. Did clicking the button trigger the log messages? What do the log messages say? Did you read them? Who else could be using this environment? Does the same thing happen every time you click the button, or when you run the application in a different environment?

Argument from repetition

When someone says the same thing enough times, or brings up the same unimportant issue in a refinement meeting week after week, it can become easier to address the issue than to convince them yet again why it's not a priority. I've been facilitating refinement meetings every week for my teams for the past two years. I have only a finite amount of energy, and it's not always worth spending it refuting the case for a small edge case week after week.


Shoutout to my logic professor Dan Cohen at Colby College, who had us memorize and distinguish logical fallacies as part of his brilliant Logic and Argumentation course, and who pointed out that an ease and comfort with truth tables would translate well to a computer science course. Special thanks to Joep Schuurkes for his philosophical and technological opinions on this piece.