Cutting People Off

Impatience is a virtue.

If impatience is solely your own, sorry, but you're the asshole. But if impatience is shared, saving your colleagues from a tiresome conversation will make their day.

Notice that a topic should come to a close

When you listen actively, you'll notice when something has already been said. It is much easier (particularly when remote) to give up, zone out, and think about something else. Don't be that person.

Engage with your colleagues! Save yourself and others from the perpetual purgatory that is an ineffective meeting. Pay attention.

Decide whether you are the right person to close a topic

There will be settings where you are the right person to decide if a topic should come to a close: in a small group of relative equals, when you're the appointed facilitator, or when you're in some other position of power relative to the individuals or the subject matter. Recognize when you're not in the right position to change what's happening in the moment, and skip to Follow-up for more.

If the group already has expectations about what is or isn't on topic, your interruption should be enough. If it doesn't, or if you want to take this opportunity to set a new expectation, interrupt with a meta-question.

How to deliver this message

I'm not always in the best position to decide whether now is the right time for a topic, so I tend to deliver topic-closing messages as questions:

  • I agree with your point about Thing B, but can we come back to Thing D?
  • We agreed that Person A is going to follow up with Person C; is there anything more that we need to discuss about Thing B right now?
  • We could discuss Thing B more in this group, but since we're missing Person C's crucial input, should we?
  • I've captured what Person A said here in the notes. Was there anything I missed?
  • I think Person A already said Thing B, shall we move on?
  • It sounds like we're still discussing Thing B after we just agreed not to, am I understanding that correctly?
  • Can we leave it there for now?

If you're not sure if it's the right time for a question, try a meta-question:

  • Is now the right time to decide if we should keep talking about Thing B?
  • Are we going to be able to come to Decision D today?
  • Did we decide on a next step towards Thing B, or is that what you were describing?

Give the group a chance to decide, but don't be afraid to hold them to their decision. These are not questions:

  • We've agreed to that. Let's move on.
  • That's all we needed Person A for, let's let them go.
  • That's all I have for you, I'll let you go.
  • Thank you for your input/time.
  • I understand now.
  • Got it, thanks.


Get feedback on your behavior. This is how you learn.

A retrospective or 1-on-1 would be a good place to find out if the balance was right between gathering/sharing information and staying on topic. Asking someone to watch out for this particular behavior ahead of time will allow them to give you better feedback afterwards.

A colleague once declared me the "queen of cutting people off" because I did so very politely. I have a compliment stickie note for "ruling refinement with an iron fist." We should all be so lucky to have our work appreciated this way.

For more on meta-information, see this post. For more on setting agendas and preparing for meetings to make them effective, see this deck.

That "I Did It!" Feeling

I moved into a different role at work this week. I handed off my former team to the new team lead with a final 1-on-1 (and coincidentally performance review) for each team member. Each of them has a different variety of skills, motivators driving them, and awareness of either. This blog post focuses on just one team member.

"I did it!"

One team member is really driven by that "I did it!" feeling. They're early in their career. Both the product and the company are new to them. They spend a lot of their time pairing, asking for help, or floundering while wondering if they should be pairing or asking for help. Every 1-on-1, they'd report feeling that they hadn't learned or accomplished anything. They weren't getting that "I did it!" feeling.

But they were doing a great job. They were making progress in all the different technologies our team uses (Mendix, Docker, OpenAPI, pytest, GitLab pipelines, etc.), learning as they went. They were able to accept feedback to course-correct when necessary. They knew they were learning a lot, but this alone wasn't motivating enough for them.

Forces within our control

As a team lead, part of my job was to create focus for my team. There was a cloud of possibilities and priorities the people around and above us struggled to make clear. I wanted to create an environment where my team members could still get that "I did it!" feeling anyway. This Liz and Mollie comic captures it nicely.

A great manager holds an umbrella to protect the team from ridiculous requests, unclear priorities, massive uncertainty, unnecessary meetings, and last-minute chaos, and to foster clear expectations, defined roles, work-life balance, and stable, achievable goals. (@lizandmollie)

Amid this uncertainty, my team member requested clear steps for what they should be doing next and how to get promoted. I started by sending them the job description for their own job, which they had never seen. This helped set clear expectations and define their role. I spent time in our 1-on-1's finding out more about what was on their mind or dragging their attention away from work.

I took the time to reinforce the importance of a work-life balance. We started every refinement meeting with a review of upcoming time off, complete with peer pressure from me to take more of it. This allowed us to only refine the amount of work we could accomplish in the upcoming period, and set expectations for what wouldn't be done. This helped scope and clarify each team member's job.

I tried to give all my team members the "I did it!" feeling by talking about what we accomplished at the smallest scale in standup, a slightly larger scale in retro, and on the largest scale in the meeting with the whole unit. But that wasn't helping this particular team member.

The thing that finally gave them that "I did it!" feeling was: a Trello board for their own personal career development, with To Do, Doing, and Done columns.

To Do

We identified a clear, actionable step to take for a few technologies, job description bullet points, and conversations we'd already been having in our 1-on-1's. Some items would be accomplished during the course of our regular work on user stories. I set a clear expectation about the other items: they were for work time - downtime while waiting for a response, crafting days, etc. They were not for personal time.


Doing

I explained that it's better to limit the number of items in this column at a time. Deciding what to leave aside allows you to focus on what's in front of you. My team member wanted to be an expert in all of our different technologies at once. I reset this expectation: get a little better, one at a time.


Done

I gave them homework to fill in the Done column. They took time to list things they had learned and accomplished in the previous months. Scrolling through the Done list got them pretty close to that "I did it!" feeling. Taking a moment to reflect during our 1-on-1's helped give them that feeling. But they weren't getting that feeling right away. They needed to celebrate their accomplishments as they were happening, to keep up the motivation and momentum.

I did what was possibly my best management move for this person: I threw confetti.

Trello has a feature where if you add the confetti ball emoji 🎊 to the title of a column, moving an item to that column throws a little confetti around the item. It's very cute, and it finally gave my team member that "I did it!" feeling.

Setting expectations around the feeling

In the handoff to the new team lead, I explained this need my team member had, the ways I'd tried to meet it, and the confetti ball that finally worked. I pointed out that the need for the "I did it!" feeling can be found in other ways. The important thing for the team lead is not a particular action, but checking in with the team member about the feeling. I wanted to leave them space to take a different approach, so I used the "Mary had a little lamb" heuristic to explain what a different approach should include.

I did it!

The team member wanted to point to something they did. Without pairing, without asking a bunch of questions, they wanted to point to something and know that they were able to accomplish it themselves.

I did it!

The thing had to be done. While some skills and knowledge transfer could be months or years in the making, they needed something to come to a close.

I did it!

The new team lead and the team member get to decide together what's on the list and what it is. Growth and comfort in skills may not be immediately visible to the individual in the day-to-day grind. Setting aside time for individual reflection or recognition at the 1-on-1 would help.

I did it !

This is the confetti ball piece of the puzzle. The celebration. It may feel silly, or gimmicky, but it finally gave this person the satisfaction they were looking for out of their job.


  • When managing, do you dig into what a person needs to have clear expectations, defined roles, work-life balance, and stable, achievable goals?
  • When a team member asks you for an outcome, do you think about why they're asking you for that?
  • When you do handoffs, do you describe the actions you took or the needs they were serving?

Give Them the Fish, Then Teach Them to Fish

A colleague came to me with a request the other day. I didn't handle it quite how I wanted to. The request went something like this:

"I remember you were on the team for Big Scary product a couple years ago. Do you know if I can delete this List of Stuff from Big Scary product, and if I can automate that?"

I did not know. It was two years ago. Big Scary product had gotten Bigger and Scarier in the meantime.

But I knew where my team linked to our API specs from our customer-facing documentation. I applied the same principle to discover where Big Scary product API specs were. I looked at those specs and found the List of Stuff in a response body for an API call, but noticed that my colleague wouldn't have the ID the request required. So I looked at the API specs from a Bigger Scarier product. Combining a call from there would get the ID Big Scary product needed.

I was short on time, so I answered the question directly. I said it was possible, and possible to automate, and provided the links to the specs for both products. My colleague thanked me, and left the conversation able to solve their problem quickly.

I gave them the fish. What they learned from that interaction was: Elizabeth knows where to find stuff. I can come to her when I don't know how to find stuff and she will find it for me. That was the wrong lesson.

Give Them the Fish, Then Teach Them to Fish

A better lesson would have been: I know where to look for things. Elizabeth will give me the tools to know where to look, and empower me to do so. Now that I've got the access and seen it done once before, I can take a few more steps before I reach out to Elizabeth the next time.

Here's what I could have done to get this colleague there:

  1. Explain where all the API specs live: I could have explained my thought process for finding the API specs, shown how I navigate using the headers and Ctrl + F on the page, and compared the requests and responses to what's needed.
  2. Update them about who's on the team for Big Scary product now: I could have listed the names of a few team members I knew were working on Big Scary product, or pointed my colleague to the Slack channel for the whole team.
  3. Introduce my colleague to a member of the team for Big Scary product: Since this colleague was a tester, I could have started a direct message with them and the tester on the team for Big Scary product, copying the question from the DM I first received.

What If I Only Teach Them to Fish?

What would have happened if I'd skipped what I'd done, and withheld the links to the API specs?

I wouldn't have been able to guarantee that my colleague was in the learning zone. From what I knew about their situation, they were accumulating a lot of data that they wanted to delete. I didn't know what other pressures were coming from the team, but the need to automate it suggested it was a bigger problem than just a few extra entries in a database.

Giving my colleague the fish, and then teaching them to fish, relieves any of that pressure to deliver, and helps open them up to learning and growing.

Tell Them What You're Doing

Some colleagues are distracted, or dense, or not able to take away meta-information from a conversation along with the information. They may stop listening after they have the answer.

Combat this by sharing your motives. Remind them that you too are busy. Explain that your goal is to empower them. Encourage them to reach out to the team working on Big Scary product, so that those team members can also get good at knowing where to look and answering colleagues' questions. Tell them you're happy to help them again, but you'll expect more details of what they tried first. Then hold them to that.

The best lesson is: I want to take a few more steps next time I have a problem, because I know I can, and Elizabeth expects more from me.

SoCraTes UK June 2021

I had a long drought between when I was last able to just attend a conference (rather than running a session or organizing) and SoCraTes UK in June. A year I think? And what a welcome rain it was.

Heloá hosted a session on meditation. I recognized many of the symptoms she described as her motivators for picking up the habit from a talk I did about introversion years ago. And the mindset she described (trying over succeeding, recognizing reactions without trying to impose a particular one) echoed back to a conversation on stoicism that Sanne Visser hosted at the first TestCraftCamp. I was completely convinced by the benefits she described ("People say I sound calmer. I'm breathing more deeply.") but I haven't built a habit around it yet. C'est la vie.

I went to two different sessions Maaret Pyhäjärvi hosted. (Does this make me a groupie?) The first, an ensemble testing session, reminded me that the most valuable exploratory testing bugs come when you understand enough about the business and the architecture to know what matters to some person who matters. The second session was about scaling workshops (and really, herself). I joined late after the lightning talks ended, but still helped plant the seed of what I and SoCraTes can do to bring more people into learning about good software.

I selfishly hosted a "help me out here" session in the afternoon. As I predicted, the testers extraordinaire Maaret and Lisi Hocke were exactly the people I needed to give me perspective on my current and evolving role at work, though the other attendees contributed as well. I came away with more questions than answers, which I'm still mulling over and digging into weeks later. I look forward to sharing more about the shape of things as they come to fruition.

Alexander (What is your last name?? Sorry!) held a session on habits you've developed or changed during the pandemic. How lovely it was to be in a small conversation trading notes about remote music lessons and holding remote workshops. It was exactly the kind of hallway conversation I'd be looking to fall into at a flesh-and-blood conference.

I didn't write down who gave the lightning talk about saying no, but thank you. I rarely (never?) regret saying no, but I needed that extra push and specific language to have those "If you want me to pick this up, which of these things should I be putting down then?" conversations I've had lately. I have "Saying no commands respect" in my notes, and I guess I need a throw pillow of that too.

SoCraTes reinforces for me how a welcoming, inclusive open space is done. It's through explaining what an open space is for people who haven't attended. It's who's on the organizing committee. It's reminding people to take time off from the sessions. It's about ending up in the "Rose Garden" at the same time as Eva Nanyonga, who's working to improve the dispatching of home healthcare workers in Uganda, and finding out you sparked her curiosity and delight in the exploratory testing session earlier in the day. It's about providing a subsidized ticket option to make the event accessible to more people. It's in offering advice to the hosts at the start, such as:

  • ask for help facilitating
  • kick off the discussion
  • include everyone in the conversation

Thank you for holding this space. It's got me excited to host the open space that TestCraftCamp has evolved into, Friends of Good Software (FroGS Conf) in September.

Complete the Main Quest First

Recently, I made an outline for a tester (who was still onboarding) for what kinds of things to test on a new API endpoint we added. They explored, wrote a bunch of automated tests to capture their work, and came back with a list of interesting and good catches in the error responses. My first question in our debrief was: did you try a successful response? They hadn't. I sent them back to tackle that too.

Because a successful response is the first thing our product owner is going to ask about. That's what we'd want to show off at the review meeting internally to demonstrate the new API endpoint. That's the first thing the customer is going to try. They're going to copy the request from our OpenAPI specification, paste it in Postman (or the tool of their choice, but our customers so far have been using Postman), and see if their credentials will get them the response that matches the specification. These stakeholders share a common concern, and that's the risk we should be mitigating with testing. First.

Complete the main quest first.

Complete the main quest first. Come back to the side quests.

A customer had asked for this API endpoint to be added. If we'd tested the happy path first, we would have had the option of releasing the API for the customer to use. The risk of discovering a successful request wouldn't yield a successful response was relatively low in this case, since our developers tend to try one happy path themselves.
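That happy-path check (does a successful request return what the spec promises?) can be sketched in a few lines of Python. Everything here is invented for illustration: the field names, the spec shape, and the stubbed response stand in for a real HTTP call checked against a real OpenAPI specification.

```python
# Hypothetical sketch: the spec fields and the stubbed response are invented,
# not taken from any real product.

def matches_spec(response_body: dict, spec_fields: dict) -> bool:
    """Check that a response contains every field the spec promises,
    with the expected type: the "main quest" check a customer does first."""
    return all(
        field in response_body and isinstance(response_body[field], field_type)
        for field, field_type in spec_fields.items()
    )

# What the (invented) OpenAPI spec says a successful response looks like.
SPEC_SUCCESS_FIELDS = {"id": str, "status": str, "items": list}

# Stubbed happy-path response, standing in for a real HTTP call.
happy_path_response = {
    "id": "123e4567-e89b-12d3-a456-426614174000",
    "status": "created",
    "items": [],
}

assert matches_spec(happy_path_response, SPEC_SUCCESS_FIELDS)
```

Only once a check like this passes against the live endpoint is it worth spending time on the error responses.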

But what if the main quest had required a lot of setup, explanations to build knowledge and context for the onboarding tester, or yielded an issue? I'd done a risk-based analysis of what all to complete as part of our definition of done for this story. But I hadn't shared my approach to completing the main quest first, so the tester did what testers do, and went on a hunt to find weird stuff.

Note down and follow up on weird stuff; do not get distracted by it

Software will break in all sorts of ways. The more time and curiosity you have to dig into it, the more you'll discover. But are those the most important things?

In this API, the tester discovered that if you paste 10,000 characters into a field that's meant for a UUID, you get a 400 response. But did they try a regular old UUID first? What if they get a 400 response no matter what they put in that field, because the field name in the specification doesn't match what's in the code? Is trying 10,000 characters the first and biggest risk they have to face when presenting this API to a customer?

I'm not saying don't try 10,000 characters. I love that shit. But decide if it's a risk you care about first. If you don't care about the outcome, don't test it. Don't make busy-work for yourself just to fill the time.
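To make that ordering concrete, here is a toy request handler in Python. The handler, the "device_id" field, and the status codes are all hypothetical; the point is that the plain-UUID check comes first, and that it's the only check that can expose a field-name mismatch.

```python
import uuid

# Hypothetical sketch: the handler and the "device_id" field are invented.
def handle_request(payload: dict) -> int:
    """Toy endpoint: 200 for a valid UUID in "device_id", otherwise 400."""
    value = payload.get("device_id", "")
    try:
        uuid.UUID(value)
        return 200
    except (ValueError, AttributeError, TypeError):
        return 400

# Main quest first: a regular old UUID should succeed.
assert handle_request({"device_id": str(uuid.uuid4())}) == 200

# Only then the side quest: 10,000 characters should be rejected.
assert handle_request({"device_id": "x" * 10_000}) == 400

# The trap: if the spec said "deviceId" but the code read "device_id",
# every request would get a 400, and only the happy-path check reveals it.
assert handle_request({"deviceId": str(uuid.uuid4())}) == 400
```

If only the 10,000-character test had been run, that last case would look like a pass.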

Make side quests a conscious choice

Before you start throwing 10,000 characters at your API, talk to your team. Your developer can probably tell you if they never built something to deal with that situation. Your product owner can tell you they'd rather have it to the customer sooner. Your data analyst can tell you if there's already longer stuff than that in the database, or if you should be trying Japanese instead.

Make side quests a deliberate choice. Share them to increase their value or figure out who on the team is best-suited to execute them.

Recognize when the quest is a journey, not a destination

Throwing 10,000 characters at an API may be a way to start a discussion about the speed at which responses are returned. It might be a way of showing your favorite random text generator to your fellow tester. It might be an exercise at an ensemble testing session, where everyone can practice pausing before executing an idea to describe the expected behavior first.

Quests can be valuable in ways that are not directly related to finishing the quest.

Note: I got asked recently if I use the word charter much with non-testers. I don't. Try reading this again but replacing every mention of "quest" with "charter".

Praise the Messenger

I hear no a lot. No, that's not in scope for this story. No, it's not worth fixing that now. No, that's not risky enough for you to spend time testing. Hearing no on a good day sparks my creativity and pushes me into more valuable directions. On a bad day, it makes me wonder why I should keep going.

Saying no in a good way takes practice. I've honed this skill with wisdom from some of the best. Elisabeth Hendrickson tweeted a list of some of the different ways she has of saying no. It reminded me of the sassy replies I send to recruiters and to speaking engagements in which I have no interest or connection. If you're looking for a style that builds bridges rather than burns them down, I'd recommend Elisabeth Hendrickson's examples, not mine.

At Let's Test in Sweden in 2016, Fiona Charles gave a workshop on Learning to Say No. We practiced different ways to say no, complete with supporting arguments and steadfast determination tailored to the real-life situations participants brought to the workshop. (In a demonstration of having embraced the message of the workshop, we disbanded halfway through the allotted time. I spent the rest of the afternoon on a memorable bike ride along the Swedish coast.) The strongest revelation from the group was: we have more power than we think we do. There may be consequences to saying no, refusing a particular task, or turning down an opportunity, but it's often within our power to do so. Saying no effectively makes room for you to decide how to spend your time.

At Agile Testing Days in 2018, Liz Keogh gave a keynote on how to tell people they failed (and make them feel great) that culminated in a surprising conclusion: don't. Don't tell people they failed. Use positive reinforcement to encourage the behaviors you want to see, and the others will fall away.

We had a tester leave the company recently. They'd set goals with their manager that directly opposed the test strategy I helped them shape. Bugs they reported were shot down, postponed until future stories, or grouped together as issues to be fixed eventually someday. They were facing a lot of "no," and as far as I could see from a different team, not a lot of yes. How long would you last in this situation?

I also have a tester reporting to me at the moment. They've been catching tricky things, finding bugs, and preventing problems. I want them to keep doing what they're doing. I get that a developer's first reaction when they hear about a bug might not be "Oh wow, thank you so much for this pile of questions and work!" I have to give them that positive reinforcement so they keep reporting and digging into issues, so I call it out in their 1-on-1. They thank me and say it keeps them going.

Don't shoot the messenger. Praise the messenger.

Who are you saying no to frequently? Is your no driving them away, or encouraging them? Do you need to say no?

If you were waiting for a sign - this is it.

Delivering Information vs. Delivering Meta-Information

One of the first testing skills I built was bug reporting. I practiced narrowing down the steps to reproduce. I told my developers what was happening now and what should happen instead. I learned to include where the issue was occurring, so my developers would stop closing my bugs with the Works for me resolution in Trac.

In 2013, Paul Holland taught me to tell the story of my testing. That is, meta-information about my testing. Sure, I still reported the outcomes: these things didn't work, these things did. But I also reported:

  1. how I tested the product
  2. how good that testing was

How long did it take me to test? What was difficult about my testing, made it difficult to get started, or slowed it down? What wasn't I able to test? The reason we tell those stories is to help uncover product risks, process risks, and get help from those around us to solve the issues.

This is meta-information about my testing. It's what I'm seeing as I'm doing it, one level removed from the actual testing itself. (Look up testopies [testing + autopsy] for more on gathering meta-level information about your testing.)

Building the skills around meta-information is important for doing any job well. These skills include:

  1. identifying when you're communicating at the information level, or the meta-level
  2. switching between and keeping track of whether you're communicating at the information level or the meta-level
  3. naming the different levels, and bringing the people you're communicating with along on this level-switching journey

These recent examples stick in my mind.

Addressing the question vs. addressing the miscommunication

I was interviewing a candidate for a Software Development Manager position. For about a half hour, every question proceeded more or less like this:

  1. I'd ask a question.
  2. The candidate would speak for a minute or a few, without answering the question.
  3. I'd rephrase the question, be more specific, or add more details to help the candidate understand the question.[1]
  4. The candidate would speak again, again not answering the question.
  5. I'd try once or twice more, before moving on to the next question.

I realized early on that this was happening. I consider it an essential skill to be able to answer a question directly, or identify when you're having trouble doing that, as a Software Development Manager. I decided I'd reject the candidate, but realized I'd still have to get through the rest of this hour with them.

My colleague, my co-interviewer, took a different approach. He called out the miscommunication problem to the candidate! He said "Elizabeth is asking you questions, and you're not answering them. Did you notice that? Why is that?" It was direct in a way that made me uncomfortable, and that I would have trouble doing with someone I'd just met. But he moved the conversation to where it needed to be: the meta-level. The candidate struggled to answer this question as well.

I jumped in to ease the awkwardness. I offered up my theory: the candidate's responses weren't answers, they were the candidate agreeing with the premise of the question. My colleague completely agreed, thanked me for my contribution, but politely redirected the burden of this problem onto the candidate. The candidate continued to struggle without acknowledging the issue or the awkwardness for the rest of the interview. It was a clear decision for both me and my colleague: we would not be hiring this person.

Difficulty scheduling a meeting vs. telling them that

I was asking for more details on the planning of a project in standup. My developer described how hard it was to find time with a particular developer on another team. They said the project was going to take weeks to scope instead of the days they thought it could take, if only they could find time with this person. The project seemed important, already had a deadline (or really, a sadline[2]), and waiting until the next free spot on this person's calendar wasn't working.

I asked my developer "Have you given them this feedback directly?" I suspected that only the information had been communicated: the following meeting would have to wait a week. My hunch was correct. I suggested my developer try giving this person the meta-information: about how hard they were to schedule, how this was cutting into the time we had to work on the project once it was scoped, and how that threatened the deadline. Imagine how differently this person could react and rearrange what they were doing, or share their relative priorities so we could adjust our expectations, when given this meta-information. This was last week, so I don't know yet how this story ends!

Adding testing details to the story vs. why I'm asking for them

There was a user story about being able to send metrics from our application to another application. I was picking up the testing on the story. My developer said they'd confirmed in the other application that our metrics were sent. They'd only sent a success, so I was planning on using the same setup as they did to look in the other application, but see what a failure looked like instead.

But that's not what I told my developer. What I said to them was: "Add enough details to the ticket so I can test it." It turned out, we had different ideas about what enough meant. First they added a URL. I followed the URL and it went to a blank page with a header. I said the same thing again, "Add enough details to the ticket so I can test it." They wrote down the first button they had to click on. I asked a third time. They added one more detail, which still didn't tell me enough. But I got tired of asking. I tried all the options in the application, Googled to figure out what SQL query I needed, executed it, triggered a variety of different failures, and confirmed that they were received.

Later, I explained to my developer the difference between my expectations and what they wrote. They explained how their expectations were also violated: they had to reach out to the team from the other product, figure out what to do, and poke around in the product, and the details they left me were all they had received to eventually figure it out themselves. To them, just the URL going to a blank screen was enough.[3]

I realized then that I had left out a crucial piece of meta-information: the reason I was asking for the details. I wanted to skip that poking around time. I had ten other tasks this day, and expected this to only take 20 minutes instead of the few hours it ended up taking. I was hoping to benefit from the poking around work my developer had already done. I was expecting a lot of context, and my developer was expecting to only need to share a little bit of context.

Once I shared this with them, they understood where the gap was between my expectations and theirs.

The suggestion about scheduling difficulties, that was an easy conversation to have. The other two were quite difficult for me. They took patient, active listening. I had to keep asking myself if I'd been clear enough with my expectations. They definitely made me sweat.

Moving between information communication and meta-level communication is a skill. It takes time, failure, reflection, and practice to do it. Doing it well is a leadership skill. Crucial Conversations helped me identify when it's worth working on the relationship with a person and investing in these conversations. You don't have to be managing people (as I am currently) to be building or using this skill.

What meta-level conversations are you avoiding? Are there people where you find you're only able to communicate on an information level? Have you tried communicating with them on a meta-level? What would happen if you told them that's what you were trying?

  1. In low-context cultures like the United States and the Netherlands, the deliverer assumes the burden of the miscommunication, not the receiver. There's more about this in The Culture Map.

  2. A deadline is when somebody dies. Most things we call deadlines at work are just sadlines, in that people are alive but just sad when they're missed. Read more about it in Liz Keogh's blog post.

  3. My colleague is coming from a low-context culture. I'm coming from a high-context culture. You can read more examples of this in The Culture Map, but it's the kind of non-fiction book that could have been a blog post, so just read the rest of this blog post instead? 

The Mental Load of One Meeting

I facilitated a meeting today. It was scheduled for 45 minutes on my agenda, for me and some other tech and team leads in my unit. We covered what we needed to move forward with planning an epic with a set of relatively straightforward stories, both in execution and in work distribution among the teams.

Some of the people attending today showed up without preparing and didn't remember what the topic was. I filled them in. Here's what the meeting preparation looked like from my side:

  1. I was in a planning meeting two weeks ago with a variety of people from the unit. I picked up the implication that a follow-up (a combination of solo thinking and document review, followed by a meeting) was needed soon with a few tech and team leads. I checked the out-of-office schedule (which I created and keep updated as people mention when they'll be out) and noticed that two weeks was an achievable time span for "soon". I mentioned this while we were still in that planning meeting.

  2. Within the next 48 hours, two people from the planning meeting asked when I'd scheduled the follow-up. I explained that I was giving the other leads a chance to schedule it first.

  3. When I scheduled the follow-up, I made the title of the Outlook invitation "review the comments you've already added to the document" so the purpose and expectations were both clear and impossible to miss. I added the document to review to the body of the invitation. I let the two inquiring minds know that this had happened. I found a couple of 45-minute blocks when all four people present at the planning meeting, plus the one person who had been forgotten, were available simultaneously. I chose the one on Tuesday, giving the person who was out for a week Monday to catch up.

  4. I added a Slackbot reminder for Monday afternoon for the person who was out the previous week. They're on my team and typically have trouble saying no when someone tries to pull their focus. We've found that Slackbots help.

  5. The meeting time arrives. The Slackbot-reminder person and I have reviewed the document. The others haven't. I check in and ask whether we should postpone the meeting until everyone has had a chance to review. Silence. I move on, facilitating the discussion, taking notes, and keeping us on track.

I am not the only person capable of scheduling a meeting. I am not the person best equipped to review this document. It is not my responsibility to come up with systems for my peers to remember to do their work. I can let this fail by simply not following up, and I do with smaller, safer-to-fail experiments. But that doesn't solve the problem.

I can't do a better job of explaining the burden of carrying the mental load than Emma does in her comic. But to summarize: a lot of work is the work of noticing what work needs doing. It's the difference between "Let me know if you need help" and finding a way to actually help. It's the difference between having 45 minutes blocked on your calendar, and everything else that had to happen to make it successful.

What I wonder now is: how much is this skill of carrying the mental load noticed? How much is it valued? How do we interview for this skill? How do I train my fellow team members to have it? How do I teach the ownership of a team and a project? How do I get people to ask each other about the next step in picking up the work instead of coming to me to be the dispatcher? How do I make myself, instead of irreplaceable, completely replaceable?

The Long Haul

I've been both the lead of my team and a tester on that team for a year now. Getting answers, adapting to change, and identifying solutions have completely different time horizons in each of these roles.

Tester experiments

Testing experiments can run quickly. A testing experiment might look like this:

  1. Call an API
  2. Inspect the output
  3. Compare that to the specification
  4. Question whether either or both need changing
  5. Talk to a developer about the changes
  6. Update the specification and/or tests

All of that happens within minutes or hours, or days if schedules are extremely incompatible and asynchronous communication is failing. I can be confident in the experiment's outcome. I can weigh the relative merits of caring vs. not caring about a particular error response code or required field, and leave my work at work. The next time I see a particular error response, I know what to look for and where changes might be needed. A failing test is evidence that something used to work in a particular way.

Team lead experiments

Team lead experiments take longer. A team lead experiment might look something more like this:

  1. Team members complain that it's hard to get their ideas in during refinement.
  2. I mention this to the talking-dominant team member at a 1-on-1.
  3. Talking-dominant team member dominates the following refinement.
  4. I remind talking-dominant team member in Slack about our previous conversation.
  5. Talking-dominant team member spills over their allotted time during a big unit meeting.
  6. I bring up both of these instances in our 1-on-1, sharing the consequences of their actions (they're the single point of failure; other team members aren't heard).
  7. Talking-dominant team member does it again.
  8. I ask team member what I can do to help them change their behavior, given that we are both adults in control of our own behavior. They agree that change is their responsibility. We agree that setting their microphone on mute at the start of the meeting would help.
  9. Talking-dominant team member dominates some of the following refinement, until I remind them to mute, after which other team members have time to think and contribute too.
  10. I ask talking-dominant team member to set up a Slackbot to send them a reminder to mute their microphone each week before the meeting.
  11. Other people are able to contribute at the following refinement.

This took place over months. We're not yet at a point where we have a solution that works every time. I went in with a different hypothesis each time, not knowing when I'd hit on the right one:

Step 2: I think the talking-dominant team member isn't aware of their behavior.
Step 4: I think the team member has forgotten our first conversation.
Step 6: I think the team member doesn't understand the impact of their behavior.
Step 8: I think the team member hasn't found a tool or a trigger to change their habit.
Step 10: I think the team member needs both a tool and a trigger to change their habit.

Any of the first four experiments taken by itself looks like a failure. The talking-dominant team member prevents other team members from contributing effectively. It takes me time as a leader to come up with a different hypothesis, try something else, and discover where to go from there. And this was a relatively straightforward issue to assess. Imagine how long it might take to find an effective response to a problem with more variables and more consequences.

I'm also thinking not just about the experiments themselves, but how they might come across to the wider team. For the testing experiment, I could present my results in standup the next day as "I tested it, everything's good" but it's more valuable for everyone if I tell a bit more of the story. In the team lead experiment, I can imagine my team member telling my boss "Elizabeth told me to be quiet" or me telling my boss "The talking-dominant team member is giving room for others to contribute." Telling a slightly longer story of the journey displays my value as a team lead in a better light.

What experiments are you running right now? Is something that looks or feels like a failure getting you closer to a solution? How long is your time horizon?

Questions from Exploratory Week on Writing Exploratory Testing Charters

The Ministry of Testing hosted a week all about exploratory testing. I had the honor and privilege to help shepherd a small group of testers on the path of writing charters for their exploration. The most interesting part for me is where people had questions. It helps me figure out what sunk in, what could use more explanation, and helps me know that I've answered at least one person's burning question. Here are some of the ones I remember from the live Q&A at the end:

Q. Do you use the word charter?

A. Basically no. I've only heard testers who've specifically dug into this topic use the word charter. Almost all of the people I collaborate with on a daily basis (developers, product owner, UX, managers, other testers) do not have this as part of their experience. Most of my colleagues are not working in their first language. As a native speaker, I need to have more than one word to describe any particular phenomenon in case the first one doesn't resonate, or isn't understandable in my accent. (Everyone has an accent.) I've called charters:

  • questions
  • missions
  • paths
  • plans
  • goals
  • investigations
  • journeys

It's less important to use the word charter than it is to get across the intent: you're going on an exploration, in a particular direction, with a specific set of tools, and you hope to come away with more information on this topic than when you set out. Sharing your charters helps you get feedback about where to look more deeply, where to look more broadly, and where not to look at all.

Q. Where do you bring charters up?

A. Where don't I bring charters up? They bleed into the conversations I have about my work. Sharing my work and getting feedback about it is what ensures I'm providing valuable work for my team in the right direction. I tend to discover points of interest for my developers once or twice a day when development is starting, and more often when testing is at its peak, which often escalates to pairing. Here are some other moments in time where I share charters:

Standup

It's how I explain what I tested yesterday, what pieces I might have time for today, and what directions I haven't gotten to or won't have time for before we want to release the story. Sharing where I'm looking prevents me from being the sole gatekeeper on quality for our product. "I've successfully called the API as an admin user and a regular user. Today I'm going to dig into what happens with the non-required fields" will solicit a completely different type of feedback than "I have an hour or two left on this story."
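A charter like "dig into the non-required fields" might turn into a sketch like this; the field names and values are made up for illustration, and the endpoint is left out:

```python
from itertools import combinations

# Hypothetical payload for the API under test.
REQUIRED = {"username": "admin", "email": "admin@example.com"}
OPTIONAL = {"nickname": "Liz", "phone": "555-0100"}

def payloads_without_optional_fields():
    """Yield (left_out, payload) for every subset of optional fields omitted."""
    names = sorted(OPTIONAL)
    for r in range(len(names) + 1):
        for left_out in combinations(names, r):
            payload = dict(REQUIRED)
            payload.update({k: v for k, v in OPTIONAL.items() if k not in left_out})
            yield left_out, payload

# Each payload would be sent to the endpoint under test; the charter is to
# learn whether the responses differ when optional fields are absent.
```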

Refinement

Any clues I can give my team about what I'll be looking into, what kind of test data I might set up, and what tools I'll be using to test a particular feature will help them figure out the whole scope of the story. "I'm going to try names at the character limit to see how they wrap on the page" helps us all figure out that we need to talk about our expectations for a character limit, we need to talk to UX about what should happen when you try to input something too long, I need to test what happens on the API side for the same field, and we might need a frontend dev to help us with the wrapping or truncation depending on what UX decides.
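The character-limit charter above lends itself to classic boundary values. The limit of 50 here is an assumption for illustration; the real number would come out of those conversations with UX and the team:

```python
# Names around a hypothetical 50-character limit, for checking how the UI
# wraps or truncates them and what the API accepts for the same field.
LIMIT = 50

def boundary_names(limit: int = LIMIT) -> dict:
    """Classic boundary values: just under, at, just over, and far over."""
    return {
        "under": "n" * (limit - 1),
        "at": "n" * limit,
        "over": "n" * (limit + 1),
        "far_over": "n" * (limit * 10),
    }
```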

Testing starts executing

This is the point in time where there's enough built that I can add test execution to the setup and planning I've already been doing on a story. It might be that the API spec is published, or that the application has one happy path built. The developers are still going, but there's somewhere for me to start. Depending on the size and complexity of the story, I'll reflect for myself, or share my ideas with someone else on the team. If it involves an integration with another team, I'd reach out to them too.

Q. Isn't that just a test case?

A. After almost two hours of technical difficulties and explaining things, I have to say I did not write the most elegant charter as an example during the workshop. You got me! I'm glad that this workshop participant has a good feel for what is too specific or too broad. I find this so hard to explain because so much of that depends on the context.

But it wasn't terribly important to me to get the level of detail correct. Charters are a place to reflect on your testing and spark conversation. This charter did exactly that.

You can find other questions there wasn't time for during the workshop on the Ministry of Testing Club. The slides from the workshop are on GitHub.