TestBash Manchester 2019, The Last One

I didn't know in September of 2019 that TestBash Manchester was the last TestBash I'd be attending for a while. I've revisited my notes from the workshop Joep Schuurkes and I ran about test reporting several times since then: for a video series, an Ask Me Anything, a forum thread, a 99-minute workshop, and a blog post. I'm just revisiting my notes now from the talks I was able to attend in the couple days after our workshop.


My notes from Pierre Vincent's talk on observability read like a wishlist of features I'd already been asking for in the app I was testing: unit testing, centralized logging, trace IDs for integration debugging, etc. The app was also in a private beta at the time, so data collected from production would be filled with more anomalies than patterns. I'm still discovering how much more influence I have in my new role as Quality Lead to present the impact of testability features and push for their improvement.

Dan Smart and Yong He spoke about failure. The quote "Hey failors, how's the failing?" captures the essence of their talk: expect failure, and celebrate it, together. I get psyched anytime someone distinguishes between a fixed and a growth mindset as they did, which I still find best described in this Marginalian (formerly Brain Pickings) piece.

I see in my notes from Conor Fitzgerald's talk on Kanban that kan = visual and ban = card. In the two years I spent running a Kanban team in the meantime, I can't remember how many times a week (a day? a minute of standup?) I asked "should we visualize that on the board?" Two of my big legacies on my former team were reinforced by Conor's talk: 1) eliminating context-switching, and 2) not waiting until the retro to make changes.

"What does it mean to be responsible for quality?" asks Past Elizabeth to Present Elizabeth from the notes on Gary Fleming's continuous testing talk. It doesn't have a straightforward answer, and exploring this is part of what my job gets to be now. Some of his examples (separating deployment from release, example mapping) are what I get to inspire my whole department to consider as part of their strategy.

Saskia Coplans's talk on security testing really stuck with me. Her ability to turn the unnamable company she consulted for into a joke every time she mentioned it was a level of comedy I can only dream of aspiring to in a talk. Familiarity with the STRIDE model and the OWASP Top 10 gives me a leg up in thinking about how to identify and mitigate risk in our software.

Areti Panou's talk about a deployment pipeline resonates more deeply now, after two years of running and maintaining a pipeline, than it did at the time, when a pipeline was just a glimmer in the eye of a teammate. I held an expectation-setting and reaffirmation workshop about one pipeline in my department last week. Areti's expectations that a pipeline should have a clear purpose, failure criteria, and fix deadlines could help fix the bystander effect I've experienced myself.

The incomparable and unstoppable Lisi Hocke gave a talk about becoming more code-confident that still influences how I approach goals and objectives. Specifically: it's ok to re-evaluate if goals should still apply, and to establish pause or exit criteria to know when to give up. While I can be strong in saying no to what others expect, giving up on something I expect of myself can still be a struggle.

Bill Matthews's talk on technical risks with AI prompted me to add a "write about these times when you tested a machine learning application" card to the backlog for this blog. I wonder if I'll get around to writing that, since it would be hard to explain it better than Bill did that day. He talked about how training data reinforces stereotypes, and how understanding the domain is crucial to determining what's a random failure vs. what's a systematic failure.

Louise Gibbs gave a talk on starting her automation journey with a record and playback tool. That's also what got me excited about automation originally, and I'm eternally grateful to have had the right people steer me towards tests at a lower application level before UI automation became the only tool in my toolbelt.

Suman Bala's introduction to Charles Proxy was a memorable one. She'd hooked up her phone to the projected screen without turning her notifications off, so we got to see all the tweets streaming in in real time! If you're just diving into Charles Proxy, the recording of this talk is a great place to start.

Dominic Kua's talk on bash commands, Wim Selles's talk on Appium, and Henrik Stene's talk on consumer-driven contracts definitely fell into the "these people really know their tool" category. If they were tools I was using, I'd certainly consult their tips and advice.

Emily Bache's talk shared the ideas from the State of DevOps reports and ultimately the Accelerate book. As a team lead and co-host for a testing ensemble, I was able to help empower people across teams and help build a culture of psychological safety. In my new role as Quality Lead, I'm just starting to collect the DORA metrics to help me decide where I should focus my efforts within the department.


What a memorable group of people, location, and journey it was to TestBash Manchester 2019. I hope the upcoming TestBash UK is in the cards for me this coming year, and not only because I still dream about this Indian food I had on the way in and out of Manchester.

Map Out Your Stakeholders

Test reporting is part of a feedback loop. It's the beginning of a conversation, not the end. Knowing who you're having that conversation with allows you to provide those individuals better information for their context.

If you find a big nasty bug, you might report it differently if your audience is a developer on your team who you work with everyday, a developer on another team who you haven't met, or the Head of Product looking to give an important demo. Reporting on the breadth, depth, focus, and impediments to your testing can help your audience guide your upcoming testing.

Joep Schuurkes and I had an activity as part of our workshop on test reporting at TestBash Manchester 2019. I believe he articulated the key idea: if your test reporting depends on your audience, you have to know who your audience is. We had participants map out (with paper and markers) who the stakeholders were for their testing. Some people drew org charts, others drew mind maps.

In the test reporting workshop I held yesterday, we used a Miro board to map out our stakeholders. As examples, I made an overview of how I was thinking about my recent team.

And a version of Dan Ashby's Layers of Influence model, the "shallot" of influence, if you will.

While these are stated with people's roles, doing this for yourself using people's actual names (or names + roles) will help you think about who they are and what they're listening for.

Identifying the audience for your test report allows you to tailor it to the risks they care about. If you're not sure how to tailor the report, present them with something and find out if that's what they want. Even better, share with them that you're trying to figure out how to make your work most effective for them.


Unblocking Your Test Strategy

In my new role as Quality Lead for my department, I get to figure out how to infuse everybody's work with "quality", and also figure out what that means exactly.

One of my colleagues made it easy for me on my second day by coming with a relatively concrete problem: they wanted an acceptance environment for their team. Their team (henceforth: Eager Team) integrated with a chronically overloaded and busy team (henceforth: Busy Team), so they wanted an environment where they could test their stuff together before it went into production. They wanted me to help set that up.

I started my conversation with Eager Team Lead by taking one step back: why did they want this environment? They'd proposed a solution, but I wanted to spend at least a few minutes digging into the problem space with them to hear more about why they wanted this.


Come up with a dream scenario

I asked Eager Team Lead what their dream setup would be for their test automation, and why that was the dream.

Eager Team and Busy Team already had a test environment hooked up to one another. But they both threw whatever they were in the middle of on that environment. Eager Team couldn't count on a stable, usable version of Busy Team's software, and vice versa. Eager Team wanted a place to see what would happen against the production version of Busy Team's code. They wanted to automate all the things they could, and have a place to run that automation.

Identify (and confirm they are indeed) constraints

Unfortunately Busy Team was busy. They wouldn't be able to make setting up an environment for Eager Team a priority in the next few months. I had that impression, and so did Eager Team Lead. They were, after all, Busy Team. But I wanted to make sure that the busyness of Busy Team was a constraint. I took on the action point to follow up with Boss Person about how we could both (1) check that Busy Team was indeed too busy, and (2) how to get this request on Busy Team's long list for the future.

I also dispelled one of the assumptions underlying Eager Team Lead's dream setup: that it was important to test everything, in an automated way, in the ideal environment, or else testing wouldn't be valuable. I explained that it's impossible to test everything. Testing in an automated way would be more likely to reveal known unknowns than the unknown unknowns their team was interested in. And that it wasn't all-or-nothing - every little bit would help.

Choose achievable pieces within constraints

Rather than killing the dream, I identified a valuable first step in the direction of the dream. Eager Team would write down, in English to start, 3-5 things that they want to test using both their software and Busy Team's. They'd show those to their product owner to make sure they were things customers cared about. From there, we could look at whether to build automation, and if so, where to run it. There was that test environment already. We had production, could we use feature flags? Could we keep the data only visible to our employees internally?
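To make the feature-flag idea concrete, here's a minimal sketch (not our actual setup; every name, domain, and flag in it is hypothetical) of how a flag could keep the new integration visible only to internal employees in production:

```python
# Hypothetical sketch: a feature flag gates the new Busy Team
# integration so only internal employees see it in production.

INTERNAL_DOMAIN = "@ourcompany.example"     # made-up internal email domain
ENABLED_FLAGS = {"eager-busy-integration"}  # made-up flag name


def flag_enabled(flag: str, user_email: str) -> bool:
    # Flagged features are only on for internal employees.
    return user_email.endswith(INTERNAL_DOMAIN) and flag in ENABLED_FLAGS


def get_overview(user_email: str) -> str:
    if flag_enabled("eager-busy-integration", user_email):
        return "data from the new Busy Team integration"
    return "data from the existing behavior"


print(get_overview("tester@ourcompany.example"))   # sees the new integration
print(get_overview("customer@elsewhere.example"))  # sees the current behavior
```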

I knew I'd hit a nerve when Eager Team Lead said "Oh, we can just start iterating over this!" Because of course, the software itself is not the only thing you can build in an iterative way. Your test automation can also mitigate risk, confirm assumptions, and provide value along the way.


So how'd it go? I confirmed Busy Team's busyness, and got more details on how and when to add this request to their list. I'm following up with Eager Team next week to see where they are in identifying valuable scenarios, or if I should jump in there too.

But wow, what a feeling to be able to lift the weight of "I need a thing I don't know how to build and don't think I can ever get" off someone's shoulders and replace it with "I know what to do next and it's achievable."

Stay tuned for more quality leading to come. 😎

Cutting People Off

Impatience is a virtue.

If impatience is solely your own, sorry, but you're the asshole. But if impatience is shared, saving your colleagues from a tiresome conversation will make their day.

Notice that a topic should come to a close

When you listen actively, you'll notice when something has already been said. It is much easier (particularly when remote) to give up, zone out, and think about something else. Don't be that person.

Engage with your colleagues! Save yourself and others from the perpetual purgatory that is an ineffective meeting. Pay attention.

Decide whether you are the right person to close a topic

There will be settings where you are the right person to decide if a topic should come to a close: in a small group of relative equals, when you're the appointed facilitator, or when you're in some other position of power relative to the individuals or the subject matter. Recognize when you're not in the right position to change what's happening in the moment, and skip to Follow-up for more.

If the group already has expectations about what is or isn't on topic, your interruption should be enough. If it doesn't, or you want to take this opportunity to set a new one, interrupt with a meta-question.

How to deliver this message

I'm not always in the best position to decide whether now is the right time for a topic, so I tend to deliver topic-closing messages as questions:

  • I agree with your point about Thing B, but can we come back to Thing D?
  • We agreed that Person A is going to follow up with Person C, is there anything more that we need to discuss about Thing B right now?
  • We could discuss Thing B more in this group, but since we're missing Person C's crucial input, should we?
  • I've captured what Person A said here in the notes. Was there anything I missed?
  • I think Person A already said Thing B, shall we move on?
  • It sounds like we're still discussing Thing B after we just agreed not to, am I understanding that correctly?
  • Can we leave it there for now?

If you're not sure if it's the right time for a question, try a meta-question:

  • Is now the right time to decide if we should keep talking about Thing B?
  • Are we going to be able to come to Decision D today?
  • Did we decide on a next step towards Thing B, or is that what you were describing?

Give the group a chance to decide, but don't be afraid to hold them to their decision. These are not questions:

  • We've agreed to that. Let's move on.
  • That's all we needed Person A for, let's let them go.
  • That's all I have for you, I'll let you go.
  • Thank you for your input/time.
  • I understand now.
  • Got it, thanks.

Follow-up

Get feedback on your behavior. This is how you learn.

A retrospective or 1-on-1 would be a good place to find out if the balance was right between gathering/sharing information and staying on topic. Asking someone to watch out for this particular behavior ahead of time will allow them to give you better feedback afterwards.


A colleague once declared me the "queen of cutting people off" because I did so very politely. I have a compliment sticky note for "ruling refinement with an iron fist." We should all be so lucky to have our work appreciated this way.

For more on meta-information, see this post. For more on setting agendas and preparing for meetings to make them effective, see this deck.

That "I Did It!" Feeling

I moved into a different role at work this week. I handed off my former team to the new team lead with a final 1-on-1 (and coincidentally performance review) for each team member. Each of them has a different variety of skills, motivators driving them, and awareness of either. This blog post focuses on just one team member.

"I did it!"

One team member is really driven by that "I did it!" feeling. They're early in their career. Both the product and the company are new to them. They spend a lot of their time pairing, asking for help, or floundering while wondering if they should be pairing or asking for help. Every 1-on-1, they'd report feeling that they hadn't learned or accomplished anything. They weren't getting that "I did it!" feeling.

But they were doing a great job. They were making progress in all the different technologies our team uses (Mendix, Docker, OpenAPI, pytest, GitLab pipelines, etc.), learning as they went. They were able to accept feedback to course-correct when necessary. They knew they were learning a lot, but this alone wasn't motivating enough for them.

Forces within our control

As a team lead, part of my job was to create focus for my team. There was a cloud of possibilities and priorities the people around and above us struggled to make clear. I wanted to create an environment where my team members could still get that "I did it!" feeling anyway. This Liz and Mollie comic captures it nicely.

A great manager holds an umbrella to protect the team from ridiculous requests, unclear priorities, massive uncertainty, unnecessary meetings, and last-minute chaos; and fosters clear expectations, defined roles, work-life balance, and stable, achievable goals. (@lizandmollie)

Amid this uncertainty, my team member requested clear steps for what they should be doing next and how to get promoted. I started by sending them the job description they hadn't seen for their own job. This helped set clear expectations and define their role. I spent time in our 1-on-1's finding out more about what was on their mind or dragging their attention away from work.

I took the time to reinforce the importance of a work-life balance. We started every refinement meeting with a review of upcoming time off, complete with peer pressure from me to take more of it. This allowed us to only refine the amount of work we could accomplish in the upcoming period, and set expectations for what wouldn't be done. This helped scope and clarify each team member's job.

I tried to give all my team members the "I did it!" feeling by talking about what we accomplished at the smallest scale in standup, a slightly larger scale in retro, and on the largest scale in the meeting with the whole unit. But that wasn't helping this particular team member.

The thing that finally gave them that "I did it!" feeling was: a Trello board for their own personal career development, with To Do, Doing, and Done columns.

To Do

We identified a clear, actionable step to take for a few technologies, job description bullet points, and conversations we'd already been having in our 1-on-1's. Some items would be accomplished during the course of our regular work on user stories. I set a clear expectation about the other items: they were for work time - downtime while waiting for a response, crafting days, etc. They were not for personal time.

Doing

I explained that it's better to limit the number of items in this column at a time. Deciding what to leave aside allows you to focus on what's in front of you. My team member wanted to be an expert in all of our different technologies at once. I reset this expectation: get a little better, one at a time.

Done

I gave them homework to fill in the Done column. They took time to list things they had learned and accomplished in the previous months. Scrolling through the Done list got them pretty close to that "I did it!" feeling. Taking a moment to reflect during our 1-on-1's helped give them that feeling. But they weren't getting that feeling right away. They needed to celebrate their accomplishments as they were happening, to keep up the motivation and momentum.

I did what was possibly my best management move for this person: I threw confetti.

Trello has a feature where if you add the confetti ball emoji 🎊 to the title of a column, moving an item to that column throws a little confetti around the item. It's very cute, and it finally gave my team member that "I did it!" feeling.

Setting expectations around the feeling

In the handoff to the new team lead, I explained this need my team member had, the ways I'd tried to meet it, and the confetti ball that finally worked. I pointed out that the need for the "I did it!" feeling can be found in other ways. The important thing for the team lead is not a particular action, but checking in with the team member about the feeling. I wanted to leave them space to take a different approach, so I used the "Mary had a little lamb" heuristic to explain what a different approach should include.

I did it!

The team member wanted to point to something they did. Without pairing, without asking a bunch of questions, they wanted to point to something and know that they were able to accomplish it themselves.

I did it!

The thing had to be done. While some skills and knowledge transfer could be months or years in the making, they needed something to come to a close.

I did it!

The new team lead and the team member get to decide together what's on the list, what it is. Growth and comfort in skills may not be immediately visible to the individual in the day-to-day grind. Setting aside time for individual reflection or recognition at the 1-on-1 would help.

I did it !

This is the confetti ball piece of the puzzle. The celebration. It may feel silly, or gimmicky, but it finally got this person that satisfaction they were looking for out of their job.


Reflection

  • When managing, do you dig into what a person needs to have clear expectations, defined roles, work-life balance, and stable, achievable goals?
  • When a team member asks you for an outcome, do you think about why they're asking you for that?
  • When you do handoffs, do you describe the actions you took or the needs they were serving?

Give Them the Fish, Then Teach Them to Fish

A colleague came to me with a request the other day. I didn't handle it quite how I wanted to. The request went something like this:

"I remember you were on the team for Big Scary product a couple years ago. Do you know if I can delete this List of Stuff from Big Scary product, and if I can automate that?"

I did not know. It was two years ago. Big Scary product had gotten Bigger and Scarier in the meantime.

But I knew where my team linked to our API specs from our customer-facing documentation. I applied the same principle to discover where Big Scary product API specs were. I looked at those specs and found the List of Stuff in a response body for an API call, but noticed that my colleague wouldn't have the ID the request required. So I looked at the API specs from a Bigger Scarier product. Combining a call from there would get the ID Big Scary product needed.
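Concretely, the chain I had in mind looked something like this sketch. Every URL, field name, and token here is invented (the real ones live in the linked specs), but it shows the shape of the automation my colleague was after:

```python
# Hypothetical sketch of the chained calls: fetch the required ID from
# Bigger Scarier product's API, then use it to delete the List of Stuff
# via Big Scary product's API. All endpoints and fields are made up.
import requests

BASE = "https://api.example.com"
HEADERS = {"Authorization": "Bearer <your-token-here>"}

# 1. Bigger Scarier product: look up the ID that Big Scary's request requires.
response = requests.get(f"{BASE}/bigger-scarier/widgets", headers=HEADERS)
response.raise_for_status()
widget_id = response.json()[0]["id"]

# 2. Big Scary product: delete the List of Stuff for that ID.
response = requests.delete(
    f"{BASE}/big-scary/widgets/{widget_id}/stuff", headers=HEADERS
)
response.raise_for_status()
print(f"Deleted the List of Stuff for widget {widget_id}")
```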

I was short on time, so I answered the question directly. I said it was possible, and possible to automate, and provided the links to the specs for both products. My colleague thanked me, and left the conversation able to solve their problem quickly.

I gave them the fish. What they learned from that interaction was: Elizabeth knows where to find stuff. I can come to her when I don't know how to find stuff and she will find it for me. That was the wrong lesson.

Give Them the Fish, Then Teach Them to Fish

A better lesson would have been: I know where to look for things. Elizabeth will give me the tools to know where to look, and empower me to do so. Now that I've got the access and seen it done once before, I can take a few more steps before I reach out to Elizabeth the next time.

Here's what I could have done to get this colleague there:

  1. Explain where all the API specs live: I could have explained my thought process for finding the API specs, shown how I navigate using the headers and Ctrl + F on the page, and compared the requests and responses to what's needed.
  2. Update them about who's on the team for Big Scary product now: I could have listed the names of a few team members I knew were working on Big Scary product, or pointed my colleague to the Slack channel for the whole team.
  3. Introduce colleague to a member of the team for Big Scary product: Since this colleague was a tester, I could have started a direct message with them and the tester on the team for Big Scary product, copying the question from the DM I first received.

What If I Only Teach Them to Fish?

What would have happened if I'd skipped what I'd done, and withheld the links to the API specs?

I wouldn't have been able to guarantee that my colleague was in the learning zone. From what I knew about their situation, they were accumulating a lot of data that they wanted to delete. I didn't know what other pressures were coming from the team, but the need to automate it suggested it was a bigger problem than just a few extra entries in a database.

Giving my colleague the fish, and then teaching them to fish, relieves that pressure to deliver, and helps open them up to learning and growing.

Tell Them What You're Doing

Some colleagues are distracted, or dense, or not able to take away meta-information from a conversation along with the information. They may stop listening after they have the answer.

Combat this by sharing your motives. Remind them that you too are busy. Explain that your goal is to empower them. Encourage them to reach out to the team working on Big Scary product, so that those team members can also get good at knowing where to look and answering colleagues' questions. Tell them you're happy to help them again, but you'll expect more details of what they tried first. Then hold them to that.

The best lesson is: I want to take a few more steps next time I have a problem, because I know I can, and Elizabeth expects more from me.

SoCraTes UK June 2021

I had a long drought between when I was last able to just attend a conference (rather than running a session or organizing) and SoCraTes UK in June. A year I think? And what a welcome rain it was.


Heloá hosted a session on meditation. Many of the symptoms she described as her motivators for picking up the habit were ones I recognized from a talk I did about introversion years ago. And the mindset she described (trying over succeeding, recognizing reactions without trying to impose a particular one) echoed back to a conversation on stoicism that Sanne Visser hosted at the first TestCraftCamp. I was completely convinced by the benefits she described ("People say I sound calmer. I'm breathing more deeply.") but I haven't built a habit around it yet. C'est la vie.

I went to two different sessions Maaret Pyhäjärvi hosted. (Does this make me a groupie?) The first, an ensemble testing session, reminded me that the most valuable exploratory testing bugs come when you understand enough about the business and the architecture to know what matters to some person who matters. The second session was about scaling workshops (and really, herself). I joined late after the lightning talks ended, but still helped plant the seed of what I and SoCraTes can do to bring more people into learning about good software.

I selfishly hosted a "help me out here" session in the afternoon. As I predicted, the testers extraordinaire Maaret and Lisi Hocke were exactly the people I needed to give me perspective on my current and evolving role at work, though the other attendees contributed as well. I came away with more questions than answers, which I'm still mulling over and digging into weeks later. I look forward to sharing more about the shape of things as they come to fruition.

Alexander (What is your last name?? Sorry!) held a session on habits you've developed or changed during the pandemic. How lovely it was to be in a small conversation trading notes about remote music lessons and holding remote workshops. It was exactly the kind of hallway conversation I'd be looking to fall into at a flesh-and-blood conference.

I didn't write down who gave the lightning talk about saying no, but thank you. I rarely (never?) regret saying no, but I needed that extra push and specific language to have those "If you want me to pick this up, which of these things should I be putting down then?" conversations I've had lately. I have "Saying no commands respect" in my notes, and I guess I need a throw pillow of that too.


SoCraTes reinforces for me how a welcoming, inclusive open space is done. It's through explaining what an open space is for people who haven't attended. It's who's on the organizing committee. It's reminding people to take time off from the sessions. It's about ending up in the "Rose Garden" at the same time as Eva Nanyonga, who's working to improve the dispatching of home healthcare workers in Uganda, and finding out you sparked her curiosity and delight in the exploratory testing session earlier in the day. It's about providing a subsidized ticket option to make the event accessible to more people. It's in offering advice to the hosts at the start, such as:

  • ask for help facilitating
  • kick off the discussion
  • include everyone in the conversation

Thank you for holding this space. It's got me excited to host the open space that TestCraftCamp has evolved into, Friends of Good Software (FroGS Conf) in September.

Complete the Main Quest First

Recently, I made an outline for a tester (who was still onboarding) for what kinds of things to test on a new API endpoint we added. They explored, wrote a bunch of automated tests to capture their work, and came back with a list of interesting and good catches in the error responses. My first question in our debrief was: did you try a successful response? They hadn't. I sent them back to tackle that too.

Because a successful response is the first thing our product owner is going to ask about. That's what we'd want to show off at the review meeting internally to demonstrate the new API endpoint. That's the first thing the customer is going to try. They're going to copy the request from our OpenAPI specification, paste it into Postman (or the tool of their choice, but our customers so far have been using Postman), and see if their credentials will get them the response that matches the specification. These stakeholders share a common concern, and that's the risk we should be mitigating with testing. First.

Complete the main quest first.

Complete the main quest first. Come back to the side quests.

A customer had asked for this API endpoint to be added. If we'd tested the happy path first, we would have had the option of releasing the API for the customer to use. The risk of discovering a successful request wouldn't yield a successful response was relatively low in this case, since our developers tend to try one happy path themselves.

But what if the main quest had required a lot of setup, explanations to build knowledge and context for the onboarding tester, or yielded an issue? I'd done a risk-based analysis of what all to complete as part of our definition of done for this story. But I hadn't shared my approach to completing the main quest first, so the tester did what testers do, and went on a hunt to find weird stuff.

Note down and follow up on weird stuff; do not get distracted by it

Software will break in all sorts of ways. The more time and curiosity you have to dig into it, the more you'll discover. But are those the most important things?

In this API, the tester discovered that if you paste 10,000 characters into a field that's meant for a UUID, you get a 400 response. But did they try a regular old UUID first? What if they get a 400 response no matter what they put in that field, because the field name in the specification doesn't match what's in the code? Is trying 10,000 characters the first and biggest risk they have to face when presenting this API to a customer?

I'm not saying don't try 10,000 characters. I love that shit. But decide if it's a risk you care about first. If you don't care about the outcome, don't test it. Don't make busy-work for yourself just to fill the time.
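If it helps to see the ordering in code, here's a minimal pytest sketch. The endpoint, the seeded UUID, and the status codes are all hypothetical; the point is that the happy path comes first, and the 10,000-character side quest only comes after you've decided you care about its outcome:

```python
# Hypothetical sketch: main quest first, side quest second (and only
# if it's a risk we've deliberately chosen to care about).
import requests

BASE_URL = "https://api.example.com"                # made up
SEEDED_ID = "3fa85f64-5717-4562-b3fc-2c963f66afa6"  # a UUID we know exists


def test_happy_path_first():
    # Main quest: the request a customer would copy from the OpenAPI
    # spec returns the documented successful response.
    response = requests.get(f"{BASE_URL}/items/{SEEDED_ID}")
    assert response.status_code == 200


def test_oversized_uuid_field():
    # Side quest: 10,000 characters in a field meant for a UUID should
    # be rejected cleanly, not crash the service.
    response = requests.get(f"{BASE_URL}/items/{'a' * 10_000}")
    assert response.status_code == 400
```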

Make side quests a conscious choice

Before you start throwing 10,000 characters at your API, talk to your team. Your developer can probably tell you if they never built something to deal with that situation. Your product owner can tell you they'd rather have it to the customer sooner. Your data analyst can tell you if there's already longer stuff than that in the database, or if you should be trying Japanese instead.

Make side quests a deliberate choice. Share them to increase their value or figure out who on the team is best-suited to execute them.

Recognize when the quest is a journey, not a destination

Throwing 10,000 characters at an API may be a way to start a discussion about the speed at which responses are returned. It might be a way of showing your favorite random text generator to your fellow tester. It might be an exercise at an ensemble testing session, where everyone can practice pausing before executing an idea to describe the expected behavior first.

Quests can be valuable in ways that are not directly related to finishing the quest.

Note: I got asked recently if I use the word charter much with non-testers. I don't. Try reading this again but replacing every mention of "quest" with "charter".

Praise the Messenger

I hear no a lot. No, that's not in scope for this story. No, it's not worth fixing that now. No, that's not risky enough for you to spend time testing. Hearing no on a good day sparks my creativity and pushes me into more valuable directions. On a bad day, it makes me wonder why I should keep going.

Saying no in a good way takes practice. I've honed this skill with wisdom from some of the best. Elisabeth Hendrickson tweeted a list of some of the different ways she has of saying no. It reminded me of the sassy replies I send to recruiters and speaking engagements in which I have no interest or connection. If you're looking for a style that builds bridges rather than burns them down, I'd recommend Elisabeth Hendrickson's examples, not mine.

At Let's Test in Sweden in 2016, Fiona Charles gave a workshop on Learning to Say No. We practiced different ways to say no, complete with supporting arguments and steadfast determination tailored to the real-life situations participants brought to the workshop. (In a demonstration of having embraced the message of the workshop, we disbanded halfway through the allotted time. I spent the rest of the afternoon on a memorable bike ride along the Swedish coast.) The strongest revelation from the group was: we have more power than we think we do. There may be consequences to saying no, refusing a particular task, or turning down an opportunity, but it's often within our power to do so. Saying no effectively makes room for you to decide how to spend your time.

At Agile Testing Days in 2018, Liz Keogh gave a keynote on how to tell people they failed (and make them feel great) that culminated in a surprising conclusion: don't. Don't tell people they failed. Use positive reinforcement to encourage the behaviors you want to see, and the others will fall away.

We had a tester leave the company recently. They'd set goals with their manager that directly opposed the test strategy I helped them shape. Bugs they reported were shot down, postponed until future stories, or grouped together as issues to be fixed eventually someday. They were facing a lot of "no," and as far as I could see from a different team, not a lot of yes. How long would you last in this situation?

I also have a tester reporting to me at the moment. They've been catching tricky things, finding bugs, and preventing problems. I want them to keep doing what they're doing. I get that a developer's first reaction when they hear about a bug might not be "Oh wow, thank you so much for this pile of questions and work!" I have to give them that positive reinforcement so they keep reporting and digging into issues, so I call it out in their 1-on-1. They thank me, and say it keeps them going.

Don't shoot the messenger. Praise the messenger.

Who are you saying no to frequently? Is your no driving them away, or encouraging them? Do you need to say no?

If you were waiting for a sign - this is it.

Delivering Information vs. Delivering Meta-Information

One of the first testing skills I built was bug reporting. I practiced narrowing down the steps to reproduce. I told my developers what was happening now and what should happen instead. I learned to include where the issue was occurring, so my developers would stop closing my bugs with the Works for me resolution in Trac.

In 2013, Paul Holland taught me to tell the story of my testing. That is, meta-information about my testing. Sure, I still reported the outcomes: these things didn't work, these things did. But I also reported:

  1. how I tested the product
  2. how good that testing was

How long did it take me to test? What was difficult about my testing, made it difficult to get started, or slowed it down? What wasn't I able to test? The reason we tell those stories is to uncover product risks and process risks, and to get help from those around us to solve the issues.

This is meta-information about my testing. It's what I'm seeing as I'm doing it, one level removed from the actual testing itself. (Look up testopsies [testing + autopsy] for more on gathering meta-level information about your testing.)

Building the skills around meta-information is important for doing any job well. These skills include:

  1. identifying when you're communicating at the information level, or the meta-level
  2. switching between and keeping track of whether you're communicating at the information level or meta-level
  3. naming the different levels, and bringing the people you're communicating with on this level-switching journey

These recent examples stick in my mind.


Addressing the question vs. addressing the miscommunication

I was interviewing a candidate for a Software Development Manager position. For about a half hour, every question proceeded more or less like this:

  1. I'd ask a question.
  2. The candidate would speak for a minute or a few, without answering the question.
  3. I'd rephrase the question, be more specific, or add more details to help the candidate understand the question[1].
  4. The candidate would speak again, again not answering the question.
  5. I'd try once or twice more, before moving on to the next question.

I realized early on that this was happening. I consider it an essential skill to be able to answer a question directly, or identify when you're having trouble doing that, as a Software Development Manager. I decided I'd reject the candidate, but realized I'd still have to get through the rest of this hour with them.

My colleague, my co-interviewer, took a different approach. He called out the miscommunication problem to the candidate! He said "Elizabeth is asking you questions, and you're not answering them. Did you notice that? Why is that?" It was direct in a way that made me uncomfortable, and that I would have trouble doing with someone I just met. But he moved the conversation to where it needed to be: the meta-level. The candidate struggled to answer this question as well.

I jumped in to ease the awkwardness. I offered up my theory: the candidate's responses weren't answers, they were the candidate agreeing with the premise of the question. My colleague completely agreed, thanked me for my contribution, but politely redirected the burden of this problem onto the candidate. The candidate continued to struggle without acknowledging the issue or the awkwardness for the rest of the interview. It was a clear decision for both me and my colleague: we would not be hiring this person.

Difficulty scheduling a meeting vs. telling them that

I was asking for more details on the planning of a project in standup. My developer described how hard it was to find time with a particular developer on another team. They said the project was going to take weeks to scope instead of the days they thought it could take, if only they could find time with this person. The project seemed important, already had a deadline (or really, sadline[2]), and waiting until the next free spot on this person's calendar wasn't working.

I asked my developer "Have you given them this feedback directly?" I suspected that only the information had been communicated: the following meeting would have to wait a week. My hunch was correct. I suggested my developer try giving this person the meta-information: about how hard they were to schedule, how this was cutting into the time we had to work on the project once it was scoped, and how that threatened the deadline. Imagine how differently this person could react and rearrange what they were doing, or share their relative priorities so we could adjust our expectations, when given this meta-information. This was last week, so I don't know yet how this story ends!

Adding testing details to the story vs. why I'm asking for them

There was a user story about being able to send metrics from our application to another application. I was picking up the testing on the story. My developer said they'd confirmed in the other application that our metrics were sent. They'd only sent a success, so I was planning on using the same setup as they did to look in the other application, but see what a failure looked like instead.

But that's not what I told my developer. What I said to them was: "Add enough details to the ticket so I can test it." It turned out, we had different ideas about what enough meant. First they added a URL. I followed the URL and it went to a blank page with a header. I said the same thing again, "Add enough details to the ticket so I can test it." They wrote down the first button they had to click on. I asked a third time. They added one more detail, which still didn't tell me enough. But I got tired of asking. I tried all the options in the application, Googled to figure out what SQL query I needed, executed it, triggered a variety of different failures, and confirmed that they were received.

Later, I explained to my developer the difference between my expectations and what they wrote. They explained how their expectations were also violated: they'd had to reach out to the team from the other product, figure out what to do, and poke around in the product. The details they left me were all they'd received to eventually figure it out themselves. To them, just the URL going to a blank screen was enough[3].

I realized then that I had left out a crucial piece of meta-information: the reason I was asking for the details. I wanted to skip that poking around time. I had ten other tasks this day, and expected this to only take 20 minutes instead of the few hours it ended up taking. I was hoping to benefit from the poking around work my developer had already done. I was expecting a lot of context, and my developer was expecting to only need to share a little bit of context.

Once I shared this with them, they understood where the gap was between my expectations and theirs.


The suggestion about scheduling difficulties, that was an easy conversation to have. The other two were quite difficult for me. They took patient, active listening. I had to keep asking myself if I'd been clear enough with my expectations. They definitely made me sweat.

Moving between information-level and meta-level communication is a skill. It takes time, failure, reflection, and practice to do it. Doing it well is a leadership skill. Crucial Conversations helped me identify when it's worth working on the relationship with a person and investing in these conversations. You don't have to be managing people (as I am currently) to be building or using this skill.

What meta-level conversations are you avoiding? Are there people where you find you're only able to communicate on an information level? Have you tried communicating with them on a meta-level? What would happen if you told them that's what you were trying?


  1. In low-context cultures like the United States and the Netherlands, the deliverer assumes the burden of the miscommunication, not the receiver. There's more about this in The Culture Map.

  2. A deadline is when somebody dies. Most things we call deadlines at work are just sadlines, in that people are alive but just sad when they're missed. Read more about it in Liz Keogh's blog post.

  3. My colleague is coming from a high-context culture. I'm coming from a low-context culture. You can read more examples of this in The Culture Map, but it's the kind of non-fiction book that could have been a blog post, so just read the rest of this blog post instead?