Try Asking Different Questions

I'll never know everything, but I love asking questions to get to know more. Obviously, the same question applied in different contexts will yield different results. A couple questions that worked exactly as I'd wanted with engineering teams really fell flat with UX and product. And a couple meta-level questions allowed openings I wouldn't have imagined.

When I asked the wrong question

Joining standups

Part of the charter for my role is to help improve processes and communication around the department. I'd attended each of the engineering teams' daily standup meetings, and I wanted to do the same for our designers. My intention was something everyone would agree to, but how I asked made it difficult for the designers to see that:

  • "Do designers all meet once a day? I've joined all the other standups; may I join yours?"

This might have given an impression that I was trying to inflict help or take over, rather than listen and see if my expertise was needed. Even following up with more details about wanting not to duplicate their work didn't seem to clarify my intention.

Ultimately, speaking to another colleague and starting with my intention helped me understand what space I could be included in. There was a weekly meeting the engineering and product managers attended, where deeper design sessions were planned for that week. I would have never known to ask:

  • "Can I attend your weekly meeting with the managers?"

because I didn't know there was a weekly meeting. Sharing my intention might have gotten me there sooner.

End-to-end flow

I'd made a big, complicated diagram of how our users moved through our connected products. I'd imagined it could trigger discussions about all sorts of things, but the two things I wanted it to give the most perspective on were:

  • "Are we testing the end-to-end flow?"
  • "Are we capturing metrics on the end-to-end flow?"

Describing the specific actions a user could take and how the products were interconnected showed that, for the development teams, it was hard to know what the ends were. I showed the diagram to most of the development teams and asked the question:

  • "What could users do that I haven't captured?"

They each thought of things inside the little bubble of their work. But asking the same question to a product owner yielded completely different results: more questions, such as:

  • "Who are our users? Where do I see them here?"
  • "Can we separate out one flow for one use case for one user?"
  • "How are people onboarded? Where do our users start?"

Those last two were completely left off my diagram, since they were outside the scope of what our development teams would build or test for a particular user story. But indeed, the product owners were right that onboarding was part of the end-to-end flow, and thus should be included in how we're looking at the user's journey.

After a few more discussions, I identified an even better question for the product owners:

  • "How are you thinking about the whole journey a user takes?"

When I asked the right question

Is this the right agenda?

I started a meeting with a group of about 12 people with the question:

  • "Is this the right agenda?"

I wanted to make sure the needs that I could see from my position matched what was most important to them. There were five items on the agenda. When we got to the third one, someone determined that there was another way we could approach it, and we changed the agenda right there. Offering the opportunity at the beginning allowed for the team members to take ownership of their time.

What fell through the cracks?

Retrospectives I've attended and facilitated have often had a "What went wrong?" kind of section. I asked a different question in a session about test strategy, hoping to uncover something slightly different:

  • "What fell through the cracks?"

Leaving enough silence let people ponder their work in the recent weeks. People named a few product bugs logged to start. After even more silence, some process bugs came to light as well. It was interesting to see how a slight change in the phrasing of the question uncovered things the retrospectives hadn't.

What questions do you tend to ask to the same people in the same way? What might you discover if you change the way you're asking the question? Can you leave more room for thought and contemplation in your conversations?

Agile Testing Days 2021

Amazingly, a week physically away from work and present with other human software tester beings was refreshing. And I didn't contract COVID — shoutout to vaccines! Note to self: bring honey, because your voice will be tired from talking through an N95 mask in a loud room.

Thank you so much Agile Testing Days for the honor and privilege of serving on your program committee this year, and the straight-up spoiling that comes with attending a conference and not speaking. I've said for years that I enjoy the learning more than the being-in-the-spotlight. This year's edition allowed me to do just that.

1. Dagmar Monett - Coming to terms with intelligence in machines

Nobody can agree on what intelligence is; it's context-dependent and culture-bound. Human-level AI is not inevitable!

2. Klaartje van Zwoll - How therapy made me a better teamplayer

All needs are valid! By making your needs specific, it's easier for others to meet them. Journaling can help close the gap between when you experience a thing and when you analyze it.

Boundaries are high-quality information that people need to love you best. If someone crosses a boundary: specify the behavior, tell them the story of how it made you feel, and describe both the behavior you'd prefer and how that would improve things.

After you say no, sit in the discomfort of the silence instead of offering excuses. Klaartje also did a great explanation of ask culture vs. guess culture which I've filed as American/Dutch vs. British/Belgian in my head.

3. Alex Schladebeck - Unit Testing and TDD from the tester perspective

Lots of decisions get made when we're writing code that we never talk about. Writing unit tests for legacy code is hard! Being curious (instead of incredulous) gets you (and your pair) to learn more and have better conversations.

4. Maryam Umar - The Power of Coaching for Leading Test Teams

Questions Maryam asked the audience that are worth a bit of reflection:

  • What are my career goals?
  • What do I want to change personally?
  • What do I value? What do I enjoy?
  • What outcomes are within my control?

5. Gitte Klitgaard - The choice is yours

Choices still happen even if they're not active choices. Options expire. When you make a commitment at the last responsible moment, remember the responsible part. There's space between the stimulus and the response where you can choose how you react. Be the captain of your own ship! We have so much more influence than we think. We can't have everything at once.

6. Raj Subrameyer - It is time for Toxic Leaders to come out of their closet

Masculinity is toxic. Moving on.

7. Jutta Eckstein - Agile Comes with a Responsibility for Sustainability

Software consumes energy: take responsibility. Change your definition of done. Shift the question we ask in and around the product. Testers are the right people to start doing that!

8. Zeb Ford-Reitz - What's a Quality Dojo?

Zeb's quality dojo: low-risk, low-commitment, high-safety, and long-running. The learning is the product. (The product is the friends you make along the way?) If something is unclear, you need to ask. If you're hoping for a particular outcome, it's not so much an experiment as a bet.

9. João Proença - Limitless within our boundaries

Having a lot of choices is not necessarily better than only having a few choices. Making decisions all the time will lead to decision fatigue. Embrace the constraints that life gives you. Charters and time-boxes are constraints for exploratory testing. Set up the right constraints to be successful:

  • What is the goal?
  • What are the risks you're mitigating?

10. Lena Wilberg - Delivering fast & slow

Be aware of what is the worst that can happen. Know Fiona Charles's 10 Commandments for Ethical Software Testers. Know your personal and career risk tolerance.

11. Bruce Hughes - How to be an Ally to Non-binary Folk in Tech

Never again should you have to explain or justify your existence. Listening is a beautiful skill. Labels are for communicating with other people. You don't owe anyone your time!

12. Lisi Hocke - Growing an Experiment-driven Quality Culture

Include a hypothesis in your experiments. Identify exit criteria, whether or not you succeed. Are the teams ready, eager, and committed? What is the goal? Tackle the unknown, automate the known. Build on people's curiosity. Metrics work locally, temporarily, in context, at the grassroots level. What information do we need for each context? Raise awareness about options.

13. Dr. Karen Holland - Food for Thought

Mentally healthy people can cope with the normal stresses of life. Healthy diets allow us to cope better. Deficiencies cannot be fixed by food alone.

14. Vera Baum - The Tester's Learning Toolkit

Experts produce extraordinary results over a long period of time, but only come about after deliberate practice for four hours a day for ten years. Experts should analyze intuitive reactions. Becoming an expert is not everyone's goal! Only generic knowledge is transferable. Training wheels are holding you back — you can learn from your failures. Reflection is key.

15. Vernon Richards - What does the 'Coach' in 'Quality Coach' mean?

Reward structures can promote anti-patterns. Be comfortable with silence. Stay in the present. Notice how they're saying something, and how they're feeling about what they're saying. Decide when to give them the answer.

I have the urge to name-drop all my friends here, but also, who cares? This isn't a popularity contest. Just look at Twitter to see who was there. I'm grateful to everyone who I was able to give a suspicious eyebrow across the table to, everyone who shared commentary during sessions, everyone whose conversations caused me to skip a session, everyone who patiently waited for me to poorly construct and pronounce Dutch sentences at them, everyone who thought it was fine for me to have a mask on, and most especially, whoever left a piano unlocked. Thank you, and please leave the grand piano unlocked next year.

I also have the urge to apologize for all the exclamation points in these notes, but I regret to inform you that (1) they do reflect my actual enthusiasm over hearing these messages delivered directly from someone's mouth into my ears, and (2) I look forward to the day when our writing can reflect how we'd like to communicate instead of how the patriarchy expects us to.

TestBash Manchester 2019, The Last One

I didn't know in September of 2019 that TestBash Manchester was the last TestBash I'd be attending for a while. I've revisited my notes from the workshop Joep Schuurkes and I ran about test reporting several times since then: for a video series, an Ask Me Anything, a forum thread, a 99-minute workshop, and a blog post. I'm just revisiting my notes now from the talks I was able to attend in the couple days after our workshop.

My notes from Pierre Vincent's talk on observability read like a wishlist of features I'd already been asking for in the app I was testing: unit testing, centralized logging, trace ids for integration debugging, etc. The app was also in a private beta at the time, so data collected from production would be filled with more anomalies than patterns. I'm still discovering how much more influence I have in my new role as Quality Lead to present the impact of testability features and influence their improvement.

Dan Smart and Yong He spoke about failure. The quote "Hey failors, how's the failing?" captures the essence of their talk: expect failure, and celebrate it, together. I get psyched anytime someone distinguishes between a fixed and a growth mindset as they did, which I still find best described in this Marginalian (formerly Brain Pickings) piece.

I see in my notes from Conor Fitzgerald's talk on Kanban that kan = visual and ban = card. In the two years I spent running a Kanban team in the meantime, I can't remember how many times a week (a day? a minute of standup?) I asked "should we visualize that on the board?" Two of my big legacies on my former team were reinforced by Conor's talk: 1) eliminating context-switching, and 2) not waiting until the retro to make changes.

"What does it mean to be responsible for quality?" asks Past Elizabeth to Present Elizabeth from the notes on Gary Fleming's continuous testing talk. It doesn't have a straightforward answer, and exploring this is part of what my job gets to be now. Some of his examples (separating deployment from release, example mapping) are what I get to inspire my whole department to consider as part of their strategy.

Saskia Coplans's talk on security testing really stuck with me. Her ability to make the unnamable company she consulted for a joke every time she mentioned it was a level of comedy I can only dream of aspiring to in a talk. Familiarity with the STRIDE model and the OWASP Top 10 gives me a leg up in thinking about how to identify and mitigate risk in our software.

Areti Panou's talk about a deployment pipeline resonates more deeply now, after two years of running and maintaining a pipeline, than it did at the time, when a pipeline was just a glimmer in the eye of a teammate. I held an expectation setting and reaffirmation workshop about one pipeline in my department last week. Areti's expectations that a pipeline should have a clear purpose, failure criteria, and fix deadlines could help fix the bystander effect I've experienced myself.

The incomparable and unstoppable Lisi Hocke gave a talk about becoming more code-confident that still influences how I approach goals and objectives. Specifically: it's ok to re-evaluate if goals should still apply, and to establish pause or exit criteria to know when to give up. While I can be strong in saying no to what others expect, giving up on something I expect of myself can still be a struggle.

Bill Matthews's talk on technical risks with AI prompted me to add a "write about these times when you tested a machine learning application" card to the backlog for this blog. I wonder if I'll get around to writing that, since it would be hard to explain it better than Bill did that day. He talked about how training data reinforces stereotypes, and how understanding the domain is crucial to determining what's a random failure vs. what's a systematic failure.

Louise Gibbs gave a talk on starting her automation journey with a record and playback tool. That's also what got me excited about automation originally, and I'm eternally grateful to have had the right people steer me towards tests at a lower application level before UI automation became the only tool in my toolbelt.

Suman Bala's introduction to Charles Proxy was a memorable one. She'd hooked up her phone to the projected screen without turning her notifications off, so we got to see all the tweets streaming in in real time! If you're just diving into Charles Proxy, the recording of this talk is a great place to start.

Dominic Kua's talk on bash commands, Wim Selles's talk on Appium, and Henrik Stene's talk on consumer-driven contracts definitely fell into the "these people really know their tool" category. If they were tools I was using, I'd certainly consult their tips and advice.

Emily Bache's talk shared the ideas from the State of DevOps reports and ultimately the Accelerate book. As a team lead and co-host for a testing ensemble, I was able to help empower people across teams and help build a culture of psychological safety. In my new role as Quality Lead, I'm just starting to collect the DORA metrics to help me decide where I should focus my efforts within the department.

What a memorable group of people, location, and journey it was to TestBash Manchester 2019. I hope the upcoming TestBash UK is in the cards for me this coming year, and not only because I still dream about this Indian food I had on the way in and out of Manchester.

Map Out Your Stakeholders

Test reporting is part of a feedback loop. It's the beginning of a conversation, not the end. Knowing who you're having that conversation with allows you to provide those individuals better information for their context.

If you find a big nasty bug, you might report it differently if your audience is a developer on your team who you work with every day, a developer on another team who you haven't met, or the Head of Product looking to give an important demo. Reporting on the breadth, depth, focus, and impediments to your testing can help your audience guide your upcoming testing.

Joep Schuurkes and I had an activity as part of our workshop on test reporting at TestBash Manchester 2019. I believe he articulated the key idea: if your test reporting depends on your audience, you have to know who your audience is. We had participants map out (with paper and markers) who the stakeholders were for their testing. Some people drew org charts, others drew mind maps.

In the test reporting workshop I held yesterday, we used a Miro board to map out our stakeholders. As examples, I made an overview of how I was thinking about my recent team.

And a version of Dan Ashby's Layers of Influence model, the "shallot" of influence, if you will.

While these are stated with people's roles, doing this for yourself using people's actual names (or names + roles) will help you think about who they are and what they're listening for.

Identifying the audience for your test report allows you to tailor it to the risks they care about. If you're not sure how to tailor the report, present them with something and find out if that's what they want. Even better, share with them that you're trying to figure out how to make your work most effective for them.


Unblocking Your Test Strategy

In my new role as Quality Lead for my department, I get to figure out how to infuse everybody's work with "quality", and also figure out what that means exactly.

One of my colleagues made it easy for me on my second day by coming to me with a relatively concrete problem: they wanted an acceptance environment for their team. Their team (henceforth: Eager Team) integrated with a chronically overloaded and busy team (henceforth: Busy Team), so they wanted an environment where they could test their stuff together before it went into production. They wanted me to help set that up.

I started my conversation with Eager Team Lead by taking one step back: why did they want this environment? They'd proposed a solution, but I wanted to spend at least a few minutes digging into the problem space with them to hear more about why they wanted this.

Come up with a dream scenario

I asked Eager Team Lead what their dream setup would be for their test automation, and why that was the dream.

Eager Team and Busy Team already had a test environment hooked up to one another. But they both threw whatever they were in the middle of on that environment. Eager Team couldn't count on a stable, usable version of Busy Team's software, and vice versa. Eager Team wanted a place to see what would happen against the production version of Busy Team's code. They wanted to automate all the things they could, and have a place to run that automation.

Identify (and confirm they are indeed) constraints

Unfortunately, Busy Team was busy. They wouldn't be able to make setting up an environment for Eager Team a priority in the next few months. I had that impression, and so did Eager Team Lead. They were, after all, Busy Team. But I wanted to make sure that the busyness of Busy Team was a constraint. I took on the action point to follow up with Boss Person to both (1) check that Busy Team was indeed too busy, and (2) get this request on Busy Team's long list for the future.

I also dispelled one of the assumptions underlying Eager Team Lead's dream setup: that it was important to test everything, in an automated way, in the ideal environment, or else testing wouldn't be valuable. I explained that it's impossible to test everything. Testing in an automated way would be more likely to reveal known unknowns than the unknown unknowns their team was interested in. And it wasn't all-or-nothing: every little bit would help.

Choose achievable pieces within constraints

Rather than killing the dream, I identified a valuable first step in the direction of the dream. Eager Team would write down, in English to start, 3-5 things that they want to test using both their software and Busy Team's. They'd show those to their product owner to make sure they were things customers cared about. From there, we could look at whether to build automation, and if so, where to run it. There was that test environment already. We had production, could we use feature flags? Could we keep the data only visible to our employees internally?

I knew I'd hit a nerve when Eager Team Lead said "Oh, we can just start iterating over this!" Because of course, the software itself is not the only thing you can build in an iterative way. Your test automation can also mitigate risk, confirm assumptions, and provide value along the way.

So how'd it go? I confirmed Busy Team's busyness, and got more details on how and when to add this request to their list. I'm following up with Eager Team next week to see where they are in identifying valuable scenarios, or if I should jump in there too.

But wow, what a feeling to be able to lift the weight of "I need a thing I don't know how to build and don't think I can ever get" off someone's shoulders and replace it with "I know what to do next and it's achievable."

Stay tuned for more quality leading to come.

Cutting People Off

Impatience is a virtue.

If impatience is solely your own, sorry, but you're the asshole. But if impatience is shared, saving your colleagues from a tiresome conversation will make their day.

Notice that a topic should come to a close

When you listen actively, you'll notice when something has already been said. It is much easier (particularly when remote) to give up, zone out, and think about something else. Don't be that person.

Engage with your colleagues! Save yourself and others from the perpetual purgatory that is an ineffective meeting. Pay attention.

Decide whether you are the right person to close a topic

There will be settings where you are the right person to decide if a topic should come to a close: in a small group of relative equals, when you're the appointed facilitator, or when you're in some other position of power relative to the individuals or the subject matter. Recognize when you're not in the right position to change what's happening in the moment, and skip to Follow-up for more.

If the group already has expectations about what is or isn't on topic, your interruption should be enough. If it doesn't, or you want to take this opportunity to set a new one, interrupt with a meta-question.

How to deliver this message

I'm not always in the best position to decide whether now is the right time for a topic, so I tend to deliver topic-closing messages as questions:

  • I agree with your point about Thing B, but can we come back to Thing D?
  • We agreed that Person A is going to follow up with Person C, is there anything more that we need to discuss about Thing B right now?
  • We could discuss Thing B more in this group, but since we're missing Person C's crucial input, should we?
  • I've captured what Person A said here in the notes. Was there anything I missed?
  • I think Person A already said Thing B, shall we move on?
  • It sounds like we're still discussing Thing B after we just agreed not to, am I understanding that correctly?
  • Can we leave it there for now?

If you're not sure if it's the right time for a question, try a meta-question:

  • Is now the right time to decide if we should keep talking about Thing B?
  • Are we going to be able to come to Decision D today?
  • Did we decide on a next step towards Thing B, or is that what you were describing?

Give the group a chance to decide, but don't be afraid to hold them to their decision. These are not questions:

  • We've agreed to that. Let's move on.
  • That's all we needed Person A for, let's let them go.
  • That's all I have for you, I'll let you go.
  • Thank you for your input/time.
  • I understand now.
  • Got it, thanks.


Get feedback on your behavior. This is how you learn.

A retrospective or 1-on-1 would be a good place to find out if the balance was right between gathering/sharing information and staying on topic. Asking someone to watch out for this particular behavior ahead of time will allow them to give you better feedback afterwards.

A colleague once declared me the "queen of cutting people off" because I did so very politely. I have a compliment sticky note for "ruling refinement with an iron fist." We should all be so lucky to have our work appreciated this way.

For more on meta-information, see this post. For more on setting agendas and preparing for meetings to make them effective, see this deck.

That "I Did It!" Feeling

I moved into a different role at work this week. I handed off my former team to the new team lead with a final 1-on-1 (and coincidentally performance review) for each team member. Each of them has a different variety of skills, motivators driving them, and awareness of either. This blog post focuses on just one team member.

"I did it!"

One team member is really driven by that "I did it!" feeling. They're early in their career. Both the product and the company are new to them. They spend a lot of their time pairing, asking for help, or floundering while wondering if they should be pairing or asking for help. Every 1-on-1, they'd report feeling that they hadn't learned or accomplished anything. They weren't getting that "I did it!" feeling.

But they were doing a great job. They were making progress in all the different technologies our team uses (Mendix, Docker, OpenAPI, pytest, GitLab pipelines, etc.), learning as they went. They were able to accept feedback to course-correct when necessary. They knew they were learning a lot, but this alone wasn't motivating enough for them.

Forces within our control

As a team lead, part of my job was to create focus for my team. There was a cloud of possibilities and priorities the people around and above us struggled to make clear. I wanted to create an environment where my team members could still get that "I did it!" feeling anyway. This Liz and Mollie comic captures it nicely.

"A great manager holds an umbrella to protect the team from ridiculous requests, unclear priorities, massive uncertainty, unnecessary meetings, and last-minute chaos; and fosters clear expectations, defined roles, work-life balance, and stable, achievable goals." — @lizandmollie

Amid this uncertainty, my team member requested clear steps for what they should be doing next and how to get promoted. I started by sending them the job description they hadn't seen for their own job. This helped set clear expectations and define their role. I spent time in our 1-on-1's finding out more about what was on their mind or dragging their attention away from work.

I took the time to reinforce the importance of a work-life balance. We started every refinement meeting with a review of upcoming time off, complete with peer pressure from me to take more of it. This allowed us to only refine the amount of work we could accomplish in the upcoming period, and set expectations for what wouldn't be done. This helped scope and clarify each team member's job.

I tried to give all my team members the "I did it!" feeling by talking about what we accomplished at the smallest scale in standup, a slightly larger scale in retro, and on the largest scale in the meeting with the whole unit. But that wasn't helping this particular team member.

The thing that finally gave them that "I did it!" feeling was: a Trello board for their own personal career development, with To Do, Doing, and Done columns.

To Do

We identified a clear, actionable step to take for a few technologies, job description bullet points, and conversations we'd already been having in our 1-on-1's. Some items would be accomplished during the course of our regular work on user stories. I set a clear expectation about the other items: they were for work time - downtime while waiting for a response, crafting days, etc. They were not for personal time.


Doing

I explained that it's better to limit the number of items in this column at a time. Deciding what to leave aside allows you to focus on what's in front of you. My team member wanted to be an expert in all of our different technologies at once. I reset this expectation: get a little better, one at a time.


Done

I gave them homework to fill in the Done column. They took time to list things they had learned and accomplished in the previous months. Scrolling through the Done list got them pretty close to that "I did it!" feeling. Taking a moment to reflect during our 1-on-1's helped give them that feeling. But they weren't getting that feeling right away. They needed to celebrate their accomplishments as they were happening, to keep up the motivation and momentum.

I did what was possibly my best management move for this person: I threw confetti.

Trello has a feature where if you add the confetti ball emoji to the title of a column, moving an item to that column throws a little confetti around the item. It's very cute, and it finally gave my team member that "I did it!" feeling.

Setting expectations around the feeling

In the handoff to the new team lead, I explained this need my team member had, the ways I'd tried to meet it, and the confetti ball that finally worked. I pointed out that the need for the "I did it!" feeling can be found in other ways. The important thing for the team lead is not a particular action, but checking in with the team member about the feeling. I wanted to leave them space to take a different approach, so I used the "Mary had a little lamb" heuristic to explain what a different approach should include.

I did it!

The team member wanted to point to something they did. Without pairing, without asking a bunch of questions, they wanted to point to something and know that they were able to accomplish it themselves.

I did it!

The thing had to be done. While some skills and knowledge transfer could be months or years in the making, they needed something to come to a close.

I did it!

The new team lead and the team member get to decide together what's on the list, what it is. Growth and comfort in skills may not be immediately visible to the individual in the day-to-day grind. Setting aside time for individual reflection or recognition at the 1-on-1 would help.

I did it !

This is the confetti ball piece of the puzzle. The celebration. It may feel silly, or gimmicky, but it finally got this person the satisfaction they were looking for out of their job.


  • When managing, do you dig into what a person needs to have clear expectations, defined roles, work-life balance, and stable, achievable goals?
  • When a team member asks you for an outcome, do you think about why they're asking you for that?
  • When you do handoffs, do you describe the actions you took or the needs they were serving?

Give Them the Fish, Then Teach Them to Fish

A colleague came to me with a request the other day. I didn't handle it quite how I wanted to. The request went something like this:

"I remember you were on the team for Big Scary product a couple years ago. Do you know if I can delete this List of Stuff from Big Scary product, and if I can automate that?"

I did not know. It was two years ago. Big Scary product had gotten Bigger and Scarier in the meantime.

But I knew where my team linked to our API specs from our customer-facing documentation. I applied the same principle to discover where Big Scary product API specs were. I looked at those specs and found the List of Stuff in a response body for an API call, but noticed that my colleague wouldn't have the ID the request required. So I looked at the API specs from a Bigger Scarier product. Combining a call from there would get the ID Big Scary product needed.
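The chain described above can be sketched in a few lines. Everything here is a hypothetical stand-in (the paths, the field names, the products aren't from the original), but it shows the shape of the lookup: one API call supplies the ID the next call's request requires.

```python
# A hedged sketch of chaining two (hypothetical) API calls.
# `get_json` is any callable that takes a path and returns parsed JSON,
# so the sketch runs offline against canned responses.

def find_deletable_stuff(get_json):
    """Fetch the ID from the Bigger Scarier product, then use it
    to list the Stuff in the Big Scary product."""
    # Step 1: the Bigger Scarier product's response body contains the ID.
    account = get_json("/bigger-scarier/v1/account")
    account_id = account["id"]

    # Step 2: Big Scary product needs that ID in its request path.
    response = get_json(f"/big-scary/v1/accounts/{account_id}/stuff")
    return response["stuff"]


def fake_get_json(path):
    """Canned responses standing in for the real products."""
    canned = {
        "/bigger-scarier/v1/account": {"id": "abc-123"},
        "/big-scary/v1/accounts/abc-123/stuff": {
            "stuff": ["old entry 1", "old entry 2"]
        },
    }
    return canned[path]
```

In real use, `get_json` would wrap an HTTP client with the right credentials; passing it in as a parameter keeps the chaining logic easy to show and to test.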

I was short on time, so I answered the question directly. I said it was possible, and possible to automate, and provided the links to the specs for both products. My colleague thanked me, and left the conversation able to solve their problem quickly.

I gave them the fish. What they learned from that interaction was: Elizabeth knows where to find stuff. I can come to her when I don't know how to find stuff and she will find it for me. That was the wrong lesson.


A better lesson would have been: I know where to look for things. Elizabeth will give me the tools to know where to look, and empower me to do so. Now that I've got the access and seen it done once before, I can take a few more steps before I reach out to Elizabeth the next time.

Here's what I could have done to get this colleague there:

  1. Explain where all the API specs live: I could have explained my thought process for finding the API specs, shown how I navigate using the headers and Ctrl + F on the page, and compared the requests and responses to what's needed.
  2. Update them about who's on the team for Big Scary product now: I could have listed the names of a few team members I knew were working on Big Scary product, or pointed my colleague to the Slack channel for the whole team.
  3. Introduce my colleague to a member of the team for Big Scary product: Since this colleague was a tester, I could have started a direct message with them and the tester on the team for Big Scary product, copying the question from the DM I first received.

What If I Only Teach Them to Fish?

What would have happened if I'd skipped what I'd done, and withheld the links to the API specs?

I wouldn't have been able to guarantee that my colleague was in the learning zone. From what I knew about their situation, they were accumulating a lot of data that they wanted to delete. I didn't know what other pressures were coming from the team, but the need to automate it suggested it was a bigger problem than just a few extra entries in a database.

Giving my colleague the fish, and then teaching them to fish, relieves any of that pressure to deliver, and helps open them up to learning and growing.

Tell Them What You're Doing

Some colleagues are distracted, or dense, or not able to take away meta-information from a conversation along with the information. They may stop listening after they have the answer.

Combat this by sharing your motives. Remind them that you too are busy. Explain that your goal is to empower them. Encourage them to reach out to the team working on Big Scary product, so that those team members can also get good at knowing where to look and answering colleagues' questions. Tell them you're happy to help them again, but you'll expect more details of what they tried first. Then hold them to that.

The best lesson is: I want to take a few more steps next time I have a problem, because I know I can, and Elizabeth expects more from me.

SoCraTes UK June 2021

I had a long drought between when I was last able to just attend a conference (rather than running a session or organizing) and SoCraTes UK in June. A year I think? And what a welcome rain it was.

Heloá hosted a session on meditation. I recognized many of the symptoms she described as her motivators for picking up the habit from a talk I did about introversion years ago. And the mindset she described (trying over succeeding, recognizing reactions without trying to impose a particular one) echoed back to a conversation on stoicism that Sanne Visser hosted at the first TestCraftCamp. I was completely convinced by the benefits she described ("People say I sound calmer. I'm breathing more deeply.") but I haven't built a habit around it yet. C'est la vie.

I went to two different sessions Maaret Pyhäjärvi hosted. (Does this make me a groupie?) The first, an ensemble testing session, reminded me that the most valuable exploratory testing bugs come when you understand enough about the business and the architecture to know what matters to some person who matters. The second session was about scaling workshops (and really, herself). I joined late after the lightning talks ended, but still helped plant the seed of what I and SoCraTes can do to bring more people into learning about good software.

I selfishly hosted a "help me out here" session in the afternoon. As I predicted, the testers extraordinaire Maaret and Lisi Hocke were exactly the people I needed to give me perspective on my current and evolving role at work, though the other attendees contributed as well. I came away with more questions than answers, which I'm still mulling over and digging into weeks later. I look forward to sharing more about the shape of things as they come to fruition.

Alexander (What is your last name?? Sorry!) held a session on habits you've developed or changed during the pandemic. How lovely it was to be in a small conversation trading notes about remote music lessons and holding remote workshops. It was exactly the kind of hallway conversation I'd be looking to fall into at a flesh-and-blood conference.

I didn't write down who gave the lightning talk about saying no, but thank you. I rarely (never?) regret saying no, but I needed that extra push and specific language for those "If you want me to pick this up, which of these things should I be putting down then?" conversations I've had lately. I have "Saying no commands respect" in my notes, and I guess I need a throw pillow of that too.

SoCraTes reinforces for me how a welcoming, inclusive open space is done. It's through explaining what an open space is for people who haven't attended. It's who's on the organizing committee. It's reminding people to take time off from the sessions. It's about ending up in the "Rose Garden" at the same time as Eva Nanyonga, who's working to improve the dispatching of home healthcare workers in Uganda, and finding out you sparked her curiosity and delight in the exploratory testing session earlier in the day. It's about providing a subsidized ticket option to make the event accessible to more people. It's in offering advice to the hosts at the start, such as:

  • ask for help facilitating
  • kick off the discussion
  • include everyone in the conversation

Thank you for holding this space. It's got me excited to host the open space that TestCraftCamp has evolved into, Friends of Good Software (FroGS Conf) in September.

Complete the Main Quest First

Recently, I made an outline for a tester (who was still onboarding) for what kinds of things to test on a new API endpoint we added. They explored, wrote a bunch of automated tests to capture their work, and came back with a list of interesting and good catches in the error responses. My first question in our debrief was: did you try a successful response? They hadn't. I sent them back to tackle that too.

Because a successful response is the first thing our product owner is going to ask about. That's what we'd want to show off at the review meeting internally to demonstrate the new API endpoint. That's the first thing the customer is going to try. They're going to copy the request from our OpenAPI specification, paste it in Postman (or the tool of their choice, but our customers so far have been using Postman), and see if their credentials will get them the response that matches the specification. These stakeholders share a common concern, and that's the risk we should be mitigating with testing. First.

Complete the main quest first.

Complete the main quest first. Come back to the side quests.

A customer had asked for this API endpoint to be added. If we'd tested the happy path first, we would have had the option of releasing the API for the customer to use. The risk of discovering a successful request wouldn't yield a successful response was relatively low in this case, since our developers tend to try one happy path themselves.

But what if the main quest had required a lot of setup, explanations to build knowledge and context for the onboarding tester, or yielded an issue? I'd done a risk-based analysis of what all to complete as part of our definition of done for this story. But I hadn't shared my approach to completing the main quest first, so the tester did what testers do, and went on a hunt to find weird stuff.

Note down and follow up on weird stuff; do not get distracted by it

Software will break in all sorts of ways. The more time and curiosity you have to dig into it, the more you'll discover. But are those the most important things?

In this API, the tester discovered that if you paste 10,000 characters into a field that's meant for a UUID, you get a 400 response. But did they try a regular old UUID first? What if they get a 400 response no matter what they put in that field, because the field name in the specification doesn't match what's in the code? Is trying 10,000 characters the first and biggest risk they have to face when presenting this API to a customer?
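The ordering matters more than the individual checks. Here's a minimal sketch of "main quest first" as a pair of tests. The endpoint and its validation rules are hypothetical; `handle_request` stands in for the API under test.

```python
import uuid

def handle_request(resource_id):
    """Stand-in for the API under test: a hypothetical endpoint
    that returns 200 for a valid UUID and 400 for anything else."""
    try:
        uuid.UUID(resource_id)
        return 200
    except ValueError:
        return 400

# Main quest: a regular old UUID yields a successful response.
# If this fails, nothing else about the field matters yet.
def test_happy_path():
    assert handle_request(str(uuid.uuid4())) == 200

# Side quest, taken on deliberately and only after the main quest
# passes: 10,000 characters yields a clean 400, not a crash.
def test_ten_thousand_characters():
    assert handle_request("x" * 10_000) == 400
```

Running the happy-path test first tells you whether the field even works as specified before you invest in the boundary cases.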

I'm not saying don't try 10,000 characters. I love that shit. But decide if it's a risk you care about first. If you don't care about the outcome, don't test it. Don't make busy-work for yourself just to fill the time.

Make side quests a conscious choice

Before you start throwing 10,000 characters at your API, talk to your team. Your developer can probably tell you if they never built something to deal with that situation. Your product owner can tell you they'd rather have it to the customer sooner. Your data analyst can tell you if there's already longer stuff than that in the database, or if you should be trying Japanese instead.

Make side quests a deliberate choice. Share them to increase their value or figure out who on the team is best-suited to execute them.

Recognize when the quest is a journey, not a destination

Throwing 10,000 characters at an API may be a way to start a discussion about the speed at which responses are returned. It might be a way of showing your favorite random text generator to your fellow tester. It might be an exercise at an ensemble testing session, where everyone can practice pausing before executing an idea to describe the expected behavior first.

Quests can be valuable in ways that are not directly related to finishing the quest.

Note: I got asked recently if I use the word charter much with non-testers. I don't. Try reading this again but replacing every mention of "quest" with "charter".