Praise the Messenger

I hear no a lot. No, that's not in scope for this story. No, it's not worth fixing that now. No, that's not risky enough for you to spend time testing. Hearing no on a good day sparks my creativity and pushes me into more valuable directions. On a bad day, it makes me wonder why I should keep going.

Saying no in a good way takes practice. I've honed this skill with wisdom from some of the best. Elisabeth Hendrickson tweeted a list of some of the different ways she has of saying no. It reminded me of the sassy replies I send to recruiters and speaking engagements in which I have no interest or connection. If you're looking for a style that builds bridges rather than burns them down, I'd recommend Elisabeth Hendrickson's examples, not mine.

At Let's Test in Sweden in 2016, Fiona Charles gave a workshop on Learning to Say No. We practiced different ways to say no, complete with supporting arguments and steadfast determination tailored to the real-life situations participants brought to the workshop. (In a demonstration of having embraced the message of the workshop, we disbanded halfway through the allotted time. I spent the rest of the afternoon on a memorable bike ride along the Swedish coast.) The strongest revelation from the group was: we have more power than we think we do. There may be consequences to saying no, refusing a particular task, or turning down an opportunity, but it's often within our power to do so. Saying no effectively makes room for you to decide how to spend your time.

At Agile Testing Days in 2018, Liz Keogh gave a keynote on how to tell people they failed (and make them feel great) that culminated in a surprising conclusion: don't. Don't tell people they failed. Use positive reinforcement to encourage the behaviors you want to see, and the others will fall away.

We had a tester leave the company recently. They'd set goals with their manager that directly opposed the test strategy I helped them shape. Bugs they reported were shot down, postponed until future stories, or grouped together as issues to be fixed eventually someday. They were facing a lot of "no," and as far as I could see from a different team, not a lot of yes. How long would you last in this situation?

I also have a tester reporting to me at the moment. They've been catching tricky things, finding bugs, and preventing problems. I want them to keep doing what they're doing. I get that a developer's first reaction when they hear about a bug might not be "Oh wow, thank you so much for this pile of questions and work!" I have to give them that positive reinforcement so they keep reporting and digging into issues, so I call it out in their 1-on-1. They thank me and say it keeps them going.

Don't shoot the messenger. Praise the messenger.

Who are you saying no to frequently? Is your no driving them away, or encouraging them? Do you need to say no?

If you were waiting for a sign - this is it.

Delivering Information vs. Delivering Meta-Information

One of the first testing skills I built was bug reporting. I practiced narrowing down the steps to reproduce. I told my developers what was happening now and what should happen instead. I learned to include where the issue was occurring, so my developers would stop closing my bugs with the "Works for me" resolution in Trac.

In 2013, Paul Holland taught me to tell the story of my testing. That is, meta-information about my testing. Sure, I still reported the outcomes: these things didn't work, these things did. But I also reported:

  1. how I tested the product
  2. how good that testing was

How long did it take me to test? What was difficult about my testing, what made it difficult to get started, or what slowed it down? What wasn't I able to test? The reason we tell those stories is to uncover product risks and process risks, and to get help from those around us to solve the issues.

This is meta-information about my testing. It's what I'm seeing as I'm doing it, one level removed from the actual testing itself. (Look up testopsies [testing + autopsy] for more on gathering meta-level information about your testing.)

Building the skills around meta-information is important for doing any job well. These skills include:

  1. identifying when you're communicating at the information level, or the meta-level
  2. switching between and keeping track of whether you're communicating at the information level or the meta-level
  3. naming the different levels, and bringing the people you're communicating with on this level-switching journey

These recent examples stick in my mind.


Addressing the question vs. addressing the miscommunication

I was interviewing a candidate for a Software Development Manager position. For about a half hour, every question proceeded more or less like this:

  1. I'd ask a question.
  2. The candidate would speak for a minute or a few, without answering the question.
  3. I'd rephrase the question, be more specific, or add more details to help the candidate understand the question¹.
  4. The candidate would speak again, again not answering the question.
  5. I'd try once or twice more, before moving on to the next question.

I realized early on that this was happening. For a Software Development Manager, I consider it an essential skill to be able to answer a question directly, or to identify when you're having trouble doing that. I decided I'd reject the candidate, but realized I'd still have to get through the rest of this hour with them.

My colleague, my co-interviewer, took a different approach. He called out the miscommunication problem to the candidate! He said "Elizabeth is asking you questions, and you're not answering them. Did you notice that? Why is that?" It was direct in a way that made me uncomfortable, and that I would have trouble doing with someone I'd just met. But he moved the conversation to where it needed to be: the meta-level. The candidate struggled to answer this question as well.

I jumped in to ease the awkwardness. I offered up my theory: the candidate's responses weren't answers, they were the candidate agreeing with the premise of the question. My colleague completely agreed, thanked me for my contribution, but politely redirected the burden of this problem onto the candidate. The candidate continued to struggle, without acknowledging the issue or the awkwardness, for the rest of the interview. It was a clear decision for both me and my colleague: we would not be hiring this person.

Difficulty scheduling a meeting vs. telling them that

I was asking for more details on the planning of a project in standup. My developer described how hard it was to find time with a particular developer on another team. They said the project was going to take weeks to scope instead of the days it could take, if only they could find time with this person. The project seemed important, already had a deadline (or really, sadline²), and waiting until the next free spot on this person's calendar wasn't working.

I asked my developer "Have you given him this feedback directly?" I suspected that only the information had been communicated: the following meeting would have to wait a week. My hunch was correct. I suggested my developer try giving this person the meta-information: how hard they were to schedule, how this was cutting into the time we had to work on the project once it was scoped, and how that threatened the deadline. Imagine how differently this person could react and rearrange what they were doing, or share their relative priorities so we could adjust our expectations, when given this meta-information. This was last week, so I don't know yet how this story ends!

Adding testing details to the story vs. why I'm asking for them

There was a user story about being able to send metrics from our application to another application. I was picking up the testing on the story. My developer said they'd confirmed in the other application that our metrics were sent. They'd only sent a success, so I was planning on using the same setup as they did to look in the other application, but see what a failure looked like instead.

But that's not what I told my developer. What I said to them was: "Add enough details to the ticket so I can test it." It turned out we had different ideas about what "enough" meant. First they added a URL. I followed the URL and it went to a blank page with a header. I said the same thing again: "Add enough details to the ticket so I can test it." They wrote down the first button they had to click on. I asked a third time. They added one more detail, which still didn't tell me enough. But I got tired of asking. I tried all the options in the application, Googled to figure out what SQL query I needed, executed it, triggered a variety of different failures, and confirmed that they were received.

Later, I explained to my developer the difference between my expectations and what they wrote. They explained how their expectations were also violated: they'd had to reach out to the team from the other product, figure out what to do, and poke around in the product, and the details they left me were all they'd received to eventually figure it out themselves. To them, just the URL going to a blank screen was enough³.

I realized then that I had left out a crucial piece of meta-information: the reason I was asking for the details. I wanted to skip that poking-around time. I had ten other tasks that day, and expected this to take only 20 minutes instead of the few hours it ended up taking. I was hoping to benefit from the poking around my developer had already done. I was expecting a lot of context, and my developer was expecting to only need to share a little bit of context.

Once I shared this with them, they understood where the gap was between my expectations and theirs.


The suggestion about scheduling difficulties, that was an easy conversation to have. The other two were quite difficult for me. They took patient, active listening. I had to keep asking myself if I'd been clear enough with my expectations. They definitely made me sweat.

Moving between information communication and meta-level communication is a skill. It takes time, failure, reflection, and practice to do it. Doing it well is a leadership skill. Crucial Conversations helped me identify when it's worth working on the relationship with a person and investing in these conversations. You don't have to be managing people (as I am currently) to be building or using this skill.

What meta-level conversations are you avoiding? Are there people where you find you're only able to communicate on an information level? Have you tried communicating with them on a meta-level? What would happen if you told them that's what you were trying?


  1. In low-context cultures like the United States and the Netherlands, the deliverer assumes the burden of the miscommunication, not the receiver. There's more about this in The Culture Map.

  2. A deadline is when somebody dies if it's missed. Most things we call deadlines at work are just sadlines, in that people are alive but just sad when they're missed. Read more about it in Liz Keogh's blog post.

  3. My colleague is coming from a high-context culture. I'm coming from a low-context culture. You can read more examples of this in The Culture Map, but it's the kind of non-fiction book that could have been a blog post, so just read the rest of this blog post instead?

The Mental Load of One Meeting

I facilitated a meeting today. It was scheduled for 45 minutes on my agenda, for me and some other tech and team leads in my unit. We covered what we needed to move forward with planning an epic made up of relatively straightforward stories, both in execution and in work distribution among the teams.

Some of the people attending today showed up without preparing and didn't remember what the topic was. I filled them in. Because this is what the meeting preparation looked like from my side:

  1. I was in a planning meeting two weeks ago with a variety of people from the unit. I picked up the implication that a follow-up (combination solo thinking/document review followed by meeting) was needed soon with a few tech and team leads. I checked the out-of-office schedule (which I created and keep updated as people mention when they'll be out) and noticed that two weeks was an achievable time span for "soon". I mentioned this while we were still in that planning meeting.

  2. Within the next 48 hours, two people from the planning meeting asked when I'd scheduled the follow-up. I explained that I was giving the other leads a chance to schedule it first.

  3. When I scheduled the follow-up, I made the title of the Outlook invitation "review the comments you've already added to the document" so the purpose and expectations were both clear and impossible to miss. I added the document to review to the body of the invitation. I let the two inquiring minds know that this occurred. I found a couple of 45-minute blocks when all four people present at the planning meeting, plus the one person who had been forgotten, were available simultaneously. I chose the one on Tuesday, giving the person who was out for a week Monday to catch up.

  4. I added a Slackbot reminder for Monday afternoon for the person who was out the previous week. They're on my team and typically have trouble saying no when someone tries to pull their focus. We've found that Slackbots help (an example follows this list).

  5. The meeting time arrives. The Slackbot reminder person and I have reviewed the document. The others haven't. I check in and ask whether we should postpone the meeting until they've had a chance to review the document. Silence. I move on, facilitating the discussion, taking notes, and keeping us on track.
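
For anyone who hasn't set one up: a Slackbot reminder like the one in step 4 can be created with Slack's built-in /remind command. The recipient, message, and time below are invented for illustration:

    /remind @teammate "Review the planning doc before Tuesday's meeting" at 2pm on Monday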


I am not the only person capable of scheduling a meeting. I am not the person best equipped to review this document. It is not my responsibility to come up with systems for my peers to remember to do their work. I can let this fail by simply not following up, and I do with smaller, safer-to-fail experiments. But it doesn't solve the problem.

I can't do a better job of explaining the burden of carrying the mental load than Emma does in her comic. But to summarize: a lot of work is the work of noticing what work needs doing. It's the difference between "Let me know if you need help" and finding a way to actually help. It's the difference between having 45 minutes blocked on your calendar, and everything else that had to happen to make it successful.

What I wonder now is: how much is this skill of carrying the mental load noticed? How much is it valued? How do we interview for this skill? How do I train my fellow team members to have it? How do I teach the ownership of a team and a project? How do I get people to ask each other about the next step in picking up the work instead of coming to me to be the dispatcher? How do I make myself, instead of irreplaceable, completely replaceable?

The Long Haul

I've been both the lead of my team and a tester on that team for a year now. Getting answers, adapting to change, and identifying solutions have completely different time horizons in each of these roles.


Tester experiments

Testing experiments can run quickly. A testing experiment might look like this:

  1. Call an API
  2. Inspect the output
  3. Compare that to the specification
  4. Question whether either or both need changing
  5. Talk to a developer about the changes
  6. Update the specification and/or tests

All of that happens within minutes or hours, or days if schedules are extremely incompatible and asynchronous communication is failing. I can be confident in the experiment's outcome. I can weigh the relative merits of caring vs. not caring about a particular error response code or required field, and leave my work at work. The next time I see a particular error response, I know what to look for and where changes might be needed. A failing test is evidence that something used to work in a particular way.
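
To make that concrete, here's a minimal sketch of steps 1 through 3 in Python. The endpoint, fields, and expected values are all invented for the example; in practice they come from the API specification under test.

    import requests

    # Hypothetical endpoint and expected shape, standing in for whatever
    # the specification under test actually describes.
    response = requests.get("https://api.example.com/v1/orders/42")

    # Inspect the output.
    assert response.status_code == 200
    body = response.json()

    # Compare that to the specification.
    assert "orderId" in body, "spec lists orderId as a required field"
    assert isinstance(body.get("total"), (int, float)), "spec says total is numeric"

When an assertion fails, the more interesting work starts: questioning whether the product or the specification needs changing (steps 4 through 6).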

Team lead experiments

Team lead experiments take longer. A team lead experiment might look something more like this:

  1. Team members complain that it's hard to get their ideas in during refinement.
  2. I mention this to the talking-dominant team member at a 1-on-1.
  3. Talking-dominant team member dominates following refinement.
  4. I remind talking-dominant team member in Slack about our previous conversation.
  5. Talking-dominant team member spills over their allotted time during big unit meeting.
  6. I bring up both of these instances in our 1-on-1, sharing the consequences of their actions (they're the single point of failure, and other team members aren't heard).
  7. Talking-dominant team member does it again.
  8. I ask team member what I can do to help them change their behavior, given that we are both adults in control of our own behavior. They agree that change is their responsibility. We agree that setting their microphone on mute at the start of the meeting would help.
  9. Talking-dominant team member dominates some of the following refinement, until I remind them to mute, after which other team members have time to think and contribute too.
  10. I ask talking-dominant team member to set up a Slackbot to send them a reminder to mute their microphone each week before the meeting.
  11. Other people are able to contribute at the following refinement.

This took place over months. We're not yet at a point where we have a solution that works every time. I went in with a different hypothesis each time, not knowing when I'd hit on the right one:

2. I think the talking-dominant team member isn't aware of their behavior.
4. I think the team member has forgotten our first conversation.
6. I think the team member doesn't understand the impact of their behavior.
8. I think the team member hasn't found a tool or a trigger to change their habit.
10. I think the team member needs both a tool and a trigger to change their habit.

Any of the first four experiments taken by itself looks like a failure. The talking-dominant team member prevents other team members from contributing effectively. It takes me time as a leader to come up with a different hypothesis, try something else, and discover where to go from there. And this was a relatively straightforward issue to assess. Imagine how long it might take to find an effective response to a problem with more variables and more consequences.


I'm also thinking not just about the experiments themselves, but how they might come across to the wider team. For the testing experiment, I could present my results in standup the next day as "I tested it, everything's good" but it's more valuable for everyone if I tell a bit more of the story. In the team lead experiment, I can imagine my team member telling my boss "Elizabeth told me to be quiet" or me telling my boss "The talking-dominant team member is giving room for others to contribute." Telling a slightly longer story of the journey displays my value as a team lead in a better light.

What experiments are you running right now? Is something that looks or feels like a failure getting you closer to a solution? How long is your time horizon?

Questions from Exploratory Week on Writing Exploratory Testing Charters

The Ministry of Testing hosted a week all about exploratory testing. I had the honor and privilege to help shepherd a small group of testers on the path of writing charters for their exploration. The most interesting part for me is where people had questions. It helps me figure out what sank in and what could use more explanation, and lets me know that I've answered at least one person's burning question. Here are some of the ones I remember from the live Q&A at the end:

Q. Do you use the word charter?

A. Basically no. I've only heard testers who've specifically dug into this topic use the word charter. Almost all of the people I collaborate with on a daily basis (developers, product owner, UX, managers, other testers) do not have this as part of their experience. Most of my colleagues are not working in their first language. As a native speaker, I need to have more than one word to describe any particular phenomenon in case the first one doesn't resonate, or isn't understandable in my accent. (Everyone has an accent.) I've called charters:

  • questions
  • missions
  • paths
  • plans
  • goals
  • investigations
  • journeys

It's less important to use the word charter than it is to get across the intent: you're going on an exploration, in a particular direction, with a specific set of tools, and you hope to come away with more information on this topic than when you set out. Sharing your charters helps you get feedback about where to look more deeply, where to look more broadly, and where not to look at all.

Q. Where do you bring charters up?

A. Where don't I bring charters up? It bleeds into conversations I have about my work. Sharing my work and getting feedback about it is what ensures I'm providing valuable work for my team in the right direction. I tend to discover points of interest for my developers once or twice a day when development is starting, and more often when testing is at its peak, which often escalates to pairing. Here are some other moments in time where I share charters:

Standup

It's how I explain what I tested yesterday, what pieces I might have time for today, and what directions I haven't gotten to or won't have time for before we want to release the story. Sharing where I'm looking prevents me from being the one gatekeeper on quality for our product. "I've successfully called the API as an admin user and a regular user. Today I'm going to dig into what happens with the non-required fields." will elicit a completely different type of feedback than "I have an hour or two left on this story."

Refinement

Any clues I can give my team about what I'll be looking into, what kind of test data I might set up, and what tools I'll be using to test a particular feature will help them figure out the whole scope of the story. "I'm going to try names at the character limit to see how they wrap on the page." helps us all figure out that we need to talk about our expectations for a character limit, we need to talk to UX about what should happen when you try to input something too long, I need to test what happens on the API side for the same field, and we might need a frontend dev to help us with the wrapping or truncation depending on what UX decides.

When test execution starts

This is the point in time where there's enough built that I can add test execution to the setup and planning I've already been doing on a story. It might be that the API spec is published, it might be that the application has one happy path built. The developers are still going, but there's somewhere for me to start. Depending on the size and complexity of the story, I'll reflect for myself, or share my ideas with someone else on the team. If it involves an integration with another team, I'd reach out to them too.

Q. Isn't that just a test case?

A. After almost two hours of technical difficulties and explaining things, I have to say I did not write the most elegant charter as an example during the workshop. You got me! I'm glad that this workshop participant has a good feel for what is too specific or too broad. I find this so hard to explain because so much of that depends on the context.

But it wasn't terribly important to me to get the level of detail correct. Charters are a place to reflect on your testing and spark conversation. This charter did exactly that.


You can find other questions there wasn't time for during the workshop on the Ministry of Testing Club. The slides from the workshop are on GitHub.

My First Agile Testing Days: Four Years Later

After years of getting rejected, the talk I submitted to Agile Testing Days in 2017 finally got me accepted. It was such a privilege and an honor to meet so many of the people I'd only known from the internet. Reading through my notes now, much of what I wrote down has become ingrained in how I go about my work every day. What a blessing it was to encounter the people I needed to learn from in my career at the time I needed to learn from them. I'm glad to see how routine their wisdom has become for me.


  • Lisi Hocke gave a talk about growth. I've since seen more of her talks and prepared a workshop with her; I see how she embodies this in her approach to the world.
  • Gitte Klitgaard reminded me that believing in people will allow them to be better. Stay curious about why people are doing what they do before placing judgment.
  • Kim Knup spoke about a zero bug policy (no bugs in your backlog). I was not ready to hear this message at the time, having come from places with years-old products and tens or hundreds of bugs in the backlog. But now I see exactly what she was describing: the psychological relief that comes from less time in JIRA and fewer meetings about priorities.
  • Katrina Clokie pulled up Noah Sussman's reimagining of the testing pyramid (lots of small tests, fewer large ones) upside-down as a bug filter. And suddenly, it clicked for me.
  • Alex Schladebeck and Huib Schoots challenged me to think on a meta-level about the testing I was doing, to name the skills and techniques I was using. It would help me as I spent the following years sharing exploratory testing skills with other testers.
  • Emily Webber spoke about building trust on teams. I find myself recommending the team manual she developed to somebody about once a month, plus she helped spark an idea for a future conference talk I developed.
  • Liz Keogh gave me the words I needed to build a safe-to-fail environment for my team members, where failure is an expected, inevitable part of complex systems.

I realized in compiling this blog post that I'd already written about Agile Testing Days just one month after I'd attended in 2017. At that point, I was looking specifically at regression testing advice, which is what I was in the thick of at work. What am I in the thick of now, and when will it become clear to me?

Recently Encountered Logical Fallacies

I was on a panel about critical thinking for the Ministry of Testing last week. One of my fellow panelists, the commendable ranter Maaike Brinkhof, brought up ad hominem (personal) attacks as one example of a failure of critical thinking. It's one of many logical fallacies that are worth exploring further.

Equipping yourself with the name for a thing helps you recognize it when it appears. (Lara Hogan wrote recently about applying the skill of naming the problem in the room to defuse tense meetings.) These are some of the fallacies I've come across recently when debriefing testing sessions, facilitating refinement sessions, and reviewing conference submissions.

Affirming the consequent

Affirming the consequent is applying a conditional without the conditionality, or assuming something happened because you see a result.

  1. If P (I run the pipeline) then Q (the latest build will be available on the test environment)
  2. Q (the latest build is available on the test environment)
  3. Therefore P (I ran the pipeline)

We can't assume the converse: if Q, then P. Just because the latest build is on the test environment doesn't mean I ran the pipeline. Maybe someone else ran the pipeline, or put the build there manually. Maybe there haven't been any changes since yesterday, and the build from yesterday is still the latest one.
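
A quick truth-table check makes the gap visible. This is a minimal Python sketch (the scenario comments are invented): it enumerates every assignment of P and Q and finds the one where both premises hold while the conclusion fails.

    from itertools import product

    # Look for an assignment where the premises (P -> Q) and Q are both
    # true while the conclusion P is false.
    for p, q in product([True, False], repeat=2):
        p_implies_q = (not p) or q
        if p_implies_q and q and not p:
            print(f"Counterexample: P={p}, Q={q}")

    # Prints: Counterexample: P=False, Q=True
    # The latest build is on the environment, and I never ran the pipeline.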

Fallacy of composition

This assumes that something that applies to one member of a class applies to them all.

  1. Y is part of X (Stephanie is an admin user)
  2. Y has property P (Stephanie can see this page)
  3. Therefore X has property P (any admin user can see this page)

We can't assume that what's true for one member of a class applies to all of them. What happens if Stephanie can be assigned more than one role, a more restrictive/regular user role in addition to the admin role? Can she still see it? What if Stephanie being able to see the page has nothing to do with her status as an admin user?
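
In testing terms, the guard against this fallacy is checking more than one member of the class before generalizing. A minimal sketch, with invented users and results:

    # Hypothetical visibility results for members of the "admin user" class.
    can_see_page = {
        "stephanie": True,  # admin role only
        "sam": False,       # admin plus a more restrictive second role
    }

    print(can_see_page["stephanie"])   # Y has property P: True
    print(all(can_see_page.values()))  # therefore X has property P? False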

Post hoc ergo propter hoc (correlation without causation)

This one is easiest to see when others are debriefing their testing to me, but I've also learned to catch it in my own work.

  1. Event A occurred (I clicked the button)
  2. Then event B occurred (a whole bunch of log messages appeared)
  3. Therefore A caused B (clicking the button caused a whole bunch of log messages)

We can't assume that events that occur in a particular sequence in time are necessarily causal. Did clicking the button trigger the log messages? What do the log messages say? Did you read them? Who else could be using this environment? Does the same thing happen every time you click the button, or when you run the application in a different environment?

Argument from repetition

When someone says the same thing enough times, or brings up the same unimportant issue in a refinement meeting week after week, it can become easier to address the issue than to convince them yet again why it's not a priority. I've been facilitating refinement meetings every week for my teams for the past two years. I have only a finite amount of energy, and it's not always worth expending it refuting the case for a small edge case week after week.


Shoutout to my logic professor Dan Cohen at Colby College, who had us memorize and distinguish logical fallacies as part of his brilliant Logic and Argumentation course, and who pointed out that an ease and comfort with truth tables would translate well to a computer science course. Special thanks to Joep Schuurkes for his philosophical and technological opinions on this piece.

TestCraftCamp: Spice Up Your Relationship (with Your Project)

TestCraftCamp, the unconference I co-organize, had its third installment yesterday. I'm glad I had enough energy to attend more sessions this time, though of course the "too many good things happen at once" problem remained, as it does at any valuable conference. I was able to join a discussion Maaret Pyhäjärvi (who is so often a driving force behind gatherings like these) led about low- and high-value work, a discussion Joep Schuurkes led on daily writing practices, and part of a session Veerle Verhagen hosted on testing without touching (before I realized I'd spent too long looking at the product under test already), as well as finding some of my brethren in between the sessions.


I took notes in a sprawling mind map for another session Veerle hosted, which she pitched as "spicing up your long-term relationship! (with your project)." Besides the memorable pitch, I was excited to see this topic on the schedule for a couple reasons:

First, I've been working on my current product for two years. Some new people have joined the project recently, and having to explain and document the intricacies of our product keeps giving me new reasons to question the decisions we've made as the product has been developed.

Second, I don't think it's something we typically acknowledge: our skills can get stale and our motivation can wane when we've developed a certain level of comfort and mastery with a particular team and product. Even without changing jobs or teams, it can be worthwhile to shake things up.

On my project, adding permanent designers and engineers to the team is shaking things up. The hive mind in the session came up with many other ways to get new people involved in testing, if temporarily:

  • hold a bug hunt or an ensemble
  • teach an intern
  • swap products with another tester
  • bring your product to a meetup
  • pay for crowdsourced testing

I collected a sprawling list of people's ideas for how you as an individual can gain perspective and change the way you're looking at your work. Taking notes on a multi-faceted conversation is hard, and in reviewing them, I realize they fall better into these groups:

  • identify the assumptions you've built over time (which aren't true? which can be discarded? what's hard to talk about? how can I change perspective?)
  • think about risks (nightmare headlines, investigate competitors, talk to users, riskstorming)
  • use different testing personas (extreme conditions, soap operas, testing tours, dogfooding/drinking your own champagne, this)
  • step back (take a break, go on vacation, switch projects)

As a counterpoint, the group in the session did acknowledge that reaching a point of comfort and mastery can be a good thing. (See chapter 4 of Jerry Weinberg's Becoming a Technical Leader for more on plateaus and ravines.) After months or years on a project, you'll know where to look first for what your developers typically miss. You'll know what's not worth testing. In Veerle's case, the application has been getting 5-star reviews, which provides one positive angle on product quality.


Thanks so much to the facilitators, participants, and other organizers of TestCraftCamp. It's so fulfilling to get to the end of a long day and hear that we created a "safe environment," people who were only planning to stay for the morning lasted all day, and people were able to take breaks when they needed to. I'm glad we were able to fill others back up with energy.

TestBash NYC 2015: A Push in the Right Direction

In 2015 when TestBash came to the United States for the first time, it was to New York. I was living in the city, but I was stuck at a job that wouldn't pay for a ticket. "Ask Rosie if you can volunteer," my mentor/sponsor/benefactor Martin Hynie suggested. Rosie, who I knew only as the lady who'd mailed me Ministry of Testing stickers from England for...no reason at all, obviously let me in.

People ask "can we do it?" instead of asking "should we do it?"

~Keith Klain

I got to crash the speakers' dinner. I got to go to Selena Delesie's workshop about leadership and change. At the end of the day, she praised my active participation and thanked me for being in her workshop. I was already confident in my testing skills, but she helped me see myself as a potential leader. In a follow-up coaching session I had with Selena about negotiating a higher salary, she asked me why I wanted more money. I'd recently moved into an apartment by myself, and couldn't think of what I would do with more money. It's something I think about with every job change, every growth in title and responsibility. Realizing I didn't want or need more money was a crucial step on the path to a life-changing relocation.

The common denominator in all your dysfunctional relationships is you.

~Keith Klain

During a break between talks the following day, I snuck on stage wearing "the" Ministry of Testing tutu. During another, I wrote a bunch of notes to give a 99-second talk about leaving a closing comment on a story (which I completely forgot about, before later writing on this topic for the Dojo). While waiting on stage behind dozens of people waiting to give their 99-second talks, I improvised one about moonwalking instead.

The 99-second talk I didn't give

I met Helena Jeret-Mäe, Maaret Pyhäjärvi, and Dan Ashby, who, along with the conference friendships I was just beginning to foster, gave me a vision of what my career could be and where it could go at a time when I knew I needed something different. Helena saw my talk at Let's Test the following spring and gave me valuable critical feedback that helped shape future talks. Maaret introduced me to strong-style pairing, which changed the way I work with my colleagues to this day. Dan had me on his podcast, reinforcing for me the success of my talk at the following year's TestBash USA.

Write down when you receive a compliment. Maybe it's true.

~Helena Jeret-Mäe


Why am I revisiting my notes from this conference six years later? I might be feeling a bit nostalgic for the seeing-people-in-person events a year into pandemic-induced isolation. I'm also in the middle of reading "Becoming a Technical Leader" by Jerry Weinberg. One of the questions asks you to read an autobiography of someone you admire. It turns out none of the people I shared TestBash NYC with have published autobiographies...yet.

When You Can't Help Much, Help A Little

Check that you have time

At my current job, everyone in the R&D department gets a budget of two days per month to spend "crafting," or researching, building, testing, etc. what interests them. Most often people use this time to bring work they're passionate about up from the bottom of the backlog to be worked on now.

I was in the middle of adding a security scanning tool to our CI pipeline when I happened upon something interesting to test: a new web application. An old application that had been widely used around the company, and as it turns out by our customers too, had gone down. It allowed you to share a piece of text with a link. The text was only visible once. It wouldn't be viewable subsequent times you followed the link, so the application was good for sending passwords around securely.

The person who'd built the application had left the company. When I inquired about who might maintain it now and how, one of the customer support leads mentioned in a public Slack channel that he'd built a replacement. "Great!" I thought. "This is the one day I have an hour to test it." I was waiting for code review feedback on my pipeline scan, so it was perfect timing.

Check that the feedback will be heard

It's a waste of time and energy (with the latter being in shorter supply these days) to test something if nobody's going to do anything with the results of your testing. More on that in this post. So I checked with the customer support person first. After taking a quick look at the new application and confirming that it seemed to work, but could use some tweaks, I asked the customer support person in a direct message if he was ready for usability and accessibility feedback.

Me: Hi there, I have a bit of usability and accessibility feedback about the secrets app. Is it at a stage where this feedback would be useful?

Him: Yes, definitely

Me: Great, I'll send you a Paper doc this afternoon.

Collect the feedback in the same way it will be presented, and present feedback in a way that the audience is comfortable with

As much as I love making mindmaps, I decided that might not be the best format in this case. I doubted the customer support lead would bother to download a mind mapping application, company security restrictions prevented me from sharing a web-based one, and I'd probably want to walk him through a mind map I produced. But that felt like too much trouble for something small, plus I didn't want another Zoom call on a day otherwise free from them.

Instead, I used my company's go-to document tool of choice: Dropbox Paper. I might not enjoy it as much, but I knew it was a way he was used to receiving and collaborating asynchronously. I confirmed that format with him to be sure, and then I got to testing.

Share your oracles (reasons why you have feedback)

Once I opened a one-time link the application created, there was a page with an animated GIF, the piece of text that was shared, and a button to Copy Value. My immediate testing notes were something like:

  • Remove GIF/make it stop rotating
  • Move button to the top
  • Make font bigger

This customer support lead might have just taken that feedback and made the changes. But since I don't have an existing relationship where I provide feedback about his work, and developing applications is not his normal line of work, I provided more details:

  • Animated GIFs (that can't be turned off) can trigger people with epilepsy or motion disorders (vertigo for example). Here's the W3C guideline on this.
  • Move the Copy Value button above the text you're sharing so it's visible even if the text is ~5000 characters long.
  • Increase the font size from the current 14px to 16px for vision-impaired people. The ADA and typography geeks recommend 16; Apple has 17 as its default font size.

Now the customer support lead understands why I'm asking for these changes. He can make different decisions than exactly what I've suggested while still addressing the problem I'm reporting. Plus he'll know more for next time he's building something.

Acknowledge the limits of the situation

Somewhat to my surprise, the customer support lead started implementing my feedback right away! He took what I said into account, including removing the animated GIF entirely. He appreciated my feedback and wanted me to look at the application again the next day. Unfortunately, the next day was back to a regular work day filled with priorities, pressure, meetings, etc. I told him I wasn't going to have time to test it again. He went ahead and launched a functioning product.

Move on

While testing the product, another colleague noticed that the application had a larger architecture and setup than this particular use case strictly needed. The application also didn't provide an API that would let applications, instead of humans, send secret links around to other humans. I defended the customer support lead: he was doing his best to solve his problem with the tools and skills he had. It wasn't the right time or place to come in at the end of a project that was about to provide value to people (myself included) and announce "this was not the optimal way to build this tool." You don't have to share all the feedback you collect.

Summary

The next time you're wondering if you should parachute in to test something new, consider these steps:

  • check that you have time
  • check that the feedback will be heard
  • collect the feedback in the same way it will be presented
  • present feedback in a way that the audience is comfortable with
  • share your oracles (reasons why you have feedback)
  • acknowledge the limits of the situation
  • move on