Half-Life For Your Backlog

This summer, I helped a team think about the important work that we wanted to tackle. Then vacations happened. Priorities shifted. The product finally went live to a bigger set of potential customers. And those stories we'd written remained in the backlog. When we opened one at refinement this week, a developer joked that the items in the backlog should have a half-life.

Everyone else laughed. I insisted they were on to something.

Backlogs are toxic

Every product team I've been on has items (user stories, bugs, even epics) that are old. Maybe the person who wrote them has since left the company. Maybe the bug doesn't have clear enough reproduction steps or a clear enough impact statement to ever get prioritized. Maybe the dream feature from two years ago already exists, but the earliest description of its possibilities remains.

Nobody needs this garbage. Nobody has the context anymore. And most importantly, nobody will notice if these items disappear from your backlog.

What would happen if, instead of treating every item ever logged in JIRA as a precious pearl, you treated it like the biohazard that it is? Get rid of it!

Evidence of a toxic backlog

I know a team's backlog needs a half-life when:

  • Comments appear on the JIRA items ahead of refinement asking "do we still need this?"
  • Items belong to an epic, sprint, or some other gathering place that is already closed.
  • Items belong to many sprints but have never been in progress.
  • People are scared to edit or remove items from the backlog.

Lean software development stresses just-in-time refinement to prevent the decay, waste, and stress caused by building up a big backlog that never gets smaller.

Narrow it down

Start with a literal half-life on your backlog. Take the date the product began. Subtract it from today's date. Divide it in half. Any items older than that period: delete them. For a product that's been worked on for four years, that's any item older than two years.
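Here's a minimal sketch of that arithmetic in Python (the start date is hypothetical):

from datetime import date

product_start = date(2018, 11, 1)  # hypothetical: when work on the product began
today = date.today()

half_life = (today - product_start) / 2  # a timedelta; dividing by 2 works in Python 3
cutoff = today - half_life

print(f"Delete any backlog item created before {cutoff}")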

Other strategies that have served me even better in achieving a smaller, more robust backlog:

  • Delete any feature requests filed by colleagues when they quit.
  • Delete any items that contain only a title (no details) for a feature that's already been released and isn't on the roadmap for the next three months.
  • Delete any items you wrote.

Then set new rules about how things get added to the backlog, and about how realistic you want to be about the kind of clean-up (even just administrative) that happens after a milestone is complete.

Embrace change

Present you knows so much more about what's important and how things will get prioritized than past you did! Let go of what you thought you knew and embrace what you know now!

Software doesn't live forever, and neither will you. Learning to let go of the dreams that will never come true leaves room for you to dream about a different future, one you can realize. I hope your backlog can reflect that.


What other toxic questions or narrowing criteria have you used? Does JIRA have a (free, please) plugin to delete anything beyond a certain date? Has anyone ever gotten upset that you deleted something from the backlog? Has anyone even noticed?

Photo by Kilian Karger on Unsplash

A Google Reader Replacement

When in the course of human events, it becomes necessary for one person to dissolve the social bands which have connected them with others, I find myself asking what I valued from Twitter in the first place. A constant stream of short, popular updates? No thanks. A Rolodex? Partially. A newspaper? Almost. I wanted to read my friends' blogs and newsletters, collected in a publication to be read at a time to my liking. Like a newspaper.

I wanted Google Reader back.

Google Reader, the beloved (and thus, inevitably, sunset) product, allowed you to subscribe to RSS feeds. RSS feeds connected me to the software testing community before I'd met any of them. I was learning by trial, error, and brute force at my first software testing role, and RSS feeds propelled me into learning from testers further along in their careers.

When Google Reader was retired, Twitter became the way I kept up with people in the industry. They'd tweet links to their blogs, which I'd keep open in enough tabs to crash my browser. Pocket improved my workflow somewhat. I'd scroll Twitter, save the links, and go to Pocket later when I needed longer-form (and downloadable) posts for my subway ride.

I wish I'd found an RSS reader that worked the way I wanted: remembering where I left off, displaying the posts without destroying them. I remember trying Feedly and a few others before giving up in favor of my Twitter + Pocket workflow. That served me from ~2013 until (checks watch) two weeks ago, when Twitter was set ablaze by the egotistical sociopath in charge. I still want to read my blogs, and when I saw this post, I honestly could not tell if it was a joke:

Post recommending you aggregate your RSS posts into an epub format

I suspected the epub format part, a file type compatible with my Kobo Clara ereader, was a rhetorical flourish. But would this solve my problem? Could I read the blogs without reading the tweets? Without a screen??

I'd already set up my Kobo Clara to integrate into a mainstay of my digital life, the tabs I save to read later in Pocket. After a recent unrelated ereader triumph, I started Duck Duck Go-ing if my ereader could subscribe to RSS feeds directly.

Escaping both Adobe and wired syncing gave me new-found freedom.

After a bit of searching, I hit upon a solution that would work: Qiip. (I'm assuming it's pronounced "keep" but do send me your alternate pronunciations.) Qiip lets you sync RSS feeds to Pocket. Sign in with your Pocket credentials, give Qiip the RSS feed URL, and poof: your favorite blog appears in your Pocket list. And for me, ultimately, on my ereader.

I do wish Qiip had separate login credentials. Every time I log in to add another blog, it asks me if it can talk to Pocket again. Yes Qiip, it's fine, go do your thing.

But that's my only complaint, from me, a professional complainer. I love having the things I want to read appear in Pocket without having to scroll through Twitter. I love being able to read more of the internet on my ereader. I love skipping the self-marketing bonanza in favor of what people are trying to say.

A few days later, I discovered that Substack newsletters also have RSS feeds: add /feed to the end of the URL. I'm still unsubscribing from the Substack emails I receive, approximately 1/3 of my personal inbox. But I'm already delighted to have my Saturday morning, tea-in-the-garden reads separated from my email inbox.
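A quick sketch of what that looks like (the newsletter URL here is made up):

import requests

# Hypothetical Substack newsletter; appending /feed exposes its RSS feed.
feed_url = "https://example.substack.com/feed"

response = requests.get(feed_url)
response.raise_for_status()
print(response.text[:80])  # should show an XML/RSS preamble, not an HTML page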


How are you coping with the demise of the main online gathering place for software testers? Is Mastodon your go-to? Do you also dream of reading blogs like you'd read the newspaper? Can you convince my American friends to quit Instagram in favor of the federated Pixelfed?

Did you enjoy this tootorial?

Photo by Zoe on Unsplash

Story Slicing Workshops

I remotely facilitated this story slicing workshop created by Henrik Kniberg and Alistair Cockburn for two of the teams in my unit recently. They named it "Elephant Carpaccio" to give people the mental image of breaking down a big feature into very thin, vertical slices. Joep Schuurkes brought it to my attention when he facilitated the workshop a couple times in 2021, leaving behind not only these very helpful blog posts about how to run it but also the Miro board he used to do so.

My elephant encounter

The Setup

The 2.5-hour workshop as Joep ran it included three parts:

  • a conversation about why we might split stories
  • a description of today's feature we'll be building
  • a short brainstorm about what would be a good (small enough) first slice of this feature

These three parts would fill the first 45 minutes. The rest of the workshop would be smaller groups tackling each of these tasks in bigger chunks of time:

  • breaking down the problem into between 10 and 20 stories
  • actually building the first few stories as sliced
  • a reflection and debrief on the whole workshop

The First Group

In my first run of the workshop, I saw a few things in the story breakdown that I didn't expect:

  1. American state abbreviations: The problem lists different sales tax values for AL, TX, and three other abbreviations Americans would typically know. Participants wanted to talk about the states using their full names, but didn't know which abbreviation belonged to which state. I filled them in using the table I created.
  2. Sales tax vs. VAT: Another American vs. European participant thing! The answer to "how much does the thing cost?" will be different if you're adding the tax to the price, or assuming it will be included in the total. This wasn't important to the solving of the problem, so I let this difference persist.
  3. First things first: Calculating the price was laid out very clearly as the first problem to solve (a thin first slice might look like the sketch just after this list). One persistent participant really had their heart set on data input validations and a particular user interface. It took several tries from their teammates, and ultimately my nudging, to encourage them to tackle the problem from the most important pieces first.
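For context, the exercise asks for the total price of an order given a quantity, a unit price, and a US state. A first slice might hardcode a single state and skip the discount tiers entirely; something like this sketch, where the tax rate is illustrative rather than the official exercise value:

# A thin first vertical slice: one hardcoded state, no discount tiers yet.
SALES_TAX = {"TX": 0.0625}  # illustrative rate; more states come in later slices

def total_price(quantity: int, unit_price: float, state: str) -> float:
    subtotal = quantity * unit_price
    return round(subtotal * (1 + SALES_TAX[state]), 2)

print(total_price(10, 20.00, "TX"))  # 212.5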

I switched the instruction from taking a demo/screenshot every 8 minutes to making one after each slice. This helped raise awareness of the difference between how they were breaking down the work and how they were actually picking it up. They might get through two or three slices in one go, only to have to go back and demo each scenario individually.

A few insights came through more clearly in the debrief than during the exercise:

  • It was easier to see progress and celebrate it when the work was broken down into smaller pieces.
  • The team understood the concept a lot better now, but there was still a big delta between how small they could slice a story and what would make sense given that the burden of code review and testing typically takes a few days.
  • A thinly sliced story is not the same as an MVP.

The Second Group

My second group benefitted from slightly clearer instructions. But they had all three of the insights the first group uncovered even before they started slicing or building any stories. They got hung up on other intricacies of the problem:

  1. Money: The American dollars in the problem statement used a . to separate the integer part of a money calculation from its decimal component. Many of the participants were used to using , for that purpose, so they needed to consciously second-guess every calculation output.
  2. Saving: Both teams were using IDEs they weren't used to, which they hadn't set up for auto-saving. Most often an unexpected result prompted the question "Did we forget to save?"
  3. Slices as defined: They got the idea of how to slice the stories. But when they got to building the solution, they had to have an "Are we really going to do it like this?" conversation.

Just breaking down the stories wasn't where the learning happened for this team. It was the building that crystallized the gap between being able to define small pieces of work, and what it's like to actually build like that.

They got farther in the building than my first group had, so they were able to see how adding one feature impacted the already existing ones. The big insight from this group's debrief was how one small addition affects all the existing features, and the kind of time needed to do it right and test it thoroughly.


I'd love to run this workshop in other settings, but having a shared programming language and an ability to work in a strong-style pair are too big a barrier to entry to submit this as a workshop to a conference.

How have you gotten strangers started working together on code? How have you taught (or been taught) about slicing user stories into smaller, vertical slices?

What Would You Say You Do Here?

For the first time in years, I was with a group of people who:

  1. weren't at a conference
  2. didn't understand what my work looks like, and more excitingly
  3. were interested in hearing about it!

I hadn't been asked "what kind of work do you do?" since my role changed a year ago, so I wasn't prepared with a thorough-enough short answer for the person asking. They'd founded a digital agency a few months earlier and were used to farming out work to developers. They didn't quite understand where a "tester" might fit into their process.

What I said in the moment

"I make sure that stuff works, and the thing that was built is what you wanted. Sometimes that means I'm looking at the website, sometimes that means I'm writing code to verify stuff."

I got scolded later for giving such a simplistic view of my skill set, the industry, and particularly the depth and breadth of my current role.

What I could have said

"Everybody needs an editor. Testers help improve things in the product and process. They're there to collect information, sift through what's relevant, and advocate for what's important. This can look like making sure we agree how big a chunk we're biting off, gathering standards and expectations to compare them to what's been built, or troubleshooting issues customers are having."

"I've been doing this well enough for long enough that I'm not just doing this for one team, I'm doing this for my whole department, all seven teams. I'm in a position to see obstacles coming farther down the road, and I have the skills to pivot more quickly when unexpected hurdles catch us by surprise. I ask curious questions to understand what might be missing, and help eliminate the work that's distracting us from our focus."

It's close to a structure I'd want for this kind of explanation, with three layers of information, in case the person asking lost interest or we got interrupted.


How would you describe your role? Do you spend more time with examples from your day-to-day, or do you find that people outside the industry connect more to what you're saying when you keep it abstract?

Friends of Good Software - September 2022

The Friends of Good Software (FroGS Conf) had its sixth edition on Thursday, 8 September. I was eager to have it on a Thursday myself, both to keep a real weekend and to include people who could never attend on a Saturday, but it had me a bit nervous about the number of attendees. Luckily we held strong by my measures of success: active participants, a full screen of faces, enough concurrent sessions that people were jockeying for different spots, and a retro full of requests to do it again. We will. :)

I ran the morning marketplace to determine our sessions. And I had the energy to attend a session in each of our five slots.

Heather Reid - Logs, usage stats, and how you use them

A FroGS newcomer but by no means a stranger, Heather Reid proposed a session building on her recent blog post about what the "highly-requested" part of "highly-requested feature" really means. She wanted to hear how other product teams were approaching data-driven decision-making.

Along with a list of tools, we identified the hardest part about using data: the shift in mindset required. It's a journey to go from decisions based on your experience to decisions based on your customers' experiences. Identifying the baseline and framing things as small experiments can help get the ball rolling.

Sanne Visser - Continued improvements to my planning system

I'm always interested to hear how Sanne's very elaborate planning system grows and compares to the simpler set of annual directions and weekly tasks I have.

Sanne uploaded some photos of her physical notebooks, gave examples of what she's striving towards, and gave us a realistic look into which pieces and goals fall by the wayside when shit hits the fan. Sanne was kinder to Samuel Nitsche than he was to himself as he confessed he had no planning system at all. "What gets done now without a system?" she asked. Sanne's looking to shift her thinking from goals to habits in the coming months, focusing on outcomes rather than outputs.

Career trajectory - two separate sessions

I will refrain from publishing the personal details of my two fellow testers, who are at different "what do I want from my job?" points in their careers. I'm delighted to see that along with ensemble programming and database migration questions, FroGS has become a good place for support and advice during a time of reflection and contemplation. These topics bubbled up in both sessions:

You create your own luck

Our careers may seem like 98% luck. But continuing to connect our skills and values to the work we do puts us in more and better situations to keep building our careers. We can value the individuals on a team more than the mission of a company, or vice versa. Finding what we want more of is a years-long and perhaps continuous process.

You are more than your career

There is more to life than software, duh! We can make tradeoffs in our work to support the way we want to live our lives. That may mean rejecting the "hustle culture" of chasing the next promotion, and recognizing the value of staying an individual contributor, getting satisfaction (or money) from a side-gig, or cutting down on working hours to pursue a passion.

Sanne Visser - Choosing not to access the system under test

After an intriguing experience report at LLEWT a few months ago, I was curious for an update from Sanne. I did not literally say "You are putting yourself in incredible pain" but someone did.

Sanne made an intentional choice not to be the person fixing things, to solve the underlying problem instead, by not getting the training or hardware necessary to perform testing herself. "I have been so frustrated with myself very, very frequently," she confessed. Some data setup would have been so much easier if she could just access the system. But Sanne's colleague who joined the session agreed it would have been too much on top of what Sanne's already responsible for.

Sanne's taken on shortening cycle time as her main goal. From the "multi-headed dragon" of problems she notices, she's been getting the team to vote on which experiments to try. Her improvements to stability, predictability, and planning have already made an impact. In the end, she gave herself permission to identify exit criteria for this experiment.


Thanks to everyone who attended FroGS Conf, and especially everyone who took notes. It allows participants who joined a different session to still share in some of the takeaways. And as noted by Heather, it helps people in the session who've missed a particular word or point, and builds the feeling of a collaborative community working together.

Based on the retro and who was able to join us, it appears that some people can only do weekends and some can only do weekdays. It's likely that we'll implement Sanne's suggestion of switching off between those options. Starting an hour later (10am instead of 9am Central European Time) worked better for us in the middle of Europe, and much better for our friends in the UK, Ireland, and Portugal. We'll keep that innovation.

Shoutout to the NS for going on strike and ruining the day we wanted to get together for drinks! And apologies to our co-organizer Cirilo Wortel for scheduling this event on a weekday during what turned out to be your busy time leading up to a big release. The rest of the organizers (Sanne Visser, Huib Schoots, Joep Schuurkes and myself) have our own retro later this week. We'll see how we can incorporate the rest of the feedback from the retro into our next editions.

De-Google-izing

Sometime in 2019, this article listing all the alternatives to Google products came to me over the wires. Ok it was probably Twitter. In a world of increasing surveillance, data mining, targeted advertising, and cookie pop-ups, I made it my mission to get off of Google products completely. Here's what I was able to do, why I went with the alternatives I did, and which Google products I'm still stuck on three years later.

What I was able to switch

Search Engine

This was the most straightforward one to switch. I'd already seen several colleagues turn to Duck Duck Go. I went through each of my browsers on my work machine, personal machine, and iPhone to point there instead.

In the first few weeks, if a search didn't return the perfect result in the first three listings, I'd find myself turning back to Google.com. Now I only end up on Google when I'm literally looking at a page that gives me zero results and I want to make sure the whole internet has nothing on the topic.

Email

This too was a straightforward one for me. The hosting service I use for elizabethzagroba.com, Dreamhost, came recommended by my friend Sahar Baharloo and already included email as part of the services for my website. I'd set up me@elizabethzagroba.com in 2011, but finally started switching my logins for my accounts to be connected to it. Switching all my accounts (and discovering which truly could not be switched) was the biggest part of this whole endeavor.

A couple friends recommended the privacy and security of ProtonMail. The email addresses I wanted had already been claimed, and their calendar feature wasn't available at the time. If you're looking to switch, try Proton first.

Web browser

I'd set my default to Firefox at work already. When everybody else uses Chrome, you catch more bugs in Firefox.

The article introduced me to the Brave browser, which I started using for personal stuff. I settled on Brave for my desktop machines and the Duck Duck Go browser for iOS. I'm not entirely sure why I chose those; I think it was just a successful first experiment.

Authenticator

I was able to switch to Authy for almost everything. I believe I chose it because it was listed first alphabetically in the article. The LastPass account I use at work won't let me use anything except Google Authenticator for reasons I cannot explain, so I do still have that app on my phone with that one account.

File hosting

I got everything I'd created off of Google Drive: deleting most of it, moving some of it to my personal machine, and putting a few precious things into Dropbox and Dropmark, where I already had accounts.

Calendar

Using my work (Outlook) calendar for during-the-weekday events and my personal physical calendar notebook worked great before the pandemic. Now I've got a mix of in-person things and video calls as part of my personal schedule.

All of the suggested digital calendar alternatives cost money and came tied to an email address, which I didn't want to switch again. I end up using a combination of archived emails for video chat links and paper for writing down when the appointment is. It doesn't feel "optimized" or "automatic" in any way, but the physical act of flipping the pages in my notebook and writing the event down helps me not to forget it.

Video

I don't have the need to host video myself. I switched the video links on my website to point to Invidio.us instead of YouTube.com for my conference talks. I do find myself still using YouTube for exercise videos (thanks pandemic!) or when someone shares something on Twitter.

Translation

This is one of the ones I was most excited to discover. DeepL and Linguee, built by the same company, are for full-text translations and single-word dictionary lookups respectively. The quality of the translations is SO MUCH BETTER than Google Translate. The thought of no longer sending the sensitive information I receive in Dutch (tax letters, doctor results, immigration exams) to Google either through my Gmail or Google Translate feels great.

DeepL on desktop has a keyboard shortcut integration, so hitting Command + C twice (instead of the once you'd use for copying) opens the application and pastes what you've selected into the translator. Looking up individual words in Linguee gives you Wikipedia examples where the word is used in a sentence, so you can also see if it's part of a colloquial phrase or which preposition it's used with. Thanks to my friend Marine Boudeau for originally pointing me to these.

Analytics

I added Clicky analytics to my website. This is another one I tried and stuck with because it was first in the list. I don't pay them, so the data's forgotten after 30 days and I have to log in every couple of months to keep the account alive. I try not to think about how unpopular my website is honestly, but when something blows up on Twitter, I like being able to see all the different countries my website visitors are coming from.

Fonts

I had been using Google Fonts on my website. Switching to Font Squirrel required choosing new fonts to use and hosting the fonts myself (a.k.a. putting them in a folder and using relative links instead of absolute ones). This was probably the most trivial thing to switch over.

What I wasn't using in the first place

I wasn't downloading any video games from Google Play, using the Android OS for my phone, instant messaging with GChat/Hangouts, or using Google Domains for hosting. Nor shall I be!

What I haven't switched

There are a few things I haven't switched, either because it's too much trouble when trying to live in a society with other people, or because I haven't given the alternatives a fair shake.

Maps

This is the big one. I spent a few weeks trying HereWeGo as an alternative to Google Maps. It was so bad that I decided to use it instead as an exploratory testing exercise. I need bike directions combined with landmarks, and I haven't found another map that combines them as well as Google does. Please tweet me what you're using instead if you've gotten used to something else. I'd be very interested to try again.

Docs, Sheets, Forms

People will read a document you send as a Dropbox Paper document, as an attachment, or in some other uncollaborative format. But convincing someone they need to set up an account at a different service just because you don't want to use Google is a step beyond what societal conventions will allow at this point.

Typeform makes more beautiful forms, but viewing the responses still puts you back in a Google Sheet. Some submissions will only accept Google Docs. These are not fights I can win, so I've stopped fighting.

Particular calendar features

If I want to schedule a call with my friends where they can edit the invitation, I'm stuck sending a Google Calendar invitation from my Gmail. Accepting a Google Calendar invitation sent to me@elizabethzagroba.com in the browser where I'm logged in to my Gmail gives me a 400 error. A calendar notebook plus saved emails has worked surprisingly well for the relatively low volume of personal appointments I have.

Movies

Google Play is where, with my American credit card, I can rent movies that aren't available on Netflix. It feels better than giving money to Amazon, but I also haven't looked that hard for other options of how to rent individual titles without a subscription.


That's my Google situation.

That doesn't absolve me of all the other corporations stalking me and ruining the world. I've quit Facebook but not Instagram. I've limited my Amazon purchases to Christmas gifts for family members I couldn't find another way to ship to, and moved off Goodreads to the vastly superior recommendations and statistics of The StoryGraph. But the websites I get paid to work on are hosted through AWS. I'm still tied into Apple for hardware, Photos, my desktop email client, Preview, and Keynote. I'm mooching off a shared Netflix account until Netflix finally puts the kibosh on that. I've had to lower my expectations for my ability to escape these companies and remain an online professional.

What I can do is afford to pay for email and web hosting. If my translation and analytics services stopped having free options, I'd likely pay for those too. Something I didn't expect in moving off Google products: it feels good to pay their competitors so they can survive. Every little bit helps.

From API Challenges to a Playwright Cookbook

Soon after Maaret Pyhäjärvi and Alex Schladebeck began their endeavor to practice testing APIs using the API Challenges from Alan Richardson (aka The Evil Tester), they looped me into their periodic practice sessions. Why? To make Past Elizabeth jealous, presumably.

API Testing Challenges

We gathered for an hour every few weeks to work through the challenges. The tools we were using (pytest, the Python requests library, and PyCharm) were like home for Maaret and me. I'd been writing in a framework with these tools for my job for a few years already.
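To give a taste of the pattern (the base URL and assertions here are illustrative; the challenges app can also run locally), the very first challenge is a GET that lists all the challenges:

import requests

# Illustrative: base URL assumes a locally running instance of API Challenges.
base_url = "http://localhost:4567"

def test_get_challenges_returns_the_list():
    response = requests.get(f"{base_url}/challenges")
    assert response.status_code == 200
    assert len(response.json()["challenges"]) > 0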

I wasn't the only one. These tools were free to use and available for a number of years already. What the three of us combined couldn't figure out by trial-and-error, reading the error message, reading the darn description of what we were supposed to do again, or relying on patterns from previous exercises, we were able to Google. With one notable exception of course, as we are testers after all:

It may not seem like you'd need three people to do the work that one person could do. But I assure you, having extra pairs of eyes to catch a typo, remember whether we were expecting it to pass or fail this time, see immediately that it's a whitespace issue making PyCharm angry, crack a joke, or help decide whether to keep going in the same direction makes the work go more smoothly.

More than once, we'd end a session a few minutes early because we were stuck and lost, only to come back a couple weeks later with fresh eyes, able to understand where we were stuck and what to do about it. After several months meeting infrequently, we got through all of the API Testing Challenges!

Then we were like...now what? We like learning together, but we'd achieved our goal.

Starting out with Playwright

After a bit of brainstorming, we landed on a skill Alex and I were both still building: UI automation. Naturally, Maaret was way ahead of us, and pointed us towards the Playwright framework and a practice site from Thomas Sundberg with all the greatest hits: radio buttons, drop-downs, alerts, you name it.

Our experience with UIs, DOMs, automation, Selenium, and exploration helped us, but didn't prevent every pickle we got ourselves into with Playwright. Though the documentation will tell you a lot of what you need to know (if you've correctly selected Python instead of Java or Node.js at the top), our desperation kept exceeding our patience. We escalated to Playwright champion Andrew Knight and the Playwright community Slack channel.

Several times, it wasn't only the code that needed changing, but our perception of how Playwright wanted to interact with the website. These are a few I remember:

  1. an API response from a browser context can't be collected from a page context
  2. setting different contexts for a page and an alert on that page
  3. having that alert knowledge not help us when we also had to fill in a prompt
  4. expecting something in the DOM to tell us when an item in drop-down was checked

For the first three, wrapping our heads around a different way of thinking got us through the problem. For the last one, we lowered our expectations about what we could check. (Pun intended.)
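For instance, on the second and third items: alerts and prompts don't live in the DOM, so Playwright wants a dialog handler registered before the dialog ever appears. A rough sketch in Python (the page URL and selector are made up):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/practice")  # hypothetical practice page

    # Register the handler first; accept() can also fill in a prompt's text.
    page.on("dialog", lambda dialog: dialog.accept(prompt_text="my answer"))
    page.click("#trigger-prompt")  # hypothetical button that opens the prompt

    browser.close()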

Playwright Cookbook

We've tested what we can and should test on our first practice site. In upgrading to a more challenging one, we realized that we'd benefit from the knowledge our past selves gained. And that you could too.

We've published our progress on github as the Playwright Cookbook. It's a Python repository of what we found worked for different UI situations. It goes one step beyond the Python documentation on the Playwright website: it lets you compare an actual page to a test where we were able to select the element.

Fun was had by all

Trying to quickly get something done with a new UI automation tool had been my white whale: something I knew would be annoying enough that I wouldn't know how to get unstuck. Working in an ensemble meant either (1) the knowledge we needed was in the room and just had to be shared, or (2) two brilliant, successful ladies known for their testing prowess also didn't have a clue what was happening. Either way, it made things better and achievable.

I am notoriously opposed to fun. But this has been fun.

What's next

What is next for us? We know we want to:

Have we reflected on what's valuable and not valuable to test on an API? Will we share more about this beyond this blog post? A conference talk or workshop? A Twitch stream?? Only time will tell. For now, enjoy the github repo. :)

Amateur Professional Career Coach

People come to me for career advice. It's been everybody -- colleagues, former colleagues, testers from the community, friends I know outside of software, younger family members -- everybody. I am not completely sure why. I have a job I enjoy, but I'm not a professional coach. I seem to be an amateur coach of professionals.

I do have (and am happy to share) strong opinions about what people should do when they describe particular situations they're in. I can immediately tell them what I would do. But figuring out what they should do is much more useful. So in response to tough questions, I ask them tough questions back.

Why is this so hard?

People come to me with tricky situations in their current roles. For some, venting is enough. But I might ask someone who seems exhausted, sick of it, checked out, or stuck for too long:

  • How much longer could you let things stay the way they are? Another six weeks? Another six months?
  • It sounds like you're having {X} trouble with {Y} person. Have you told them this directly?
  • I don't have the skills to mentor you in {Z}. Do you know someone who does? Or where could you find someone like that?
  • Would aligning expectations help?
  • Is saying "no" an option here? (I've been called the "No Coach" for this.)

What should I do next?

I don't know what you should do next, or even if you should change what you're doing now! But here's what I will ask you about so you can decide:

  • What do you like about what you do now?
  • What parts of your current job do you want to stay the same?
  • What do you avoid or dread? What keeps you in bed in the morning?
  • What do people come to you for help with?
  • What is there no hope of changing in your current situation?

I know Esther Derby gave a webinar in March of 2021 describing the tipping point between whether you can reconcile your needs and values with your employer, or whether you should leave. But unfortunately both the webinar and her name for this zone are lost to me.

What do you think of my CV?

I've written both for the Ministry of Testing and on my own blog about resumes and how they relate to the interview. TL;DR: Tell me about the impact of what you've done, and give me some indication of how fluent vs. on-the-shelf the skill is for you. Other things I end up asking people:

  • I know you {did this other thing} or {have this other skill} too. Don't you want to brag about that?
  • It sounds like your skills would be a great match for {this kind of job}. Is that the kind of role you're applying for?
  • If a recruiter were trying to find someone like you on LinkedIn, what keywords would they search for?

Some of the feedback I've received after recent resume reviews:

  • "Thanks again for your feedback on my CV, it was INCREDIBLY useful and very gentle at that."
  • "You are a great feedback provider."
  • "Elizabeth was so good in helping me with my resume!!"

I'm curious who you've gone to for career advice. Were they in your industry? What made you seek them over other people for advice or wisdom? What question or piece of advice has changed the way you look at your current job or for a new one?

The Power of Separating Setup and Teardown From Your Tests

This week, I was trying to find an explanation for my colleagues about when it's better to separate the setup and teardown of your tests from the test code itself. I was hoping that pytest's own documentation would have a recommendation, since our test code for this particular repository is written in Python with pytest as a test runner. Pytest does explain many features of fixtures, and what different test output can look like, but not the power of combining them. That's what I'd like to explain here.

An example

I can't show you the code I was looking at from work, so here is a relatively trivial and useless example I was able to contrive in an hour. (Sidebar: I once tested an address field that truncated the leading zeroes of post codes, so though this test may be trivial, testing that the post code made it to the database intact can provide value.)

There's an API called Zippopotamus that can either: 1. take a city and state, and return you details about matching places; or 2. take a post code, and return you details about matching places.

I've got two tests below, both trying to accomplish the same thing: see if all the post codes returned for the city of Waterville in the state of Maine also include Waterville in their results.

  • Setup: get list of post codes for Waterville, Maine
  • Test: for each post code, check that Waterville is in the list of matching places
import requests
import pytest

zippopotamus_url = "https://api.zippopotam.us/us"


@pytest.fixture()
def post_codes():
    response = requests.get(f'{zippopotamus_url}/me/waterville')
    assert response.status_code == 200
    places = response.json()['places']
    post_codes = [place['post code'] for place in places]
    return post_codes


class TestZippopotamus:

    def test_setup_included_waterville_maine_included_in_each_post_code(self):
        response = requests.get(f'{zippopotamus_url}/me/waterville')
        assert response.status_code == 200
        places = response.json()['places']
        post_codes = [place['post code'] for place in places]
        for post_code in post_codes:
            response = requests.get(f'{zippopotamus_url}/{post_code}')
            assert response.status_code == 200
            places = response.json()['places']
            assert any(place['place name'] == 'Waterville' for place in places)

    def test_setup_separated_waterville_maine_included_in_each_post_code(self, post_codes):
        for post_code in post_codes:
            response = requests.get(f'{zippopotamus_url}/{post_code}')
            assert response.status_code == 200
            places = response.json()['places']
            assert any(place['place name'] == 'Waterville' for place in places)

The first test shows the setup included in the test. The second test has the setup separated from the test. It appears in the fixture called post_codes.

(venv) ez@EZ-mini blog-examples % pytest                     
========================== test session starts ===========================
platform darwin -- Python 3.10.1, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/ez/blog-examples
collected 2 items                                                        

test_error_vs_failure_pytest.py ..                                 [100%]

=========================== 2 passed in 1.46s ============================

When you run these tests, they both pass. One test is a little longer, which you may find easier to follow than navigating around in the code, or harder to follow because there's code that's more about data collection than what we want to test. I find it yucky (a technical term) to have more than one thing called request or response in a single test, but these are all personal preferences.

Now imagine instead of waterville in the API requests, I've gone on auto-pilot and typed whatever in the setup for the tests. Here's what pytest gives us as the output.

(venv) ez@EZ-mini blog-examples % pytest
========================== test session starts ===========================
platform darwin -- Python 3.10.1, pytest-7.1.2, pluggy-1.0.0
rootdir: /Users/ez/blog-examples
collected 2 items                                                        

test_error_vs_failure_pytest.py FE                                 [100%]

================================= ERRORS =================================
_ ERROR at setup of TestZippopotamus.test_setup_separated_waterville_maine_included_in_each_post_code _

    @pytest.fixture()
    def post_codes():
        response = requests.get(f'{zippopotamus_url}/me/whatever')
>       assert response.status_code == 200
E       assert 404 == 200
E        +  where 404 = <Response [404]>.status_code

test_error_vs_failure_pytest.py:10: AssertionError
================================ FAILURES ================================
_ TestZippopotamus.test_setup_included_waterville_maine_included_in_each_post_code _

self = <test_error_vs_failure_pytest.TestZippopotamus object at 0x101f4c160>

    def test_setup_included_waterville_maine_included_in_each_post_code(self):
        response = requests.get(f'{zippopotamus_url}/me/whatever')
>       assert response.status_code == 200
E       assert 404 == 200
E        +  where 404 = <Response [404]>.status_code

test_error_vs_failure_pytest.py:20: AssertionError
======================== short test summary info =========================
FAILED test_error_vs_failure_pytest.py::TestZippopotamus::test_setup_included_waterville_maine_included_in_each_post_code
ERROR test_error_vs_failure_pytest.py::TestZippopotamus::test_setup_separated_waterville_maine_included_in_each_post_code
======================= 1 failed, 1 error in 0.71s =======================

Neither test passes. They both get mad at the same spot, where they're checking that they got the post codes for "Whatever, Maine" and found that, oh wait no, they haven't been able to do that.

But one test fails and one test errors: The test with the setup included fails. The test with the setup in the fixture errors. This difference is why I prefer to separate my setup (and teardown, which behaves the same way) from my test code.
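Teardown lives in the same fixture: everything after a yield runs once the test finishes. A minimal sketch, with a stand-in resource:

import pytest

@pytest.fixture()
def resource():
    data = {"name": "example"}  # setup: runs before the test
    yield data                  # the test runs here, using the yielded value
    data.clear()                # teardown: runs after the test; an exception
                                # here is reported as an error, not a failure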

The power of separating setup and teardown from your tests

  1. More of the test code is about what's being tested, instead of being about how you get to the right place.

  2. Pytest will give you an error when code fails in the setup or teardown, and a failure when the code inside the test fails.

  3. If you're reusing setup or teardown, you'll only have to fix an issue in the code in one spot.

  4. If you're running a bunch of tests with shared setup or teardown in a pipeline, it'll be easier to diagnose when something outside what you're trying to test has gone awry.

Reasons to keep the setup and teardown with your tests

  1. You are early enough in the development process that the setup and teardown don't need to be used anywhere else yet. You can extract them when they do, but for now, it's a little faster to read with everything in one place.

  2. If you don't have your IDE set up correctly, PyCharm may not let you Ctrl + click through the code to follow the fixture code. (Here's how to set up PyCharm to recognize pytest fixtures.)

  3. If you don't trust someone reading or debugging the test (other colleagues, future you, or possibly even other colleagues after you've moved to a different team) to be able to follow the code through to the fixtures. Or no one else is looking at the code!

What have I missed?

What other reasons are there? What do you tend to do for your team when your code is shared? What do you tend to do for yourself when you only have your future self to help out? How would you have written this Python code differently? Which articles do you point to when you're explaining a separation of concerns?

The Llandegfan Exploratory Workshop on Testing

An adventurous band of brave souls gathered in the northwest of Wales on the week of a transit strike in the United Kingdom. The topic: whole team testing. The conclusion: even the experts have trouble doing it well.

The peer conference format was apt for exploring mostly failure. Brief experience reports proved ample fodder for in-depth discussions of the circumstances and reflections on possible alternatives. It's better to reflect on your less-than-successful work with your troubleshooting-inclined peers than it is with your colleagues.


Ash: When "Whole Team Testing" becomes "Testing for the Whole Team"

First up was Ash Winter with a story of culture clash between Ash and the teams he helped guide in their testing (cough did all the testing for cough). Ash discovered over the course of his six-month contract that getting everyone to nod along to his suggestions of having unit tests, API integration tests, front-end tests, limited end-to-end tests, and exploratory tests was completely different from agreeing on what those were or building the habits on the teams to make them happen. Saying the words "sensible journeys" and "meaningfully testable" wasn't meaningful at all.

As a white man who looked the part, Ash found it easy to get invited to the right meetings and be seen as the authority. (How wonderful to be able to have a group all share in how outrageous this is compared to the experience other people have!) Ash was seen as the authority for all testing decisions, so teams looked to him rather than thinking for themselves.

Upon reflection, Ash acknowledged he would have done better to slow down and understand the expectations of the project before jumping in with prescriptions from his consulting playbook. The teams needed to know what habits to build day-to-day instead of receiving what must have sounded like prophecies from the future.

Sanne: Problem Preference

While listening to a book-that-could-have-been-a-blog-post, Sanne came across the question: "How have you chosen the kinds of problems you pick up?" It made her think about her preference for focusing on team habits and communication so she could bring underlying issues to the surface. She's got a predisposition to be proactive, and will run at a problem a hundred different ways if you let her.

On her new assignment, Sanne wants to let the team do the work instead of trying to do it all herself. So she's taking a radical step: she doesn't have access to the test environment. Her goal is to leave a legacy behind at the companies she works for, but it's too soon at her current assignment to evaluate how that will pan out.

Yours Truly: This Diagram Asked More Questions Than It Answered

I told the story of this blog post, with an addendum: I made a similar diagram for a different product that came in handy on the project I'm currently jumping into.

It was a great delight to hear my peers admire the middle of my three diagrams, the one deemed unprofessional and literally laughed at by my colleagues. Sometimes the complexity of the model reveals more about the complexity of the situation than a clean, organized model does.

I don't have any notes from what I said or what discussion occurred afterwards. Perhaps another participant's blog post will cover that bit in the coming weeks.

Duncan: Quality Centered Delivery

Duncan showed a truly dazzling amount of data extracted and anonymized from his five teams' JIRA stats. In so doing, he was able to prove to the teams (after wading through their nit-picks and exceptions) that a huge proportion of their time was spent idle: questions in need of an answer, code in need of a review, customers with no one to hear their feedback. Duncan deliberately dubbed this "idle" time to keep the focus on how the work was flowing rather than on optimizing for engineer busyness.

To shrink idle time, the developers, testers, and PM started working in an ensemble. Idle times dropped dramatically. The team kept a Slack call open all day for collaboration. One fateful day, the too-busy subject matter expert and the too-busy client dropped into the call. Wait time plummeted to zero. The story of this particular success proliferated through the organization thanks to praise from an influential developer on the team: development was fun again.

Duncan's was the one success story of the peer conference, though he was quick to point out that things could have changed after he left the assignment.

Vernon: How could I, Vernon, "The Quality Coach" Richards, make communication mistakes?!

It was a delight to get into the nitty-gritty details of a story that Vernon conflated and glossed over a bit in his keynote at Agile Testing Days in 2021. And to see the relationship repaired and strengthened in real-time with a colleague who witnessed what went down. (I'm just here for the gossip, clearly.)

A colleague asked a tester to create a release plan for the team by themselves. As the tester's manager, Vernon thought this was an outrageous way to "collaborate". Without spending time to understand the colleague's context, beginning from a place of unconditional positive regard (as coaches are meant to), or verifying his approach with his own boss, Vernon went on the warpath against this "bully".

Remarkably, escalation and accusation did not solve the problem at hand: the tester didn't have the skills to build a test plan. Nor did Vernon's outrage address the real problem: there wasn't alignment at the organization about what the biggest fire was. Vernon wishes now that he'd protected his 1-on-1 time with his direct reports, and empowered them to address the situation rather than doing it for them.


In summary, it is not easy, straightforward, or simple to get a whole team to test.

Our lunch walk with a view of Snowdonia

A note about the surroundings for this gathering: spectacular. It was a 13-hour journey of four trains, one bus, and one bike to get back home, but it was worth it to be transported to views of Snowdonia National Park, a small town where the Welsh language holds a stronger footing than I expected, and a small group willing to make the same trek to geek out.

Many thanks to Chris Chant, Alison Mure, and Joep Schuurkes for making this conference possible, well-facilitated, and parent-friendly. Many thanks to my fellow participants: Ash Winter, Sanne Visser, Duncan Nisbet, Vernon Richards, Gwen Diagram, and Jason Dixon for being my peers. And B. Mure for listening well enough to capture some of the goofy things I said.

I look forward to making the trek again in the future.