I don't know, let's look together.

A few weeks ago, something was failing. My colleague didn't know why. They reached out to me. I've copied our Slack conversation mostly verbatim, with a bit of cleanup for typos.

Here's how it began:

developer [11:35 AM] Hi Elizabeth, do you by any chance have experience with pytest exit codes in gitlab ci? {link to pipeline job output} this job on line 316 runs pytest, and the test fails. I would expect to get an exit code 1, but as seen on line 390 it is 0.

So many good things here already. They've asked about my experience before assuming I don't have any. They've linked to the pipeline job so I can read the error message in context myself. They've also summarized the behavior they expected and the behavior they actually saw.

ez [11:37 AM] not really, but i thought those exit codes were a bash thing, not a python thing {link to stack overflow}

I stated up front that I didn't have the answer. This gives my developer a chance to bail immediately, in case they know a more qualified but less available expert. I used safety language ("I thought" vs. stating a fact) to reinforce that I could be wrong, but link to my source so my developer can judge for themselves.

developer [11:39 AM] pytest does describe them here https://docs.pytest.org/en/7.1.x/reference/exit-codes.html

developer [11:40 AM] gitlab indeed by default exits on error, which can be disabled with set +e

The developer has found (and read) an even better reference, the documentation for the pytest library, huzzah! That definitely trumps my Stack Overflow link. Now it's clear which reference to use. They've also brought us one step further along in the troubleshooting process, introducing set +e as a potential way to override whatever pytest is doing with the exit code.

ez [11:40 AM] huh, guess you know more than i do!

Recognition! Not exactly praise here, but I admit that they are more informed and on a better track. Celebrating when you're wrong and the developer's right helps build credibility and rapport for the next time when it's the other way around.

Now I want to meet them where they are, and get them one step further.

ez [11:46 AM] can i give you a call? i’m trying to figure out if the code is being logged in the wrong spot, or if gitlab isn’t responding to the code correctly.

developer [11:47 AM] yes sure

Zoom [11:47 AM] Call | Zoom meeting started by elizabeth.zagroba | Ended at 11:55 AM - Lasted 8 minutes | 2 people joined

I'd spent a few minutes Duck Duck Go-ing and reading about the issue. I had more links pulled up in my browser, but rather than continuing to chat links and theories back and forth, I wondered if calling the developer would get this solved by lunchtime at noon.

ez [11:50 AM] https://gitlab.com/gitlab-org/gitlab/-/issues/340390

I don't have the recording of the meeting, but the last message I sent the developer was while we were on the Zoom. I put the link in the Slack chat so it wouldn't be lost to time or browser history after we hung up.

My theory about the exit code was on the right track. The developer assumed that if pytest exited with a 1 or a 0 on line 174, the bash script on line 175 would know that already. It didn't. We had to collect the exit code on line 174.
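The fix can be sketched like this. It's a simplified stand-in for the actual pipeline script (which I don't have in front of me), with `sh -c 'exit 1'` playing the part of the failing pytest run:

```shell
#!/bin/sh
# GitLab CI normally aborts the job at the first failing script line,
# so turn that off while running the test command:
set +e
sh -c 'exit 1'; exit_code=$?   # stand-in for `pytest`; capture the code
                               # on the SAME line the command runs
set -e
echo "captured exit code: $exit_code"
# In the real job, you'd finish with `exit "$exit_code"` so the
# pipeline still reports the test failure.
```

The key detail: `$?` holds the exit code of the most recent command, so reading it on a later line (after anything else has run) can silently give you a different command's result.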

Only the last little piece, about needing to collect the exit code on the same line the command is executed, was new information. This technical bit was the only thing I learned that day. The rest I knew, and am grateful for:

  • Ask people about their experience before diving into an explanation.
  • Describe what you've tried already.
  • Admit when you don't know something, or describe how deep your knowledge is. (And be kind to people who make themselves vulnerable in this way.)
  • Link to the thing so your pair can see what you're talking about for themselves.
  • Celebrate the good things.
  • When the communication format is getting in the way, try something else.

I would definitely pair with this developer again, on any problem, whether or not I knew the answer. Every day should be so great.

Agile Testing Days 2022

I was on the program committee for Agile Testing Days again this year, so I reviewed (anonymously) one out of every six submitted presentations. It was a bit heartbreaking to see the number of skilled professionals whose talks and workshops didn't make the cut. I can't say that the cutthroat competition necessarily led to a better program, but the sessions I attended surpassed my expectations.

Agile Testing Days is A LOT, so these are my notes and takeaways from only some of the sessions I attended.

Anne-Marie Charrett - Quality Coaching Masterclass

Anne-Marie graciously let me participate in (and host a small bit of) her tutorial about quality coaching. It was great to spend a whole day talking about and with people taking the same approach to their influencing without managing role. It confirmed that I've approached my role in a sensible way: have a mandate from management, wait until teams ask for help instead of inflicting help, give people an indication of when you'll come back for more support or follow-up, treat teams at different maturity stages differently.

I need to revisit all the diagrams and lists Anne-Marie shared of what quality coaching can be. I want to look through each one and see if it's something I want to do more (or less) of as part of my role. "Investing in onboarding is the ultimate shift-left" is now my go-to explanation for why I'm spending so much time explaining stuff to the new joiners.

Gwen Diagram - Happiness is Quality

The little things (getting a PagerDuty call off-hours, flaky tests in the pipeline, a monstrous legacy code base) can pile up to drive your most talented engineers away and destroy engineer happiness.

As acting CTO, Gwen measures everything at her organization. She tracks the number of alerts. She hosts "flake parties" to finally fix or delete flaky tests. She uses retros (held only every two months, because nobody likes them) to collect all kinds of feedback. And crucially: to collect responses to the same set of questions over time. She knows that making every measure reflect 100% satisfaction is somewhere between not cost-effective and impossible, but she knows to keep "I am comfortable asking for help" at 100%.

Fiona Charles - What could possibly go wrong?

Fiona covered a bevy of consequences of negligence, unethical practices, and misplaced confidence in software. Her particular example of testing the emergency services call system in England reminded me that I need to tie the customer impact element more into the daily work of my teams. Agile is oriented to the people building the software, but not the people the software is ultimately inflicted upon. It is our responsibility as engineers1 to build software that is ethical, private, and secure by design.

Toyer Mamoojee - Refining your Test Automation approach in modern contexts

I'm glad I finally got to see Toyer speak after only hearing good things from my Agile Testing Days friends (arguably real friends now?) and the internet for years. Toyer revealed his depth of experience in test automation through his lists of the challenges he sees everywhere, and his refinement guide for getting back on track.

Everybody's got long-running tests, isolation issues, maintenance issues, lack of transparency, delays before testing, and a desire to tackle non-functional tests all at once. His refinement guide points to breaking steps down into smaller pieces: break a monolith into microservices, create a small and deterministic test environment, write tests at the lowest level and closer to the code. Along with helping people build their automation skills, you've also got to shift their mindset to thinking about the ideal future state.

Sam Nitsche - Trouble in the Old Republic

This was undoubtedly the most stagecraft I'd ever seen at a conference. Hell, I've seen plays with fewer lighting cues and costume changes. I didn't expect to also learn something at this talk after the smoke from the smoke machine faded, but remarkably, I did!

Database people have really deep knowledge about databases, but very little programming knowledge. They lack the fancy tools that developers have to perform their jobs more easily. Like Toyer suggested about automated tests, Sam recommended making the database part of the software.

Parveen Khan - Building Quality - Influence, Observability, and You

Parveen's in a position like mine, serving across several teams and wondering if she's having any impact. She noted how much harder it is to build allies at work when you're remote. Setting up regular 1-on-1s to share accomplishments also builds trust and visibility for influence.

For me, this list of ways to know you've influenced people was my biggest takeaway:

  • People come to you for your advice or your opinion.
  • People do things for you when they don't have to.
  • You can set a direction and move forward without authority.

Marie Drake - Learning the Fundamentals of Performance Testing

I've been on lots of projects that aspire but never execute performance testing. Marie's talk was a great introduction to and disambiguation of all the terms I've never seen executed: smoke, sanity, stress, spike, soak, and possibly other performance tests I couldn't write down fast enough. My biggest takeaway was: bottlenecks in the front-end happen for individual users, while bottlenecks in the back-end happen when there are too many concurrent users.

Echoing what Gwen said in her talk: if you can't measure it, you can't improve it.

Ash Winter - Better organisation design enables great testing

Ash Winter's Golden Rules of consulting (what he blindly assumes before gathering any information) made me laugh, and definitely sound true:

  • you have too much work in progress
  • you are building stuff no one will use
  • testing is not the real problem in your organization

The real problem is organizational design. Your org chart is not where the work happens; it's all the connections that you don't see and can't draw, built up through influence and reputation, that get stuff done. Discovering the shape of your organization can help you figure out what kind of testing you should be doing. (Maybe it's time to pick up Team Topologies again, although I hate that the authors tell you how to read it.)

Laurie Sirois - Creating Career Ladder for QAs

Career ladders help retain talented engineers, align expectations, and hire more successfully. Nobody's career will be linear, but providing a structure for what the next options are helps you give people clear feedback about how they're doing. On the other hand, if your organization can't support people growing in their careers or has more T-shaped needs, a career ladder will be misleading and disappointing!

It's amazing to me that conferences like this are part of the structure of my life. I live in a foreign country and work with very few people who have the time to help guide and provide feedback on my work. It's gathering with this community year after year that fills that gap for me. Thank you Agile Testing Days, Sanssouci Park, and the Dorint Hotel sauna. Thank you to Thomas Rinke, who thanked me for modelling how to ask a question and for banging on the piano a bit. Thanks to everyone who enjoyed my real talk, and the talk that accidentally upstaged my real talk.

Sanssouci Park in Potsdam

And definitely no thanks for the projector in the main room or the Deutsche Bahn.

  1. For more on this, I highly recommend The Great Post Office Scandal. It is jam-packed with WTFs per minute and will radicalize you on the issues of audit trails, release notes, and transparency for starters. 

Half-Life For Your Backlog

This summer, I helped a team think about the important work that we wanted to tackle. Then vacations happened. Priorities shifted. The product finally went live to a bigger set of potential customers. And those stories we'd written remained in the backlog. When we opened one at refinement this week, a developer joked that the items in the backlog should have a half-life.

Everyone else laughed. I insisted they were on to something.

Backlogs are toxic

Every product team I've been on has items (user stories, bugs, even epics) that are old. Maybe the person who wrote them has since left the company. Maybe the bug doesn't have clear enough reproduction steps or impact statement to ever prioritize it. Maybe the dream feature from two years ago already exists but the earliest description of its possibilities remains.

Nobody needs this garbage. Nobody has the context anymore. And most importantly, nobody will notice if these items disappear from your backlog.

What would happen if, instead of treating every item ever logged in JIRA as a precious pearl, you treated it like the biohazard that it is? Get rid of it!

Evidence of a toxic backlog

I know a team's backlog needs a half-life when:

  • Comments appear on the JIRA items ahead of refinement asking "do we still need this?"
  • Items belong to an epic, sprint, or some other gathering place that is already closed.
  • Items belong to many sprints but have never been in progress.
  • People are scared to edit or remove items from the backlog.

Lean software development stresses just-in-time refinement to prevent the decay, waste, and stress caused by building up a big backlog that never gets smaller.

Narrow it down

Start with a literal half-life on your backlog. Take the date the product began. Subtract it from today's date. Divide it in half. Any items older than that period: delete them. For a product that's been worked on for four years, that's any item older than two years.
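The arithmetic is small enough to script. A sketch with GNU date (an assumption; BSD date spells the relative-date flags differently), using example dates of a product started 2018-11-01 and viewed from 2022-11-01:

```shell
# Half-life cutoff for a backlog: anything created before $cutoff goes.
start="2018-11-01"   # date the product began (example)
today="2022-11-01"   # today's date (example)

# Age of the product in whole days, via epoch seconds (UTC to avoid DST):
age_days=$(( ( $(date -ud "$today" +%s) - $(date -ud "$start" +%s) ) / 86400 ))

# Step back half the product's age from today:
cutoff=$(date -ud "$today -$(( age_days / 2 )) days" +%F)
echo "delete anything created before $cutoff"   # 2020-11-01
```

For the four-year-old product in the example, the cutoff lands two years back, exactly as described above.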

Other strategies that have served me even better in achieving a smaller, more robust backlog:

  • Delete any feature requests filed by colleagues when they quit.
  • Delete any items that only contain a title (no details) for a feature that's already been released and not on the roadmap for the next three months.
  • Delete any items you wrote.

Then set new rules about how to add things to the backlog, and how realistic you want to be about what kind of clean-up (even just administrative) there is after a milestone is complete.

Embrace change

Present you knows so much more about what's important and how things will get prioritized than past you did! Let go of what you thought you knew and embrace what you know now!

Software doesn't live forever, and neither will you. Learning to let go of the dreams that will never come true leaves room for you to dream about a different future, one you can realize. I hope your backlog can reflect that.

What other toxic questions or narrowing criteria have you used? Does JIRA have a (free, please) plugin to delete anything beyond a certain date? Has anyone ever gotten upset that you deleted something from the backlog? Has anyone even noticed?

Photo by Kilian Karger on Unsplash

A Google Reader Replacement

When in the course of human events, it becomes necessary for one person to dissolve the social bands which have connected them with others, I find myself asking what I valued from Twitter in the first place. A constant stream of short, popular updates? No thanks. A Rolodex? Partially. A newspaper? Almost. I wanted to read my friends' blogs and newsletters, collected in a publication to be read at a time to my liking. Like a newspaper.

I wanted Google Reader back.

Google Reader, the beloved (and thus, consequently, sunset) product allowed you to subscribe to RSS feeds. RSS feeds connected me to the software testing community before I'd met any of them. I was learning by trial, error, and brute force at my first software testing role, and RSS feeds propelled me into learning from testers further along in their careers.

When Google Reader was retired, Twitter became the way I kept up with people in the industry. They'd tweet a link to their blog, which I'd open in enough tabs to crash my browser. Pocket improved my workflow somewhat. I'd scroll Twitter, save the links, and go to Pocket later when I needed longer-form (and downloadable) posts for my subway ride.

I wish I'd found an RSS reader that worked the way I wanted: remembering where I left off, displaying the posts without destroying them. I remember trying Feedly and a few others before giving up in favor of my Twitter + Pocket workflow. This served me from ~2013 until (checks watch) two weeks ago, when Twitter was set ablaze by the egotistical sociopath in charge. I still want to read my blogs, and when I saw this post, I honestly could not tell if it was a joke:

Post recommending you aggregate your RSS posts into an epub format

I suspected the epub format part, a file type compatible with my Kobo Clara ereader, was a rhetorical flourish. But would this solve my problem? Could I read the blogs without reading the tweets? Without a screen??

I'd already set up my Kobo Clara to integrate into a mainstay of my digital life, the tabs I save to read later in Pocket. After a recent unrelated ereader triumph, I started Duck Duck Go-ing if my ereader could subscribe to RSS feeds directly.

Escaping both Adobe and wired syncing gave me new-found freedom.

After a bit of searching, I hit upon a solution that would work: Qiip. (I'm assuming it's pronounced "keep" but do send me your alternate pronunciations.) Qiip lets you sync RSS feeds to Pocket. Sign in with your Pocket credentials, give Qiip the RSS feed URL, and poof: your favorite blog appears in your Pocket list. And for me, ultimately, on my ereader.

I do wish Qiip had separate login credentials. Every time I log in to add another blog, it asks me if it can talk to Pocket again. Yes Qiip, it's fine, go do your thing.

But that's my only complaint, from me, a professional complainer. I love having the things I want to read appear in Pocket without having to scroll through Twitter. I love being able to read more of the internet on my ereader. I love skipping the self-marketing bonanza in favor of what people are trying to say.

A few days later, I discovered that Substack newsletters also have RSS feeds: add /feed to the end of the URL. I'm still unsubscribing from the Substack emails I receive, approximately 1/3 of my personal inbox. But I'm already delighted to have my Saturday morning, tea-in-the-garden reads separated from my email inbox.

How are you coping with the demise of the main online gathering place for software testers? Is Mastodon your go-to? Do you also dream of reading blogs like you'd read the newspaper? Can you convince my American friends to quit Instagram in favor of the federated Pixelfed?

Did you enjoy this tootorial?

Photo by Zoe on Unsplash