Recapping My Year for a Performance Review

We've got annual performance reviews where I work. That shouldn't be the only time I receive feedback about how I'm doing. But the company budgets are set once a year, and thus the consequential "am I getting a raise or a promotion?" conversation typically happens once a year.

Due to a power vacuum in my department, my boss is also responsible for an R&D department comprising hundreds of people. He isn't focused on my day-to-day, and certainly doesn't remember what I accomplished a year ago. I barely do.

To prepare for my performance review, I wanted to present him with a clear picture of where I'd been focusing my efforts. I had three places where I could see what I'd done, but only two were reviewable at a glance.

My focus board

With my amorphous floating-around-the-department Quality Lead role, I ask myself "what's the best thing to focus on?" a few times a week. I keep a Trello board of possible topics. It helps me remember what I wanted to start, pulls me back to what's important when I've been interrupted, and ensures that I communicate back to whoever's affected when I finish something. It also keeps me honest: it prevents me from having too much work in progress at the same time.

The Trello board has five columns, from left to right:

  • Done in {this month}
  • In Progress
  • This Week {with the dates, including whether I've got any days off that week}
  • Next Week
  • Backlog

Tasks have a title and a tag (either a team or theme), broken down small enough that I can complete them within a few days. "Organize the conference" would be too big, but "draft Outlook invitation, identify and email all departments" with a conference tag would be small enough.

My goal is to keep the In Progress column down to one. Most times there's one thing in there I'm waiting to hear back on and one thing I can actively work on. At the end of the month, I take the whole Done column and move it to a completely separate Done Trello board. This way I can keep the information around without having to look at it all the time.

It was my Done Trello board I reviewed for my performance review. At a glance, I could see that much of my work focused on a shared testing repository, a side project, and helping out three particular teams.

My calendar

My calendar also gave me an overview of my effort for the year. Leadership book club, 1-on-1 coaching conversations, and knowledge-sharing sessions took small, incremental work every week or two. The work was typically too small to put on my Trello board, but still visible from the meeting titles as I paged through the weekly view of my calendar.

My notebook

Jerry Weinberg's Becoming A Technical Leader got me in the habit of journaling for a few minutes at the end of my workday. I hadn't taken the time to summarize, group, or even re-read these journal entries along the way. I could have spent a lot of time reading through all my journal entries, but doing so wouldn't add much to what ended up being the ~7 minutes I had to recap my year to my boss.


In the end, nothing my boss said in my performance review was a surprise to me, which is just as it should be. I was able to remind him about some of the harder-to-see code review and 1-on-1 coaching. My boss was also able to bring to light something I couldn't or didn't see: my holistic way of thinking about our products, our department, and our company had influenced the way other people were thinking. People weren't just staying in their lane, performing their prescribed duties; they were thinking more about what all would be required to solve a problem, and how they could help.

How do you compile a summary of your year at work? Do you collect everything at performance review time, keep a brag document of all your accomplishments as you go along, or go next-level and update your resume every time you've succeeded? How do you capture the things that you invested a lot in that didn't go as anticipated?

Test Automation Guidelines

I received a large merge request recently, with thousands of lines of code changed and a handful of bullet points explaining some of how the test automation framework had changed. "It's already been reviewed from a technical perspective," the author of the code said. "I'd like you to review it from a test automation perspective."

I'd spent a few hours dipping into the code and deciding how to approach this review. The "test automation perspective" charter came from the conversation I started with the author before leaving any written comments on the merge request. I was looking to focus my efforts in a fruitful, productive direction, where my suggestions would likely be heeded. But what were the test automation principles I should be evaluating against? What did I expect from the setup of a test automation framework? What criteria should I use to evaluate the merge request in front of me?

The Automation in Testing TRIMS heuristic came first to my mind:

  • Targeted
  • Reliable
  • Informative
  • Maintainable
  • Speedy

But there were other things I was noticing in reading the tests that made me question and identify my assumptions. I realized I needed to write down my assumptions. I wanted to come to a common understanding with the other testers in my department, rather than making decisions case-by-case, line-by-line with this author.

And thus, the Test Automation Guidelines for my department were born. Or rather, compiled from years of working on test automation code and listening to greater automators than me write and speak about the topic.


Test Automation Guidelines

Entire repository

  • Test automation code must be version-controlled and linted.
  • Each function or method should do one thing and one thing only. Group functions and methods that do similar things together. “Separation of concerns” is usually how this is described.
  • Code comments shouldn't duplicate information the code can provide. They should describe why the code is the way it is, and be used sparingly.
  • The README should explain how someone new to the repository can get set up to run the tests, and should include information about code style and contribution guidelines.

Individual automated tests

To automate a test or not to automate a test
  • Tests should be automated to the extent that the effort in writing and maintaining them is less (or less frustrating) than testing the same thing through exploration.
  • Automated tests contain an assert or verify. Assertions are better when they are checking something unique (an id, a name, etc.).
  • If you're using automation to expedite exploratory testing rather than to decide something on its own, make that clear.
  • Each test should test one thing and one thing only (see the sketch below).
Readability and ownership
  • Tests should be readable. The best way to make sure you are not the only person who can read them is to pair to write them. The next best way is through code review. Smaller branches are easier to review and merge than bigger ones.
  • Automated tests are owned by the whole team.
  • Automated test output should be readable. You should be able to tell from the output what the test was trying to do, and how far it got.
Determinism
  • Don’t trust an automated test you haven’t seen fail. If you can’t change something about the test to make it fail, maybe it’s not testing what you think it’s testing.
  • Automated tests should provide information we care about. A big pile of green tests only helps us if they’re testing something relevant.
  • A failing (or even worse, sometimes failing) automated test should be a problem. Invest the time to make it deterministic, or delete it. Run the tests and publish the results so failing tests matter.

Resources

  1. João Proença’s talk: “Should we just... delete it?!”
  2. Joep Schuurkes’s blog post: “Test automation - five questions leading to five heuristics”
  3. Joep Schuurkes’s blog post: “How this tester writes code”
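
To make a few of these guidelines concrete, here's a minimal sketch in pytest. The endpoint, fixture, and project names are hypothetical, invented for illustration rather than taken from our actual framework; the shape is what matters: one test testing one thing, an assertion on something unique, readable failure output, and cleanup that keeps the test deterministic.

```python
import pytest
import requests

BASE_URL = "https://example.test/api"  # hypothetical service, for illustration only


@pytest.fixture
def new_project():
    """Create a fresh project so the test doesn't depend on leftover data."""
    response = requests.post(f"{BASE_URL}/projects", json={"name": "guideline-demo"})
    response.raise_for_status()
    project = response.json()
    yield project
    # Clean up afterwards so the next run starts from the same state.
    requests.delete(f"{BASE_URL}/projects/{project['id']}")


def test_created_project_can_be_fetched_by_id(new_project):
    """One test, one thing: a freshly created project is retrievable by its id."""
    response = requests.get(f"{BASE_URL}/projects/{new_project['id']}")

    # Readable output: say what the test tried and how far it got.
    assert response.status_code == 200, (
        f"Could not fetch project {new_project['id']}: "
        f"got {response.status_code} with body {response.text}"
    )
    # Assert on something unique (the id), not on incidental details.
    assert response.json()["id"] == new_project["id"]
```

It's not a definitive template, just the kind of example I had in mind while reviewing.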

Another department at my company had collected the TRIMS heuristic and a few other pointers (automated at the lowest level, use static analysis tools, etc.) that I linked my colleagues to rather than rewriting. Outfitted with my guidelines and theirs, I was able to go from a conversation-as-code-review deeper into the line-by-line-is-this-what-we-want-here code review.

I encouraged the author to identify and write down their own approach to the code after our conversation. They had strong opinions about what should go in a page object, when it made sense to break a test into more than one, and how the output should read when a test failed. By writing those preferences down, I could evaluate whether they were being applied consistently. Everybody needs an editor.


Do you have guidelines you refer to when reviewing test automation code? Beyond the links I provided above, is there some other reference you'd point me to? When do you find yourself bending or questioning the guidelines you thought you held dear?

Belgian Exploratory Workshop on Testing 2018

I'm going through some old conference notes again, with the aim to eventually get rid of them all. Today's edition comes to you from the Belgian Exploratory Workshop on Testing in December of 2018. Being invited to one of the most charming cities in the world to talk shop with experts from around Europe was...pretty much my dream scenario when I'd moved to this continent a few months earlier in 2018.

Coaching mindset and role definition are common themes throughout the presentations. They're also top-of-mind for me now, during performance review season, and as I shift from filling in for testers missing around the department back to thinking more holistically about our teams and products again.

Vera Baum

Vera spoke about the learning process, specifically helping coach testers on their learning journey. Learning should not be an incentive or a punishment. The advantage of her not being a manager was that people could set learning goals without fearing they weren't focusing enough on their day-to-day work. Learning goals were set more clearly for people earlier in their careers; they need more of a scaffold.

Aleksandra Korencka

Aleksandra spoke about how to be a leader without a leadership title. Even making a checklist of simple things to think about for every release helped her colleagues in their regression testing. For Aleksandra:

  • seniority = freedom + responsibility + impact
  • experience = (people x technical) intuition

She went through the process of creating a testing manifesto, an inspirational vision for her team. The process of creating the manifesto proved to be more valuable than the written document itself.

Shanteel (I apologize for not writing down your last name)

Shanteel was in a spot where their developers were undervaluing testing, because everyone sees other people's jobs as easier than their own. To shift their mindset, the group discussion pointed them towards building relationships with a few allies who can help cause a revolt when the time is right.

Marcel Gehlen

Marcel found that he had more influence over testing as a manager than he did as a tester. The people in his department could test what they thought a customer needed instead of just the software. Testers did stuff that "shouldn't be done"; they "cheated". Plus they got more visibility when they had an advocate higher up in the org chart.


I also gave an experience report. It was about a certain project manager from a previous company who was so distracting and forgetful that we had to work around him. I scheduled shadow meetings that we hid from the project manager so my developers and I could make real progress. The project manager's name became the go-to insult for the rest of the conference. :)

Shoutout to Beren Van Daele for organizing BREWT in the coziest conference location. I could have spent the whole week in that library/lounge. I am always accepting good (or even bad!) reasons to go back to Ghent and have some decent stoverij.

Building Skills Over Time

Our engineering teams have developers and testers on them. For our other roles, we've got specialists shared across teams: product owners, UX designers, UX researchers, data analysts, technical writers, etc. Specialists usually attend the weekly refinement sessions and sprint reviews, and skip the other sprint ceremonies.

One of our engineering teams is adding a Kafka event-driven architecture to our low-code platform. One of the specialists serving the team was struggling to understand the atomic particle of this architecture: business events. (Here's my favorite concise introduction to the topic.) They kept thinking in terms of a user interface, while the team described a back-end system.

I saw the specialist struggling in meetings as patient subject matter experts used their lingo to explain it all. The specialist still seemed lost. I met with them individually and realized why they were stuck in the UI: they didn't know how APIs worked.

I don't know how APIs work.

All the explanations from our subject matter experts had jumped from "a user does this" to "so an API does that", but this specialist didn't have a good grasp of what an API was. Any explanation that started with "you don't want the API to be tightly-coupled" did not sink in for them. Explaining it more times wasn't getting them there. We needed to start from the beginning.

I say "we", because as Quality Lead for the department, I see my role as making everyone else's work smooth and shiny. I also suspected this specialist wasn't the only one struggling with the topic.

Let's learn together.

I started scheduling optional 45-minute meetings every few weeks. I didn't have the energy to add another weekly or bi-weekly recurring meeting to my slew of calendar invitations. Ad-hoc, one-by-one sessions made this endeavor manageable and opt-out-at-any-time-able.

For topics, there were a few steps I knew we should cover to get everyone up to speed on business events (and how they're talked about in the context of our product and company); a small sketch of what a few of these look like follows the list:

  1. What is a REST API?
  2. What is an OData API?
  3. When would you choose REST over OData or vice versa?
  4. What is a business event?
  5. When would you choose business events over/combine them with REST or OData?
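
To give a flavor of the difference between those topics (with made-up URLs and payloads, not our actual product APIs), here's roughly what a plain REST call, an OData query, and a business event might look like:

```python
import requests

BASE = "https://example.com"  # hypothetical host, for illustration only

# REST: one purpose-built endpoint per resource, request/response style.
orders = requests.get(f"{BASE}/api/customers/42/orders").json()

# OData: still HTTP, but with a standardized query syntax ($filter, $top, ...)
# so a caller can ask precise questions without a new endpoint for each one.
recent_orders = requests.get(
    f"{BASE}/odata/Orders",
    params={"$filter": "CustomerId eq 42", "$orderby": "CreatedAt desc", "$top": "5"},
).json()

# A business event is not a request at all: a producer announces that something
# happened, and any interested system reacts to it in its own time.
order_shipped = {
    "type": "OrderShipped",
    "orderId": "42-1001",
    "occurredAt": "2023-01-15T09:30:00Z",
}
```

The point for our sessions wasn't the syntax; it was that the first two are a caller asking a question right now, while the third is an announcement other systems can pick up whenever they're ready.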

I kept all the details in the same Dropbox Paper document:

  • the ultimate goal (to understand when to use business events)
  • upcoming potential topics (starting with the list above, which grew as bigger questions came up during the sessions)
  • the date and specific topic for the next session (we'd decide at the end of each session what the next most important step was)
  • the recording of the Zoom call from the session
  • the notes I took during the session

Yes, I took notes in addition to recording the sessions. Every session, I'd share my screen and take notes. My intention was specifically to help others learn, so recording the sessions and taking notes (that they could help clarify and correct in real-time) freed them from the cognitive load of both learning and remembering in the moment.

For the earlier sessions when I was explaining the topic, taking notes helped slow me down enough for the information to sink in for people who were hearing it for the first time. The notes provided an obvious place to collect links with examples of API specs or tools (like Postman and the OpenAPI web editor).

For the later sessions when the subject matter experts were explaining the topic, the notes helped me make sure I'd understood what was said and capture the answers to questions from the audience.

The notes also served another purpose later: they helped people decide whether they needed to watch the recording. The extended team I invited to these sessions was seven people, later joined by two subject matter experts, so not everyone could fit a 45-minute meeting into their schedule. People who can't spare 45 minutes to learn something crucial about their work certainly don't need to spend 45 minutes watching a video just to find out whether they care about a topic. Glancing through the notes helped them decide if they wanted to hear the whole conversation.

Impact one year later

In the first few months, some people were learning and some people were listening to information they already had a good grasp of. It took a few months for everyone to be learning. That's when these sessions really started to pay off. Even when I felt like a session had been rambling, obvious, or useless, one of the seven participants would reach out to confirm that the session and the notes had helped them. That feedback kept me going.

At the retro the Kafka team held looking back at 2022, there was a sticky note about this specialist. It said that collaboration between them and the team had improved. I had to agree. They shared a common vocabulary. The specialist understood the concepts the team was dealing with. They could conceptualize beyond the UI to the back-end. The team goes to the specialist with quick little questions in a way they wouldn't have before, when every conversation felt like starting from the beginning. Now, they hit the ground running, together.

I also want to give credit to this specialist: their remarkable improvement reflects the overall time and effort they've put into deepening their knowledge on business events. The sessions I organized were only part of their journey.

Takeaways

I think most of our specialists had enough of an idea about REST APIs when we started these learning sessions a year ago. But nobody knew as much about REST, OData, business events, and how or why to combine them as they do now.

I started with the belief that if one specialist didn't know everything, likely other people were in the same position. I had a growth mindset for my colleagues: given the right environment and the right pace, all of these specialists could absorb this information. I also had a growth mindset for myself: given enough time, I could make a difference. It was worth the investment.

Photo by Suzanne D. Williams on Unsplash

Advising Middle Managers

Three of the teams in my unit had new team leads start in October. The team leads split their time between engineering (they have a specialty in either development or testing) and managing (having 1-on-1's with teammates and communicating around the company).

Part of my job as quality lead is helping these teams, and by extension their leaders, be effective. I've had some similar coaching conversations with each of these three new team leads recently. The members of their teams have different skills and personalities, but each leader is facing some of the same challenges. Here's roughly what they're going through, and a couple of options we brainstormed for what to do or not do.


Refinement facilitation

What they notice

They're at a refinement meeting, noticing that two people are talking past each other, and they're also trying to take notes.

What they can do
  1. Point out that there is a misunderstanding and lack of listening. Summarize the points of the two people, declare that there is a gap between their ideas, and see if that's enough for them to realize they need to fix it.
  2. Ask for help. Explain that note-taker and conversation-facilitator is too many roles for one person to play for a 10-person group, and identify someone to take notes.

Objectives and key results

What they notice

The team lead has been tasked with coming up with goals for the coming quarter and year for their team, but the product goals are fuzzy and they just started at the company a few weeks ago.

What they can do
  1. Point out to their manager the discrepancy between their understanding of what's going on and what they're expected to do. Explain that they don't have enough context to understand what purpose the OKRs should serve, and that without a clearer roadmap, they aren't in a position to articulate the goals for the team.
  2. Ask for help. Gather information from the team about what previous OKRs looked like, what the team wants to work towards, and how much they care about this topic at all.

Role expectations

What they notice

Someone on their team has been promoted and they've been at the company for many years, but they seem to be doing fewer things (or different things) than the team lead would expect.

What they can do
  1. Point out the discrepancy between the work they're seeing and what's expected. Describe the specific tasks they were expecting the team member to perform. Ask if the team lead's expectations align with what the team member understood their work should be. Compare both of these understandings with the written job description for the role.
  2. Ask for help. Compare the three ideas (the lead's, the member's, the job description's) of what the team member's role could look like, and bring them to the manager. Have the manager help sort out which is closest to what makes sense in this context. Practice telling the team member what they need to change by giving the speech to the manager first.

Managing vs. coding

What they notice

They were hired to be a developer and a team lead. Looking at their calendar, they see that 80-90% of their time is coordinating and communicating, leaving only 10-20% for development work.

What they can do
  1. Point out to their manager the discrepancy between what they expected the role to be and what they're actually doing. Find out if there's a specific amount of time they should be setting aside for focused development work, and see how that compares to their schedule.
  2. Ask for help. Identify the tasks that they're currently taking on, and figure out which ones can be shared with particularly skilled individuals or anyone on the team. Team leads don't have to host every standup, refinement, retro, etc. They just need to make sure the work happens.

This was probably a repetitive post to read. Being a manager can feel very repetitive, especially while establishing new habits and common contexts. As long as the team members remain the same, working through these particular issues will pay off for months or years to come.

Luckily all of my team leads were already doing the work that's hardest to teach: noticing things. They weren't always sure what the next step was, but being able to recognize that uncertainty and bring it to someone to ask for help is what their job is. They don't need to solve every problem. They need to identify which things are problems, and work collaboratively to come to a solution.

I don't know, let's look together.

A few weeks ago, something was failing. My colleague didn't know why. They reached out to me. I've copied our Slack conversation mostly verbatim, with a bit of cleanup for typos.

Here's how it began:

developer [11:35 AM] Hi Elizabeth, do you by any chance have experience with pytest exit codes in gitlab ci? {link to pipeline job output} this job on line 316 runs pytest, and the test fails. I would expect to get an exit code 1, but as seen on line 390 it is 0.

So many good things here already. They've asked about my experience before assuming I don't have any. They've linked to the pipeline job so I can read the error message in context myself. They've also summarized what they expected the behavior to be, and what the actual behavior was.

ez [11:37 AM] not really, but i thought those exit codes were a bash thing, not a python thing {link to stack overflow}

I stated up front that I didn't have the answer. This gives my developer a chance to bail immediately, in case they know a more qualified but less available expert. I used safety language ("I thought" vs. stating a fact) to reinforce that I could be wrong, but link to my source so my developer can judge for themselves.

developer [11:39 AM] pytest does describe them here https://docs.pytest.org/en/7.1.x/reference/exit-codes.html

developer [11:40 AM] gitlab indeed by default exits on error, which can be disabled with set +e

The developer has found (and read) an even better reference, the documentation for the pytest library, huzzah! That definitely trumps my Stack Overflow link. Now it's clear which reference to use. They've also brought us one step further along in the troubleshooting process, introducing set +e as a potential way to control how the job responds to whatever exit code pytest produces.

ez [11:40 AM] huh, guess you know more than i do!

Recognition! Not exactly praise here, but I admit that they are more informed and on a better track. Celebrating when you're wrong and the developer's right helps build credibility and rapport for the next time when it's the other way around.

Now I want to meet them where they are, and get them one step further.

ez [11:46 AM] can i give you a call? i’m trying to figure out if the code is being logged in the wrong spot, or if gitlab isn’t responding to the code correctly.

developer [11:47 AM] yes sure

Zoom [11:47 AM] Call | Zoom meeting started by elizabeth.zagroba | Ended at 11:55 AM - Lasted 8 minutes | 2 people joined

I've spent a few minutes Duck Duck Go-ing and reading about the issue. I've got more links pulled up in my browser, but rather than continuing to chat links and theories back and forth, I wonder if calling the developer will get this solved by lunchtime at noon.

ez [11:50 AM] https://gitlab.com/gitlab-org/gitlab/-/issues/340390

I don't have the recording of the meeting, but the last message I sent the developer was while we were on the Zoom. I put the link in the Slack chat so it wouldn't be lost to time or browser history after we hung up.

My theory about the exit code was on the right track. The developer assumed that if pytest exited with a 1 or a 0 on line 174, the bash script on line 175 would know that already. It didn't. We had to collect the exit code on line 174.
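
For the general shape of that lesson, here's a hedged sketch in Python rather than the bash of our actual pipeline (an analogy, not the real one-line fix): capture the exit code at the point the command runs, and decide explicitly what to do with it afterwards.

```python
import subprocess
import sys

# Run pytest and capture its exit code immediately, at the call site,
# instead of asking "what was the last exit code?" a few lines later.
result = subprocess.run(["pytest", "tests/"])
print(f"pytest finished with exit code {result.returncode}")

# ...room to publish reports or artifacts here, even when tests failed...

# Propagate the captured code so the CI job still goes red on failure.
sys.exit(result.returncode)
```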


Only the last little piece, about needing to collect the exit code on the same line the command is executed, was new information. This technical bit was the only thing I learned that day. The rest I knew, and am grateful for:

  • Ask people about their experience before diving into an explanation.
  • Describe what you've tried already.
  • Admit when you don't know something, or describe how deep your knowledge is. (And be kind to people who make themselves vulnerable in this way.)
  • Link to the thing so your pair can see what you're talking about for themselves.
  • Celebrate the good things.
  • When the communication format is getting in the way, try something else.

I would definitely pair with this developer again, on any problem, whether or not I knew the answer. Every day should be so great.

Agile Testing Days 2022

I was on the program committee for Agile Testing Days again this year, so I reviewed (anonymously) one out of every six submitted presentations. It was a bit heartbreaking to see the number of skilled professionals whose talks and workshops didn't make the cut. I can't say that the cutthroat competition necessarily led to a better program, but the sessions I attended surpassed my expectations.

Agile Testing Days is A LOT, so these are my notes and takeaways from only some of the sessions I attended.

Anne-Marie Charrett - Quality Coaching Masterclass

Anne-Marie graciously let me participate in (and host a small bit of) her tutorial about quality coaching. It was great to spend a whole day talking about and with people taking the same approach to their influencing without managing role. It confirmed that I've approached my role in a sensible way: have a mandate from management, wait until teams ask for help instead of inflicting help, give people an indication of when you'll come back for more support or follow-up, treat teams at different maturity stages differently.

I need to revisit all the diagrams and lists Anne-Marie shared of what quality coaching can be. I want to look through each one and see if it's something I want to do more (or less) of as part of my role. "Investing in onboarding is the ultimate shift-left" is now my go-to explanation for why I'm spending so much time explaining stuff to the new joiners.

Gwen Diagram - Happiness is Quality

The little things (getting a PagerDuty call off-hours, flaky tests in the pipeline, a monstrous legacy code base) can pile up to drive your most talented engineers away and destroy engineer happiness.

As acting CTO, Gwen measures everything at her organization. She tracks the number of alerts. She hosts "flake parties" to finally fix or delete flaky tests. She uses retros (relatively infrequent, every two months, because nobody likes them) to collect all kinds of feedback, and crucially, to collect responses to the same set of questions over time. She knows that making every measure reflect 100% satisfaction is somewhere between not cost-effective and impossible, but she knows to keep "I am comfortable asking for help" at 100%.

Fiona Charles - What could possibly go wrong?

Fiona covered a bevy of consequences of negligence, unethical practices, and misplaced confidence in software. Her particular example of testing the emergency services call system in England reminded me that I need to tie the customer impact element more closely to the daily work of my teams. Agile is oriented to the people building the software, but not the people the software is ultimately inflicted upon. It is our responsibility as engineers[1] to build software that is ethical, private, and secure by design.

Toyer Mamoojee - Refining your Test Automation approach in modern contexts

I'm glad I finally got to see Toyer speak after only hearing good things from my Agile Testing Days friends (arguably real friends now?) and the internet for years. Toyer revealed his depth of experience working in test automation through his list of the challenges he sees everywhere and his refinement guide for getting back on track.

Everybody's got long-running tests, isolation issues, maintenance issues, lack of transparency, delays before testing, and a desire to tackle non-functional tests all at once. His refinement guide points to breaking steps down into smaller pieces: break a monolith into microservices, create a small and deterministic test environment, write tests at the lowest level and closer to the code. Along with helping people build their automation skills, you've also got to shift their mindset to thinking about the ideal future state.

Sam Nitsche - Trouble in the Old Republic

This was undoubtedly the most stagecraft I'd seen at a conference. Hell, I've seen plays with fewer lighting cues and costume changes. I didn't expect to also learn something at this talk after the smoke from the smoke machine faded, but remarkably, I did!

Database people have really deep knowledge about databases, but very little programming knowledge. They lack the fancy tools that developers have to perform their jobs more easily. Like Toyer suggested about automated tests, Sam recommended making the database part of the software.

Parveen Khan - Building Quality - Influence, Observability, and You

Parveen's in a position like I am, serving across several teams, and wondering if she's having any impact. She noted how much harder it is to build allies at work when you're remote. Setting up regular 1-on-1's to share accomplishments also builds trust and visibility for influence.

For me, this list of ways to know if you've influenced people was my biggest takeaway:

  • People come to you for your advice or your opinion.
  • People do things for you when they don't have to.
  • You can set a direction and move forward without authority.

Marie Drake - Learning the Fundamentals of Performance Testing

I've been on lots of projects that aspire but never execute performance testing. Marie's talk was a great introduction to and disambiguation of all the terms I've never seen executed: smoke, sanity, stress, spike, soak, and possibly other performance tests I couldn't write down fast enough. My biggest takeaway was: bottlenecks in the front-end happen for individual users, while bottlenecks in the back-end happen when there are too many concurrent users.

Echoing what Gwen said in her talk: if you can't measure it, you can't improve it.

Ash Winter - Better organisation design enables great testing

Ash Winter's Golden Rules of consulting (what he blindly assumes before gathering any information) made me laugh, and they definitely sound true:

  • you have too much work in progress
  • you are building stuff no one will use
  • testing is not the real problem in your organization

The real problem is organizational design. Your org chart is not where the work happens; it's all the connections that you don't see and can't draw, built up through influence and reputation, that get stuff done. Discovering the shape of your organization can help you figure out what kind of testing you should be doing. (Maybe it's time to pick up Team Topologies again, although I hate that the authors tell you how to read it.)

Laurie Sirois - Creating Career Ladder for QAs

Career ladders help retain talented engineers, align expectations, and hire more successfully. Nobody's career will be linear, but providing a structure for what the next options are helps you give clear feedback to people about how they're doing. On the other hand, if your organization can't support people growing in their careers or has more T-shaped needs, a career ladder will be misleading and disappointing!


It's amazing to me that conferences like this are part of the structure of my life. I live in a foreign country and work with very few people who have the time to help guide and provide feedback on my work. It's gathering with this community year after year that fills that gap for me. Thank you Agile Testing Days, Sanssouci Park, and the Dorint Hotel sauna. Thank you to Thomas Rinke, who thanked me for modelling how to ask a question and for banging on the piano a bit. Thanks to everyone who enjoyed my real talk, and the talk that accidentally upstaged my real talk.

Sanssouci Park in Potsdam

And definitely no thanks for the projector in the main room or the Deutsche Bahn.


  1. For more on this, I highly recommend The Great Post Office Scandal. It is jam-packed with WTFs per minute and will radicalize you on the issues of audit trails, release notes, and transparency for starters. 

Half-Life For Your Backlog

This summer, I helped a team think about the important work that we wanted to tackle. Then vacations happened. Priorities shifted. The product finally went live to a bigger set of potential customers. And those stories we'd written remained in the backlog. When we opened one at refinement this week, a developer joked that the items in the backlog should have a half-life.

Everyone else laughed. I insisted they were on to something.

Backlogs are toxic

Every product team I've been on has items (user stories, bugs, even epics) that are old. Maybe the person who wrote them has since left the company. Maybe the bug doesn't have clear enough reproduction steps or impact statement to ever prioritize it. Maybe the dream feature from two years ago already exists but the earliest description of its possibilities remains.

Nobody needs this garbage. Nobody has the context anymore. And most importantly, nobody will notice if these items disappear from your backlog.

What would happen if, instead of treating every item ever logged in JIRA as a precious pearl, you treated it like the biohazard that it is? Get rid of it!

Evidence of a toxic backlog

I know a team's backlog needs a half-life when:

  • Comments appear on the JIRA items ahead of refinement asking "do we still need this?"
  • Items belong to an epic, sprint, or some other gathering place that is already closed.
  • Items belong to many sprints but have never been in progress.
  • People are scared to edit or remove items from the backlog.

Lean software development stresses just-in-time refinement to prevent the decay, waste, and stress caused by building up a big backlog that never gets smaller.

Narrow it down

Start with a literal half-life on your backlog. Take the date the product began. Subtract it from today's date. Divide it in half. Any items older than that period: delete them. For a product that's been worked on for four years, that's any item older than two years.
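
As a toy sketch of that arithmetic (with a made-up start date):

```python
from datetime import date

product_started = date(2019, 1, 1)  # hypothetical: when work on the product began
today = date.today()

half_life = (today - product_started) / 2  # half the product's age
cutoff = today - half_life

print(f"Delete any backlog item created before {cutoff}")
```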

Other strategies that have served me even better in getting to a smaller, more robust backlog:

  • Delete any feature requests filed by colleagues when they quit.
  • Delete any items that only contain a title (no details) for a feature that's already been released and not on the roadmap for the next three months.
  • Delete any items you wrote.

Then set new rules about how to add things to the backlog, and how realistic you want to be about what kind of clean-up (even just administrative) there is after a milestone is complete.

Embrace change

Present you knows so much more about what's important and how things will get prioritized than past you did! Let go of what you thought you knew and embrace what you know now!

Software doesn't live forever, and neither will you. Learning to let go of the dreams that will never come true leaves room for you to dream about a different future, one you can realize. I hope your backlog can reflect that.


What other toxic questions or narrowing criteria have you used? Does JIRA have a (free, please) plugin to delete anything beyond a certain date? Has anyone ever gotten upset that you deleted something from the backlog? Has anyone even noticed?

Photo by Kilian Karger on Unsplash

A Google Reader Replacement

When in the course of human events, it becomes necessary for one person to dissolve the social bands which have connected them with others, I find myself asking what I valued from Twitter in the first place. A constant stream of short, popular updates? No thanks. A Rolodex? Partially. A newspaper? Almost. I wanted to read my friends' blogs and newsletters, collected in a publication to be read at a time to my liking. Like a newspaper.

I wanted Google Reader back.

Google Reader, the beloved (and thus, consequently, sunset) product, allowed you to subscribe to RSS feeds. RSS feeds connected me to the software testing community before I'd met any of them. I was learning by trial, error, and brute force at my first software testing role, and RSS feeds propelled me into learning from testers further along in their careers.

When Google Reader was retired, Twitter became the way I kept up with people in the industry. They'd tweet a link to their blog, which I'd open in enough tabs to crash my browser. Pocket improved my workflow somewhat. I'd scroll Twitter, save the links, and go to Pocket later when I needed longer-form (and downloadable) posts for my subway ride.

I wish I'd found an RSS reader that worked the way I wanted: remembering where I left off, displaying the posts without destroying them. I remember trying Feedly and a few others before giving up in favor of my Twitter + Pocket workflow. This served me from ~2013 until (checks watch) two weeks ago, when Twitter was set ablaze by the egotistical sociopath in charge. I still want to read my blogs, and when I saw this post, I honestly could not tell if it was a joke:

Post recommending you aggregate your RSS posts into an epub format

I suspected the epub format part, a file type compatible with my Kobo Clara ereader, was a rhetorical flourish. But would this solve my problem? Could I read the blogs without reading the tweets? Without a screen??

I'd already set up my Kobo Clara to integrate into a mainstay of my digital life, the tabs I save to read later in Pocket. After a recent unrelated ereader triumph, I started Duck Duck Go-ing to find out whether my ereader could subscribe to RSS feeds directly.

Escaping both Adobe and wired syncing gave me new-found freedom.

After a bit of searching, I hit upon a solution that would work: Qiip. (I'm assuming it's pronounced "keep" but do send me your alternate pronunciations.) Qiip lets you sync RSS feeds to Pocket. Sign in with your Pocket credentials, give Qiip the RSS feed URL, and poof: your favorite blog appears in your Pocket list. And for me, ultimately, on my ereader.

I do wish Qiip had separate login credentials. Every time I log in to add another blog, it asks me if it can talk to Pocket again. Yes Qiip, it's fine, go do your thing.

But that's my only complaint, from me, a professional complainer. I love having the things I want to read appear in Pocket without having to scroll through Twitter. I love being able to read more of the internet on my ereader. I love skipping the self-marketing bonanza in favor of what people are trying to say.

A few days later, I discovered that Substack newsletters also have RSS feeds: add /feed to the end of the URL. I'm still unsubscribing from the Substack emails I receive, approximately 1/3 of my personal inbox. But I'm already delighted to have my Saturday morning, tea-in-the-garden reads separated from my email inbox.


How are you coping with the demise of the main online gathering place for software testers? Is Mastodon your go-to? Do you also dream of reading blogs like you'd read the newspaper? Can you convince my American friends to quit Instagram in favor of the federated Pixelfed?

Did you enjoy this tootorial?

Photo by Zoe on Unsplash

Story Slicing Workshops

I recently facilitated this story slicing workshop, created by Henrik Kniberg and Alistair Cockburn, remotely for two of the teams in my unit. They named it "Elephant Carpaccio" to give people the mental image of breaking down a big feature into very thin, vertical slices. Joep Schuurkes brought it to my attention when he facilitated the workshop a couple of times in 2021, leaving behind not only these very helpful blog posts about how to run it but also the Miro board he used to do so.

My elephant encounter

The Setup

The 2.5-hour workshop as Joep ran it included three conversations:

  • a conversation about why we might split stories
  • a description of today's feature we'll be building
  • a short brainstorm about what would be a good (small enough) first slice of this feature

These three parts would fill the first 45 minutes. The rest of the workshop would be smaller groups tackling each of these tasks in bigger chunks of time:

  • breaking down the problem into between 10 and 20 stories
  • actually building the first few stories as sliced
  • a reflection and debrief on the whole workshop

The First Group

In my first running of the workshop, I was able to see a few things I didn't expect in the story breakdown:

  1. American state abbreviations: The problem lists different values of sales tax for AL, TX, and three other abbreviations Americans would typically know. Participants wanted to talk about the states using their names, but didn't know which abbreviation belonged to which state. I filled them in using the table I'd created.
  2. Sales tax vs. VAT: Another American vs. European participant thing! The answer to "how much does the thing cost?" will be different if you're adding the tax to the price, or assuming it will be included in the total. This wasn't important to solving the problem, so I let the difference persist.
  3. First things first: calculating the price was laid out very clearly as the first problem to solve. One persistent participant really had their heart set on data input validations and a particular user interface. It took several tries from their teammates and ultimately my nudging to encourage them to think about the problem from the most important pieces first.

I switched the instruction from taking a demo/screenshot every 8 minutes to making one after each slice. This helped raise awareness of the difference between how they were breaking down the work and how they were actually picking it up. They might get through two or three slices in one go, only to have to go back and demo each scenario individually.

A few insights came through more clearly in the debrief than during the exercise:

  • It was easier to see progress and celebrate it when the work was broken down into smaller pieces.
  • The team understood the concept a lot better now, but there was still a big delta between how small they could slice a story, and what would make sense when the burden of code review and testing would typically take a few days.
  • A thinly sliced story is not the same as an MVP.

The Second Group

My second group benefitted from slightly clearer instructions. But they had all three of the insights the first group uncovered even before they started slicing or building any stories. They got hung up on other intricacies of the problem:

  1. Money: The American dollars in the problem statement used a . to separate the integer part of a money calculation from its decimal component. Many of the participants were used to using , for that purpose, so they needed to consciously second-guess every calculation output.
  2. Saving: Both teams were using IDEs they weren't used to, which they hadn't set up for auto-saving. Most often an unexpected result prompted the question "Did we forget to save?"
  3. Slices as defined: They got the idea of how to slice the stories. But when they got to building the solution, they had to have an "Are we really going to do it like this?" conversation.

Just breaking down the stories wasn't where the learning happened for this team. It was the building that crystallized the gap between being able to define small pieces of work, and what it's like to actually build like that.

They got farther in the building than my first group had, so they were able to see how adding one feature impacted the already existing ones. The big insight from this group's debrief was how one small addition affects all the existing features, and the kind of time needed to do it right and test it thoroughly.


I'd love to run this workshop in other settings, but needing a shared programming language and the ability to work in a strong-style pair is too big a barrier to entry for me to submit it as a conference workshop.

How have you gotten strangers started working together on code? How have you taught (or been taught) about slicing user stories into smaller, vertical slices?