The Llandegfan Exploratory Workshop on Testing

An adventurous band of brave souls gathered in the northwest of Wales on the week of a transit strike in the United Kingdom. The topic: whole team testing. The conclusion: even the experts have trouble doing it well.

The peer conference format was apt for exploring what were mostly failures. Brief experience reports proved ample fodder for in-depth discussions of the circumstances and reflections on possible alternatives. It's better to reflect on your less-than-successful work with your troubleshooting-inclined peers than it is with your colleagues.


Ash: When "Whole Team Testing" becomes "Testing for the Whole Team"

First up was Ash Winter with a story of culture clash between Ash and the teams he helped guide in their testing (cough did all the testing for cough). Ash discovered over the course of his six-month contract that getting everyone to nod along to his suggestions of having unit tests, API integration tests, front-end tests, limited end-to-end tests, and exploratory tests was completely different from agreeing on what those were or building the habits on the teams to make them happen. Saying the words "sensible journeys" and "meaningfully testable" wasn't meaningful at all.

By being a white man who looked the part, it was easy to get invited to the right meetings and seen as the authority. (How wonderful to be able to have a group all share in how outrageous this is compared to the experience other people have!) Ash was seen as an authority for all testing decisions, so teams looked to him rather than thinking for themselves.

Upon reflection, Ash acknowledged he would have done better to slow down and understand the expectations of the project before jumping in with prescriptions from his consulting playbook. The teams needed to know what habits to build day-to-day instead of receiving what must have sounded like prophecies from the future.

Sanne: Problem Preference

In listening to a book-that-could-have-been-a-blog-post, Sanne came across the question: "How have you chosen the kinds of problem you pick up?" It made her think about her preference for focusing on team habits and communication so she could bring underlying issues to the surface. She's got a predisposition to be proactive and will run at a problem a hundred different ways if you let her.

On her new assignment, Sanne wants to let the team do the work instead of trying to do it all herself. So she's taking a radical step: she doesn't have access to the test environment. Her goal is to leave a legacy behind at the companies she works for, but it's too soon at her current assignment to evaluate how that will pan out.

Yours Truly: This Diagram Asked More Questions Than It Answered

I told the story of this blog post, with an addendum: I made a similar diagram for a different product that came in handy on the project I'm currently jumping into.

It was a great delight to hear my peers admire the middle of my three diagrams, the one deemed unprofessional and literally laughed at by my colleagues. Sometimes the complexity of the model reveals more about the complexity of the situation than a clean, organized model does.

I don't have any notes from what I said or what discussion occurred afterwards. Perhaps another participant's blog post will cover that bit in the coming weeks.

Duncan: Quality Centered Delivery

Duncan showed a truly dazzling amount of data extracted and anonymized from his five teams' JIRA stats. In so doing, he was able to prove to the teams (after wading through their nit-picks and exceptions) that a huge proportion of their time was spent waiting: waiting for questions to get answered, waiting for code to get reviewed, waiting for feedback from the customer. Duncan deliberately dubbed this "wait" time to keep the focus on how the work was flowing rather than on optimizing for engineer busyness.

To shrink wait time, developers, testers, and the PM started working in an ensemble. Wait times dropped dramatically. The team kept a Slack call open all day for collaboration. One fateful day, the too-busy subject matter expert and too-busy client dropped into the call. Wait time plummeted to zero. The story of this particular success proliferated through the organization thanks to the praise from an influential developer on the team: development was fun again.

Duncan's was the one success story of the peer conference, though he was quick to point out that things could have changed after he left the assignment.

Vernon: How could I, Vernon, "The Quality Coach" Richards, make communication mistakes?!

It was a delight to get into the nitty-gritty details of a story that Vernon conflated and glossed over a bit in his keynote at Agile Testing Days in 2021. And to see the relationship repaired and strengthened in real-time with a colleague who witnessed what went down. (I'm just here for the gossip, clearly.)

A colleague asked a tester to create a release plan for the team by themselves. As the tester's manager, Vernon thought this was an outrageous way to "collaborate". Without spending time to understand the colleague's context, beginning from a place of unconditional positive regard (as coaches are meant to), or verifying his approach with his own boss, Vernon went on the warpath against this "bully".

Remarkably, escalation and accusation did not solve the problem at hand: the tester didn't have the skills to build a test plan. Nor did Vernon's outrage address the real problem: there wasn't alignment at the organization about what the biggest fire was. Vernon wishes now that he'd protected his 1-on-1 time with his direct reports, and empowered them to address the situation rather than doing it for them.


In summary, it is not easy, straightforward, or simple to get a whole team to test.

Our lunch walk with a view of Snowdonia

A note about the surroundings for this gathering: spectacular. It was a 13-hour journey of four trains, one bus, and one bike to get back home, but it was worth it to be transported to views of Snowdonia National Park, a small town where the Welsh language holds a stronger footing than I expected, and a small group willing to make the same trek to geek out.

Many thanks to Chris Chant, Alison Mure, and Joep Schuurkes for making this conference possible, well-facilitated, and parent-friendly. Many thanks to my fellow participants: Ash Winter, Sanne Visser, Duncan Nisbet, Vernon Richards, Gwen Diagram, and Jason Dixon for being my peers. And B. Mure for listening well enough to capture some of the goofy things I said.

I look forward to making the trek again in the future.

From Crafting Project to Critical Infrastructure

Just for me

Three years ago, I had a shit laptop. My company makes a Windows desktop software product that allows you to build your own applications. Mac users using the software could open it in a Windows virtual machine in Parallels. When I did that, my company's software crashed, Parallels crashed, and then my whole Mac crashed. My job was to create app builds, run them, and test them. Due to my shit laptop, I couldn't do that locally.

Luckily, our app was also hosted in our public cloud. Through the cloud UI, you could make a build, see which build was on which of your environments, and deploy a new build. But the UI was...not an ideal workflow for me. It was slow to load and required several steps of clicking and waiting for a minute or two - just long enough to get distracted thinking about something else. A deploy process that might optimally take ~8 minutes took ~15 minutes as my mind wandered and the UI didn't update immediately.

I needed a one-step process to deploy, with updates frequent enough to hold my attention. I decided to abandon the UI for the API.

I wrote a Python script that took command-line input and printed output to the console as the steps of the process progressed. I used my two crafting days that month to break down the problem, set up the whole repository, and get the code to a state where it built and deployed an app to an environment.

A code review from Joep Schuurkes moved the code from a long list of functions to different classes corresponding to the API endpoints I was calling. I think the commands were limited to --build and --deploy. To make sure the refactor was successful, I'd scroll up in my Terminal history and run those two commands again. Crafting days on subsequent months brought a bit more error-handling to account for mistypes on my side or failures/timeouts from the APIs.
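The original code isn't part of this post, but here's a minimal sketch of the shape it took after that review. The API host, endpoint paths, and payloads are invented for illustration; only the --build and --deploy commands and the classes-per-endpoint structure come from the story above.

```python
# deploy.py - a minimal sketch, not the real script. The API host, endpoints,
# and payloads are placeholders.
import argparse
import sys
import time

import requests

BASE_URL = "https://cloud.example.com/api/v1"  # placeholder


class Builds:
    """Wraps the build endpoints of the (hypothetical) cloud API."""

    def __init__(self, session):
        self.session = session

    def create(self, branch):
        response = self.session.post(f"{BASE_URL}/builds", json={"branch": branch})
        response.raise_for_status()
        return response.json()["id"]

    def wait_until_done(self, build_id, poll_seconds=30):
        while True:
            response = self.session.get(f"{BASE_URL}/builds/{build_id}")
            response.raise_for_status()
            status = response.json()["status"]
            print(f"Build {build_id}: {status}")  # frequent updates keep my attention
            if status in ("succeeded", "failed"):
                return status
            time.sleep(poll_seconds)


class Deployments:
    """Wraps the deployment endpoints."""

    def __init__(self, session):
        self.session = session

    def deploy(self, build_id, environment):
        response = self.session.post(
            f"{BASE_URL}/environments/{environment}/deployments",
            json={"build_id": build_id},
        )
        response.raise_for_status()
        print(f"Deployed build {build_id} to {environment}")


def main():
    parser = argparse.ArgumentParser(description="Build and deploy the app in one step")
    parser.add_argument("--build", metavar="BRANCH", help="create a build from this branch")
    parser.add_argument("--deploy", metavar="ENV", help="deploy the build to this environment")
    args = parser.parse_args()

    session = requests.Session()
    session.headers["Authorization"] = "Bearer <token>"  # however auth actually worked

    build_id = None
    if args.build:
        builds = Builds(session)
        build_id = builds.create(args.build)
        if builds.wait_until_done(build_id) == "failed":
            sys.exit("Build failed")
    if args.deploy:
        if build_id is None:  # simplification: this sketch only deploys a fresh build
            sys.exit("Nothing to deploy; run with --build first")
        Deployments(session).deploy(build_id, args.deploy)


if __name__ == "__main__":
    main()
```

Run as something like `python deploy.py --build my-branch --deploy test-env`, it prints progress to the console often enough to hold your attention through the deploy.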

At this point, it was a solid tool that saved me about a half-hour per day. I presented it to the developers on my team, offering them access to the repository so they too could benefit from the time savings.

They were deeply unimpressed. They didn't have shit laptops, they had Windows laptops, they didn't have to run Parallels, they weren't constantly switching between branches and needing actual builds of the application to test. To them, this script was relatively useless. That was fine by me! The time and frustration the script saved me was more than worth the effort to build it. I used it several times a day myself, and got to use it as an example in the "Whole Team Approach to Continuous Delivery" workshop I paired with Lisa Crispin on. That was more than enough.

Slide from the workshop

Pipelines emerge

Six months later, a developer on my team got excited to set up a pipeline for our application. They wanted to run static code analysis on a build of our application, and run our functional tests against a deployed application running in a deployed environment. They copy + pasted my code as a starting point for the build and deploy, copy + pasted the static code analysis scans from another unit, and connected the two in a pipeline that provided value to the wider team. Developers weren't great at running tests on their feature branches on their machines; now we had a pipeline that would do it for them.

Other teams saw our pipeline and discovered my deployment script in the process. Rather than copy + pasting the code as my teammate did, they pinned their pipelines to the most recent version of the code on the master branch.

With more users and use cases, fellow colleagues were eager to also use their two crafting days per month to add the features they needed. I'd receive pull requests of things I didn't need for a context I didn't have, or feature requests I used my limited crafting time to fulfill. Without a style guide, a linter, tests, or a set scope, it was hard to turn away pull requests weeks or months in the making that people were eager to see included in the master branch. I merged them to keep everyone unblocked. As the code grew to serve every individual need, I lost interest in supporting what had originally been my darling pet project.

Still Valuable?

Two years after the original two-day crafting project, my role shifted from serving one team and one application to thinking about quality for the seven engineering teams in my unit. No longer did I need to deploy the application to a hosted environment. At the same time, my old team shifted where the repository was located, and the APIs I'd been calling in my script wouldn't do a lot of what they used to.

I got to explore what it meant to be the Quality Lead for my unit, and nobody I served needed this script. I left the list of improvements I'd brainstormed for it languishing at the bottom of my personal Trello board. I didn't get any requests from other departments to use or update it.

Still Valuable!

Nine months later, the spark got reignited! A fork of the deployment script got presented in another unit, complete with a UI on top of it. Someone on my old project discovered my script, and decided to add a feature to upload builds from the new repository location to make it useful again. They shared the code for a review after just a few hours of effort.

I had a chance to think through what parts of the repository were reusable for this use case, which parts would be better copied and pasted for readability, and got the merge request to a place where it fit in with the existing code style before anyone's heart and soul had been poured into it.

Now that the script had bloated to eight different actions, I decided to start writing tests for it. I didn't need the tests to make sure the existing code worked; everyone using it in their pipelines was enough to prove that. Tests would allow for future refactoring of the code and for updating the version of the API I'm calling.

The first test I added confirmed that the new functionality did what the code submitter expected it to do, gave me a way to change individual parameters faster, and gave me the confidence and excitement I'd been missing.
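To give a flavor of what I mean, here's a sketch of that kind of test in pytest. It continues the hypothetical deploy.py sketch from earlier (the real function names and payloads were different), with a fake session standing in for requests so the test never touches the real API:

```python
# test_builds.py - sketch only; Builds and its payload come from the
# hypothetical deploy.py sketch above, not the real script.
from unittest import mock

import pytest

from deploy import Builds


@pytest.fixture
def fake_session():
    """Stand-in for requests.Session with a canned 'build created' response."""
    session = mock.Mock()
    session.post.return_value = mock.Mock(
        json=lambda: {"id": "build-123"},
        raise_for_status=lambda: None,
    )
    return session


def test_create_build_posts_the_branch(fake_session):
    build_id = Builds(fake_session).create("feature/new-repo-location")

    # Pin down the request the pipelines depend on before refactoring anything.
    _, kwargs = fake_session.post.call_args
    assert kwargs["json"] == {"branch": "feature/new-repo-location"}
    assert build_id == "build-123"
```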

I'm just getting going on tests for the rest of the existing code, but I'm looking forward to it!


Why do I tell you this story? Well, here's what I think when I look back at the evolution of this code base:

  • write tests, even before you really need them
  • set up a linter and coding guidelines before you give anyone else access to your repo
  • if you want to be precious about your code, tell people to fork instead of submitting merge requests
  • if you want the code to be in its most findable place and shareable state, you'll have to invest the time to collaborate with people on their changes
  • good things come to those who wait :)

Talk About Your Test Strategy

I was invited to join a team's debate this week about what environment to point our third-party security testers towards for their upcoming penetration test. I asked what I thought was both an obvious question and something worth discussing with the team:

"Do we want them to identify security risks, or are we just checking boxes here?"

A combination of stunned silence and nervous giggling (muted over Zoom) ran through the team. "We don't talk about that out loud," the team lead told me.

But that's exactly what I'm there to help uncover as the Quality Lead for this team and the others in our unit: how deep or shallow should our testing be? If our testing uncovers issues, are we interested in mitigating them? If not, why are we testing?

A Test Strategy in Five W's

This conversation took me back to a few years ago. I was working on a product in a phase before production-level quality that we dubbed "demo-driven development" in retrospect. We were showing off a combination of Powerpoint slides and small pieces of the product in order to gain more funding. A person interested in testing but with too large a scope to pay attention to my team in particular asked me for a test strategy.

But the demos kept changing. What was important this week wouldn't be important the next. There wasn't a lot of exploratory testing being performed or automated tests being written. All my time was occupied in figuring out what had already been promised, what we were trying to sell, and filling in the gaps between those with a very specific path our product owner would follow during a demo, down to browser and screen resolution.

I asked the person who wanted the test strategy document what they were going to do with it, what it might be used for. They sent me the enormous table where a link to my test strategy would be added, and clearly never looked at it again.

Could I document in an official test strategy document for my team that I wasn't doing much testing? It turns out, yes.

I outlined the document with the five w's: who, what, when, where, and why. The whole document looked something like this. I don't think you even needed to scroll to read it.

  • Who: Our stakeholders are the people we're selling to, our product owner, and our team, in that order.
  • What: We're testing one particular happy path in Firefox (our product owner's default browser).
  • When: Due to the volatile nature of our product's priorities, our minimal testing has been concentrated after user stories are completed.
  • Where: We're running the application locally for demos. We haven't had a chance to set everything up we'd need to have in a hosted environment.
  • Why: We test to ensure that the one happy path is demonstrable to a customer in a demo, and to provide our product owner with the work-arounds for the gaps in our product.

I sent it to the person, expecting them to get back to me and tell me I couldn't do testing like this. Or at least, I couldn't write it down. But they never read the document! They thanked me, linked it in their table, and went on their merry way.

A Test Strategy in Stakeholders and Risks

I liked the way I shaped my test strategy around the very specific set of stakeholders and their risks in the five w's strategy. I wanted to bring this same connection to the teams I support when I started as Quality Lead for my unit. I ran a test strategy workshop for each of them to identify their stakeholders, talk about the risks that matter to them, and see how their team activities mitigated those risks. I got to this Miro board template after a few rounds.

  1. List the software the team is responsible for. (Our teams typically have legacy products they're maintaining in addition to the new things their roadmap focuses on.)
  2. Mind map the stakeholders for these products.
  3. Add stickies next to the stakeholders' names with their possible risks and concerns.
  4. Review the types of testing activities (things like exploratory testing, reviewing the production logs, static code analysis, etc.) for comprehension and completeness.
  5. Move each testing activity onto the impact (it's important vs. it's not important) and priority (we do this vs. we don't do this) quadrants.
  6. Vote on stickies that landed in an unexpected spot.
  7. Talk about the most-voted stickies in order, and identify action points with owners from there.

Part of this workshop was to show the teams that not every piece of testing is something that matters to the stakeholders. I didn't expect them to do every possible kind of testing imaginable. But I did want them to all understand and agree what kinds of testing they were and weren't doing. I got them talking about it out loud.

A Test Strategy Derived from a Vision

Believe it or not, a one-time workshop was not enough to get everyone to identify and build the perfect test strategy! As the teams grew and the workshop faded from memory, I got questions about the test strategy for the teams. I heard about goals of "bug-free software" and questions about "what best practices to follow" to get there.

As fun as it would be to pontificate about how there is no such thing as bug-free software, and there are no best practices outside of the obvious domain, that doesn't help people know what to do. So I wrote a "Quality Vision" document. (Pro tip: use a noun you wouldn't use for anything else so it's easy to pull it up by typing "vision" in your browser bar.) The Quality Vision for the unit places trust in the expertise of the teams to choose their own ways forward. It has things like:

  • Is our product at the right level of quality to release right now? This is a constant conversation between the development team and your product owner. Think about the risks and concerns of the customers you're targeting.
  • Data Security: We're not to use production/customer data for development purposes outside support incidents. Here's a link to a more in-depth document from our Security team.
  • Reliability: Here's a link to the document of what we promise our customers in our service-level agreement.

It's not going to tell you what the right answer is for your team right now, but it'll give you some things to point to when you're discussing quality with your team.


Because after all:

Quality is value to some person who matters at a particular point in time.

For the penetration test, the team lead quickly followed their "We don't talk about that out loud" comment with a "why not both?" jest. Why can't we both check the boxes for the authorities, and uncover valuable information that we want to act on?

Indeed, that's where we landed. We decided to point the security team to the production environment because that would reveal the best information. Unless that setup takes too long for the team, then we'll point them to the test environment. But regardless: we'll tell our bosses and our product owner what we're doing and why. We'll talk about our test strategy out loud.

How have you started a conversation about quality with your team? When have you decided not to test something? What have you not tested and also not discussed?

Photo by Patrick Fore on Unsplash

CVs From A Hiring Perspective

In my department at work, every tester is on their own team. I'm in a position where I look at tester resumes before we decide to screen them. Some of the resumes come in from people applying on the company website. We've also got a recruiter searching for people on LinkedIn. She asked me before we posted a new job: if we're not requiring experience with a particular tech stack, what should she search for?

I didn't have a great answer for her at first. It took me some time to think about what we'd value in this role, what would make a candidate's resume stand out.

I don't look for particular buzzwords on resumes, and no particular buzzword can eliminate you from the hiring process. But I do look for some of the same things I look for in a test report: an indicator of the depth and quality of your work, and an ability to connect your work to the value to the development team or the business.

A resume that regurgitates the calendar or job description of the candidate doesn't stand out to me:

  • Attended standup, refinement, sprint planning, sprint review
  • Made test plan, executed tests, reported tests

I want to know what was hard about it. I want to know who else was there and how you interacted with them. I want to know what you do that other people in your position don't do.

  • Facilitated hybrid standup across three time zones
  • Refined ideas into user stories with acceptance criteria that met our definition of ready
  • Spoke to key stakeholders to identify risks and incorporate them into a test plan
  • Shared test results verbally to spark conversation with developers about impact
  • Presented test plan at guild to inspire other testers to shift left

I was on a TestBash Careers panel about having a CV that gets you noticed. (I regret to inform you that my ability to think of an answer and unmute is not always fast enough to chime in!) I'm digging into the unanswered questions on the club thread, and can tell you that my fellow panelists and host are more shrewd about when or when not to reveal a personal detail on a resume. I am privileged enough to lean towards the truth.

Back at work, I told the recruiter there were a couple of things she could search for: exploratory testing and integration testing. Our desire to cast as wide and inclusive a net as possible has meant an investment of her time, and of the time of the team members who speak to the candidates first. It's not cheap. I hope it's worth it.

Join us if you're interested.

Photo by Christina @ wocintechchat.com on Unsplash

Communicate Using Three Layers of Information

I've joked about writing Buzzfeed-style clickbait titles for JIRA tickets:

Partly, clickbait titles gave me an easy way to search the database, or type a few words in my browser bar, to pull up the right issue quickly. The JIRA ticket itself had steps to reproduce, an explanation of why it was important, and an annotated screenshot for people to easily understand if what I saw was happening on their machine.

Partly, clickbait titles gave us a convenient shorthand to help us remember whether it's this logout issue or some other logout that we're talking about.

Partly, clickbait titles made the developers interested to investigate and pick up the issue.

I was reading Giles Turnbull's The agile comms handbook last weekend and found the lure part of the three communication layers he described (lure, context, and detail) reminiscent of the clickbait JIRA ticket titles I'd written. And I realized I'd seen these three communication layers (title, description, details) described similarly in lots of places before.


Lure, context, detail

The whole (short and great) book is about how to communicate in ways that move as fast as the work does on an Agile team while still being effective for busy people to consume. In it, Giles Turnbull describes creating different layers of information as lure, context, and detail. Lure grabs the attention of busy people and gets them interested in knowing more. Context gets people to the point where they know just enough. Detail is for people who need to know more and have the time to follow a link to another page, read a whole PDF, etc.

Why, how, what

Simon Sinek describes it as why, how, what in this TED Talk on "How great leaders inspire action". He gives the example of an advertisement for Apple, where starting with the why is much more inspiring and motivating than starting with the what. Getting people to buy into your vision will ensure that they follow along to the goal.

Title, executive summary, list of issues

When we worked together, Martin Hynie taught me to write weekly test reports with a title, executive summary, and list of issues, with the idea that the executive summary should be enough for the busy person to understand, but that less busy and more curious people would want the detail in the list of issues.

Intent, location, details

Maaret Pyhäjärvi describes the ensemble (mob) programming practice of communicating at the highest level of abstraction (intent) before being more specific about where we're going (location) or if necessary, mouse clicks and keystrokes (details). Giving keystrokes and mouse clicks to someone who knows how to operate the software is frustrating, but so is giving a high-level explanation to someone who's never used the software before. Expressing intent first can lead to better action, even if another member of the ensemble has a different action in mind than you do. Being able to identify when to jump between the levels is key for effective communication. Knowing when to jump to a different layer of communication is the skill I find hardest to build while learning to ensemble.

Title, lede, body

In the content management system I tested for New York Public Radio, article pages were broken down into title, lede, and body, presumably reflecting something every journalist learns on their first day at a newspaper. On the website, title and lede were displayed on the homepage and tag pages. You'd only see the body once you clicked in to read or listen to the story.


I read so many JIRA tickets and Slack messages that only contain the lowest/detail level of information. The person trying to bring everyone else up-to-speed on an issue does need to include all the detail. It makes sense that that's where their mind is. Giles Turnbull identifies why the detail layer of information is the default: the details already exist. Creating the other lure and context layers of information takes more work.

Being able to zoom out and answer the kinds of questions you'd expect in a refinement meeting ("Who will benefit from this? How does it work now and why does it need to change?") helps you prioritize the work. It helps the team understand how what they're doing fits into the bigger picture. Learning how to write the lure and the context is a separate technical skill that needs to be recognized and built.


How did you learn to break down your communication into different layers? Do the title, headers, and paragraphs of this blog post fit this model? Which of these breakdowns resonates most with you? Where are you practicing communicating this way?

Photo by Clark Van Der Beken on Unsplash

SoftTest 2019 in Dublin

I got the chance to present at SoftTest in Dublin, Ireland in 2019. It was the last time they were going to host the conference, but unlike most events, they knew that even in 2019. So I was even more delighted to be part of it and finally see a bit of Dublin (the original Boston, MA if you ask me).


Janet Gregory gave a talk version of a workshop I'd co-facilitated with Lisa Crispin a few weeks earlier about testing in DevOps. She connected the questions we ask and the work that we do as testers to the business risk. Existential questions like "Why am I here?" and "Are we delivering the right thing?" are encouraged. Visualizing our work and collaborating allows us to get feedback from small experiments.

In one experiment, Conor Fitzgerald and Rob Meaney tried team-based testing. Recurring issues like build failures, end-to-end testing failures, and ultimately rework motivated their teammates to embrace change. They started from automation completely devoid of risk analysis and testing tasks invisible on the board. They used their crisis to build the new habit of radiating information about testing, making both their own work and the testing work waiting to be picked up as visible and pick-up-able as possible.

Margaret Dineen spoke about making quality engineering visible and tangible, and connecting it to business value. The strategic intent of the organization informs what valuable software you deliver, and vice versa. Identifying the Impact, Confidence, and Ease of implementation (ICE) drives the software teams to actually deliver things the user cares about. Tracking outcomes across silos only works if you're open to measuring how badly disconnected software delivery is from the needs of your users.

Joe Joyce spoke about quality in an API's lifecycle, something I would come to know and understand better in the time since this conference. He focused on getting the specs right, since they're faster to change and roll back, and serve as a point of collaboration even before the code or test code has been written. I couldn't agree more.

Hugh McCamphill's talk was about knowing your automation. Making abstractions, while inherently leaky, should be done at the domain rather than the implementation level to cut down on maintenance costs. Focus on saving time building; invest in learning.

Jean-Paul Varwijk gave a talk I remembered as a sweeping history of software testing, but see now in my notes that he called it "Testing as Advantage". Testing must be an advantage if it's stuck around this long, right? Build up collaboration skills to remove the separation needed to progress. Supply information continuously. Collecting sensible metrics and using them for the right purpose will tell you way more than tests will.


What a high-quality collection of speakers and talks for such a local community. This was one of the last places I got to travel before travelling became infinitely harder. I'm so grateful to have been learning together in the same place at the same time as so many greats.

What You're Missing in Your Conference Abstract: Spoilers

I've had the unique honor of participating in the program committee for Agile Testing Days for 2021 and 2022. I get to see anonymized submissions for talks and workshops (not keynotes), rate the entries on a number of criteria, and provide a few sentences of feedback for the author and the conference organizers to understand how and whether the submission fits in the program. I've also helped review papers in the Ministry of Testing call for papers. Anyone can.

Almost all of them are missing one thing: spoilers. I want spoilers. I want them to tell me how they faced a problem or general trend in the market at their job and what they did about it. I want enough detail, any detail, to know what I'll be seeing beyond the first two slides.

Here's what that looks like for a talk and a workshop.


Talks

For a talk, I'm expecting a particular format:

  1. Description of a problem the author has experienced
  2. Journey or decision point that led the author to change their behavior
  3. Details about the direction going forward

Sometimes I'm missing the author's personal connection in 1. (looks like they'll summarize a book, article, someone else's work, etc.) or why a change was needed in 2.

But it's 3. that I long for. Authors tell me that they solved a problem, or had trouble solving it, but not how. "I spent three months migrating our tests from Selenium to Playwright and faced trouble." Where did you face trouble? Was it because of differences between the languages? Developers couldn't read the failing test output anymore? What happened?? Tell me about the journey of what that looked like.

Workshops

For a workshop, I want to see:

  1. Description of a skill the author was missing
  2. Journey where the author built this skill
  3. Journey participants will go on to build this skill

Workshop ideas are great with 1. and often include 2. But you know what participants will mostly care about before they get trapped in a room for hours with you? That's right, it's 3. Will this be a lecture disguised as a workshop so the author has more time to speak? Will participants be working in groups? Will this get them on the first step towards building a skill? How much will they be able to do differently at work on Monday? Give me some sense of what the activities are, how long they might take, and how much you expect people to achieve during them.


More tips on conference submissions

Anytime I submit an abstract to a conference, I find something in Rob Lambert's Blazingly Simple Guide To Submitting To Conferences that makes me reconsider what I've written. Read it if you're anywhere on the journey from "do I even have an idea for this conference?" to "you mean I'm supposed to speak on stage in front of all these people sitting before me?"

Lisi Hocke also has a soup-to-nuts overview of conference speaking that will get you feeling like you can do this.

Photo by Brandi Redd on Unsplash

Tell Your Colleagues About Your Work

At the interview for my very first job at my hometown library, the librarian recounted my work as a volunteer. In addition to reshelving the returned books, I'd put the books that were already on the shelves back in order. She asked if I had anything else to share. "No," I said. I let the librarian speak for me. My work spoke for itself.

My job at the library took no coordination, little or no communication, and was quite boring. There was always someone else at the desk to talk to patrons and handle the actual "returning the books" part. I could just listen to my iPod and shelve the books. The disappearing books and neatness of the shelves were evidence of my work that anyone could notice and evaluate at a glance.

My work in software has been quite the opposite. It's been hard to evaluate by a person who doesn't know where (or how) to look. They might notice if I'd done my job very poorly or not at all. But the difference between a job very well done and a job just, well, done has been overwhelmingly large and invisible.

I spent most days in my career thus far as a tester on a development team with a daily standup. Standup was the place I could make at least a small part of my work visible. Contributions only became clear in bigger groups if someone else championed my testing work, or if the impact of my leadership was felt more widely.

Now, I'm serving several teams. I don't have a daily standup. I've been working remotely for two years. I could put in a lot of work in my current role that no one would ever see. It's felt strange to go from feeling like everyone is wondering how I spent an hour to being much less accountable for my time. I have the freedom to waste a lot of time on something that isn't important or doesn't matter to anyone else.

But that's not how I want to spend my time. I want to use my time and effort effectively. There's always more to do. I want to share with my colleagues what I've done and what I'm planning on picking up next, so they can tell me if my effort is duplicated, wasted, or in need of a course-correction.


Where, who, what, and how I tell them

I have invitations to each of the standups for the teams I support. I go once every couple weeks to keep an ear out for how I can help them. Sometimes that's the right time to share a bit of what I've been working on too, if it's just for them.

I've also held myself accountable in three other meetings that span across the department:

  1. There's a weekly meeting for the whole department. There I share highlights of general interest for less than five minutes and add linked details in the shared agenda.
  2. I do a similar thing for the bi-weekly sync of all the engineering leads, tailored to their interests.

In these quick updates, I share why I picked up the work, who it serves, and what problem it solves. I've seen so many sprint demos that share the "what" but not the "who" or the "why" that I try to include all three pieces in my stories.

  3. I host a meeting every few weeks with the engineering and product managers that's all about my work. They're the ones who need to see the outcomes and impact, so it's worth investing a bit of time to make sure I'm focusing on the right things for them.

In the meeting focused on my role, I let the conversation be freer. We dig into more of the why's and "is this valuable?"'s of what I'm up to.

I've also got a 1-on-1 with my manager. I've got 5-10 minutes every two weeks to share and get advice on particularly triumphant or hairy situations not fit for a wider audience. The rest of our half hour 1-on-1 is my boss responding to what I shared and sharing information with me from the wider company context.


Why I tell them

Why do I do all of this? Why do I invest so much time in talking about the work?

Because the work does not speak for itself. Testing is not like shelving books in a library. Being able to explain my work helps me collaborate. It makes it clear what kinds of problems my colleagues can come to me with, and ensures that I'm top-of-mind when such problems do arise.

Sharing what I'm working on builds trust. It's not arrogant, bragging, or self-indulgent. It's a necessary part of making sure I'm doing the right thing, and getting credit for it.

Photo by Shiva Prasad Gaddameedi on Unsplash

The Five Hour Tester

I ran a series of testing skills workshops at work that Joep Schuurkes and Helena Jeret-Mäe developed. I used both their teaching materials (available at FourHourTester.net) and the schedule Joep followed when he ran them at the office (splitting the exercises up into five hours) to run the series again. The five topics are:

  1. interpretation
  2. modelling
  3. test design
  4. note taking
  5. bug reporting

What

This was what got me enthusiastic about giving these workshops: the material. It was topics, concepts, blog posts, and books that I'd been studying since I discovered the wider community of testers about ten years ago. Sharing the oldies but goodies like the "Mary had a little lamb" heuristic and the Test Heuristics Cheat Sheet with a new set of people makes me excited about what they'll discover about their software with these tools in their toolbelt.

Why

The company had grown enough in the years since Joep gave the workshops that there was demand for the series again. Testers are embedded on teams by themselves and reporting to developers. The workshops offered an opportunity for testers to grow (and affirm) their skills while building relationships with the testing-minded outside their teams.

Who

I posted an invitation to the workshops in the Slack channel for the R&D department, and was surprised to see interest in attending from elsewhere in the company. Attendance ranged from about 20 participants in the first session to a low of 5 during the school vacation week. (A learning initiative launched by the People & Culture department would streamline registration if/when I offer the workshops again.)

When

The sessions were in the hour right after lunch time on Monday on five consecutive weeks. That slot didn't require me to move or miss important recurring meetings. Recording the sessions made it possible to catch up on the ones people missed.

Where

I hosted the sessions remotely, which allowed for audio-visual parity for everyone participating in a way that hybrid sessions would not have. I recorded them so people could also watch them afterwards.

How

Each hour started with a short "lecture", the part where we'd name and notice the skill. Next came an exercise, giving participants hands-on practice in drawing attention to the testing skill. This part took about half of the hour. The other half of the hour was the individual reflection and debrief.

I wasn't sure if people could stay or actively participate for the whole hour, so I had people debrief individually and with the whole group instead of in pairs or small groups. Our conversations strayed from the testing skill into the natural things you'd expect during an experiential workshop: talking about the exercise itself, or talking about how this fits in with your work. It was delightful to see people making the connection from one lesson to another and reflecting on what they'd want to change in their day-to-day.


The feedback I collected from the participants indicates that everyone got something different out of it. Each lesson resonated with somebody, which felt like a success to me.

Now I'm curious: what testing skills do you teach to a wide audience at your company? What materials do you use?


Photo by J. Kelly Brito on Unsplash

Strengthen Your Code Review Skills

I spent my first two years at my current company getting my code reviewed and the following almost two years reviewing 3-10 merge requests per week. Our tech stack was in Python, with pytest as our test runner, the requests library for API tests, and Selenium for browser tests, all hosted in our company's paid GitLab instance. All that experience shaped how (and whether) I offered feedback on the merge requests I reviewed for members of my own team and neighboring teams working in the same tech stack.

Define the relationship.

There are power dynamics at play in any relationship at work. Members of my team had to have a really good argument to refute one of my "suggestions", because I was their test specialist and their team lead, evaluating their ability to respond to feedback and change their behavior by performance review time. No pressure!

Members of other teams had more power to push back. The knowledge I shared could empower them, but they were free to reject it.

Focus on what matters.

When I review a merge request, I start with the question "what is this code supposed to do?" If it's a merge request for my team, the JIRA ticket number in the title or the branch name would clue me in. For code from other teams, champions would use the description field to explain what the product change was and how the test code supported that. Most merge requests left me guessing a bit. I'd have to read the code contained in the tests to figure out what the test, and ultimately the product, was supposed to do.

Reading the code also got me thinking about the things I was best equipped to help the code submitter with: maintainability and "what if?" scenarios. As a tester, I could look at a list of tests that create, read, and delete a thing and ask "is update also part of the picture here?" As a code reviewer with a longer tenure at the company, I had a more informed view of whether copy and pasting would work or if a new function was needed.

We had two linters set up to run on every commit: flake8 covered style enforcement (indentation, blank lines, etc.) and vulture identified unused code. For issues of style that a machine couldn't decide, we had written guidelines to point to. I pointed to these three the most often (the second and third are sketched in code after the list):

  1. comments explain why the code is the way it is, not what it does (so code is clearer to read and update)
  2. setup and teardown should take place outside the test (so pytest reporting tells you there's an error instead of a failure when something's off)
  3. API tests assert the status code before any details about the response body (because the body's not going to have the right stuff in it anyway if the status code is wrong)
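Here's a minimal pytest sketch of what the second and third guidelines look like in practice; the endpoint, payload, and environment URL are invented, not our actual product's API:

```python
# Sketch only: the /widgets endpoint and payload are made up for illustration.
import pytest
import requests

BASE_URL = "https://test-env.example.com/api"  # placeholder environment


@pytest.fixture
def widget():
    """Guideline 2: setup and teardown live in a fixture, outside the test,
    so pytest reports a problem here as an error rather than a test failure."""
    response = requests.post(f"{BASE_URL}/widgets", json={"name": "temp-widget"})
    response.raise_for_status()
    created = response.json()
    yield created
    requests.delete(f"{BASE_URL}/widgets/{created['id']}")


def test_get_widget_returns_created_widget(widget):
    response = requests.get(f"{BASE_URL}/widgets/{widget['id']}")

    # Guideline 3: assert the status code before anything in the body; if the
    # status is wrong, the body won't have the right stuff in it anyway.
    assert response.status_code == 200
    assert response.json()["name"] == "temp-widget"
```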

I would give feedback about these topics, trying to ask questions to disambiguate my observations from interpretations. I knew that the person who'd written the code had spent more time and thought steeped in the problem than I had. Questions allowed me to assume competence while gathering evidence to the contrary.

As I read other people's code, I saw lots of weird stuff: stuff I would name differently, stuff I would put in a different order, stuff that took up more or fewer lines than I would have to write the same thing. My experience living in a non-native English speaking culture served me well here: I let it go. Was the test name meaningful to them and their team? Did putting it across a couple more lines help them understand it better? Was it actually a problem with what the code did or just a personal opinion? Did they want to set the constant right before they used it instead of at the top? Go for it! It works, I can understand what they meant, and that should be the threshold. My review was not an opportunity for me to show off my Python skills. I was there to help the code submitter with their tests. I reserved the right to remain silent on unimportant matters.

Communicate well.

Praise the good!

Notice when people have done something well and praise them for it! Positive reinforcement is the best way to turn up the good on what's already happening in your code base.

Right level of abstraction

I reviewed merge requests that were 30% done that I mistook for 100% done; conversely, I saw merge requests at 110% done that I would have killed at 30%. A little [WIP] label in the name of the merge request or a bullet list of which tests were still missing helped me offer the code submitter the right kind of feedback at the right time.

Sometimes, the code isn't the problem, the product is. I've seen a 500 HTTP status code returned for something the user screwed up, which should have been a response in the 400 range. A code comment "Should this be a 400 response?" opened up a more interesting conversation about where the product was in its lifecycle, and the code submitter could lobby their team to change the product's behavior.

If having the conversation about the code isn't the right approach, I tried having the meta-conversation instead. "I'm not convinced this API spec is done. Where are you in that process?"

Code quality measurement: WTFs/minute

Right format

Tone is hard in writing. I do prefer writing, because it gives me the opportunity to have several drafts, separating my WTFs-per-minute from what the code submitter receives. I just don't always hit send. Before leaving a comment on a particular line in gitlab, I ask myself: is this the right format? Have I removed any judgy adverbs like "just", "obviously", or "actually"? Would a video call, a Slack message, or a comment on the whole merge request be more likely to be embraced?

The receiving

One of the many tough things about feedback is that the receiver determines the priority of the feedback. (For all the other tough things about feedback, read What Did You Say? The Art of Giving and Receiving Feedback.) The code submitter would often completely miss what I meant the first time. Even if I thought I'd delivered my feedback as well as I could have, it wasn't always accepted. Everyone extracts different information from the same situation. The feedback that provides the most information can be the hardest to accept.

It doesn't have to be like this.

Asynchronous

You may have read my first paragraph and asked yourself "why is she doing so many async code reviews?" and you'd be right to ask! The company was scaling at such a speed that I was forced to optimize for "number of minutes per day without video calls" over shared understanding.

Synchronous

Did you know that working in a pair or an ensemble can do all this feedback and knowledge-sharing stuff in a better way? See more about how an ensemble made learning happen in this slide deck.

When I was able to, having a conversation helped me make sure that I was giving the right feedback at the right time to the right person. Pairing with the code submitter addressed not only the mistakes themselves, but also the thought processes behind them.

Don't take my word for it.

I have the benefit of learning from smart people who are also thinking through what code reviews are and what they can be. I have yet to be free at a time when the code reading club Felienne Hermans started has met, but I look forward to joining sometime in the future. Here is a collection of resources that I've already found useful:

Videos

Books

Blog posts

Quotes found on Twitter


Photo by Robin Mathlener on Unsplash