SoCraTes UK, August 2020

After years of envying the tweets and stories from SoCraTes conferences, this cursèd disease finally made attending one possible. I could afford the time (part of one day, instead of a whole weekend + travel) and the cost (£2.50, instead of €€€).

I'm preparing to host an open space of our own in September (TestCraftCamp, tickets are free!), so aside from the sessions, I was also interested in the infrastructure of the event. The hosts hung out in one central Zoom room (a one-month subscription cost them £10). Each attendee was appointed co-host, allowing us to move around among the many breakout rooms as we pleased. (There may have been as many breakout rooms as there were participants, named with varying levels of creativity but functionally the same.) A Notion board captured the user-generated schedule, along with links to the Mural whiteboards the organizers created for each room, rather than for each session. Chat happened in a Discord channel, but I wouldn't recommend it.

The first session I attended was a discussion about the value and hazards of leaving a job. First: decide where you want to go, what kind of thing you're looking for. You're not going to get where you want to go by running away from your current role. Run towards something. Have honest conversations with your peers, manager, mentor, coach, etc. about what change you can effect at the organization. (Things like the nature of the product, the tech stack, or the customer base are unlikely to change quickly.) If you do decide to change jobs, it will take a lot of energy to find something new and build your network at the new place. You may have to prove yourself all over again, and solve the problems you've already solved with this different group of people.

I'd only taken three bullet points in my notebook when the host of the second session I attended, about hypergrowth, started having trouble taking notes while also listening to and facilitating the session. I volunteered to jump in and take the notes. Here's what I made:

Notes from hypergrowth session

I was trying to capture what people said, change the background colors, and move the notes around to put similar ideas next to each other. It was hard to know in the moment whether all of that was legible or helpful for anyone else. Luckily people in the session and in the Discord channel later thanked me for my work. I'm still struggling with how to impart any note-taking skill to others, particularly when this can be seen as a gendered task.

Men: Wow, you are so good at taking notes!
Me: You would be too if you got stuck being the one doing it all the time!

But I digress, back to hypergrowth: The things that worked before COVID and at smaller orgs don't work at a bigger org today. Find your people within the smaller org - your department, your mentor, your friends - so you can feel stable even as things around you grow and change. Communicate your expectations clearly, for yourself and for your team. Keep a journal of what you've accomplished to feel more anchored in the value of your work.

After lunch, I started off in the wrong room because I didn't notice the sessions had been moved, but found my way to the one about coaching your peers. The conversation floated from a definition of terms to how to make space for mentoring. In order for people to mentor, there must be consent from the mentee. Ideally there is also some expertise in the organization around what is good, and someone (mentor or not) to teach the mentee what is good. The conversation reminded me of a talk I saw from Marianne Duijst; here's the moment she started talking about the difference between coaches, sponsors, teachers, and role models, and how they can support you in different ways.

Anytime Maaret Pyhäjärvi is running an exploratory testing session, go to it. You will be better for it. This time, we tested a simple web application. We ended up spending most of the time adding scenarios to the automated tests Maaret had already set up in Robot Framework. One of the questions it brought up for me: do you leave failed tests in your automation suite to highlight unexpected behavior? I had this problem this week: I'd updated a schema to validate against the current behavior while the story was in progress, instead of the behavior we wanted when it was done, so I could verify other fields in the output. Next time I think I'll separate them into two tests and leave them both as pending (marked as xfail in the pytest framework) so it's clear something needs to change before the software is released.
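Here's a minimal pytest sketch of what I mean, with hypothetical names and a stub standing in for the real call. Both tests are marked xfail, so the suite stays green while the report still flags that something has to change before release:

```python
import pytest


def get_output():
    """Stand-in for calling the application and reading its output."""
    return {"status": "in_progress"}  # what the software does today


@pytest.mark.xfail(reason="fails until the in-progress behaviour is removed")
def test_old_behaviour_is_gone():
    assert get_output()["status"] != "in_progress"


@pytest.mark.xfail(reason="describes the behaviour we want once the story is done")
def test_desired_behaviour():
    assert get_output()["status"] == "done"
```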

Thank you to the SoCraTes organizers for, as I learned a coach does, holding this space for us to examine ourselves and reflect. As someone who is much more fluent in emoji than GIF, I'm grateful to have won the hearts of the participants with this "letting the cat out of the bag" GIF in the GIF challenge.

Cat aggressively jumping out of a backpack

TestBash Netherlands 2019

In reviewing my notes from TestBash Netherlands, which took place in May 2019, two big, related themes jump out at me:

  1. keep exploratory testing
  2. learn by sharing

Andy Brown gave a talk about human factors in highly automated systems. As flying airplanes becomes more automated, pilots know less about how to switch to manual overrides in a time of crisis. You might not have more than a few hours a year of practice for when things go down. Continuing to share stories from the past and learn from them is one way out.

For Gitte Ottosen, who gave us a tester's perspective on Agile, learning never ends. Understanding the customer journey and tying your work back to the business value are essential for making informed decisions about what to test, and which subset of those things to automate. Teaching, knowledge-sharing, coaching, and pairing can get the whole team involved in advancing quality, even if they're not all strong exploratory testers.

Jit Gosai spoke about continuous testing. Practice test-driven development, story mapping, and three amigos meetings before the code is written. Improve automation test suites, use exploratory and mob testing, and incorporate feedback from the customer. When you're exploratory testing, you're not just confirming that the software functions as expected, you're testing the goal of your whole organization. Jit found that getting everybody on the team doing exploratory testing caught more bugs than the automated tests did.

Marit van Dijk hit the nail on the head with her talk: keep exploratory testing. Maintaining a consistent state of test data across multiple teams is difficult. Rather than spend your time setting up control mechanisms for those systems, explore the systems themselves. Bugs are found not in the things we can control or know about, but in the space between systems. Pairing a developer with a tester was a lot faster than testing solo, because nobody had to go back and reproduce bugs. Spread the risk across systems by hooking them up one at a time.

Vera Gehlen-Baum and Beren Van Daele spoke about incorporating your learning into your backlog. Identify what you want to learn, and write actionable acceptance criteria for your learning. This should include sharing what you've learned individually or with the team; it doesn't have to be confined to the same sprint as the work for the team. Linking personal goals to business goals will ensure that people see they're improving.

Joep Schuurkes talked us through what he was thinking as he was live-coding, which is learning and sharing at the most granular level. Separate concerns when you automate: keep what the code is supposed to accomplish separate from how it's accomplishing it. He expanded the CRUD heuristic that helps me decide what to automate, and added an extra DERR to make it CRUDDERR to ensure you can also debug, explore, run, and report on your code.
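Here's a tiny sketch of that separation as I understood it (the endpoint and helper names are my own invention, not Joep's code): the helpers own the "how", the test states the "what".

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service


# "How": the mechanics of talking to the API live in small, reusable helpers.
def create_user(name):
    return requests.post(f"{BASE_URL}/users", json={"name": name})


def get_user(user_id):
    return requests.get(f"{BASE_URL}/users/{user_id}")


# "What": the test reads as intent, with none of the HTTP plumbing in sight.
def test_created_user_can_be_read_back():
    created = create_user("Ada")
    assert created.status_code == 201

    fetched = get_user(created.json()["id"])
    assert fetched.json()["name"] == "Ada"
```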

Anne Colder and Vincent Wijnen gave an experience report about their mentor-mentee relationship. Mentoring is different from teaching. Ask questions about your mentee's experience, and expect different questions from them than you'd receive from your more experienced colleagues. Mentoring stimulates reflection on both sides.

Drew Pontikis spoke about the illusion of control. Challenge your own thinking by listening to new voices. Recognize when you can effect change in a situation.

Ben Simo gave the last talk of the day about the art of scientific investigation. Design your next experiment based on your previous ones, and adapt as you go along.

There was a whole slew of 99-second talks, but the only memorable and explicable thing I wrote down from them was something Ilena said: "Understand your team doesn't function the way that you do."

The day prior to the day of talks, I helped facilitate and debrief a workshop that Joep Schuurkes developed around building an API testing framework. It was the first time we'd given it together, so that whole day was a 'learn by sharing' experience for me.

Humans Conf, June 2020

I found Humans Conf on Twitter, and found myself in a position to attend when gestures at the general state of the world moved it online. You can check out the output from the entire open space on the Notion page. It took place for a few hours on a Tuesday night. I discovered that the energy it takes to listen and participate attentively in the evening affected my work for the following couple of days. I am only human!

The first session I attended was about making diversity more inclusive, hosted by Lina Zubyte. Due to the makeup of the participants, the conversation veered towards being a woman in tech. We spoke about incorrect assumptions, how we felt about particular language (especially guys), and how much energy that drains from us all the time. I heard stories from other women that echoed situations from my experience. I recognized how telling a story from a memorably bad day helps others understand and connect. I pondered whether there's a way to make people understand without reliving everyday indignities. My takeaway from the conversation was clear: you get to decide how to spend your energy. There will be people whose minds need changing, but the energy you would expend making that happen wouldn't be worth it. Let those people who challenge the premises of the problem space fall off your radar, and invest in people where you can make a difference.

For the second session, I started off in motivating interactions, hosted by Maren Baermann. I truly enjoyed the model she introduced regarding extrinsic vs. intrinsic motivation, and tying those to our basic needs. But when the Zoom gods kicked me out of the free room, I took it as a sign to go search for a less interactive session where I could just listen. I landed in a session relating couples therapy to your team dynamics, hosted by Daria Dorda. The conversation covered some national stereotypes, how people get distracted from the real message, and how to get the necessary distance to engage with a topic. My takeaway here was: if you don't set boundaries, others will for you. Amen.

I'm grateful to Benjamin Reitzammer who hosted the open space, and all the others who made it run smoothly. Seeing just a handful of familiar faces set me at ease. I'd never seen Zoom chat used effectively, but the rush of answers to the closing questions at the end of the evening gave me that warm & fuzzy feeling like I'd actually attended a conference in person. I needed that, so thank you.

I don't think I can bring myself to join the next Humans Conf on Wednesday, 15 July. I've got a week off, and I need some time away from Zoom to be feeling more human. :)

Test Automation Day 2018

I got a free ticket to Test Automation Day in 2018, just after I'd moved to Rotterdam. I was overwhelmed by the confluence of events: Angie Jones keynoting, Ard Kramer running the show, and meeting Amy Phillips in real life. (Neither of us were sure it was the first time we'd met because we'd been following each other on Twitter for so long!)

My most shocking note from Angie's keynote is "clicker, no notes," because of course Angie had her talk down pat. In a talk that anticipated the current, urgent conversation in AI and machine learning, Angie recognized that we can't agree on what human ethics should look like. Figure out who you're advocating for, and tie the bugs back to that business value. You're not going to be able to define all the business requirements up front; expect the unexpected.

Amy Phillips spoke about how tests in a DevOps environment allow you to get fast feedback. Like Agile, this style of working is not about minimizing the pain and struggle in developing software, but rather about bringing that pain forward. DevOps allows us to become aware of problems sooner, so we can act on them sooner. Running more tests is not necessarily better. Do not accept flaky tests. Free yourself from an overreliance on end-to-end tests and tests that cover non-critical paths to get your build time down. Rather than running tests on every commit, improve your monitoring.

I've got some quotes from other talks and the panel that day:

  • "A tester is someone who believes things can be different." ~ Jerry Weinberg
  • "The team was not mature enough to determine priorities."
  • "Maintain the relationships you want to build."
  • "Are you just doing it because you can?" (regarding UI automation)

I can't say enough how welcome and in the right place I felt, being able to jump in on short notice that day.

Start With Belief

Imagine a scenario where you find a software bug. You go to another colleague. They perform the exact same reproduction steps you did. But the bug doesn't happen on their machine. What now?

Works on my machine

Your colleague may not believe you found a bug, or they may not be sure if you did. They may blame you for doing something you shouldn't have. They may insist that most users have a machine more like theirs than yours, and it doesn't matter if it doesn't work on your machine. They may think it's too much trouble to track down what's happening on your machine, and leave the burden to you to figure it out. They could have a fixed mindset, and think that you, your machine, and the software you're running never change. (Read more about fixed vs. growth mindsets in this brilliant Brain Pickings article.)

Does not work on every machine

Instead, they could have started with belief. They could commend you for uncovering something they themselves could not. They could be curious about how your machine and software are different from what they have running, and look into how many other users this is affecting. They could pair with you to come up with ideas about how to stop the issue from happening. If they have more access to the underlying systems, they could look into the code and configuration settings. They could have a growth mindset, and think that your machine, the software you're running, and most of all you, can change.

Start with belief

Now, imagine a different scenario. Imagine someone describes being mistreated by the police. They were doing something that is fully within their rights, and the cops said they weren't allowed to.

I believe you.

Start with belief. Do not think that because you've had different interactions with the police, that the police must not treat people in the way you're hearing. Do not think that your individual circumstances, particularly the color of your skin, means that you'll be fine. (Think more about how racism is fascism applied to a particular category.) Take the time to shut up, listen, and discover how you can help. Believe that things can change. It starts with belief.

Joe Brusky/flickr

Trying Out Open API Editors

I was editing an Open API specification with multiple layers of inheritance recently. I kept uncovering errors long after I created them because of the way they display in editor.swagger.io.

This great talk from Lorna Jane Mitchell about Open APIs made me realize how many other tools I could use to edit these specs. There's a whole list at openapi.tools. During my crafting days, I resolved to try some new editors. I decided to see what it was like to edit the existing complicated spec, and write a spec we were missing for a very simple API (two GET calls).

TL;DR

I'm going to use the Visual Studio Code Open API plugin to write and navigate through our specs. We're rendering our specs with editor.swagger.io, so I'm going to keep running them through there to confirm they appear as expected for our stakeholders.


Editors

Stoplight Studio

Lorna mentioned this one in her talk. It's got a web editor, but I downloaded the Mac client. It's a GUI, so rather than writing YAML, it's more like filling out a form. It turns out I would rather write YAML than fill out a form! Good to know.

The way it organized hierarchy did not suit my mental model for what I was trying to accomplish. I thought about writing a new spec for two GET calls more as a hierarchy (things both API calls shared, like security > endpoint for the call > parameters). Stoplight Studio grouped adding any new thing into one menu: API, endpoint, parameter, whatever.

Stoplight Studio has a git integration feature, where you can switch branches within the application. I'd connected it to my remote repo, so it couldn't see the local branch I'd created from my Terminal. When I wanted to save what I had so far (no auto-save??), I found the save option buried deep inside a menu, without a keyboard shortcut. I wasn't interested in changing my workflow to accommodate the tool.

Cmd + S it's not that hard

In looking at my existing, complicated API with inheritance, I didn't find a way to see everything in the same view. You had to click through to see inherited sections. Viewing descriptions required a mouse hover.

The final straw for Stoplight Studio was the error panel. Although thoughtfully displayed to be informative without alarming, the line numbers didn't reflect where the issue was.

Overall: The GUI was getting in my way rather than helping me. Pass.

Senya for IntelliJ and Kaizen for Eclipse

I couldn't get either of these installed on my Mac! The instructions were essentially "Install from the marketplace, restart the IDE, and it should just work!" My machine enjoys sending me on fruitless adventures in debugging, but I chose to give up on these tools rather than trying to figure it out.

SwaggerHub

SwaggerHub came recommended by my colleague Reinout, so I signed up for a free trial to give it a shot. I'm still not completely sure if this is within the security guidelines our company has for creating accounts and sharing data with third parties, so I deleted the data I'd put in at the end of my session.

It's a lot like editor.swagger.io, but with more bells and whistles. It does have a separate error panel, which seems like it would be what I want for my big complicated API. But when I was writing my new API in it, the error panel would pop open to remind me about missing fields whenever it decided to auto-save. Not cool.

Two and a half hours after confirming my email address to use the product, an account manager reached out to me to find out if I had time for a quick phone call. No, I did not, I was in the middle of trying to ignore your pop-ups while writing my damn API spec! Their Twitter person didn't understand my complaint about the errors that appeared in the panel. The one I tweeted about was trying to tell me that request parameters get labelled individually with required: true or required: false, while you can throw all the required response parameters in a list.

Memes will not save you

Overall: If I had a license, I'd use SwaggerHub to look at existing APIs, but not to write new ones. I didn't look into using it to run a mock server, but I bet that'd be useful.

OAIE Sketch

Have you ever used sequencediagram.org to create a UML diagram and thought "What if this looked more 90's?" Well, you're in luck! OAIE Sketch will make you nostalgic for Windows 95. After cloning the GitHub repo and opening the .html file locally (whatever works, I guess?), you'll see something like this.

😍😍😍

I liked the way it was built for you to either update the YAML or the visualization, then decide when to push changes to the other side. But I couldn't figure out how to paste in a spec I had somewhere else and get the visualization to update.

Overall: Might be a good way to think about shared outputs if it updated?

Visual Studio Code Open API plugin

This put me back where I started: the plugin for Visual Studio Code. It's got syntax highlighting. (It made me realize that my export from SwaggerHub added style and explode fields to my request parameters! I guess I'll save figuring out if I should keep those for another day.) It's got a schema, so I can navigate around the spec based on how things are connected without having to remember line numbers. It's got error messaging that is clear enough without being invasive: red squigglies appear on affected lines and red triangles appear next to the line number on the left. They're small enough to ignore if you're in the middle of writing, but easy enough to find and notice without going on for too long. I'm sticking with this.

If a test falls in a forest...

The saying goes "If a tree falls in a forest and no one is around to hear it, does it make a sound?" I have a similar question that shapes the way I think about software testing: If a test is performed but no one takes action on the results, should I have performed it? I think not.

If the answer to "Who cares?" is "No one," don't perform that test. If you're not going to take action on the results of your testing in the coming hours, days, or weeks, don't perform that test. The world around you will change in the meantime, and the old results will not be as valuable.

One of the 12 Agile Principles is simplicity, or maximizing the work not done. Testing on an agile team provides information to help decide what work should be picked up in the coming iteration(s). But without meaningful collaboration or feedback, testing is a pile of work for no reason. Work is not meant to produce waste. Save your time and your sanity by thoughtfully analyzing what should not be done, and coming to an agreement with your team about it.

My team gets scared about the quality of our product and skeptical about how I'm using my time when I describe what I'm not testing, or which automated tests I'm not going to run. "But isn't testing your job?" says the look on their faces. "But then what are you going to do?" is what they manage to say. Rather than capitulating for appearances, to just "look busy," I take this as a challenge to make my exploratory testing and other work I'm doing for the team more visible.

Risk-based testing

In her TestBash Home talk, Nishi Grover Garg asked us to think about estimating impact and likelihood (with possible home intruders as an example). I'd have trouble pinning down our no-estimates team on concrete numbers for undesirable software behavior.

Slide from Nishi's TestBash Home talk

But it does reflect the conversations Fiona Charles's test strategy workshop encouraged me to spark on my team. We do talk about "Yes, this would be a problem, but customers can use this work-around." Or "Yes, we could dive in and investigate whether that could ever happen, but is that more important than picking up the next story?" Being able to identify risks and have thoughtful conversations about their threat to stakeholders allows us to make informed decisions about how we should be spending our time. In testing, we don't always want the most information, we want to discover the best information about the product as efficiently as we can.
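If you did want to put rough numbers behind those conversations, the shape of the exercise looks something like this sketch (the risks and scores are entirely made up):

```python
# Toy impact-x-likelihood scoring, 1 = low and 3 = high. The numbers only exist
# to order the conversation; the talking is the valuable part.
risks = [
    {"name": "payment fails for returning customers", "impact": 3, "likelihood": 2},
    {"name": "scroll bar misbehaves in a browser we don't demo", "impact": 1, "likelihood": 3},
    {"name": "search shows stale results right after a deploy", "impact": 2, "likelihood": 2},
]

for risk in sorted(risks, key=lambda r: r["impact"] * r["likelihood"], reverse=True):
    print(f'{risk["impact"] * risk["likelihood"]:>2}  {risk["name"]}')
```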

Examples from my current project

Cross-browser testing

We were preparing our web application for a big marketing presentation. The presenter had Firefox as the default browser on their PC. We had a script of the actions they'd perform on stage, and which pages the audience would see. I happened to find bugs on pages we weren't showing, or in the way the scroll bars behaved in Chrome rather than Firefox on my Mac.

I did not add these issues as bugs in our tracking system, or dig into them further. I knew that they did not pose a risk for the presentation, and a new design would be coming along before customers would potentially use those pages in Chrome on a Mac.

The pipeline

We have a pipeline. It runs the tests we've automated at the API and the browser levels against the build in our test environment. I hoped it would inspire the team to think about what the next step could be: getting the tests to run before merging into our main line, setting up an environment where we're not dependent on the (shared) test environment, looking at the results to see where our application or tests need to change.

But we don't look at the results. We don't have alerts, we don't open the page during standup, we don't use them as a reference when we're debugging, we don't have a habit of looking at the results. If we do happen to look at the results, we don't take action on them. Building the stability of our feedback loop is not seen as being as high a priority as building new features.

We don't need to run this pipeline. It's using up AWS resources. Looking at the long line of red X's on the results page only provides alert fatigue. We would be better served by not running these tests.

Minimum viable deadline

We promised to deliver a feature to a dependent team by a sadline. (A sadline is a deadline without consequences.) In the week before the sadline, three stories were left. On the first story, I found a mistake the developer declared "superficial" when he was lamenting our lack of deep testing. He decided to review the automated tests I'd written for the second story. He found a couple of use-cases that would require a very particular set of circumstances to occur. I wanted to encourage the behavior of reviewing the tests and thinking about what they're doing more deeply, so I spent the last hour and a half before a holiday weekend automating these two cases.

I'd drafted some basic automated tests for the third story, but the last feature went relatively unexplored. I should have used my scant time to test the third story more thoroughly instead. The complicated tests for the second story could have waited until next week. While we would be curious about the results, it would not have stopped our delivery of the feature. I should not have written them.

Far Side cartoon


You may be scared to say no to testing things that don't matter, where performing the tests will not reveal any risks or cause any follow-up actions to take place. It may be tempting to spend a bunch of time testing all the things you can think of, and only reporting on the tests that yield meaningful results.

But life is not about keeping busy. Make your time at work meaningful by executing meaningful work and declining to do things that aren't important right now.

TestBash Home (Spring 2020)

Typing to the people you usually see in person can have the same energy as tweeting at them from across a big room.

TestBash Home, put on by the Ministry of Testing, was a 24-hour around-the-world tour of talks, panels, and coaching from the best of the best. I purposely and prudently skipped a big chunk of the schedule to get a good night's sleep and properly absorb what I could attend.

I was definitely overloaded with places to chat (Slack, the video streaming app Crowdcast, Twitter, or the app built for the occasion) simultaneously with the video stream, the same way I would be at an in-person conference. I got that same "I want to hang out more but I'm so tired" feeling I get in real life. Though as Gwen correctly noted, this is real life now.


Cost-Benefit of Automated Tests

João Proença and Michaela Greiler weighed the costs and benefits of continually updating and running automated tests. Coming from two different angles (cognitive biases vs. Ph.D.-level number-crunching), they shared a similar quadrant model to make better decisions about your automation.

João's model on my actual TV screen

Michaela's model

João took a stronger "if it's not providing value, do something about it" approach, whether that's editing the test or deleting it altogether. He asked us to consider: If you were asked to write the same test from scratch today, would you do it the same way? When you need fast feedback, what's the opportunity cost of fixing, or even just running, a particular test? João reminded us to ask "what is being tested?" and decide if that still matters before jumping in to fix a failing test.

Michaela brought a perspective from a much larger software company (Microsoft) than I've worked at before. Her approach left open another option for the fate of tests with questionable value: only run them at certain stages. Only run the tests now where the cost of running them at a later stage is too high. Running tests at the wrong stage can increase false alarms and diagnosis time. Running tests that exercise unchanged code should be avoided.

Michaela's stages
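Here's my own sketch (not Michaela's code) of what stage selection can look like with pytest markers; the stage names are made up, and the markers would need to be registered in pytest.ini so pytest doesn't warn about unknown marks:

```python
import pytest


@pytest.mark.commit
def test_price_rounding():
    # Cheap, high-signal check: run on every commit.
    assert round(19.999, 2) == 20.0


@pytest.mark.nightly
def test_full_checkout_journey():
    # Imagine a slow, browser-level journey here: too expensive per commit,
    # so the pipeline only selects it in a later stage.
    ...
```

The commit stage of the pipeline would then run pytest -m commit, and a later stage would run pytest -m nightly.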


Learning & Teaching

Veerle Verhagen's 99-second talk got me rethinking some learning experiences I've had.

It boiled down to this: learning is easy and teaching is hard. Try everything you've had trouble learning again, but with a better teacher this time. Whoa.


Evil User Stories

Every security talk I've heard tells you to look at the OWASP Top 10 and make a threat model. Anne Oikarinen told us what to do next: make an evil user story. If I were a person who made a mistake, shouldn't have access at all, or received more access to the system than anticipated, what would happen? How could we mitigate those risks?

If your team doesn't know where to start with web security, Anne suggested adding static code analysis tools and exploratory testing to your practice. If you've got outside penetration testers exploring your software, add logging around the issues they trigger so you know better what's happening next time.

I use the big list of naughty strings to test for security vulnerabilities I don't always completely understand. Anne also recommended Hacker 101, a free class about web security.
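For the curious, my usage looks roughly like this sketch (the target endpoint and field name are hypothetical, and I'm assuming the list's JSON file lives at the path below):

```python
import requests

# The JSON version of the big list of naughty strings, fetched from the repo.
BLNS_URL = (
    "https://raw.githubusercontent.com/minimaxir/"
    "big-list-of-naughty-strings/master/blns.json"
)


def find_suspicious_inputs(target_url):
    """Post each naughty string to a (hypothetical) endpoint and collect the
    ones that provoke a server error. A 500 isn't proof of a vulnerability,
    but it is an input worth exploring further."""
    suspicious = []
    for value in requests.get(BLNS_URL).json():
        response = requests.post(target_url, json={"name": value})
        if response.status_code >= 500:
            suspicious.append(value)
    return suspicious


if __name__ == "__main__":
    for value in find_suspicious_inputs("https://test.example.com/api/profiles"):
        print(repr(value))
```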


Live Coaching

James Lyndsay's coaching session reminded me how valuable gathering information and stepping back to ask "what will I learn by performing this test?" are when you're stuck. I've got to go back to my notes from a workshop I took with him a couple years ago to reinforce some of these lessons for myself.


Continuous Delivery Survival Guide

Amy Phillips might be the first person I met where neither of us could remember if we'd met in person before, or if we just knew each other from Twitter. Her talk about surviving continuous delivery from 2017 lives on as an essential onboarding guide for testers today.

Amy wants you to have enough context about your new team and their values before you jump into ideas about what you could change. People probably won't come to you to hand things over for the testing phase, so you have to figure out for yourself where you fit into their process. Amy recommended what I think of as "digital archaeology": reading through all the team artifacts to get a sense of their culture. Even if you don't get write access, looking at the customer support tickets, team backlog, and git commits can show you how the team actually works, rather than how they tell you they should work.

Amy really made me laugh when she described learning about the different development environments. Ask what the developers use. Ask what sales uses. Ask how many environments there are, how they're different, and what people's expectations are around them.

You can ask questions without knowing what a good answer is.

Even without knowing how to implement something, you can ask questions about it to trigger developers' wheels to start turning. Looking at the structure of the code and what's been changed may give you ideas about what hasn't been covered.


Burnout

Maryam Umar spoke about being burned out from juggling too many priorities, combined with unmedicated anxiety. Maryam called out how difficult it can be to start from scratch and build a support network when you relocate.

Seemed appropriate to watch this one from the garden

I worried a lot about building a support network before I relocated, but not much since. I'm so glad I've been able to recognize the "this is too much" feeling for myself. Saying no to big but overwhelming opportunities has left space in my life for things I know I want, and greener pastures I couldn't have imagined.


Leadership

This leadership panel made me want to have Nicola Sedgwick, Alessandra Moreira, and Shey Compton in my corner as I transition to a manager position. Obviously (to me at this point in my career and after various other trainings I've participated in) leaders do not have to be people managers, and vice versa. People need to give you permission to lead them. Leaders have a vision they can communicate up and down the chain of command.

The best way to be a leader is to lead by example. Sponsor other people at your organization who you can see becoming better leaders. Having the technical chops will allow people to believe in your value. It's easy for testers to underestimate our ability to influence behavior, but bug advocacy is a lot about that.

Nicola's framework for sharing feedback is something I'm definitely going to try in a 1-on-1 meeting this week: share an observation, recount the accompanying behavior, and describe the impact.


Morale

Jenny Bramble spoke about bad and good metrics, specifically that morale was the only good one. Morale encompasses psychological safety, emotional health, contentment, pride, delight, and core values. It's hard to measure, but that also means it's hard to game.

If your team isn't one where people can have negative emotions, disagree, or talk about mistakes or risks, you're doing it wrong. Genuinely asking people how it's going and taking action on the results is the best thing you can do to improve morale.

Possible survey questions to help measure morale

I look forward to Jenny's proposed next talk about the oral history of bugs on teams.


Titles

I'd seen this talk from Martin Hynie on the Ministry of Testing Dojo, and worked with Martin in the past. So I was expecting to be reminded that testing isn't a straightforward observe -> evaluate -> report operation. I'd seen him take on challenges outside the perceived role of tester, and I'd done so myself. I knew that it's easier to create a good artifact if you start by creating an imperfect one and ask people to correct it, rather than starting from scratch.

My biggest takeaway was in the Q&A: Someone's association and past experiences of people with my job title are more meaningful than the job title itself in setting up my working relationship with them.


Optimism

My favorite 99-second talk from the second set was Jen Kitson's about optimism. While we discover and often have to report a lot of bad news as testers, optimism is essential to testing. We notice things, report bugs, and push for fixes because we can imagine a world that can be better.


I'm so glad I got to attend TestBash Home. I would listen to Vernon Richards talk about sports balls I don't understand. Gwen Diagram gives me life and energy in a way that I cannot fully explain. There were people I got to see or chat with that I haven't encountered for months, or years, and it felt good. It felt like home.

Errors You Might Encounter While Editing an Open API Specification

One of the tasks for my team last week was updating an existing GET API call in our specification with some new fields. The Open API Specification, formerly known as Swagger, allows you to provide the details for building an API in a compact, informative way. When you've got the authentication set up correctly, you can use the examples to actually call the API right from the spec!

My team builds with a framework that has the power to auto-generate API specs in this format. We've chosen to write them ourselves rather than have them auto-generated so we can be more specific in what kinds of errors and error messaging people will encounter for different calls. For example, a 404 Not Found might make sense on a GET call for a specific resource, but not for a search call.
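For example, here's a hand-written fragment of that distinction, written as Python dicts mirroring the YAML (the paths and descriptions are hypothetical): the lookup call documents a 404, the search call just returns an empty result.

```python
# The get-by-id call can legitimately 404; the search call should not.
paths = {
    "/pets/{petId}": {
        "get": {
            "responses": {
                "200": {"description": "The requested pet"},
                "404": {"description": "No pet exists with this id"},
            }
        }
    },
    "/pets": {
        "get": {
            "responses": {
                "200": {"description": "Matching pets; an empty list when nothing matches"},
            }
        }
    },
}
```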

When you open http://editor.swagger.io/, you should see a two-pane view of the editor on the left and the rendering on the right. If you've opened this URL before, your browser session will remember and display your most recent changes. If it's the first time you've opened it, the example specification of the Swagger Petstore should appear like this:

Petstore example spec

I hope that in describing the errors I encountered, you can keep an eye on them as you're editing rather than having to go back through the specification at the end to figure out what went wrong.

Errors I encountered

Red errors in the box at the top and next to line numbers in editor

Indentation errors and using reserved characters (I found that at least square brackets, dashes, and colons fall into this category) in unexpected ways will likely give you an error in a red box between the navigation and the title of the spec.

If you're lucky, the error corresponds to the line it mentions, you find a red X on that line, and you'll be able to figure out what went wrong there.

If you're less lucky, the line mentioned in the red box will refer to the beginning of the next code block that's unparsable because of the syntax error, or the first place where the reserved character you've used incorrectly is actually used correctly.

The hardest part about these errors is that you may not notice them. If you're scrolled down the page far enough, you won't see the error box or the red X as you're creating the error.

Spinning without loading

If you're getting a spinner where a part of the specification should be loading, you've got an issue with the reference to a schema. Schemas allow you to chunk out and reuse part of the spec, with a reference to them in another place.

I kept getting the spinner when I referred to a schema that didn't exist, either because I'd updated the name (but not the reference to it) or because I'd screwed up whether it was singular or plural. Fixing the error isn't always enough to make this particular error disappear. Reloading the page will make it re-evaluate what you've got.

😱 could not render this component, see the console.

What a fun and exciting mistake you've made in the specification, to cause this very comforting and reassuring error message.

Like the endless spinning error, this one means something very specific: you've designated something as an array, but you haven't explained what kinds of items appear in the array. Adding a description or reference in the items section should do the trick.
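In other words, written as Python dicts mirroring the YAML (the schema name is hypothetical):

```python
# Triggers the "could not render this component" message: an array with no items.
broken = {"type": "array"}

# Renders once items says what the array contains.
fixed = {
    "type": "array",
    "items": {"$ref": "#/components/schemas/Pet"},
}
```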

Editing without the Open API editor

It's possible the Open API web editor was not the best tool for this job.

The Visual Studio Code Open API plugin did make the red errors obvious enough that you could see them from anywhere in the document. It also gave me the collapsed version of the longer spec in alphabetical order. This allowed me to navigate around without remembering the line numbers of where I dumped the schema separately from the overall spec. Unfortunately the extension didn't catch when I referred to a schema that didn't exist, but I expect seeing the list of schemas on the side would help discover this mistake. The extension also didn't notice if I hadn't defined the items in an array.

There's also an Open API spec tool for offline use, but the instructions went beyond the interest I had for this blog post. Try it out yourself, and maybe I will the next time I've got to edit these specs.

Remembering TestBash Brighton 2018

TestBash Brighton was one of 10,000 things I had to do in the weeks just before I left my whole life (family, friends, job, apartment, belongings) and moved across the ocean into the unknown. It was the first place I'd gotten to share that big news in person, with people who would become larger parts of my life once I moved. Visiting the city where I'd studied during university and first thought "I could leave the United States" brought things full circle for me.

I paired on presenting a brand-new workshop and talk, with two different people. This would have been a lot, even without all the uncertainties and distractions swirling in my life those days. After hustling to adjust the schedule for our morning workshop and present it, I distinctly remember choosing to skip the afternoon one I'd planned on attending in order to fill out immigration and relocation paperwork. I'm shocked to find I was able to focus enough on some other people's talks to take coherent notes about them.

Royal Pavilion in downtown Brighton


Anusha Nirmalananthan's talk about sharing a chronic illness sticks with me today. One of the things I love to jump into is troubleshooting. I hear about a problem, and I'm already thinking of ways to solve it, and asking about what you've tried already. Anusha reminded us that listening and not saying anything can be more helpful and powerful than all the patronizing "Have you tried...?" questions in the world.


Emily Weber spoke about communities of practice, which have always been billed as "guilds" in places where I've worked. Emily encouraged us to connect with people around our organizations in our discipline in a supportive, voluntary group without a hierarchy or an end date. While the occasional guild meeting I've attended has turned into a groan-fest, dedicating time and energy to fostering change (to code, to job descriptions, to your team's way of working, etc.) gives me that "I did something today" feeling. I'm grateful to be able to make time to learn with my colleagues, and build a support network for when I need advice from outside the bubble of my team.


I loved Rosie Hamilton's talk about logic in testing because it drove me back to the basics. How do we decide what is true? How do I describe my thought process? When a lack of relevant cases prevents effective inductive reasoning (determining a heuristic), we have to move to abductive reasoning (determining the likeliest explanation from the available information). Realizing when you're doing this and what other information might be available to you elevates your testing.


Looking back at my notes from Aaron Hodder's talk on structured exploratory testing makes me realize how much freedom I can have in my testing when working with an inattentive team. His suggestions about easier reporting, fewer rabbit holes, and a predictable timetable suggest that someone is really breathing down your neck about the status, progress, and depth of your work. The biggest advantage I've had in sharing my testing charters with my team is that I find out which ones aren't valuable before I spend time executing them. Actively choosing not to test something when we don't care about the outcome or the risk it poses is very effective work.


Alan Page spoke about the modern testing principles he'd been shaping on his podcast for a while. My notes boil it down to: testers should do less testing and more coaching. This has certainly served me on teams producing too much for me to personally go as deep as I'd like in testing, but it also pays off when I'm out of the office or unavailable at the office. Working on a team that knows how to test means I get to look at higher-quality code, with more interesting bugs.


Ash Winter gave a talk immediately after mine, so perhaps I did not gather everything from it. But I did save these tid-bits: a pipeline is built to prove that something shouldn't go out. A pipeline can provide a massive amount of information, but without a strategy, too much data doesn't help humans make better decisions. Small things you can do to make huge improvements: deploy regularly, and learn source control.


Reflecting on the open space, the social events, and the atmosphere at TestBash Brighton 2018 makes me wish for the experience we all missed at the now-cancelled event in 2020. Jumping into the unknown seemed so doable when I knew there'd be so many people to share with and learn from on this side of the Atlantic. I don't know when I'll see you all again, but I look forward to that possibility.

Can't go to Britain without a proper tea