Karen Johnson at CAST 2015

I was about to throw away a whole notebook full of conference notes I'd already blogged about. But because I'd already shipped these notes across an ocean and ignored them for a few years, I figured I'd give them a final read first. Luckily I found a few pages of gems.

Karen Johnson's talk at CAST (The Conference for the Association of Software Testing) in 2015 covered the breadth and depth of her experience in the software industry. I've taken on the career coaching role for some colleagues, and so much of what Karen emphasized is what I've been saying to them as they try to advocate for that promotion, discover their next step, or accept that they have done the best they could given the circumstances.

Testers should have a portfolio

How do you make your work visible? Particularly when your work is invisible when done well? Tell people about it in standups. Tell your boss at your 1-on-1. Write it down and then share a link the next time it comes up.

If you were to do a retro on yourself, what would you want it to be?

Karen advocated for giving back to the community, and challenged the gentleman who pointed out this was not strictly part of the job description. "That's just who I am." The senior roles at my current company are often differentiated not by technical prowess, but by building others up. Sharing to a big audience trips people up, but the steady radiating of information in writing, pairing with people to solve problems, and being the go-to person for questions serves to enrich and scale a company more effectively.

Where's the gap between what you're doing now and what you want to do?

This is something I'm able to concretely address within my company, when I know how the departments are structured and how the job descriptions fit into the org chart. I've made a list (or tasked my colleagues to do this for themselves) comparing their current job description to the one they're aiming for. Next to each item, list what you're already doing to fill that gap, or at least one thing you could be doing in that direction. Sometimes it's a matter of recognizing for yourself how much you already do.

Your boss may never be your mentor

These talks from Marianne Duijst and Martin Hynie dive into this deeper, but find help where you can get it.

In five years you might specialize in something that doesn't exist today

I feel a bit silly asking people in 2021 where they see themselves in the future, or even what they want for themselves. Don't feel bad if you don't have it all planned out.

How to get promoted

Be reliable, dependable, the one to be counted on, and willing to take on more. Who wouldn't want someone like that around all the time?


Karen says during the talk that she doesn't know what mark she will leave on the testing community. I certainly appreciate this bit of her wisdom.

The Flow of Two Exploratory Testing Sessions

The Exploratory Testing Peer Conference #ET19 followed the 2019 Valencia edition of the European Testing Conference. I said something that I pretty immediately forgot in the upheaval when I returned to the office, but it really stuck with Anne-Marie Charrett. I appreciate both that she remembered this at all, and continues to insist that I'm the one who had this stroke of genius when she really brought it to life. Here's the idea:

It's straightforward to follow one thread or idea through an exploratory testing session. It's not straightforward to decide which path to take, feed information from one path back into another, recognize that there are different paths, or bring others along for this journey.

The Ensemble at Work

We have a weekly ensemble testing session at my workplace. For an hour and a half, testers from different teams working in the same tech stack come together to share knowledge and build testing skills. In a recent ensemble testing session, a tester on one team brought a ticket they'd been avoiding tackling on their own. They knew they didn't know how to test the fix. But they felt like they'd talked about it enough as a team that they should have understood what to do already.

We read through the story with the group of testers. We determined that a static code analysis security scan had discovered vulnerabilities in a couple of libraries. The developers had fixed the issue by removing the libraries. It was our mission to make sure those libraries were removed.

Immediately a plan came to my mind:

  1. Map out what kinds of pages there were, assuming that different pages of the same type would be likely to load the same libraries: list view, detail view, landing page, etc.
  2. Look at one of each of those pages with the Network tab open in the developer tools.

In a quick spurt of excitement, I dumped this idea on the group without figuring out if people knew what either of these things meant. (It turns out, not everyone did.) But everyone seemed to understand that there was somewhere in the developer tools where we could tell which libraries were loaded, so we started there. Exploring in a group is not about getting everyone to follow my idea immediately (or ever), it's about making sure everyone is on board and understands what's going on.

Proving the absence of a thing is harder than proving the presence of something, so we spent a bit of time looking through the Console and Storage tabs, as well as reloading the page with the Network tab opened, to figure out what appeared where. That helped everyone remember or discover that we didn't need to reload the page if the Network tab was open before the page loaded. This sped us up for the rest of the session.

Next, we looked at a couple of similar-looking list pages. We searched for the libraries in the Network tab. They weren't there. Now that we'd seen a couple of examples, I decided it was the right time to bring up my original idea of grouping pages by type. (Going from the abstract to the concrete doesn't work for everybody, so sometimes going from the concrete to the abstract works better.) I asked "These last two pages both looked like list pages, what other kinds of pages are there? Can we list them? Should we look at a detail page?" This comment blew the mind of the tester who brought this ticket. They'd been testing this product for two years and had never organized the product this way in their brain. It may not have occurred to them that a product could be organized in different ways in their thoughts depending on the circumstance. We got as far as listing the concrete pages we'd checked, but not as far as identifying all the types in the abstract before the energy in the ensemble moved on.

We looked at a detail page. We looked at a settings page. Then one of the testers who's been looking at a lot of front-end JavaScript noticed two things: first, both the URLs we were searching for had ajax in them, so we only needed to search for one thing on each page we opened; and second, they knew that ajax is used to make changes to pages that have already loaded, so they asked "what kinds of pages change after they're loaded?" In this particular application, it was mostly forms in pop-up windows, so we concentrated our efforts there for the rest of the session.
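Had we wanted to automate the absence check afterwards, a first pass might scan each page's source for the removed libraries. This is a hypothetical sketch with invented page sources and library names; note it only catches references in the static source, and resources loaded later are exactly why we watched the Network tab instead.

```python
def find_libraries(html, library_names):
    """Return the library names that still appear in the page source."""
    return [name for name in library_names if name in html]

# Invented examples: a page that still references a removed library...
dirty = '<script src="/static/vulnerable-ajax-lib.min.js"></script>'
# ...and one that is clean.
clean = "<html><body>No scripts here</body></html>"

assert find_libraries(dirty, ["vulnerable-ajax-lib", "other-lib"]) == ["vulnerable-ajax-lib"]
assert find_libraries(clean, ["vulnerable-ajax-lib"]) == []
```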

The whole session took about an hour and a half. A tester that came in scared and confused left empowered, with information to bring to their developers, and a plan for how to execute the rest of the testing. Here's one way of looking at our exploratory testing session:

At every stage, we absorbed a lesson as a group, and used it as our new superpower to make our testing better for the next bit. There were other paths we could have pursued, but many of these weren't consciously mentioned or acknowledged during the session.

The Ensemble at the Conference

I facilitated a couple of ensemble sessions with groups at Agile Testing Days. Our first ensemble had a couple of people drop out, so it ended up being one tester, one developer, and me. We were looking at a very straightforward application from Alan Richardson where you decide whether a string is valid: exactly 7 characters, each from the set A-Z, a-z, 0-9, and *. A few different times the developer and I asked if we should look at the source code. Rather than trying to interrogate the application based on its behavior for different inputs (black box testing), we wanted to go to the source (white box testing).

But we never did. We kept trying different inputs, getting increasingly creative with order, special characters, Unicode characters, other languages as we progressed. But we never chose a different path. Even as I tried to encourage us to take notes so we wouldn't try the same things we'd already tried, we didn't.

We did manage to find a good resource for copying and pasting Unicode characters, but we didn't learn how to explore the application more efficiently, or take what we learned earlier in the session and apply it to the rest of the session.
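For reference, the rule as stated (exactly 7 characters, each from A-Z, a-z, 0-9, or *) could be reconstructed like this. It's my guess at the logic, not the application's actual source:

```python
import re

# Reconstruction of the stated validity rule, not the app's real implementation.
VALID = re.compile(r"[A-Za-z0-9*]{7}")

def is_valid(s):
    return VALID.fullmatch(s) is not None

assert is_valid("Abc123*")       # seven allowed characters
assert not is_valid("Abc12*")    # too short
assert not is_valid("Abc123!")   # disallowed character
assert not is_valid("Abc123**")  # too long
```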

The Power of Exploratory Testing

Brute force will get you somewhere. Trying enough different inputs, or different pages, and you'll gather more information about how the application works. But the power of exploratory testing comes from learning from your earlier results. It's realizing there are different ways to go, different paths to follow, jumping on one of those while it serves you, and making sure everyone else is along for the ride.

Agile Testing Days 2020

When you say no to what you don't have the energy for, you leave your time, attention, and devotion free to pursue what keeps you energized. Through a tumult of plans and dreams that only 2020 could continue to crush, the Agile Testing Days crew put on a revitalizing conference. I only caught a glimpse of the preparation that went into providing a top-notch experience for speakers and participants, and I have to commend the group on their commitment to the cause. Bravo.

The talks were recorded. I've decided to cut myself off on catching up on missed talks after two weeks. Reflecting on what wisdom I've garnered from a small subset of the offerings at Agile Testing Days will be more valuable than being exposed to every last word. In that spirit, let me give you a few takeaways in place of an exhaustive recap or analysis.

Parveen Khan: watch this if you're looking to collaborate

  • It is in doing the work that we discover what we must do.
  • Don't get stuck when learning something alone; pair.

João Proença: watch this if you need a different direction

  • Performing a health check and making a diagnosis are different skills.
  • What barriers do we set for ourselves?
  • The product is all the pieces the customers see, including the cloud infrastructure (and whatever else your team isn't responsible for).

Paul Holland & Huib Schoots: watch this if your automation isn't providing valuable information

  • The sunk cost fallacy means we're unlikely to get rid of automation code, even if it's ineffective.
  • We often fail at getting people to change.

Gitte Klitgaard: watch this if you've been hiding your true self at work

  • "I'm just going to rewind." (Possibly my favorite takeaway from this conference, I'm going to use this anytime I trip over my words.)
  • Creativity comes when inspired by others.
  • Being professional is knowing your craft, and admitting when you don't know.
  • Spend time with yourself to listen to yourself.

Alex Schladebeck & Ashley Hunsberger: no recording, but follow these ladies on Twitter for how to balance life and leadership

  • Tell people what you want. They may be able to help you.
  • Have a clear vision of what you want.
  • Write down your fear. Ask yourself: Why is this positive? How can I build the courage to do this?
  • Be explicit about what you're doing.
  • Show how you're working.
  • Build a practice of reflection.

Angie Jones: watch this if you need a push to get into automation

  • Determine your goal first. If you don't know it, you'll definitely fail.
  • When you're a leader, celebrate the small victories.
  • There's no reason to shame people for being creative and doing the best they can.
  • It's not realistic to assume master level by default.
  • Find out why developers don't participate in automation.

Rob Meaney: watch this if you don't get why observability is important

  • We learn profound lessons from painful experiences.
  • Build relationships. Influence people at the right time.
  • Pain + reflection => progress
  • It doesn't matter how much we test a thing if nobody wants to use it.

Tobias Geyer: watch this if you're struggling with an ethical dilemma

  • Could ethics be a non-functional requirement?
  • Read the codes of ethics proposed by the IEEE, ISTQB, ACM, and James Bach.
  • A Hippocratic oath for software testing: avoid harm.
  • Promote environmental sustainability.

Smita Mishra: watch this if you're interviewing users

  • Listen to be able to ask clarifying questions and dig deeper.
  • Ask users: their objective, what they found, about the impact, where they're struggling, what could make their lives easier.

Eveline Moolenaars: watch this if you're learning to coach

  • "We have a policy; it's somewhere on the internet" isn't enough.
  • "Everyone deserves a coach to make them aware of what they've forgotten." ~Brad Gilbert

Federico Toledo: watch this if you or those around you are losing steam

  • A tester with a sense of purpose is more resilient.
  • Focus on strengths more than weaknesses, turning up the good.
  • Provide visibility that you're doing something with the feedback.
  • Ask people if they're getting what they need.

Nicola Sedgwick: watch this if you're the gatekeeper for quality

  • Do not correlate your own successes with the quality of the entire system.
  • Contract-driven work is not holistic quality.
  • Report when there has been no progress.
  • Ask for a code walk-through before a build is ready.
  • Shift quality left all the way to the executive team.
  • Testers were hoarding responsibility for quality. Let it go.

Joep Schuurkes: watch this if (or while) you're stuck in an ineffective meeting

  • Ask yourself: is my work inside or outside of meetings?
  • Get the right group of people in the room before holding a meeting.
  • Express specific appreciations for things done well so people will keep doing them.
  • Meetings are synchronous collaboration with a purpose.
  • Standups should remove impediments.
  • The metaphors we choose say something about how we feel about our work.
  • Leadership is creating an environment where everyone can contribute to solving a problem. ~Jerry Weinberg

Clare Norman: watch this if you're truly stuck

  • Quality is everyone's care.
  • "Success is liking who you are, what you do, and how you do it." ~Maya Angelou
  • Your courage quota for the day might vary.
  • You don't get better by doing the same thing everyday.
  • I didn't know how I needed to be helped.
  • You can't drag someone through time.
  • We live far less in the present than we ought to.
  • How magical is it when people value the change that you're making?
  • Passion spreads when other people share in your excitement.
  • Have a cheerleader in your life.

Clare Norman channeling Maya Angelou on success

Jenny Bramble: watch this if your tests are failing on expected behavior

  • Many defects I filed are never fixed. Unfixed defects become expected behavior.
  • Automation has never found a defect. Automation tells us behavior has changed.
  • Tests should pass on expected behavior, including some defects.
  • Document your tests for others, including your future self.

James Lyndsay & Anne Colder: watch this if you feel like an impostor

  • Acknowledge when you can't help.
  • If you could take one step forward, what would you do?
  • Talk to your person, or ask a smaller group of people if they have an answer to your problem.
  • Forgiveness is one of the most powerful things you can do as a human being.
  • After you make a mistake: wallow in it (really feel it first), forgive yourself, then dance.
  • Your job title allows other people to figure out how to work with you.

Gitte Klitgaard AMA: watch this if you miss the hallway track

  • Listen to hear what's being said, not to respond.
  • We can't read minds.
  • Creating a safe space allows you to feel uncomfortable.
  • Silence is a tool.

Sophie Küster: watch this if you're ready to tell your colleagues your big secret

  • There is strength in showing vulnerability.
  • People want to help you. Let them.
  • Ask yourself: how am I treating myself?
  • Anything worth doing is worth doing poorly. Poorly done is better than not done at all.

Gitte Klitgaard & Morgan Ahlström: watch this if you want to get psychological safety in the boardroom and on the roadmap

  • Make time to address psychological safety. Put it on the same level in your team's goals as your product goals.
  • We are role models. We lead by example.
  • More people were uncomfortable giving than receiving code reviews.

Ard Kramer: watch this if you're getting burned out

  • Humans have an unrealistic belief in our own influence.
  • Performing your testing role doesn't make you popular.
  • Most people look for confirmation that the software is working. To form a logical proof, we look for evidence that the software doesn't work.
  • Ask yourself: which circumstances could I control? Did I manage expectations correctly?

I'm grateful I got to connect with people who keep me going. I'm grateful Agile Testing Days tried to make this work in person. I'm grateful I was able to drop my workshop instead of abandoning people in breakout rooms to do something they'd never done before. I'm grateful the mobbing session I helped organize and facilitate went smoothly. I'm grateful that my job allows me the time and space to be rejuvenated by it all. I'm grateful. 🙏

Finding relevant search results

In his crafting time, one of our developers decided to fine-tune our search results. He added relevancy scoring to give weights to different text fields. It was my job to determine if the right results were turning up at the top of the list of search results. So I had to ponder: what made a search result relevant?

First, I realized that feedback from our users is the best way to answer this question. Anything I could do to get this feature out into production, where we'd get real data about what people searched for, would be more valuable than brainstorming test cases for this feature. I set myself a timebox of two and a half hours, the rest of the afternoon on a day in a week filled with competing priorities. We'd agreed as a team ahead of time that I could determine the testing approach, and our product owner would decide what was or wasn't worth fixing before this feature went out.

I saved the first 45 minutes of my timebox to research what people had to say about search relevancy. Surely I was not the first person contemplating this problem. Over on the Ministry of Testing Club forum, I found a post with a promising title. It turned out past Elizabeth had written it a year ago, and nobody had answered it satisfactorily in the intervening time. 🤦‍♀️

After Duck-Duck-Go-ing some actual answers, I found a couple of resources from websites I've trusted and found fruitful in the past: A List Apart and philosophe.com. A List Apart suggested homing in on a specific intent, searching for the queries users seek most frequently, and seeing how far those items fall from the top in the search results. The philosophe guidance about testing search gave me something deeper to consider: users shouldn't have to ponder the reasoning behind the search results. That was enough for me to develop some test cases.

As I searched and adjusted the weights of various fields, plenty of normal things happened: setting the relevancy values to zero meant the result wasn't listed, multiple instances of the same word counted more than a single instance, and giving fields stronger weights caused their results to move up in the rankings. But as a bug magnet, I uncovered things that were both interesting to discover and outside the original scope of the story.

Log of zero is negative infinity

Bugs

  1. I opted to edit data that was already in our test environment rather than setting up completely new data. In doing so, I discovered a couple of description fields that weren't reindexed for search when edited through the UI.

  2. I tried to use the default values for everything to see "normal" results. One field was going to add a value to the relevancy rating, so zero seemed like it should be the default option. Unfortunately a couple of the options for the weighting feature transformed the value using the log and ln (natural log) functions, which are undefined at zero. All my search results disappeared.

  3. I looked at the data that was already in the database, and used a search term that showed up a lot already. It turned up nothing. I searched for part of the word. That turned up all the results I was expecting. I realized the search term was indexed separately because of the characters we used to break the word apart. Imagine having a bunch of sun-bleached items, but you can only find them if you search for sun or bleached, not sun-bleached.
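Bug 2 in miniature: a toy scorer along these lines shows why a zero value under a log modifier wipes out results. The field values, weights, and modifier names here are invented for illustration, not our actual configuration.

```python
import math

# Toy relevancy scoring: weight a field's value, optionally transformed.
MODIFIERS = {
    "none": lambda x: x,
    "log": math.log10,  # undefined at zero: math.log10(0) raises ValueError
    "ln": math.log,     # likewise for the natural log
}

def field_score(value, weight, modifier="none"):
    return weight * MODIFIERS[modifier](value)

print(field_score(10, 2.0))         # 20.0 with no modifier
print(field_score(10, 2.0, "log"))  # 2.0, since log10(10) == 1
try:
    field_score(0, 2.0, "log")      # the "default" zero value...
except ValueError as e:
    print(e)                        # ...hits a math domain error
```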

Bug 1 the developer agreed was a bug, and we fixed it as part of the story. Bug 2 was "working as expected" from a developer point of view, but seemed a little weird to the product owner. We meant to look into it as part of the story and decide whether we should eliminate the log functions as options entirely, but other priorities came crashing down upon us before we could. It's out on production for the handful of internal users with access to the relevancy tuning. Bug 3 we added to the backlog, and I hope someday a swarm of user complaints makes it a priority for us.

Morals

  1. Users know best.
  2. Someone else on the internet has had your problem before.
  3. Report information that matters.
  4. When the risk is low, let it go.

SoCraTes UK, August 2020

After years of envying the tweets and stories from SoCraTes conferences, this cursèd disease finally made attending one possible. I could afford the time (part of one day, instead of a whole weekend + travel) and the cost (£2.50, instead of €€€).

I'm preparing to host an open space of our own in September (TestCraftCamp, tickets are free!), so aside from the sessions, I was also interested in the infrastructure of the event. The hosts hung out in one central Zoom room (a one-month subscription cost them £10). Each attendee was appointed co-host, allowing us to move around among the many breakout rooms as we pleased. (There may have been as many breakout rooms as there were participants, named with varying levels of creativity but functionally the same.) A Notion board captured the user-generated schedule, along with links to the Mural whiteboards the organizers created for each room, rather than for each session. Chat happened in a Discord channel, but I wouldn't recommend it.

The first session I attended was a discussion about the value and hazards of leaving a job. First: decide where you want to go, what kind of thing you're looking for. You're not going to get where you want to go by running away from your current role. Run towards something. Have honest conversations with your peers, manager, mentor, coach, etc. about what change you can effect at the organization. (Things like the nature of the product, the tech stack, or the customer base are unlikely to change quickly.) If you do decide to change jobs, it will take a lot of energy to find something new and build your network at the new place. You may have to prove yourself all over again, and solve the problems you've already solved with this different group of people.

Three bullet points into my notebook, the host of the second session I attended, about hypergrowth, was having trouble taking notes while listening to and facilitating the session. I volunteered to jump in and take the notes. Here's what I made:

Notes from hypergrowth session

I was trying to capture what people said, change the background colors, and move the notes around to put similar ideas next to each other. It was hard to know in the moment whether all of that was legible or helpful for anyone else. Luckily people in the session and in the Discord channel later thanked me for my work. I'm still struggling with how to impart any note-taking skill to others, particularly when this can be seen as a gendered task.

Men: Wow, you are so good at taking notes!
Me: You would be too if you got stuck being the one doing it all the time!

But I digress, back to hypergrowth: The things that worked before COVID and at smaller orgs don't work at a bigger org today. Find your people within the smaller org - your department, your mentor, your friends - so you can feel stable even as things around you grow and change. Communicate your expectations clearly, for yourself and for your team. Keep a journal of what you've accomplished to feel more anchored in the value of your work.

After lunch, I started off in the wrong room because I didn't notice the sessions had been moved, but found my way to the one about coaching your peers. The conversation floated from a definition of terms to how to make space for mentoring. In order for people to mentor, there must be consent from the mentee. Ideally there is also some expertise in the organization around what is good, and someone (mentor or not) to teach the mentee what is good. The conversation reminded me of a talk I saw from Marianne Duijst; here's the moment she started talking about the difference between coaches, sponsors, teachers, and role models, and how they can support you in different ways.

Anytime Maaret Pyhäjärvi is running an exploratory testing session, go to it. You will be better for it. This time, we tested a simple web application. We ended up spending most of the time adding scenarios to the automated tests Maaret had already set up in Robot Framework. One of the questions it brought up for me: do you leave failed tests in your automation suite to highlight unexpected behavior? I had this problem this week: I'd updated a schema to validate against the current behavior while the story was in progress, instead of the behavior we wanted when it was done, so I could verify other fields in the output. Next time I think I'll separate them into two tests and leave them both as pending (marked as xfail in the pytest framework) so it's clear something needs to change before the software is released.
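The two-test idea might look something like this sketch; the response shape, field names, and reasons are hypothetical, not from the actual story.

```python
import pytest

def fetch_response():
    """Stand-in for the real API call."""
    return {"count": "3"}

@pytest.mark.xfail(reason="desired behavior: count should be an int before release")
def test_count_is_an_int():
    assert isinstance(fetch_response()["count"], int)

@pytest.mark.xfail(reason="current behavior: delete this once the schema changes")
def test_count_is_currently_a_string():
    assert isinstance(fetch_response()["count"], str)
```

The second test passes today, so pytest reports it as XPASS, which is a nudge to delete it once the schema is updated.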

Thank you to the SoCraTes organizers for, as I learned a coach does, holding this space for us to examine ourselves and reflect. As someone who is much more fluent in emoji than GIF, I'm grateful to have won the hearts of the participants with this "letting the cat out of the bag" GIF in the GIF challenge.

Cat aggressively jumping out of a backpack

TestBash Netherlands 2019

In reviewing my notes from the TestBash Netherlands that occurred in May of 2019, two big, related themes jump out at me:

  1. keep exploratory testing
  2. learn by sharing

Andy Brown gave a talk about human factors in highly automated systems. As flying airplanes becomes more automated, pilots know less about how to switch to manual overrides in a time of crisis. You might not have more than a few hours a year of practice for when things go down. Continuing to share stories from the past and learn from them is one way out.

For Gitte Ottosen, who gave us a tester's perspective on Agile, learning never ends. Understanding the customer journey and tying your work back to the business value are essential for making informed decisions about what to test, and which subset of those things to automate. Teaching, knowledge-sharing, coaching, and pairing can get the whole team involved in advancing quality, even if they're not all strong exploratory testers.

Jit Gosai spoke about continuous testing. Practice test-driven development, story mapping, and three amigos meetings before the code is written. Improve automation test suites, use exploratory and mob testing, and incorporate feedback from the customer. When you're exploratory testing, you're not just confirming that the software functions as expected, you're testing the goal of your whole organization. Jit found that getting everybody on the team exploratory testing caught more bugs than automated tests.

Marit van Dijk hit the nail on the head with her talk: keep exploratory testing. Maintaining a consistent state of test data across multiple teams is difficult. Rather than spend your time setting up control mechanisms for those systems, explore the systems themselves. Bugs are found not in the things we can control or know about, but in the space between systems. Pairing a developer with a tester was a lot faster than testing solo, because nobody had to go back and reproduce bugs. Spread the risk across systems by hooking them up one at a time.

Vera Gehlen-Baum and Beren Van Daele spoke about incorporating your learning into your backlog. Identify what you want to learn, and write actionable acceptance criteria for your learning. This should include sharing what you've learned individually or with the team; it doesn't have to be confined to same sprint as the work for the team. Linking personal goals to business goals will ensure that people see they're improving.

Joep Schuurkes talked us through what he was thinking as he was live-coding, which is learning and sharing at the most granular level. Separate concerns when you automate: keep what the code is supposed to accomplish separate from how it's accomplishing it. He expanded the CRUD heuristic that helps me decide what to automate, and added an extra DERR to make it CRUDDERR to ensure you can also debug, explore, run, and report on your code.
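The what/how separation Joep described might look like this in miniature. The endpoint, data, and function names are invented for illustration:

```python
# "How": this layer knows about transport and response shape. In real code it
# would make the HTTP call, e.g. requests.get(f"{BASE_URL}/orders/{order_id}").
FAKE_ORDERS = {"42": {"status": "shipped"}}  # stand-in for the API

def get_order_status(order_id):
    return FAKE_ORDERS[order_id]["status"]

# "What": the business expectation, free of any transport details.
def test_completed_order_is_shipped():
    assert get_order_status("42") == "shipped"

test_completed_order_is_shipped()
```

If the API changes how it's called, only `get_order_status` needs to change; the test still states what the software should accomplish.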

Anne Colder and Vincent Wijnen gave an experience report about their mentor-mentee relationship. Mentoring is different from teaching. Ask questions about your mentee's experience, and expect different questions from them than you'd receive from your more experienced colleagues. Mentoring stimulates reflection on both sides.

Drew Pontikis spoke about the illusion of control. Challenge your own thinking by listening to new voices. Recognize when you can effect change in a situation.

Ben Simo gave the last talk of the day about the art of scientific investigation. Design your next experiment based on your previous ones, and adapt as you go along.

There was a whole slew of 99-second talks, but the only memorable and explicable thing I wrote down from them was something Ilena said: "Understand your team doesn't function the way that you do."

The day prior to the day of talks, I helped facilitate and debrief a workshop that Joep Schuurkes developed around building an API testing framework. It was the first time we'd given it together, so that whole day was a 'learn by sharing' experience for me.

Humans Conf, June 2020

I found Humans Conf on Twitter, and found myself in a position to attend when gestures at the general state of the world moved it online. You can check out the output from the entire open space on the Notion page. It took place for a few hours on a Tuesday night. I discovered the energy it takes to listen and participate attentively in the evening affected my work for the following couple of days. I am only human!

The first session I attended was about making diversity more inclusive, and hosted by Lina Zubyte. Due to the makeup of the participants, the conversation veered towards being a woman in tech. We spoke about incorrect assumptions, how we felt about particular language (especially guys), and how much energy that drains from us all the time. I heard stories from other women that echoed situations from my experience. I recognized how telling a story from a memorably bad day helps others understand and connect. I pondered if there's a way to make people understand, without reliving everyday indignities. My takeaway from the conversation was clear: you get to decide how to spend your energy. There will be people whose minds need changing, but the energy you would expend making that happen wouldn't be worth it. Let those people who challenge the premises of the problem space fall off your radar, and invest in people where you can make a difference.

For the second session, I started off in motivating interactions, hosted by Maren Baermann. I truly enjoyed the model she introduced regarding extrinsic vs. intrinsic motivation, and tying those to our basic needs. But when the Zoom gods kicked me out of the free room, I took it as a sign to go search for a less interactive session where I could just listen. I landed in a session relating couples therapy to your team dynamics, hosted by Daria Dorda. The conversation covered some national stereotypes, how people get distracted from the real message, and how to get the necessary distance to engage with a topic. My takeaway here was: if you don't set boundaries, others will for you. Amen.

I'm grateful to Benjamin Reitzammer who hosted the open space, and all the others who made it run smoothly. Seeing just a handful of familiar faces set me at ease. I'd never seen Zoom chat used effectively, but the rush of answers to the closing questions at the end of the evening gave me that warm & fuzzy feeling like I'd actually attended a conference in person. I needed that, so thank you.

I don't think I can bring myself to join the next Humans Conf on Wednesday, 15 July. I've got a week off, and I need some time away from Zoom to be feeling more human. :)

Test Automation Day 2018

I got a free ticket to Test Automation Day in 2018, just after I'd moved to Rotterdam. I was overwhelmed by the confluence of events: Angie Jones keynoting, Ard Kramer running the show, and meeting Amy Phillips in real life. (Neither of us were sure it was the first time we'd met because we'd been following each other on Twitter for so long!)

My most shocking note from Angie's keynote is "clicker, no notes," because of course Angie had her talk down pat. In a talk that anticipated the current, urgent conversation in AI and machine learning, Angie recognized that we can't agree on what human ethics should look like. Figure out who you're advocating for, and tie the bugs back to that business value. You're not going to be able to define all the business requirements up front; expect the unexpected.

Amy Phillips spoke about how tests in a DevOps environment allow you to get fast feedback. Like Agile, this style of working is not about minimizing the pain and struggle in developing software, but rather about bringing that pain forward. DevOps allows us to become aware of problems sooner, so we can act on them sooner. Running more tests is not necessarily better. Do not accept flaky tests. Free yourself from an overreliance on end-to-end tests and tests that cover non-critical paths to get your build time down. Rather than running tests on every commit, improve your monitoring.

I've got some quotes from other talks and the panel that day:

  • "A tester is someone who believes things can be different." ~ Jerry Weinberg
  • "The team was not mature enough to determine priorities."
  • "Maintain the relationships you want to build."
  • "Are you just doing it because you can?" (regarding UI automation)

I can't say enough how welcome and in the right place I felt, being able to jump into this day on short notice.

Start With Belief

Imagine a scenario where you find a software bug. You go to another colleague. They perform the exact same reproduction steps you did. But the bug doesn't happen on their machine. What now?

Works on my machine

Your colleague may not believe you found a bug, or they may not be sure if you did. They may blame you for doing something you shouldn't have. They may insist that most users have a machine more like theirs than yours, and it doesn't matter if it doesn't work on your machine. They may think it's too much trouble to track down what's happening on your machine, and leave the burden to you to figure it out. They could have a fixed mindset, and think that you, your machine, and the software you're running never change. (Read more about fixed vs. growth mindsets in this brilliant Brain Pickings article.)

Does not work on every machine

Instead, they could have started with belief. They could commend you for uncovering something they themselves could not. They could be curious about how your machine and software are different from what they have running, and look into how many other users this is affecting. They could pair with you to come up with ideas about how to stop the issue from happening. If they have more access to the underlying systems, they could look into the code and configuration settings. They could have a growth mindset, and think that your machine, the software you're running, and most of all you, can change.

Start with belief

Now, imagine a different scenario. Imagine someone describes being mistreated by the police. They were doing something that is fully within their rights, and the cops said they weren't allowed to.

I believe you.

Start with belief. Do not think that because you've had different interactions with the police, that the police must not treat people in the way you're hearing. Do not think that your individual circumstances, particularly the color of your skin, means that you'll be fine. (Think more about how racism is fascism applied to a particular category.) Take the time to shut up, listen, and discover how you can help. Believe that things can change. It starts with belief.

Photo credit: Joe Brusky/flickr

Trying Out Open API Editors

I was editing an Open API with multiple layers of inheritance recently. I kept uncovering errors long after I created them because of the way they display in editor.swagger.io.
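For anyone who hasn't worked with a spec like this: "multiple layers of inheritance" means schemas composed with allOf, where each layer pulls in the one below it. A sketch of the shape (schema names invented for illustration, not from our actual spec) shows why an error in a base schema can surface far from where you made it:

```yaml
components:
  schemas:
    BaseItem:              # hypothetical base schema
      type: object
      properties:
        id:
          type: string
    NamedItem:             # first layer: composes BaseItem
      allOf:
        - $ref: '#/components/schemas/BaseItem'
        - type: object
          properties:
            name:
              type: string
    DetailedItem:          # second layer: a typo in BaseItem only shows up
      allOf:               # once something renders this whole chain
        - $ref: '#/components/schemas/NamedItem'
        - type: object
          properties:
            description:
              type: string
```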

This great talk from Lorna Jane Mitchell about Open APIs made me realize how many other tools I could use to edit these specs. There's a whole list at openapi.tools. During my crafting days, I resolved to try some new editors. I decided to see what it was like to edit the existing complicated spec, and write a spec we were missing for a very simple API (two GET calls).
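For a sense of scale, a "very simple API (two GET calls)" spec amounts to something like this (paths and field names invented for illustration):

```yaml
openapi: 3.0.3
info:
  title: Widget API        # hypothetical API name
  version: 1.0.0
paths:
  /widgets:
    get:                   # first GET: list everything
      summary: List all widgets
      responses:
        '200':
          description: A list of widgets
  /widgets/{widgetId}:
    get:                   # second GET: fetch one item by id
      summary: Get a single widget
      parameters:
        - name: widgetId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: A single widget
```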

TL;DR

I'm going to use the Visual Studio Code Open API plugin to write and navigate through our specs. We're rendering our specs with editor.swagger.io, so I'm going to keep running them through there to confirm they appear as expected for our stakeholders.


Editors

Stoplight Studio

Lorna mentioned this one in her talk. It's got a web editor, but I downloaded the Mac client. It's a GUI, so rather than writing YAML, it's more like filling out a form. It turns out I would rather write YAML than fill out a form! Good to know.

The way it organized hierarchy did not suit my mental model for what I was trying to accomplish. I thought about writing a new spec for two GET calls more in a hierarchy (things both API calls shared, like security > endpoint for the call > parameters). Stoplight Studio grouped adding any new thing into one menu: API, endpoint, parameter, whatever.

Stoplight Studio has a git integration feature, where you can switch branches within the application. I'd connected it to my remote repo, so it couldn't see the local branch I'd created from my Terminal. When I wanted to save what I had so far (no auto-save??), I found the save option buried deep inside a menu, with no keyboard shortcut. I wasn't interested in changing my workflow to accommodate the tool.

Cmd + S it's not that hard

In looking at my existing, complicated API with inheritance, I didn't find a way to see everything in the same view. You had to click through to see inherited sections. Viewing descriptions required a mouse hover.

The final straw for Stoplight Studio was the error panel. Although thoughtfully displayed to be informative without alarming, the line numbers didn't reflect where the issue was.

Overall: The GUI was getting in my way rather than helping me. Pass.

Senya for IntelliJ and Kaizen for Eclipse

I couldn't get either of these installed on my Mac! The instructions were essentially "Install from the marketplace, restart the IDE, and it should just work!" My machine enjoys sending me on fruitless adventures in debugging, but I chose to give up on these tools rather than trying to figure it out.

SwaggerHub

SwaggerHub came recommended by my colleague Reinout, so I signed up for a free trial to give it a shot. I'm still not completely sure if this is within the security guidelines our company has for creating accounts and sharing data with third parties, so I deleted the data I'd put in at the end of my session.

It's a lot like editor.swagger.io, but with more bells and whistles. It does have a separate error panel, which seems like it would be what I want for my big complicated API. But when I was writing my new API in it, the error panel would pop open to remind me about missing fields whenever it decided to auto-save. Not cool.

Two and a half hours after confirming my email address to use the product, an account manager reached out to me to find out if I had time for a quick phone call. No, I did not, I was in the middle of trying to ignore your pop-ups while writing my damn API spec! Their Twitter person didn't understand my complaint about the errors that appeared in the panel. The one I tweeted about was trying to tell me that request parameters get labelled individually with required: true or required: false, while you can throw all the required response parameters in a list.
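For anyone hitting the same asymmetry: in OpenAPI 3, each request parameter carries its own required boolean, while a response (or any) schema lists all its required properties in a single array. A minimal sketch, with invented field names:

```yaml
paths:
  /search:
    get:
      parameters:
        - name: query        # request parameters: required is a
          in: query          # boolean on each individual parameter
          required: true
          schema:
            type: string
        - name: limit
          in: query
          required: false
          schema:
            type: integer
      responses:
        '200':
          description: Search results
          content:
            application/json:
              schema:
                type: object
                required:    # response schema: required is one list
                  - results  # of property names
                  - total
                properties:
                  results:
                    type: array
                    items:
                      type: string
                  total:
                    type: integer
```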

Memes will not save you

Overall: If I had a license, I'd use SwaggerHub to look at existing APIs, but not to write new ones. I didn't look into using it to run a mock server, but I bet that'd be useful.

OAIE Sketch

Have you ever used sequencediagram.org to create a UML diagram and thought "What if this looked more 90's?" Well, you're in luck! OAIE Sketch will make you nostalgic for Windows 95. After cloning the GitHub repo and opening the .html file locally (whatever works, I guess?), you'll see something like this.

😍😍😍

I liked the way it was built for you to either update the YAML or the visualization, then decide when to push changes to the other side. But I couldn't figure out how to paste in a spec I had somewhere else and get the visualization to update.

Overall: Might be a good way to think about shared outputs if it updated?

Visual Studio Code Open API plugin

This put me back where I started: the plugin for Visual Studio Code. It's got syntax highlighting. (It made me realize that my export from SwaggerHub added style and explode fields to my request parameters! I guess I'll save figuring out if I should keep those for another day.) It's got a schema, so I can navigate around the spec based on how the things are connected without having to remember line numbers. It's got error messaging that is clear enough without being invasive: red squigglies appear on affected lines and red triangles appear next to the line number on the left. They're small enough to ignore if you're in the middle of writing, but easy enough to find and notice without going on for too long. I'm sticking with this.
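For the record, those extra fields look something like this (parameter name invented). If I'm reading the OpenAPI 3 spec right, style: form and explode: true are already the defaults for query parameters, so an export that writes them out is just making the defaults explicit:

```yaml
parameters:
  - name: tags             # hypothetical query parameter
    in: query
    required: false
    style: form            # default style for query parameters
    explode: true          # default: ?tags=a&tags=b rather than ?tags=a,b
    schema:
      type: array
      items:
        type: string
```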