Reflecting on Let’s Test 2017

Good ideas come back around. As I sit here re-reading my notes from Let’s Test 2017, I remember the thrill of coming across so many new ideas there, and realize how much these three things stick out to me even now.

Valley Lodge & Spa, Magaliesburg, South Africa

  1. Causal loop diagrams

It’s possible I’d seen a causal loop diagram before, but it wasn’t until I went to Jo Perold and Barry Tandy’s “Visualize your way to better problem solving” workshop that I knew the name for them. Here’s an example of how connecting nouns with verbs via bubbles and letters really clears things up.

During the workshop, Jo and Barry talked about how drawing a diagram of your software in this way can help you discover interaction points between systems. Sharing the diagram is one way to take the invisible software you’re building into a visible space. That allows us to have a conversation about the model, discover if we’re on the same page, and take steps to improve the model and ultimately the software. The more visualizations I see of process and influence on the job, the more I realize they’re exposing Conway’s Law.

Elisabeth Hendrickson wrote in Explore It! about how causal loop diagrams can help you discover interesting areas to test. She pointed out that transitions from one state to another take time, and there’s lots to be discovered during the moments of transition about interruptions, errors, and incomplete states.

Repenning, Nelson & Sterman, John (2003). “Nobody Ever Gets Credit for Fixing Problems That Never Happened: Creating and Sustaining Process Improvement.” IEEE Engineering Management Review, 30, 64. doi:10.1109/EMR.2002.1167285.

Nelson Repenning and John Sterman used a causal loop diagram to display a human problem: not prioritizing time for improving. Read their whole article to discover terrifying news about how much time you’re wasting all the time by not stopping to improve!

2. Metaphors

Leo Hepis and Danie Roux’s “Frames at Work” workshop blew my mind. I would tell you they were talking about how metaphors and context framing shape the way we think about our work. And they did that. But the meandering path we took to get there, and how lost I felt along the way, was unprecedented.

Our table selected “Rope of Awesome” and I stand by that answer.

One of the exercises had us listening to a story about someone’s work and only writing down the metaphors. There seemed to be one every sentence, so many that I thought the speaker was throwing them in there on purpose. (See: throwing. They’re everywhere!) He talked about “pulling” and “dragging” his team along. Imagine instead if he were “pushing” or “building” his team. How would he think differently about his work? How different would his work be? Listen more closely to the words people are using to discover terrifying news about how they perceive their work!

3. Reflection and Learning

Alison Gitelson hosted a session to help the conference attendees think more about what they’d just learned. I was still reeling from the metaphors and framing session, so I wrote down these questions for myself:

  • How do I realize when I’m stuck inside my own head?
  • Does noticing where a behavior would be useful make it less irritating?
  • Was there a better discussion because active disagreement was encouraged?
  • Who in my life can help me reframe?
  • Why do I feel my effort has to achieve something?

Reflect on these questions yourself for terrifying news about how stuck you can be sometimes!

Very slight double rainbow to the right.

Originally published on Medium.

How not to name self-verifying test data

Self-verifying test data makes it easy to determine if it’s right. You don’t need to go check somewhere else. It is its own oracle. Here’s an example:

Address: My last name

Something is wrong. It’s obvious from the data itself. We don’t know exactly where the problem is or how to fix it, but we can tell there’s something funky with either Address or Last Name or both. If you feel like there must be more to it than this, read Noel Nyman’s 13-page paper to see that there isn’t.
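To make the idea concrete, here’s a minimal sketch (the record and field names are my own invention, not from Nyman’s paper): when each value names the field it belongs in, a misplaced value flags itself.

```python
# Hypothetical example: each value names the field it should live in,
# so the data is its own oracle.
record = {
    "first_name": "FIRST_NAME",
    "last_name": "LAST_NAME",
    "address": "LAST_NAME",  # wrong: the value says it holds a last name
}

def misplaced_fields(record):
    # A field is suspicious when its value names a different field.
    return [field for field, value in record.items() if value.lower() != field]

print(misplaced_fields(record))  # ['address']
```

We still can’t tell whether Address or Last Name is at fault, but the data alone tells us something is funky.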

Let’s look at a real-life example: I’m writing an automated test for an API. First, I create (POST) an item, then I read (GET) the database to see if it’s there. One of the fields in the body of the POST call is a Universally Unique Identifier, or UUID.

A UUID is a 128-bit identifier; we’re using them as unique keys in our database. Here’s an example of a version 4 UUID: 74ee94d0-d32f-4844-9def-81f0d7fea2d8. (If you’re a human that can’t read regex, you might find this useful.) I generated that using the Online UUID Generator Tool.
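If you’d rather check validity programmatically than by eye, Python’s standard uuid module can serve as the oracle. A sketch (the helper name and the example strings are mine):

```python
import uuid

def is_valid_uuid(value):
    """Return True when value is a canonically formatted UUID string."""
    try:
        # uuid.UUID raises ValueError on malformed input; comparing the
        # round-tripped string rejects non-canonical forms too.
        return str(uuid.UUID(value)) == value.lower()
    except ValueError:
        return False

print(is_valid_uuid("74ee94d0-d32f-4844-9def-81f0d7fea2d8"))  # True
print(is_valid_uuid("74e4d-d32f-4844-9def-81f0ea2d8"))        # False: too few digits
```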

I wanted to see what would happen if I tried to POST with a UUID that wasn’t valid. If I’d taken my example UUID and removed some characters to make it 74e4d-d32f-4844-9def-81f0ea2d8, it would have been invalid. My test would have behaved as I expected. But I wouldn’t have been able to tell at a glance if this was a valid UUID or not. It wouldn’t have been self-verifying.

I decided to name my UUID This is not a valid UUID. I wanted to be able to easily tell from the GET call if it succeeded, or from the error message in the POST call if it failed. It would be clear when running or debugging the test what value was being passed in, why, and to which field it belonged.

Or so I thought.

I ran the test. This was the output.

I sat staring at it for a while. The first line, where the red AssertionError starts, looks confusingly similar: the left side looks like the right, so the assert should have passed instead of failed. The message below had is not a valid UUID twice. Huh? Finally, I realized what I did, and highlighted the part you now see in blue. I gave my self-verifying test data a name too similar to the error message. Let me boil this down:

Error message I expected: UUID not valid
Error message I got: Published service UUID {{insert UUID here}} is not a valid UUID.

Unfortunately, I’d named my UUID This is not a valid UUID. so the whole invalid input error message was:

Published service UUID This is not a valid UUID. is not a valid UUID.
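The collision boils down to a couple of lines (a sketch; the message format is paraphrased from my test output):

```python
# The test data's name contains the same phrase the error message ends with.
test_uuid = "This is not a valid UUID."
error = f"Published service UUID {test_uuid} is not a valid UUID."

print(error)
# Published service UUID This is not a valid UUID. is not a valid UUID.

# The diagnostic phrase now appears twice: once as data, once as diagnosis.
print(error.count("is not a valid UUID"))  # 2
```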

Fans of 30 Rock may recognize this problem:

My self-verifying test data would have worked fine if the error message had been exactly what I expected. The test would have passed, and I may not have taken a closer look at the text of the error message. But of course, my developers had changed it to be more meaningful and give more context, the bastards. Thus, I uncovered my perfect-turned-nonsensical name. I changed the UUID in my test data to be called FAKE UUID. It may not be the perfect name, but at least the code is better.

Calling things names related to the thing they are: great!

Calling things names too similar to the thing that you’re trying to test: confusing.

Originally published on Medium.

Agile Testing Days 2018: A Reflection

I was beyond excited to attend Agile Testing Days in Potsdam, Germany for the first time a year and a half ago. Anywhere I went, I met women who I’d previously only known from the internet. It was refreshing.

Based on the pages of notes I’ve got, I can tell you that the lessons I took away from that week have seeped into the way I work every day.

“Humans should not be regression testing.”

Jez Humble kicked off the conference with a session about continuous delivery. He described the barriers of organizational culture and software architecture that can prevent you from getting to a point where you can deliver continuously. Previous places where I worked made this feel like an insurmountable feat; now when I imagine continuous delivery, I can imagine concrete steps we could take to get there.

“Let’s create a small habit everyday to trigger me to learn more.”

Lisi Hocke certainly took this idea from her talk and ran with it. She’s been pair testing remotely with people around the world and learning so much from it. I’ve taken up pairing on a smaller scale, with my colleagues or in person. There are still times when it tests my patience, but the benefits of being able to more precisely explain what I’m doing and what I expect of the software vastly outweigh that investment. All my notes from Lisi’s talk have me nodding my head, like these are the most obvious things in the world. The biggest idea I’d come across about a year beforehand: having a growth mindset rather than a fixed mindset. This explanation from Brain Pickings sticks with me.

Sanssouci Palace in Potsdam.

“I am curious why they’re doing what they do.”

Gitte Klitgaard and Andreas Schliep had an improvised conversation about good and evil. You know, like you do with your friends for fun, but on stage. It can be so hard to believe that people are acting with good intentions at heart. But remembering to have empathy for the situations people find themselves in will help you choose to be the person to repair relationships when things go awry. If you believe in people, they can be better.

“People don’t want to collaborate with you when you have twelve spreadsheets for them to go through.”

I’m sure Angie Jones had other, more profound takeaways from her talk. But this one sticks to my bones. I think of it anytime I open a spreadsheet with more than one sheet in it. I think of it when I’m deciding on a tool to use, and wondering not what’s easiest for me to set up, but what’s easiest for my fellow collaborators to use. Thank you for this gem, Angie.

Some dramatic structure in Sanssouci Park.

“Get ready to fire people to maintain the culture you want.”

Poornima Vijayashanker spoke about concrete ways to successfully onboard new employees. But I’m curious about this provocative statement. I haven’t ever worked at a place bold enough to get rid of managers whose direct reports displayed a pattern of escaping them that no one could ignore. I wonder what kind of company is bold enough to take this step.

“Only put off until tomorrow what you are willing to die having left undone.”

Kim Knup said Pablo Picasso came up with this doozie. At my first job, we used a physical board with sticky notes. If the sticky note had been moved around too many times or stuck in one place too long, it would literally fall off the board. At the time this felt like a failure. Now I see it for the blessing it was. Forgetting is a part of life, even if our digital tools would prefer us to forget that.

“Do what you say you will. Integrity is important.”

Rob Lambert spoke about behaviors of effective Agile teams. It’s resonating with me again now because it’s something I’m addressing in a talk I’m giving about how to build trust. I’m digging into authenticity, which I think goes a step beyond integrity. Doing what you say you will is being externally congruent. Authentic people are also internally congruent; the vision they have of themselves is the one they present to the world.

I forget what this is but it’s across from the museum downtown and damn the light was lovely.

“If you never get feedback, you have one year of experience ten times.”

This came out in Huib Schoots and Alex Schladebeck’s workshop on dissecting your testing and discovering the skills present in your exploratory testing. We practiced observing the skills we were using on the meta-level. It allowed me to both see and share how much a year of mob testing for an hour every day had expanded at least two things. First: my field of vision for how to dig and explore software had grown. Second: I was able to explain what I was thinking such that the other people present could understand, contribute to, and question which path we’d take next. It was life-affirming!

“When we set our own limits, we can change them.”

This came up in the context of Natalie Warnert’s talk about cross-team functionality. She chided organizations for relying on hero-worship and fire-fighting over breaking down silos and contributing to the whole. One of my conference buddies was frustrated at Agile Testing Days because they viewed so much of the content as “work therapy.” They weren’t wrong.

“As a user, I want to be locked out of the system after three incorrect password attempts.”

David Evans presented a memorable talk about how the template we stick to for writing user stories does not serve us well. This particular example made me laugh out loud. This story gets the “why” wrong. It’s about security of our system and the user’s data. Being honest about why we’re building the software would make the user story less absurd, and hopefully get us on the path to making better software too.

This doozie is in the Museum Barberini, which is worth checking out if you're in Potsdam.

“Uncertainty is more stressful than inevitable pain.”

Emily Webber spoke about team interactions and organizational change. (Shout out to all the people who’ve tweeted at me instead of this other brown-haired white lady with glasses!) I’m on my fifth team in a year at my current company. I’m tired of the change. I know how important it is to build relationships with the people you work with. I’ve expressed that knowing who’s on my team is more important than having the perfect set of people. I look forward to more stability there because I don’t envy the alternative.

“Mistakes were made.”

Liz Keogh spoke about how to deliver failure messages. Her message was essentially: don’t. Pointing out the mistake without pointing fingers is enough. Encourage good existing behavior and create more options so that failure can occur safely.

“Are we advocating for those doing a good job?”

Ash Coleman and Keith Klain had a late-night after-dinner (over-hyphenated?) bonus-keynote to talk about how culture is a mindset and what we can do to change it. They encouraged allies in the majority to stop talking, and start listening, so you can do something. If you’re uncomfortable, good. You’re learning.

Originally published on Medium.

You build it, you run it, and you fix it

During a meeting of our unit at work today, we were asked if we wanted to become a member of the elite squad of people that are on-call for our software. Our philosophy is: we built it, so we know the most about keeping it up and running. In my next meeting, somebody asked if we ever write bug reports for ourselves. Both reminded me that I wanted to use and fix up a piece of software I wrote.

$ python3
There are 15 links outside of the 200 or 300 range of http responses on your site.

After using the Screaming Frog software to scan the pages for http error response codes, I decided I could build something easier to use myself and test it against my own website. I wrote a draft that worked initially, but did some things that weren’t great Python. Thanks to my friends Davida and Becky, who reviewed and improved my code. You can see what they suggested in the older tickets in my GitHub repo.

Here’s what I have now:

# Mission: Find http codes that aren't in the 200 or 300 range for all the links on a single page

import urllib.error
import urllib.request

import requests
from bs4 import BeautifulSoup
from bs4 import SoupStrainer

MY_SITE = ""
my_site_response = requests.get(MY_SITE)
only_external_links = SoupStrainer(target="_blank")
page = str(BeautifulSoup(my_site_response.content, "html.parser", parse_only=only_external_links))

def get_url(website):
  start_link = website.find("a href")
  if start_link == -1:
    return None, 0
  start_quote = website.find('"http', start_link)
  end_quote = website.find('"', start_quote + 1)
  stripped_url = website[start_quote + 1: end_quote]
  return stripped_url, end_quote

count = 0
with open('list_of_all_links.txt', 'a') as file:
  while True:
    url, n = get_url(page)
    if url is None:
      break
    page = page[n:]
    try:
      # HTTPError (4xx/5xx responses) is a subclass of URLError,
      # so one except clause catches both bad responses and dead hosts.
      urllib.request.urlopen(url)
    except urllib.error.URLError as explanation:
      file.write(str(explanation) + " " + url + '\n')
      count += 1

print("There are " + str(count) + " links outside of the 200 or 300 range of http responses on your site.")

It looks for the external links on my website, tries to open them, and writes them to a file if the response code isn’t in the 200 or 300 range. There are things I’d like to improve. I’ve noted some here. But tonight’s scope is: run it and fix it.

I run the file on my machine. Fifteen sites I link to come back with error responses. Here’s the file it generated:

HTTP Error 403: Forbidden
HTTP Error 403: Forbidden
HTTP Error 403: Forbidden
HTTP Error 403: Forbidden
HTTP Error 403: Forbidden
HTTP Error 403: Forbidden
HTTP Error 404: Not Found
HTTP Error 404: Not Found
HTTP Error 403: Forbidden
HTTP Error 403: Forbidden
HTTP Error 403: Forbidden
HTTP Error 403: Forbidden
<urlopen error [Errno 8] nodename nor servname provided, or not known>
HTTP Error 999: Request denied
HTTP Error 403: Forbidden

I notice most of the error codes were 403 responses, so I try a few of those pages manually. Those few succeed, so I don’t bother checking the rest. A 403 status is Forbidden access, so I think it has something to do with these sites having logins. But I don’t need to log in to see the pages I’m linking to. Then I notice that some of the pages are pointing to http instead of https. I don’t know exactly what’s wrong. It’s getting late, so rather than diving in, I write two bugs: one about investigating 403s, one about updating http to https.

Next, I look at the other error codes. The 999 one I’ve seen before. It’s some weird LinkedIn thing. I don’t add a bug because it’s not interesting to fix. One site I’m not able to reach the domain of at all, so I message the owner to see if it’s still being maintained. The 404 codes are from sites that still exist where the pages have been taken down; fixable, but frustrating. They prove I spoke at these conferences. When these pages die, so does proof of my hard work. Sigh. I remove those links from my site, reload to confirm the fix made it to production, and run the script again. We’re down from 15 to 13 errors, as expected.

$ python3
There are 13 links outside of the 200 or 300 range of http responses on your site.

In looking at this code again, I’m reminded of the original motivation and my vision for myself: run it against any site, and know when links break. I added some issues that I’ll pick up another day. I’ll be ready to build again. But for now, goodnight.

Originally published on Medium.

Don’t let JIRA stop you from visualizing dependencies

My team is at the beginning of a project. We’ve got a lot of potential features. Our task yesterday was to start breaking down big dreams into specific pieces of work we can pick up.

As we started to define what we wanted to build, we came across items that had to come first: come up with a proposal before we meet to review it with our stakeholders. Other items weren’t necessarily “blocked,” but would make more sense to pick up in a sequence. As my developers watched me painstakingly search for completely forgettable JIRA story numbers so I could mark each story as “blocked” or “is blocked by,” one of them asked one of the best questions I heard all day: “Is there a way we can see this visually?”

My developer searched for JIRA solutions to this problem and came across a few that required admin access or JIRA version 8. We spent a few minutes getting lost in the text and, subsequently, the interactive credits on the About JIRA page. None of us noticed this yesterday, but the start screen for the game gave us the answer we needed: we have JIRA 7 (Roman numerals on the title page), not JIRA 8.

JIRA credits: A surprising diversion in our work day.

Without a big sheet of paper (we had post-its, but nowhere to stick them) or a whiteboard in the conference room we were crammed into, I pulled up my go-to tool for visualizations: MindMaster. I’ve got other recommendations for mind mapping software at the bottom of my article here. I’m currently stuck on MindMaster since it’s free and not web-hosted.

I added a bunch of Floating topics and connected them with Relationship arrows. We outlined the first group of stories that we’d collected into an epic. We fiddled a bit with aligning the stories that could be picked up in parallel so they appeared at the earliest point we could pick them up. We came back to refine and add a couple items as we outlined other epics. The few minutes we dedicated to creating this diagram gave us enough information to decide what order we should pick up work for the next week or two.

We may be looking at our sprint board in the coming days to review how all the work is going. But I know that no developer is going to trace all the “blocks” and “is blocked by” links in the stories. They’re going to look at this diagram to know when to pair or mob because we can’t pick more things up.

Moral of the story: Don’t let your tools constrain you.

Originally published on Medium.