Trying Out Open API Editors

I was editing an Open API spec with multiple layers of inheritance recently. I kept uncovering errors long after I created them because of the way they display in editor.swagger.io.

This great talk from Lorna Jane Mitchell about Open APIs made me realize how many other tools I could use to edit these specs. There's a whole list at openapi.tools. During my crafting days, I resolved to try some new editors. I decided to see what it was like to edit the existing complicated spec, and write a spec we were missing for a very simple API (two GET calls).

TL;DR

I'm going to use the Visual Studio Code Open API plugin to write and navigate through our specs. We're rendering our specs with editor.swagger.io, so I'm going to keep running them through there to confirm they appear as expected for our stakeholders.


Editors

Stoplight Studio

Lorna mentioned this one in her talk. It's got a web editor, but I downloaded the Mac client. It's a GUI, so rather than writing YAML, it's more like filling out a form. It turns out I would rather write YAML than fill out a form! Good to know.

The way it organized hierarchy did not suit my mental model for what I was trying to accomplish. I thought about writing a new spec for two GET calls more in a hierarchy (things both API calls shared, like security > endpoint for the call > parameters). Stoplight Studio grouped adding any new thing into one menu: API, endpoint, parameter, whatever.

Stoplight Studio has a git integration feature, where you can switch branches within the application. I'd connected it to my remote repo, so it couldn't see the local branch I'd created from my Terminal. When I wanted to save what I had so far (no auto-save??), I found the save option buried deep inside a menu, with no keyboard shortcut. I wasn't interested in changing my workflow to accommodate the tool.

Cmd + S it's not that hard

In looking at my existing, complicated API with inheritance, I didn't find a way to see everything in the same view. You had to click through to see inherited sections. Viewing descriptions required a mouse hover.

The final straw for Stoplight Studio was the error panel. Although it was thoughtfully displayed to be informative without being alarming, the line numbers didn't reflect where the issue was.

Overall: The GUI was getting in my way rather than helping me. Pass.

Senya for IntelliJ and Kaizen for Eclipse

I couldn't get either of these installed on my Mac! The instructions were essentially "Install from the marketplace, restart the IDE, and it should just work!" My machine enjoys sending me on fruitless adventures in debugging, but I chose to give up on these tools rather than trying to figure it out.

SwaggerHub

SwaggerHub came recommended by my colleague Reinout, so I signed up for a free trial to give it a shot. I'm still not completely sure if this is within the security guidelines our company has for creating accounts and sharing data with third parties, so I deleted the data I'd put in at the end of my session.

It's a lot like editor.swagger.io, but with more bells and whistles. It does have a separate error panel, which seems like it would be what I want for my big complicated API. But when I was writing my new API in it, the error panel would pop open to remind me about missing fields whenever it decided to auto-save. Not cool.

Two and a half hours after confirming my email address to use the product, an account manager reached out to me to find out if I had time for a quick phone call. No, I did not, I was in the middle of trying to ignore your pop-ups while writing my damn API spec! Their Twitter person didn't understand my complaint about the errors that appeared in the panel. The one I tweeted about was trying to tell me that request parameters get labelled individually with required: true or required: false, while you can throw all the required response parameters in a list.

Memes will not save you
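
For reference, the distinction that error was describing looks roughly like this in a spec (with a hypothetical thingId parameter and Thing schema, not our real ones):

parameters:
  - name: thingId
    in: query
    required: true       # each request parameter is flagged on its own

components:
  schemas:
    Thing:
      type: object
      required:          # schema properties share one required list
        - id
        - name
      properties:
        id:
          type: string
        name:
          type: string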

Overall: If I had a license, I'd use SwaggerHub to look at existing APIs, but not to write new ones. I didn't look into using it to run a mock server, but I bet that'd be useful.

OAIE Sketch

Have you ever used sequencediagram.org to create a UML diagram and thought "What if this looked more 90's?" Well, you're in luck! OAIE Sketch will make you nostalgic for Windows 95. After cloning the GitHub repo and opening the .html file locally (whatever works, I guess?), you'll see something like this.

I liked the way it was built for you to either update the YAML or the visualization, then decide when to push changes to the other side. But I couldn't figure out how to paste in a spec I had somewhere else and get the visualization to update.

Overall: Might be a good way to think about shared outputs if it updated?

Visual Studio Code Open API plugin

This put me back where I started: the plugin for Visual Studio Code. It's got syntax highlighting. (It made me realize that my export from SwaggerHub added style and explode fields to my request parameters! I guess I'll save figuring out if I should keep those for another day.) It's got a schema, so I can navigate around the spec based on how the things are connected without having to remember line numbers. It's got error messaging that is clear enough without being invasive: red squigglies appear on affected lines and red triangles appear next to the line number on the left. They're small enough to ignore if you're in the middle of writing, but easy enough to find and notice without going on for too long. I'm sticking with this.
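
For the curious, the exported parameters looked something like this (a hypothetical limit query parameter; style: form and explode: true are the OpenAPI defaults for query parameters, which the SwaggerHub export spells out explicitly):

- name: limit
  in: query
  required: false
  style: form
  explode: true
  schema:
    type: integer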

If a test falls in a forest...

The saying goes "If a tree falls in a forest and no one is around to hear it, does it make a sound?" I have a similar question that shapes the way I think about software testing: If a test is performed but no one takes action on the results, should I have performed it? I think not.

If the answer to "Who cares?" is "No one," don't perform that test. If you're not going to take action on the results of your testing in the coming hours, days, or weeks, don't perform that test. The world around you will change in the meantime, and the old results will not be as valuable.

One of the 12 Agile Principles is simplicity, or maximizing the work not done. Testing on an agile team provides information to help decide what work should be picked up in the coming iteration(s). But without meaningful collaboration or feedback, testing is a pile of work for no reason. Work is not meant to produce waste. Save your time and your sanity by thoughtfully analyzing what should not be done, and coming to an agreement with your team about it.

My team gets scared about the quality of our product and skeptical about how I'm using my time when I describe what I'm not testing, or which automated tests I'm not going to run. "But isn't testing your job?" says the look on their faces. "But then what are you going to do?" is what they manage to say. Rather than capitulating for appearances, to just "look busy," I take this as a challenge to make my exploratory testing and other work I'm doing for the team more visible.

Risk-based testing

In her TestBash Home talk, Nishi Grover Garg asked us to think about estimating impact and likelihood (with possible home intruders as an example). I'd have trouble pinning down our no-estimates team on concrete numbers for undesirable software behavior.

Slide from Nishi's TestBash Home talk

But it does reflect the conversations Fiona Charles's test strategy workshop encouraged me to spark on my team. We do talk about "Yes, this would be a problem, but customers can use this work-around." Or "Yes, we could dive in and investigate whether that could ever happen, but is that more important than picking up the next story?" Being able to identify risks and have thoughtful conversations about their threat to stakeholders allows us to make informed decisions about how we should be spending our time. In testing, we don't always want the most information, we want to discover the best information about the product as efficiently as we can.

Examples from my current project

Cross-browser testing

We were preparing our web application for a big marketing presentation. The presenter had Firefox as the default browser on their PC. We had a script of the actions they'd perform on stage, and which pages the audience would see. I happened to find bugs on pages we weren't showing, or in the way the scroll bars behaved in Chrome rather than Firefox on my Mac.

I did not add these issues as bugs in our tracking system, or dig into them further. I knew that they did not pose a risk for the presentation, and a new design would be coming along before customers would potentially use those pages in Chrome on a Mac.

The pipeline

We have a pipeline. It runs the tests we've automated at the API and the browser levels against the build in our test environment. I hoped it would inspire the team to think about what the next step could be: getting the tests to run before merging into our main line, setting up an environment where we're not dependent on the (shared) test environment, looking at the results to see where our application or tests need to change.

But we don't look at the results. We don't have alerts, we don't open the page during standup, we don't use them as a reference when we're debugging, we don't have a habit of looking at the results. If we do happen to look at the results, we don't take action on them. Building the stability of our feedback loop isn't seen as being as high-priority as building new features.

We don't need to run this pipeline. It's using up AWS resources. Looking at the long line of red X's on the results page only provides alert fatigue. We would be better served by not running these tests.

Minimum viable deadline

We promised to deliver a feature to a dependent team by a sadline. (A sadline is a deadline without consequences.) In the week before the sadline, three stories were left. On the first story, I found a mistake the developer declared "superficial" when he was lamenting our lack of deep testing. He decided to review the automated tests I'd written for the second story. He found a couple of use-cases that would require a very particular set of circumstances to occur. I wanted to encourage the behavior of reviewing the tests and thinking about what they're doing more deeply, so I spent the last hour and a half before a holiday weekend automating these two cases.

I'd drafted some basic automated tests for the third story, but the last feature went relatively unexplored. I should have used my scant time to test the third story more thoroughly instead. The complicated tests for the second story could have waited until next week. While we would be curious about the results, it would not have stopped our delivery of the feature. I should not have written them.

Far Side cartoon


You may be scared to say no to testing things that don't matter, where performing the tests will not reveal any risks or cause any follow-up actions to take place. It may be tempting to spend a bunch of time testing all the things you can think of, and only reporting on the tests that yield meaningful results.

But life is not about keeping busy. Make your time at work meaningful by executing meaningful work and declining to do things that aren't important right now.

TestBash Home (Spring 2020)

Typing to the people you usually see in person can have the same energy as tweeting at them from across a big room.

TestBash Home, put on by the Ministry of Testing, was a 24-hour around-the-world tour of talks, panels, and coaching from the best of the best. I purposely and prudently skipped a big chunk of the schedule to get a good night's sleep and properly absorb what I could attend.

I was definitely overloaded with places to chat (Slack, the video streaming app Crowdcast, Twitter, or the app built for the occasion) simultaneously with the video stream, the same way I would be at an in-person conference. I got that same "I want to hang out more but I'm so tired" feeling I get in real life. Though as Gwen correctly noted, this is real life now.


Cost-Benefit of Automated Tests

João Proença and Michaela Greiler weighed the costs and benefits of continually updating and running automated tests. Coming from two different angles (cognitive biases vs. Ph.D.-level number-crunching), they shared a similar quadrant model to make better decisions about your automation.

João's model on my actual TV screen
Michaela's model

João took a stronger "if it's not providing value, do something about it" approach, whether that's editing the test or deleting it altogether. He asked us to consider: If you were asked to write the same test from scratch today, would you do it the same way? When you need fast feedback, what's the opportunity cost of fixing, or even just running, a particular test? João reminded us to ask "what is being tested?" and decide if that still matters before jumping in to fix a failing test.

Michaela brought a perspective from a much larger software company (Microsoft) than I've worked at before. Her approach left open another option for the fate of tests with questionable value: only run them at certain stages. Only run the tests now where the cost of running them at a later stage is too high. Running tests at the wrong stage can increase false alarms and diagnosis time. Running tests that exercise unchanged code should be avoided.

Michaela's stages


Learning & Teaching

Veerle Verhagen's 99-second talk got me rethinking some learning experiences I've had.

It boiled down to this: learning is easy and teaching is hard. Try everything you've had trouble learning again, but with a better teacher this time. Whoa.


Evil User Stories

Every security talk I've heard tells you to look at the OWASP Top 10 and make a threat model. Anne Oikarinen told us what to do next: make an evil user story. If I were a person who made a mistake, shouldn't have access at all, or received more access to the system than anticipated, what would happen? How could we mitigate those risks?

If your team doesn't know where to start with web security, Anne suggested adding static code analysis tools and exploratory testing to your practice. If you've got outside penetration testers exploring your software, add logging around the issues they trigger so you know better what's happening next time.

I use the big list of naughty strings to test for security vulnerabilities I don't always completely understand. Anne also recommended Hacker 101, a free class about web security.


Live Coaching

James Lyndsay's coaching session reminded me how valuable gathering information and stepping back to ask "what will I learn by performing this test?" are when you're stuck. I've got to go back to my notes from a workshop I took with him a couple years ago to reinforce some of these lessons for myself.


Continuous Delivery Survival Guide

Amy Phillips might be the first person I met where neither of us could remember if we'd met in person before, or if we just knew each other from Twitter. Her talk about surviving continuous delivery from 2017 lives on as an essential onboarding guide for testers today.

Amy wants you to have enough context about your new team and their values before you jump into ideas about what you could change. People probably won't come to you to hand things over for the testing phase, so you have to figure out for yourself where you fit into their process. Amy recommended what I think of as "digital archaeology": reading through all the team artifacts to get a sense of their culture. Even if you don't get write access, looking at the customer support tickets, team backlog, and git commits can show you how the team actually works, rather than how they tell you they should work.

Amy really made me laugh when she described learning about the different development environments. Ask what the developers use. Ask what sales uses. Ask how many environments there are, how they're different, and what people's expectations are around them.

You can ask questions without knowing what a good answer is.

Even without knowing how to implement something, you can ask questions about it to trigger developers' wheels to start turning. Looking at the structure of the code and what's been changed may give you ideas about what hasn't been covered.


Burnout

Maryam Umar spoke about being burned out from juggling too many priorities, combined with unmedicated anxiety. Maryam called out how difficult it can be to start from scratch and build a support network when you relocate.

Seemed appropriate to watch this one from the garden

I worried a lot about building a support network before I relocated, but not much since. I'm so glad I've been able to recognize the "this is too much" feeling for myself. Saying no to big but overwhelming opportunities has left space in my life for things I know I want, and greener pastures I couldn't have imagined.


Leadership

This leadership panel made me want to have Nicola Sedgwick, Alessandra Moreira, and Shey Compton in my corner as I transition to a manager position. Obviously (to me at this point in my career and after various other trainings I've participated in) leaders do not have to be people managers, and vice versa. People need to give you permission to lead them. Leaders have a vision they can communicate up and down the chain of command.

The best way to be a leader is to lead by example. Sponsor other people at your organization who you can see becoming better leaders. Having the technical chops will allow people to believe in your value. It's easy for testers to underestimate our ability to influence behavior, but bug advocacy is a lot about that.

Nicola's framework for sharing feedback is something I'm definitely going to try in a 1-on-1 meeting this week: share an observation, recount the accompanying behavior, and describe the impact.


Morale

Jenny Bramble spoke about bad and good metrics, specifically that morale was the only good one. Morale encompasses psychological safety, emotional health, contentment, pride, delight, and core values. It's hard to measure, but that also means it's hard to game.

If your team isn't one where people can have negative emotions, disagree, or talk about mistakes or risks, you're doing it wrong. Genuinely asking people how it's going and taking action on the results is the best thing you can do to improve morale.

Possible survey questions to help measure morale

I look forward to Jenny's proposed next talk about the oral history of bugs on teams.


Titles

I'd seen this talk from Martin Hynie on the Ministry of Testing Dojo, and worked with Martin in the past. So I was expecting to be reminded that testing isn't a straightforward observe -> evaluate -> report operation. I'd seen him take on challenges outside the perceived role of tester, and I'd done so myself. I knew that it's easier to create a good artifact if you start by creating an imperfect one and ask people to correct it, rather than starting from scratch.

My biggest takeaway was in the Q&A: Someone's association and past experiences of people with my job title are more meaningful than the job title itself in setting up my working relationship with them.


Optimism

My favorite 99-second talk from the second set was Jen Kitson's about optimism. While we discover and often have to report a lot of bad news as testers, optimism is essential to testing. We notice things, report bugs, and push for fixes because we can imagine a world that can be better.


I'm so glad I got to attend TestBash Home. I would listen to Vernon Richards talk about sports balls I don't understand. Gwen Diagram gives me life and energy in a way that I cannot fully explain. There were people I got to see or chat with that I haven't encountered for months, or years, and it felt good. It felt like home.

Errors You Might Encounter While Editing an Open API Specification

One of the tasks for my team last week was updating an existing GET API call in our specification with some new fields. The Open API Specification, formerly known as Swagger, allows you to provide the details for building an API in a compact, informative way. When you've got the authentication set up correctly, you can use the examples to actually call the API right from the spec!

My team builds with a framework that has the power to auto-generate API specs in this format. We've chosen to write them ourselves rather than have them auto-generated so we can be more specific in what kinds of errors and error messaging people will encounter for different calls. For example, a 404 Not Found might make sense on a GET call for a specific resource, but not for a search call.
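
As a sketch of what that looks like in a spec (a made-up /things resource, not our actual API), the two GET calls end up with different response sections:

paths:
  /things:
    get:
      summary: Search for things
      responses:
        '200':
          description: A list of matching things (possibly empty), so no 404 here
  /things/{thingId}:
    get:
      summary: Fetch one thing by ID
      parameters:
        - name: thingId
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: The requested thing
        '404':
          description: No thing exists with this ID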

When you open http://editor.swagger.io/, you should see a two-pane view of the editor on the left and the rendering on the right. If you've opened this URL before, your browser session will remember and display your most recent changes. If it's the first time you've opened it, the example specification of the Swagger Petstore should appear like this:

Petstore example spec

I hope that in describing the errors I encountered, you can keep an eye on them as you're editing rather than having to go back through the specification at the end to figure out what went wrong.

Errors I encountered

Red errors in the box at the top and next to line numbers in editor

Indentation errors and using reserved characters (I found that square brackets, dashes, and colons at least fall into this category) in unexpected ways will likely give you an error in a red box between the navigation and the title of the spec.
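
As a hypothetical example of the colon problem, an unquoted colon inside a value is enough to break parsing for the whole block:

description: Error: thing not found

Quoting the value (description: 'Error: thing not found') keeps the YAML valid.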

If you're lucky, the error corresponds to the line it mentions, you find a red X on that line, and you'll be able to figure out what went wrong there.

If you're less lucky, the line mentioned in the red box will refer to the beginning of the next code block that's unparsable because of the syntax error, or the first place where the reserved character you've used incorrectly is actually used correctly.

The hardest part about these errors is that you may not notice them. If you're scrolled down the page far enough, you won't see the error box or the red X as you're creating the error.

Spinning without loading

If you're getting a spinner where a part of the specification should be loading, you've got an issue with the reference to a schema. Schemas allow you to chunk out and reuse part of the spec, with a reference to them in another place.

I kept getting the spinner when I referred to a schema that didn't exist, either because I'd updated the name (but not the reference to it) or because I'd screwed up whether it was singular or plural. Fixing the error isn't always enough to make this particular error disappear. Reloading the page will make it re-evaluate what you've got.
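
Here's a sketch of the kind of mismatch I mean, with a made-up Thing schema:

schema:
  $ref: '#/components/schemas/Things'    # endless spinner: nothing named Things exists

components:
  schemas:
    Thing:                                # the schema is actually called Thing, singular
      type: object
      properties:
        id:
          type: string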

Could not render this component, see the console.

What a fun and exciting mistake you've made in the specification, to cause this very comforting and reassuring error message.

Like the endless spinning error, this one means something very specific: you've designated something as an array, but you haven't explained what kinds of items appear in the array. Adding a description or reference in the items section should do the trick.
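
A minimal before-and-after, using a hypothetical tags field:

# Could not render this component:
tags:
  type: array

# Renders once the items are described:
tags:
  type: array
  items:
    type: string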

Editing without the Open API editor

It's possible the Open API web editor was not the best tool for this job.

The Visual Studio Code Open API plugin did make the red errors obvious enough that you could see them from anywhere in the document. It also gave me the collapsed version of the longer spec in alphabetical order. This allowed me to navigate around without remembering the line numbers of where I dumped the schema separately from the overall spec. Unfortunately the extension didn't catch when I referred to a schema that didn't exist, but I expect seeing the list of schemas on the side would help discover this mistake. The extension also didn't notice if I didn't define the objects in an array.

There's also an Open API spec tool for offline use, but the instructions went beyond the interest I had for this blog post. Try it out yourself, and maybe I will the next time I've got to edit these specs.

Remembering TestBash Brighton 2018

TestBash Brighton was one of 10,000 things I had to do in the weeks just before I left my whole life (family, friends, job, apartment, belongings) and moved across the ocean into the unknown. It was the first place I'd gotten to share that big news in person, with people who would become larger parts of my life once I moved. Visiting the city where I'd studied during university and first thought "I could leave the United States" brought things full circle for me.

I paired on presenting a brand-new workshop and talk, with two different people. This would have been a lot, even without all the uncertainties and distractions swirling in my life those days. After hustling to adjust the schedule of and present our morning workshop, I distinctly remember choosing to skip the afternoon one I'd planned on attending in order to fill out immigration and relocation paperwork. I'm shocked to find I was able to focus enough on some other people's talks to take coherent notes about them.

Royal Pavilion in downtown Brighton


Anusha Nirmalananthan's talk about sharing a chronic illness sticks with me today. One of the things I love to jump into is troubleshooting. I hear about a problem, and I'm already thinking of ways to solve it, and asking about what you've tried already. Anusha reminded us that listening and not saying anything can be more helpful and powerful than all the patronizing "Have you tried...?" questions in the world.


Emily Webber spoke about communities of practice, which have always been billed as "guilds" in places where I've worked. Emily encouraged us to connect with people around our organizations in our discipline in a supportive, voluntary group without a hierarchy or an end date. While the occasional guild meeting I've attended has turned into a groan-fest, dedicating time and energy to fostering change (to code, to job descriptions, to your team's way of working, etc.) gives me that "I did something today" feeling. I'm grateful to be able to make time to learn with my colleagues, and build a support network for when I need advice from outside the bubble of my team.


I loved Rosie Hamilton's talk about logic in testing because it drove me back to the basics. How do we decide what is true? How do I describe my thought process? When a lack of relevant cases prevents effective inductive reasoning (determining a heuristic), we have to move to abductive reasoning (determining the likeliest explanation from the available information). Realizing when you're doing this and what other information might be available to you elevates your testing.


Looking back at my notes from Aaron Hodder's talk on structured exploratory testing makes me realize how much freedom I can have in my testing when working with an inattentive team. His suggestions about easier reporting, fewer rabbit holes, and a predictable timetable suggest that someone is really breathing down your neck about the status, progress, and depth of your work. The biggest advantage I've had in sharing my testing charters with my team is that I find out which ones aren't valuable before I spend time executing them. Actively choosing not to test something when we don't care about the outcome or the risk it poses is very effective work.


Alan Page spoke about the modern testing principles he'd been shaping on his podcast for a while. My notes boil it down to: testers should do less testing and more coaching. This has certainly served me on teams producing too much for me to personally go as deep as I'd like in testing, but it also pays off when I'm out of the office or unavailable at the office. Working on a team that knows how to test means I get to look at higher-quality code, with more interesting bugs.


Ash Winter gave a talk immediately after mine, so perhaps I did not gather everything from it. But I did save these tid-bits: a pipeline is built to prove that something shouldn't go out. A pipeline can provide a massive amount of information, but without a strategy, too much data doesn't help humans make better decisions. Small things you can do to make huge improvements: deploy regularly, and learn source control.


Reflecting on the open space, the social events, and the atmosphere at TestBash Brighton 2018 makes me wish for the experience we all missed at the now-cancelled event in 2020. Jumping into the unknown seemed so doable when I knew there'd be so many people to share with and learn from on this side of the Atlantic. I don't know when I'll see you all again, but I look forward to that possibility.

Can't go to Britain without a proper tea

Reflecting on Let’s Test 2017

Good ideas come back around. As I sit here re-reading my notes from Let’s Test 2017, I remember the thrill of coming across so many new ideas there, and realize how much these three things stick out to me even now.

Valley Lodge & Spa, Magaliesburg, South Africa

1. Causal loop diagrams

It’s possible I’d seen a causal loop diagram before, but it wasn’t until I went to Jo Perold and Barry Tandy’s “Visualize your way to better problem solving” workshop that I knew the name for them. Here’s an example of how connecting nouns with verbs via bubbles and letters really clears things up.

During the workshop, Jo and Barry talked about how drawing a diagram of your software in this way can help you discover interaction points between systems. Sharing the diagram is one way to take the invisible software you’re building into a visible space. That allows us to have a conversation about the model, discover if we’re on the same page, and take steps to improve the model and ultimately the software. The more visualizations I see of process and influence on the job, the more I realize they’re exposing Conway’s Law.

Elisabeth Hendrickson wrote in Explore It! about how causal loop diagrams can help you discover interesting areas to test. She pointed out that transitions from one state to another take time, and there’s lots to be discovered during the moments of transition about interruptions, errors, and incomplete states.

Repenning, Nelson & Sterman, John. (2003). "Nobody Ever Gets Credit for Fixing Problems That Never Happened: Creating and Sustaining Process Improvement." IEEE Engineering Management Review, 30, 64. doi:10.1109/EMR.2002.1167285.

Nelson Repenning and John Sterman used a causal loop diagram to display a human problem: not prioritizing time for improving. Read their whole article to discover terrifying news about how much time you’re wasting all the time by not stopping to improve!

2. Metaphors

Leo Hepis and Danie Roux’s “Frames at Work” workshop blew my mind. I would tell you they were talking about how metaphors and context framing shape the way we think about our work. And they did that. But the meandering, and how lost I felt on our path there, was unprecedented.

Our table selected “Rope of Awesome” and I stand by that answer.

One of the exercises had us listening to a story about someone’s work and only writing down the metaphors. There seemed to be one every sentence, so many that I thought the speaker was throwing them in there on purpose. (See: throwing. They’re everywhere!) He talked about “pulling” and “dragging” his team along. Imagine instead if he were “pushing” or “building” his team. How would he think differently about his work? How different would his work be? Listen more closely to the words people are using to discover terrifying news about how they perceive their work!

3. Reflection and Learning

Alison Gitelson hosted a session to help the conference attendees think more about what they’d just learned. I was still reeling from the metaphors and framing session, so I wrote down these questions for myself:

  • How do I realize when I’m stuck inside my own head?
  • Does noticing where a behavior would be useful make it less irritating?
  • Was there a better discussion because active disagreement was encouraged?
  • Who in my life can help me reframe?
  • Why do I feel my effort has to achieve something?

Reflect on these questions yourself for terrifying news about how stuck you can be sometimes!

Very slight double rainbow to the right.

Originally published on Medium.

How not to name self-verifying test data

Self-verifying test data makes it easy to determine if it’s right. You don’t need to go check somewhere else. It is its own oracle. Here’s an example:

Address: My last name

Something is wrong. It’s obvious from the data itself. We don’t know exactly where the problem is or how to fix it, but we can tell there’s something funky with either Address or Last Name or both. If you feel like there must be more to it than this, read Noel Nyman’s 13-page paper to see that there isn’t.


Let’s look at a real-life example: I’m writing an automated test for an API. First, I create (POST) an item, then I read (GET) the database to see if it’s there. One of the fields in the body of the POST call is a Universally Unique Identifier, or UUID.

A UUID is a 128-bit identifier; we’re using them as unique keys in our database. Here’s an example of a version 4 UUID: 74ee94dd-d32f-4844-9def-81f0d7fea2d8. (If you’re a human that can’t read regex, you might find this useful.) I generated that using the Online UUID Generator Tool.

I wanted to see what would happen if I tried to POST with a UUID that wasn’t valid. If I’d taken my example UUID and removed some characters to make it 74e4d-d32f-4844-9def-81f0ea2d8, it would have been invalid. My test would have behaved as I expected. But I wouldn’t have been able to tell at a glance if this was a valid UUID or not. It wouldn’t have been self-verifying.

I decided to name my UUID This is not a valid UUID. I wanted to easily be able to tell from the GET call if it succeeded, or the error message in the POST call if it failed. It would be clear when running or debugging the test what value was being passed in, why, and to which field it belongs.

Or so I thought.

I ran the test. This was the output.

![](/images/posts/2019/output.png)

I sat staring at it for a while. The first line where the red AssertionError starts looks confusingly similar: The left side looks like the right, so the assert should have passed instead of failed. The message below had is not a valid UUID twice. Huh? Finally, I realized what I did, and highlighted the part you now see in blue. I gave my self-verifying test data a name too similar to the error message. Let me boil this down:

Error message I expected: UUID not valid
Error message I got: Published service UUID {{insert UUID here}} is not a valid UUID.

Unfortunately, I’d named my UUID This is not a valid UUID. so the whole invalid input error message was:

Published service UUID This is not a valid UUID. is not a valid UUID.

Fans of 30 Rock may recognize this problem:

![](/images/posts/2019/single-dropping.jpg)

My self-verifying test data would have worked fine if the error message was exactly how I expected it. The test would have passed, and I may not have taken a closer look at the text of the error message. But of course, my developers had changed it to be more meaningful and give more context, the bastards. Thus, I uncovered my perfect-turned-nonsensical name. I changed the UUID in my test data to be called FAKE UUID. It may not be the perfect name, but at least the code is better.


Calling things names related to the thing they are: great!

Calling things names too similar to the thing that you’re trying to test: confusing.

Originally published on Medium.

Agile Testing Days 2018: A Reflection

I was beyond excited to attend Agile Testing Days in Potsdam, Germany for the first time a year and a half ago. Anywhere I went, I met women who I’d previously only known from the internet. It was refreshing.

Based on the pages of notes I’ve got, I can tell you that the lessons I took away from that week have seeped into the way I work every day.

“Humans should not be regression testing.”

Jez Humble kicked off the conference with a session about continuous delivery. He described the barriers of organizational culture and software architecture that can prevent you from getting to a point where you can deliver continuously. Previous places where I worked made this feel like an insurmountable feat; now when I imagine continuous delivery, I can imagine concrete steps we could take to get there.

“Let’s create a small habit everyday to trigger me to learn more.”

Lisi Hocke certainly took this idea from her talk and ran with it. She’s been pair testing remotely with people around the world and learning so much from it. I’ve taken to pairing on a smaller scale, with my colleagues or in person. There are still times when it tests my patience, but the benefits of being able to more precisely explain what I’m doing and what I expect of the software vastly outweigh that investment. All my notes from Lisi’s talk have me nodding my head, like these are the most obvious things in the world. The biggest I’d come across about a year beforehand: having a growth mindset rather than a fixed mindset. This explanation from Brain Pickings sticks with me.

Sanssouci Palace in Potsdam.

“I am curious why they’re doing what they do.”

Gitte Klitgaard and Andreas Schliep had an improvised conversation about good and evil. You know, like you do with your friends for fun, but on stage. It can be so hard to believe that people are acting with good intentions at heart. But remembering to have empathy for the situations people find themselves in will help you choose to be the person to repair relationships when things go awry. If you believe in people, they can be better.

“People don’t want to collaborate with you when you have twelve spreadsheets for them to go through.”

I’m sure Angie Jones had other, more profound takeaways from her talk. But this one sticks to my bones. I think of it anytime I open a spreadsheet with more than one sheet in it. I think of it when I’m deciding on a tool to use, and wondering not what’s easiest for me to set up, but what’s easiest for my fellow collaborators to use. Thank you for this gem Angie.

Some dramatic structure in Sanssouci Park.

“Get ready to fire people to maintain the culture you want.”

Poornima Vijayashanker spoke about concrete ways to successfully onboard new employees. But I’m curious about this provocative statement. I haven’t ever worked at a place bold enough to get rid of managers whose direct reports displayed a pattern of escaping them that no one could ignore. I wonder what kind of company is bold enough to take this step.

“Only put off until tomorrow what you are willing to die having left undone.”

Kim Knup said Pablo Picasso came up with this doozie. At my first job, we used a physical board with sticky notes. If the sticky note had been moved around too many times or stuck in one place too long, it would literally fall off the board. At the time this felt like a failure. Now I see it for the blessing it was. Forgetting is a part of life, even if our digital tools would prefer us to forget that.

“Do what you say you will. Integrity is important.”

Rob Lambert spoke about behaviors of effective Agile teams. It’s resonating with me again now because it’s something I’m addressing in a talk I’m giving about how to build trust. I’m digging into authenticity, which I think goes a step beyond integrity. Doing what you say you will is being externally congruent. Authentic people are also internally congruent; the vision they have of themselves is the one they present to the world.

I forget what this is but it’s across from the museum downtown and damn the light was lovely.

“If you never get feedback, you have one year of experience ten times.”

This came out in Huib Schoots and Alex Schladebeck’s workshop on dissecting your testing and discovering the skills present in your exploratory testing. We practiced observing the skills we were using on the meta-level. It allowed me to both see and share how much a year of mob testing for an hour every day had expanded at least two things. First: my field of vision for how to dig and explore software had grown. Second: I was able to explain what I was thinking such that the other people present could understand, contribute to, and question which path we’d take next. It was life-affirming!

“When we set our own limits, we can change them.”

This came up in the context of Natalie Wenert’s talk about cross-team functionality. She chided organizations for relying on hero-worship and fire-fighting over breaking down silos and contributing to the whole. One of my conference buddies was frustrated at Agile Testing Days because they viewed so much of the content as “work therapy.” They weren’t wrong.

“As a user, I want to be locked out of the system after three incorrect password attempts.”

David Evans presented a memorable talk about how the template we stick to for writing user stories does not serve us well. This particular example made me laugh out loud. This story gets the “why” wrong. It’s about security of our system and the user’s data. Being honest about why we’re building the software would make the user story less absurd, and hopefully get us on the path to making better software too.

This doozie is in the Museum Barberini, which is worth checking out if you're in Potsdam.

“Uncertainty is more stressful than inevitable pain.”

Emily Webber spoke about team interactions and organizational change. (Shout out to all the people who’ve tweeted me instead of this other brown-haired white lady with glasses!) I’m on my fifth team in a year at my current company. I’m tired of the change. I know how important it is to build relationships with the people you work with. I’ve expressed that knowing who’s on my team is more important than having the perfect set of people. I look forward to more stability there because I don’t envy the alternative.

“Mistakes were made.”

Liz Keogh spoke about how to deliver failure messages. Her message was essentially: don’t. Pointing out the mistake without pointing fingers is enough. Encourage good existing behavior and create more options so that failure can occur safely.

“Are we advocating for those doing a good job?”

Ash Coleman and Keith Klain had a late-night after-dinner (over-hyphenated?) bonus-keynote to talk about how culture is a mindset and what we can do to change it. They encouraged allies in the majority to stop talking, and start listening, so you can do something. If you’re uncomfortable, good. You’re learning.

Originally published on Medium.

You build it, you run it, and you fix it

During a meeting of our unit at work today, we were asked if we wanted to become a member of the elite squad of people that are on-call for our software. Our philosophy is: we built it, so we know the most about keeping it up and running. In my next meeting, somebody asked if we ever write bug reports for ourselves. Both reminded me that I wanted to use and fix up a piece of software I wrote.

$ python3 httperrors.py
There are 15 links outside of the 200 or 300 range of http responses on your site.

After using ScreamingFrog software to scan the pages for http error response codes, I decided I could build something easier-to-use myself and test it using my own website, elizabethzagroba.com. I wrote a draft that worked initially, but did some things that weren’t great Python. Thanks to my friends Davida and Becky who reviewed and improved my code. You can see what they suggested in the older tickets in my Github repo.

Here’s what I have now:

# Mission: Find http codes that aren't in the 200 or 300 range for all the links on a single page

import os
import urllib.error
import urllib.request

import requests
from bs4 import BeautifulSoup
from bs4 import SoupStrainer


MY_SITE = "http://www.elizabethzagroba.com"
my_site_response = requests.get(MY_SITE)
only_external_links = SoupStrainer(target="_blank")
page = str(BeautifulSoup(my_site_response.content, "html.parser", parse_only=only_external_links))


def get_url(website):
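  # Find the next quoted http(s) URL after an 'a href' in the text.
  # Returns the URL and the index of its closing quote so the caller can keep
  # scanning from there, or (None, 0) when there are no links left.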
  start_link = website.find("a href")
  if start_link == -1:
    return None, 0
  start_quote = website.find('"http', start_link)
  end_quote = website.find('"', start_quote + 1)
  stripped_url = website[start_quote + 1: end_quote]
  return stripped_url, end_quote


try:
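  # Start fresh: remove the output file left over from a previous run, if any.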
  os.remove('list_of_all_links.txt')
except OSError:
  pass

count = 0
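# Try to open every external link on the page; write the ones that fail (and why) to the file.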
with open('list_of_all_links.txt', 'a') as file:
  while True: 
    url, n = get_url(page)
    page = page[n:]
    if url:
      try:
        req = urllib.request.urlopen(url)
      except urllib.error.URLError as explanation:
        file.write(str(explanation) + " " + url + '\n')
        count += 1
    else:
      print("There are " + str(count) + " links outside of the 200 or 300 range of http responses on your site.")
      break

It looks for the external links on my website, tries to open them, and writes them to a file if the response code isn’t in the 200 or 300 range. There are things I’d like to improve. I’ve noted some here. But tonight’s scope is: run it and fix it.

I run the file on my machine. Fifteen sites I link to come back with error responses. Here’s the file it generated:

HTTP Error 403: Forbidden https://www.mendix.com/
HTTP Error 403: Forbidden https://medium.com/@ezagroba
HTTP Error 403: Forbidden https://medium.com/@ezagroba/have-i-tried-enough-weird-stuff-7ed4105ae994
HTTP Error 403: Forbidden https://medium.com/@ezagroba/doubt-builds-trust-9cee937dc5d1
HTTP Error 403: Forbidden https://conference.eurostarsoftwaretesting.com/conference/programme/2018/#Wednesday-c67b
HTTP Error 403: Forbidden https://www.phpconference.nl/
HTTP Error 404: Not Found http://www.romaniatesting.ro/sessions/succeeding-as-an-introvert/
HTTP Error 404: Not Found http://www.ministryoftesting.com/training-events/testsbash-philadelphia-2016/
HTTP Error 403: Forbidden http://www.associationforsoftwaretesting.org/training/courses/
HTTP Error 403: Forbidden http://codeacademy.com/
HTTP Error 403: Forbidden http://hackdesign.org
HTTP Error 403: Forbidden http://testobsessed.com/
<urlopen error [Errno 8] nodename nor servname provided, or not known> http://developersbestfriend.com
HTTP Error 999: Request denied http://www.linkedin.com/in/ezagroba/
HTTP Error 403: Forbidden https://medium.com/@ezagroba

I notice most of the error codes were 403 responses, so I try a few of those pages manually. Those few succeed, so I don’t bother checking the rest. A 403 status is Forbidden access, so I think it has something to do with these sites having logins. But I don’t need logins to see the pages I’m linking to. Then I notice that some of the pages are pointing to http instead of https. I don’t know exactly what’s wrong. It’s getting late, so rather than diving in, I write two bugs: one about investigating 403s, one about updating http to https.

Next, I look at the other error codes. The 999 one I’ve seen before. It’s some weird LinkedIn thing. I don’t add a bug because it’s not interesting to fix. One site I’m not able to reach the domain of at all, so I message the owner to see if it’s still being maintained. The 404 codes are from sites that still exist where the pages have been taken down; fixable, but frustrating. They prove I spoke at these conferences. When these pages die, so does proof of my hard work. Sigh. I remove those links from my site, reload to confirm the fix made it to production, and run the script again. We’re down from 15 to 13 errors, as expected.

$ python3 httperrors.py
There are 13 links outside of the 200 or 300 range of http responses on your site.

In looking at this code again, I’m reminded of the original motivation and my vision for myself: run it against any site, and know when links break on elizabethzagroba.com. I added some issues that I’ll pick up another day. I’ll be ready to build again. But for now, goodnight.

Originally published on Medium.

Don’t let JIRA stop you from visualizing dependencies

My team is at the beginning of a project. We’ve got a lot of potential features. Our task yesterday was to start breaking down big dreams into specific pieces of work we can pick up.

As we started to define what we wanted to build, we came across items that had to come first: come up with a proposal before we meet to review it with our stakeholders. Other items weren’t necessarily “blocked,” but would make more sense to pick up in a sequence. As my developers watched me painstakingly search for completely forgettable JIRA story numbers so I could mark each story as “blocked” or “is blocked by,” one of them asked one of the best questions I heard all day: “Is there a way we can see this visually?”

My developer searched for JIRA solutions to this problem and came across a few that required admin access or JIRA version 8. We spent a few minutes getting lost in the text and subsequently the interactive credits on the About JIRA page. None of us noticed this yesterday, but the start screen for the game gave us the answer we needed: we have JIRA 7 (Roman numerals on the title page), not JIRA 8.

JIRA credits: A surprising diversion in our work day.

Without a big sheet of paper (we had post-its, but nowhere to stick them) or a whiteboard in the conference room we were crammed into, I pulled up my go-to tool for visualizations: MindMaster. I’ve got other recommendations for mind mapping software at the bottom of my article here. I’m currently stuck on MindMaster since it’s free and not web-hosted.

I added a bunch of Floating topics and connected them with Relationship arrows. We outlined the first group of stories that we’d collected into an epic. We fiddled a bit with aligning the stories that could be picked up in parallel so they appeared at the earliest point we could pick them up. We came back to refine and add a couple items as we outlined other epics. The few minutes we dedicated to creating this diagram gave us enough information to decide what order we should pick up work for the next week or two.

We may be looking at our sprint board in the coming days to review how all the work is going. But I know that no developer is going to trace all the “blocks” and “is blocked by” links in the stories. They’re going to look at this diagram to know when to pair or mob because we can’t pick more things up.

Moral of the story: Don’t let your tools constrain you.

Originally published on Medium.