From API Challenges to a Playwright Cookbook

Soon after Maaret Pyhäjärvi and Alex Schladebeck began their endeavor to practice testing APIs using the API Challenges from Alan Richardson (aka The Evil Tester), they looped me into their periodic practice sessions. Why? To make Past Elizabeth jealous, presumably.

API Testing Challenges

We gathered for an hour every few weeks to work through the challenges. The tools we were using (pytest, the Python requests library, and PyCharm) were like home for Maaret and me. I'd been writing in a framework with these tools for my job for a few years already.
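For flavor, here's a minimal sketch of the kind of pytest-and-requests test we were writing. The base URL and endpoint below are placeholders rather than the real challenge host, so treat the shape as the point, not the details.

    # Illustrative only: placeholder URL and endpoint, not the real API Challenges host.
    import requests

    BASE_URL = "https://apichallenges.example.com"  # placeholder for the challenge host


    def test_get_todos_returns_200():
        # One challenge per test: send the request, assert on the status and the payload.
        response = requests.get(f"{BASE_URL}/todos")
        assert response.status_code == 200
        assert "todos" in response.json()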

I wasn't the only one; these tools had been free to use and available for a number of years already. What the three of us combined couldn't figure out by trial-and-error, reading the error message, re-reading the darn description of what we were supposed to do, or relying on patterns from previous exercises, we were able to Google. With one notable exception of course, as we are testers after all:

It may not seem like you'd need three people to do the work that one person could do. But I assure you, having extra pairs of eyes to catch a typo, remember whether we were expecting it to pass or fail this time, see immediately that it's a whitespace issue making PyCharm angry, crack a joke, or help decide whether to keep going in the same direction makes the work go more smoothly.

More than once, we'd end a session a few minutes early because we were stuck and lost, only to come back a couple of weeks later with fresh eyes, able to understand where we were stuck and what to do about it. After several months of infrequent meetings, we got through all of the API Testing Challenges!

Then we were like...now what? We like learning together, but we'd achieved our goal.

Starting out with Playwright

After a bit of brainstorming, we landed on a skill Alex and I were both still building: UI automation. Naturally, Maaret was way ahead of us, and pointed us towards the Playwright framework and a practice site from Thomas Sundberg with all the greatest hits: radio buttons, drop-downs, alerts, you name it.

Our experience with UIs, DOMs, automation, Selenium, and exploration helped us, but it didn't prevent every pickle we got ourselves into with Playwright. Though the Playwright documentation will tell you a lot of what you need to know (if you've correctly selected Python instead of Java or Node.js at the top of the page), our desperation kept exceeding our patience. We escalated to Playwright champion Andrew Knight and the Playwright community Slack channel.

Several times, it wasn't only the code that needed changing, but our perception of how Playwright wanted to interact with the website. These are a few I remember:

  1. an API response from a browser context can't be collected from a page context
  2. setting different contexts for a page and an alert on that page
  3. having that alert knowledge not help us when we also had to fill in a prompt
  4. expecting something in the DOM to tell us when an item in a drop-down was checked

For the first three, wrapping our heads around a different way of thinking got us through the problem. For the last one, we lowered our expectations about what we could check. (Pun intended.)
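If you're wrestling with the same things, here's a rough sketch of how those realizations translate into Playwright's sync Python API. The URL, selectors, and endpoint are invented for illustration; only the shape of the calls reflects what we learned.

    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        context = browser.new_context()
        page = context.new_page()
        page.goto("https://example.com/practice")  # placeholder practice page

        # 1. API responses live on the browser context, not the page:
        #    context.request is an APIRequestContext you can call directly.
        response = context.request.get("https://example.com/api/items")  # placeholder endpoint
        assert response.ok

        # 2 & 3. Alerts and prompts: register a dialog handler *before* triggering them.
        #    Passing prompt_text to accept() is how you fill in a prompt.
        page.on("dialog", lambda dialog: dialog.accept(prompt_text="hello from the ensemble"))
        page.click("#open-prompt")  # placeholder button that opens a prompt

        # 4. Drop-downs: nothing in the DOM told us an option was "checked",
        #    so we assert on the <select>'s current value instead.
        page.select_option("#dropdown", "option-2")  # placeholder <select>
        assert page.input_value("#dropdown") == "option-2"

        browser.close()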

Playwright Cookbook

We've tested what we can and should test on our first practice site. In upgrading to a more challenging one, we realized that we'd benefit from the knowledge our past selves gained. And that you could too.

We've published our progress on GitHub as the Playwright Cookbook. It's a Python repository of what we found worked for different UI situations. It goes one step beyond the Python documentation on the Playwright website: it lets you compare an actual page to a test where we were able to select the element.
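To give you the flavor without copying from the repo, here's roughly what a cookbook-style entry looks like: one situation, one short test. It assumes the page fixture from the pytest-playwright plugin, and the URL and selector are placeholders rather than lines from the actual cookbook.

    from playwright.sync_api import Page, expect


    def test_radio_button_can_be_selected(page: Page):
        # Placeholder URL and selector; the pattern is locate, act, assert.
        page.goto("https://example.com/radio-buttons")
        radio = page.locator("input[type='radio'][value='yes']")
        radio.check()
        expect(radio).to_be_checked()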

Fun was had by all

Quickly getting something done with a new UI automation tool had been my white whale: something I knew would be annoying enough that I wouldn't know how to get myself unstuck. Working in an ensemble meant either (1) the knowledge we needed was in the room and just had to be shared, or (2) two brilliant, successful ladies known for their testing prowess also didn't have a clue what was happening. Either way, it made things better and achievable.

I am notoriously opposed to fun. But this has been fun.

What's next

What is next for us? We know we want to keep learning together.

Have we reflected on what's valuable and not valuable to test on an API? Will we share more about this beyond this blog post? A conference talk or workshop? A Twitch stream?? Only time will tell. For now, enjoy the GitHub repo. :)