Soon after Maaret Pyhäjärvi and Alex Schladebeck began their endeavor to practice testing APIs using the API Challenges from Alan Richardson (aka The Evil Tester), they looped me into their periodic practice sessions. Why? To make Past Elizabeth jealous, presumably.
Continuing on the api challenges from @eviltester with @alex_schl and @ezagroba after weeks of being otherwise engaged, always happy to have see where we left off with a test. Simple thing for our future selves in group learning activity. pic.twitter.com/D99nC2PDVX— Maaret Pyhäjärvi (@maaretp) December 9, 2021
API Testing Challenges
We gathered for an hour every few weeks to work through the challenges. The tools we were using (pytest, the Python requests library, and PyCharm) were like home for Maaret and me. I'd been writing in a framework with these tools for my job for a few years already.
I wasn't the only one at home. These tools are free to use and have been available for years. What the three of us combined couldn't figure out by trial-and-error, reading the error message, reading the darn description of what we were supposed to do (again), or relying on patterns from previous exercises, we were able to Google. With one notable exception of course, as we are testers after all.
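The bread and butter of those sessions can be sketched like this. It's a hypothetical example, not our actual code: the endpoint follows Alan Richardson's API Challenges site, but the helper, field names, and assertions are illustrative (some challenges also expect a session header, omitted here):

```python
import requests

# Alan Richardson's practice API (illustrative base URL; some challenges
# also require an X-CHALLENGER session header, omitted in this sketch).
BASE_URL = "https://apichallenges.herokuapp.com"

def todo_titles(payload):
    """Pure helper: pull the titles out of a GET /todos response body."""
    return [todo["title"] for todo in payload.get("todos", [])]

def test_get_todos_succeeds():
    # One challenge: GET /todos should return 200 and a list of todos.
    response = requests.get(f"{BASE_URL}/todos")
    assert response.status_code == 200
    assert all(isinstance(title, str) for title in todo_titles(response.json()))
```

Run it with `pytest` from the project root; pytest collects any function whose name starts with `test_`, which is why trios like ours can forget between sessions whether a given test was expected to pass or fail and still pick up where they left off.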
It may not seem like you'd need three people to do the work that one person could do. But I assure you, having extra pairs of eyes to catch a typo, remember whether we were expecting it to pass or fail this time, see immediately that it's a whitespace issue making PyCharm angry, crack a joke, or help decide whether to keep going in the same direction makes the work go more smoothly.
More than once, we'd end a session a few minutes early because we were stuck and lost, only to come back a couple weeks later with fresh eyes, able to understand where we were stuck and what to do about it. After several months of meeting infrequently, we got through all of the API Testing Challenges!
Then we were like...now what? We like learning together, but we'd achieved our goal.
Starting out with Playwright
After a bit of brainstorming, we landed on a skill Alex and I were both still building: UI automation. Naturally, Maaret was way ahead of us, and pointed us towards the Playwright framework and a practice site from Thomas Sundberg with all the greatest hits: radio buttons, drop-downs, alerts, you name it.
Our experience with UIs, the DOM, automation, Selenium, and exploration helped us, but didn't prevent every pickle we got ourselves into with Playwright. Though its documentation will tell you a lot of what you need to know (if you've correctly selected Python instead of Java or Node.js at the top of the page), our desperation kept exceeding our patience. We escalated to Playwright champion Andrew Knight and the Playwright community Slack channel.
I'm not sure about that. Can anyone from @playwrightweb help answer?— Pandy Knight (@AutomationPanda) April 5, 2022
Several times, it wasn't only the code that needed changing, but our perception of how Playwright wanted to interact with the website. These are a few I remember:
- an API response from a browser context can't be collected from a page context
- setting different contexts for a page and an alert on that page
- having that alert knowledge not help us when we also had to fill in a prompt
- expecting something in the DOM to tell us when an item in a drop-down was checked
@maaretp @alex_schl A kind gentleman in the Playwright Slack pointed us here: https://t.co/e12v4nhth5— Elizabeth Zagroba (@ezagroba) May 24, 2022
It returns the value `milk` of the selected option, but not its visible text `Milk`. Perhaps that's good enough!
For the first three, wrapping our heads around a different way of thinking got us through the problem. For the last one, we lowered our expectations about what we could check. (Pun intended.)
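Here's a sketch of how two of those pickles resolved for us, with hypothetical selectors (`#prompt-button`, `#dropdown`) standing in for the real ones on the practice page: register a dialog handler *before* triggering the dialog, and read the selected option with `input_value()`, accepting that you get the value rather than the visible text:

```python
def make_dialog_handler(log, prompt_text=None):
    """Build a handler that records a dialog's message, fills in a
    prompt when asked, and accepts the dialog either way."""
    def handle(dialog):
        log.append(dialog.message)
        if prompt_text is not None and dialog.type == "prompt":
            dialog.accept(prompt_text)  # prompts need text before accepting
        else:
            dialog.accept()
    return handle

def exercise_practice_page(url):
    # Imported here so the pure handler above is usable without the
    # playwright package installed.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)

        messages = []
        # Register the handler before clicking; otherwise Playwright
        # auto-dismisses the dialog and the click seems to do nothing.
        page.on("dialog", make_dialog_handler(messages, prompt_text="cheese"))
        page.click("#prompt-button")

        # select_option works on the option's value, and input_value()
        # returns the value ("milk"), not the visible text ("Milk").
        page.select_option("#dropdown", "milk")
        selected = page.locator("#dropdown").input_value()

        browser.close()
        return messages, selected
```

The handler is the part that tripped us up: it has to be wired to the page before the alert or prompt appears, which is the opposite of the "act, then check" rhythm we'd brought over from Selenium.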
Learning more on #playwright with @ezagroba and @alex_schl and coming to realisation we are modelling the world wrong and thus having harder time discovering info on the API documentation. We need to remodel our worlds.— Maaret Pyhäjärvi (@maaretp) June 16, 2022
We've tested what we can and should test on our first practice site. In upgrading to a more challenging one, we realized that we'd benefit from the knowledge our past selves gained. And that you could too.
We've published our progress on GitHub as the Playwright Cookbook. It's a Python repository of what we found worked in different UI situations. It goes one step beyond the Python documentation on the Playwright website: it lets you compare an actual page to a test where we were able to select the element.
Fun was had by all
Quickly getting something done with a new UI automation tool had been my white whale: something I knew would be annoying enough that I wouldn't know how to get unstuck. Working in an ensemble meant either (1) the knowledge we needed was in the room and just had to be shared, or (2) two brilliant, successful ladies known for their testing prowess also didn't have a clue what was happening. Either way, it made things better and achievable.
I am notoriously opposed to fun. But this has been fun.
The added fun and energy in learning hands-on things together creates an environment where we want to show up and learn. Social software testing for learning, FTW!— Maaret Pyhäjärvi (@maaretp) June 30, 2022
What is next for us? We know we want to:
- keep learning together
- add more recipes for our next testing target
Have we reflected on what's valuable and not valuable to test on an API? Will we share more about this beyond this blog post? A conference talk or workshop? A Twitch stream?? Only time will tell. For now, enjoy the GitHub repo. :)