Exploratory Testing with the Chrome Network Tab

I needed to test the loading time of a login over a slow network. The internet connection I had was too fast to see all the visual behavior and the backend redirects happening during the process. I opened the Network tab in the Chrome developer tools and switched the throttling option to Slow 3G. (A yellow triangular yield symbol appeared next to the Network tab to remind me that I’d throttled my network.) Running over Slow 3G allowed me to see what someone trying to access the site from a phone or tablet might experience.

The screenshots below are from the login on hackdesign.org, a program I highly recommend for getting up-to-speed on user experience design.

With the Network tab open, I could do a few things:

  1. I could see what API calls were being made. I looked at the bottom of the Name column to see how many calls there were overall, and sorted it to discover if we were retrieving things from the server that I expected to be cached. I clicked the Preserve log checkbox before I started so I could see what happened even after I went to another page.
  2. I could see which calls were redirects. The Status column had numbers in the 300 range for redirects. I love httpstatuses.com for explaining what each one means more precisely. Redirects might indicate something could be optimized.
  3. I could tell how much time each of those network calls took. The Time column allowed me to sort by milliseconds to find the call that took the longest.
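
If you want to dig through the same data outside the browser, you can right-click in the Network tab and save the recording as a HAR file, which is just JSON. Here’s a minimal sketch for pulling out the redirects and the slowest calls; the file name login.har and the cutoff of five calls are placeholders, not anything from the original session.

import json

with open("login.har") as f:
    entries = json.load(f)["log"]["entries"]

# Calls that returned a redirect status (300-399)
redirects = [e["request"]["url"] for e in entries
             if 300 <= e["response"]["status"] < 400]
print("Redirects:", redirects)

# The five slowest calls, by total time in milliseconds
slowest = sorted(entries, key=lambda e: e["time"], reverse=True)[:5]
for e in slowest:
    print(round(e["time"]), "ms", e["request"]["url"])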

What I used in the Network tab of the Chrome developer tools. HackDesign.org login only took 6 seconds.

I discovered logging in and logging out took about 20 seconds on the test environment over Slow 3G. (This time appeared in the Load text in red at the bottom of the Network tab.) Was this too slow? To answer this question, I needed an oracle.

I decided to compare the behavior on the test environment to our production environment. On production, login took 60 seconds! When I sorted the network calls by Time, I could see that the bulk of loading time was spent retrieving messages to display on the logged-in page. Both 20 and 60 seconds for a login seemed unacceptably slow to me, so I took it to my team.

My team agreed that this behavior was bad. Unfortunately, we decided to prioritize users on fast networks over users on slow ones, and changing this behavior wasn’t a priority for our release.

When I sorted the network calls by Name, I found some URLs I did not expect to be involved in a login. I tested a bit more around the feature in different places, found a bug in the behavior, and asked around a few different teams before I got the bug reported to the correct team.

Moral of the story

You have a powerful web performance testing tool at your disposal. Give it a try and see what you find.

Originally published on Medium.

This Too Shall Pass: Disposable Test Automation

A few different times, we wrote some Python code to help us test our products. And then we threw the code out.

We had the infrastructure in place to add tests to our continuous integration pipeline in Jenkins. It would have been as simple as merging the branch of our code into master. But it had served its purpose already.

Example 1: web feature integrating with desktop software

Our team owned a web-based product. It had lots of features, but the two we were concerned with here were creating an account and creating a project. These would be used in a desktop product built by different teams at our company. For this story, a flag would be set when you created a project in our product to allow for something new in the desktop software.

Our testing stack was built and maintained by our team alone. It was set up to look at the web UI and APIs, but not the desktop software. We had APIs to create projects and change this new project flag. We didn’t have an automated way to see exactly what would happen in the desktop software under these different circumstances.

We wrote tests to query the APIs to see that the settings we set were coming back as expected. Those went into the pipeline. We also wrote some Python code to create projects in each of the five different states. Then, we manually went into the desktop software, used each of the projects we created, and looked at what happened in the desktop software. The information we discovered was enough to determine that the work for our team and the work for the desktop software teams was complete.
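
The throwaway script looked roughly like the sketch below. It’s reconstructed from memory, so the endpoint paths, the field name new_feature_flag, and the five states are placeholders rather than our real API.

import requests

BASE_URL = "https://test-environment.example.com/api"  # hypothetical host and path
FLAG_STATES = ["default", "enabled", "disabled", "enabled-then-disabled", "missing"]  # hypothetical states

for state in FLAG_STATES:
    # Create a project named after the state so it's easy to find in the desktop software later
    project = requests.post(f"{BASE_URL}/projects", json={"name": f"flag-{state}"}).json()
    # Set the new flag on the project we just created
    requests.patch(f"{BASE_URL}/projects/{project['id']}", json={"new_feature_flag": state})
    print("Created project", project["id"], "with flag state", state)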

We did not add these tests to the pipeline. The branch got removed from the project without getting merged into master once the story was completed.

Example 2: crude performance test

We wanted to simulate the load placed on our product by a different internal app. Unfortunately, the owner of the internal app was unavailable in the short period of time we had to complete this task. To do this, we took existing feature tests we had running on our staging environments, parallelized them, and ran them on a clone of our production environment.
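
The parallelization itself was crude: roughly the idea below, using Python’s standard library to fire several copies of an existing feature test at once. The imported test names and the worker counts here are hypothetical stand-ins.

from concurrent.futures import ThreadPoolExecutor

from feature_tests import test_login, test_create_project  # hypothetical existing tests

def run_many(test, copies):
    # Run the same feature test repeatedly in parallel to approximate load
    with ThreadPoolExecutor(max_workers=copies) as pool:
        futures = [pool.submit(test) for _ in range(copies)]
        return [future.result() for future in futures]

run_many(test_login, copies=20)
run_many(test_create_project, copies=20)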

Our production clone was available during the few days we were doing this test. It would not be available thereafter, considering the time and money we would have to invest in maintaining it. Our other staging environments had a different enough capacity that running a performance test there would not be meaningful. Our production environment would give us the information we needed once we released this build because the internal app ran there. We maintained a branch for a few days while we were writing and using the performance test, but without an environment to run it on, we threw it out.

Example 3: audit trail Excel export

We added an audit trail to our profile information for GDPR compliance. Our system could display the information in the UI and export it to Excel. We added tests to our pipeline for the UI bit; the Excel export bit we didn’t. We wrote a test that ended by providing us a username and password. Manually, we’d log in, go to the page with the Excel export, and confirm that the data in the file matched the changes the test had made.

The Excel exporter wasn’t a piece of code our team maintained. If this test failed, it would have likely been in that functionality, since we also had a UI test for the data integrity. We weren’t changing anything about the Excel export. The audit trail report was an important enough feature that we knew we’d smoke test it manually with every release, so we didn’t add this code to the repository.

What we asked ourselves when throwing out our automation
  • What would we be asserting at the end of the test?
  • If those asserts succeeded, would they give us false confidence that the feature was covered when we couldn’t account for the consequences?
  • If these asserts failed, would that give us information about what to fix in our product?
  • Would checking the code into the automation repository expose sensitive data about production?
  • Would running these tests against our staging environments give us the information we needed?

Originally published on Medium.

Authorization & Authentication: A Tale of Government Security Clearance

My first job after college was with a contractor to the United States Navy. I had to get a security clearance from the U.S. government to have access to classified data at the lowest category. One piece of the application was getting fingerprinted. From there, federal agencies like the FBI would check my fingerprints against criminal records databases to see if any matches turned up.

I walked down to the local police station in my college town of Waterville, Maine, which is not exactly a bustling metropolis. I expected a police officer to compare my picture on the driver’s license I’d brought with my actual face to confirm my identity, before taking my fingerprints and shipping them off to the appropriate authority.

But they did not.

flickr/cogdog

First, when I arrived at the station, I was told to wait. “We’re questioning a suspect in the booking room!” This was clearly the most action these officers had seen in a while. They were psyched. But I was puzzled about why this was an impediment, so naturally I asked more questions: “Can you tell me more about the ink pad? Is it secured to the table in some fashion? Would it be possible to continue questioning the subject, but take my fingerprints out here in the hallway?”

After explaining this to two different officers and lots more waiting, they discovered that the ink pad was indeed able to be transported by human hands to interact with mine. They filled out the form with my name and some other details I provided verbally and took my fingerprints. I left.

It was only after I got back to campus that I realized the crucial step the police officers missed: they forgot to look at my driver’s license. They were so thrown off from their usual routine, both by having a suspect in custody and by moving the fingerprint pad to a different location, that they forgot to verify my identity.

I have no criminal record, I was only at the job for a short time, and my security clearance expired after five years without getting renewed, so the mistake was of no consequence.

But consider the alternative: I have a criminal record. I send my sister or a friend who knew enough about me to the police station. They impersonate me, and the fingerprint check of their prints turns up no suspicious activity. Now I have security clearance to access classified data about the American military when I shouldn’t.

The moral of this story: authentication and authorization go hand-in-hand. Authorization could have been circumvented by an unauthorized impersonator without the additional check of authentication. Putting someone in a particular security group is not enough: you’ve got to check that it’s them who’s supposed to be there.

Originally published on Medium.

Nurture the Bright Spots: Dutch Exploratory Workshop on Testing 2018

A peer conference is different from a regular conference. If you’re one of the lucky few to get invited, you find yourself in an environment surrounded by the best and the brightest. There will be presentations to spark a discussion, but the facilitated discussion is where the real learning occurs. It certainly puts the “confer” back in conference.

The Dutch Exploratory Workshop on Testing (or DEWT for short, rhymes with newt) in the fall of 2018 was the first one of these I attended outside of work. The theme was “Developing expertise in software testing.” The format of the gathering itself paired nicely with one of the main conclusions I took away from the weekend: choose the best people, foster an environment where they have the energy to learn and inspire others, and set aside time to reflect.

Gorgeous sunrise on our morning run


Here are some of the things I think about at least once a week back at work:

“You let people annoy you by not pushing back.”

People cannot read your mind. If something is bothering you and you never bring it up, how are they supposed to figure it out? Try to find something small to be firm about, and build from there.

“People relax when you say it’s time-boxed.”

Sometimes I’d rather stop fighting than win. I think it’s clear that a bug should be investigated, but someone else believes it’s not worth any time. Instead of deciding yes or no based on insufficient information, I’ve found “let’s look at this for two hours and decide if we should keep going” gets more people on board and allows the conversation to move on.

“When you ask a difficult question, sometimes people ask a simpler question in their head and answer that.”

One of the questions I was trying to help answer this week was “should we release next week?” The team and I found it easier to answer adjacent questions instead of the big one we needed. We could find answers to “are there pieces of work we need to complete before the release?”, “are those pieces of work estimated?”, “could the tests that aren’t passing expose risk to our company’s bottom line?”. I can see how some of the answers to these smaller questions can support an answer to the larger one. My worry is that getting “yes, this is fine” to something small will accidentally result in a “yes, this is fine” to the overall question.


I don’t have more to add or break down regarding these thoughts, but I can tell you that I nodded knowingly when they were said.

  • “Regression sprint.”
  • “I can’t change other people for them.”
  • “If it’s overwhelming, take a smaller step.”
  • “An expert is someone who learns the fastest.”
  • “My standards are not necessarily ‘the’ standard.”
  • “Internal conferences help people see good work.”
  • “As knowledge workers, we are the means of production!”
  • “Sometimes you’re only doing enough work not to get fired.”
  • “How do I tell you I disagree with you without ending the conversation?”

These are the books that were both (a) mentioned explicitly enough for me to find them and (b) worthy of adding to my exponentially-growing Goodreads backlog.

(If you’re looking for books you can’t miss about sociology and psychology, follow Maaike Brinkhof on Goodreads.)


The retrospective asked us a big question that I wasn’t prepared to answer in the waning hours of the conference: What were we going to change in our jobs, in our lives, in our coaching of our colleagues when we got back to work on Monday? I only realized later that someone proclaimed a mission statement that I’d like to adopt:

I will bring you discomfort. I will stay critical when everyone else thinks we’re done.

I’ll start there. But I’ll also make time. I’ll step back. I’ll think about what you’ve done. Only then can I grow.

Thanks to Jean-Paul Varwijk and Joris Meerts for organizing DEWT 2018, and Klaartje van Zwoll, Claire Goss, and Maaike Brinkhof for reminding me how refreshing peers can be. If you don’t have an entire weekend to discover deep truths with your peers, try watching Vera Gehlen-Baum’s A Software Tester’s Guide to Expertise instead.

Originally published on Medium.

Update your Mac Terminal to display your current git branch and status

TL;DR: You can copy all the code from my GitHub.

I spent the better part of a crafting day at the office updating my .bash_profile on my Mac. If I’m in a git repository, every command prompt shows the branch name and an asterisk if there are uncommitted changes. The original prompt for the machine, the repository name, and the branch name each appear in different colors. Here’s what it looks like:

It may look like this one line in my .bash_profile file is where the magic happens, because this is where the colors are set:

export PS1="\[\033[36m\]\u\[\033[32m\]\w\[\033[33m\]\$(git_branch)\$(parse_git_dirty)\[\033[00m\]$"
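
Here’s roughly what each piece of that string does:

# \u                    the current username, colored cyan (\033[36m)
# \w                    the current working directory, colored green (\033[32m)
# \$(git_branch)        the branch name from the function below, colored yellow (\033[33m)
# \$(parse_git_dirty)   an asterisk when there are uncommitted changes
# \033[00m              resets the color back to the default before the final $
# \[ ... \]             marks the color codes as non-printing so line wrapping stays correct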

Unfortunately, the branch name and the dirty status still need to be calculated by the two functions that line calls.

This function checks whether there are uncommitted changes:

function parse_git_dirty {
  # Print an asterisk if `git status --short` reports any uncommitted changes
  [[ -n "$(git status -s 2> /dev/null)" ]] && echo "*"
}

The last thing is fetching the branch name:

git_branch() {
  # List local branches, keep only the current one (marked with *), and wrap its name in parentheses
  git branch 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/ (\1)/'
}

It took a lot of internet searching and help from colleagues to set this up. I had the git status and the branch name going separately, but combining them failed at first: typing past the width of the window would overwrite the current line instead of wrapping onto a second one. Wrapping the color codes in \[ and \] solved that for me. There are also other color options. The backslashes matter: \[ and \] mark the color codes as non-printing, and \$ makes the functions run again with every new prompt, but it was a lot of trial and error to get them all in the right spots. I was editing my .bash_profile initially in Sublime; any plain-text editor would have worked. I moved to nano in the Terminal when I wanted to see the changes more quickly. I did still have to close all open Terminal sessions before seeing the latest changes.

This lovely command prompt doesn’t prevent every mistake, but it does help call my attention to the status so I don’t:

  • open the wrong repository (easy to do when you’re migrating from one to another)
  • try to commit before the changes are saved
  • commit to master instead of a branch

If nothing else, I enjoy a bit of color in an interface that doesn’t have any to start.


Thanks to Maik Nog for suggesting I share this in a more shareable format. It’s available on github.

Originally published on Medium: https://medium.com/@ezagroba/update-your-mac-terminal-to-display-your-current-git-branch-and-status-471c017436a2

Have I Tried Enough Weird Stuff?

I was testing a piece of software that collected a person’s addresses for shipping within the United States. My developer had tried zip codes in the direct vicinity of our office in Manhattan, which all started with 1. I tried the zip codes for my hometown in New Jersey and the college I attended in Maine, both of which started with 0. Together we determined that the zip codes (and other address fields) needed to be stored differently so leading zeros would not be cut off. But it got me thinking: what other things might occur that were outside the direct experience of me and my developer?
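
The failure mode is easy to reproduce. Here’s a minimal sketch (the codes below are examples, not the ones we actually used) of what happens when a zip code gets stored as a number instead of a string:

# Zip codes stored as strings keep their leading zeros
zip_codes = ["10011", "07001", "04901"]

# Stored as integers, the leading zeros are silently dropped
as_numbers = [int(code) for code in zip_codes]
print(as_numbers)                          # [10011, 7001, 4901]

# Recovering them means remembering they were five digits to begin with
print([f"{n:05d}" for n in as_numbers])    # ['10011', '07001', '04901']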

So I asked the internet.

That’s when I first came across Falsehoods programmers believe about addresses. We were constrained to collecting American shipping addresses, so things like “are the odd street numbers all on the same side?” weren’t our concern. But plenty of them were. Was our form going to allow people whose shipping address was any of these?

  • a post office box
  • outside one of the fifty states (Washington D.C., Puerto Rico, Guam, etc.)
  • on an American military base
  • a fractional number

As I tested inputs on other applications, I kept wondering if I was only thinking of things I already knew about, or if the problem space was bigger than I could conceive. I’ve come across a few lists that I love to review with my developers before they start building an input field (or an API parameter) so we can agree on what kind of validation we’re going to do.

The Test Heuristics Cheat Sheet provides a great jumping-off point: specific inputs for text fields on the first page, and different ways to try inputting them on the second page.

The Big List of Naughty Strings collects different kinds of characters (languages with non-Roman characters, emojis, JavaScript that might trigger script injection, etc.) in one place so I don’t have to search for each of these cases individually. I usually copy-paste the ones we’ve agreed we want to support from here. [Note: I recommend bookmarking this repository so you’re not accidentally getting NSFW results after searching “naughty strings.”]
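
Once we’ve agreed which of these strings the field should support, I sometimes turn them into a quick parameterized check. Here’s a minimal sketch with pytest; validate_address is a hypothetical stand-in for whatever validation your form actually does, and the strings are only examples.

import pytest

# Hypothetical stand-in for the real validation logic
def validate_address(street):
    return bool(street) and len(street) <= 100

# A few example strings we agreed the form should accept
AGREED_INPUTS = ["123 Main St", "Straße 4½", "5½ Elm Street", "Calle 6 #7-89"]

@pytest.mark.parametrize("street", AGREED_INPUTS)
def test_street_is_accepted(street):
    assert validate_address(street)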

Searching for “Falsehoods programmers believe about {input type}” is my go-to for more specific types of inputs. There’s a list of a bunch of them, but these are some of my favorites:

I encourage you to keep asking “have I tried enough weird stuff?” and deciding together with your developers what constitutes “weird.”


Thanks to Trish Khoo and Anne-Marie Charrett for the impetus to publish this, and Joep Schuurkes for pointing out that my headline falls under Betteridge’s law.

Originally published on Medium.

Regression Testing Wisdom from Agile Testing Days 2017

At my current job in a regulated industry, our software goes through a validation testing period before it goes live. A build is cut. It gets deployed to the validation environment, integrated with our other products. Depending on the size of the product or service, the testers have a few days to a few weeks to generate documentation that describes the features and the environment in case of an audit. Ideally, we’re regression testing during this time. While we continue to look for unexpected behavior, finding anything out of the ordinary means a headache for more people than I care to admit.

Sanssouci Park in Potsdam, Germany

I left my team in the middle of a validation cycle to attend Agile Testing Days in Potsdam, Germany in November, so I had regression testing on the brain. Here’s some of what stuck with me.


Jez Humble: Humans should not be regression testing.

I have this in RED in my notes. His talk emphasized the speed of delivering customer value through continuous delivery. Enough said, I think.

Lisi Hocke: Get developers to automate regression.

Lisi had an inspired idea to get this going: invite your developers to help you manually regression test. After a few hours of repetitive tasks, they’ll get motivated to write something that makes both your lives easier.

Angie Jones: People don’t want to collaborate with you when you have twelve spreadsheets.

I get very worried when I see a regression checklist that involves multiple sheets within the same spreadsheet. I start counting up how long it might take to get through, assuming we don’t have to circle back and investigate, ask questions, or report bugs. I wonder if things have changed since items were added. I wonder who was consulted about which items were high-priority, or necessary. I wonder about what might be assumed about the environment setup, or left off entirely. Rather than building confidence or feeling at peace, multiple spreadsheets of regression scenarios raise my blood pressure.

Poornima Vijayashanker: Give new employees a particular task.

Guided or paired regression testing can be one way to get to know a product when you’re new to a company or a team.

Rob Lambert: Be brave. Integrity is important.

In the year I’ve been in my current role, I’ve only found a bug that required a deploy in the midst of a validation period once. Raising it was terrifying. This would be the first release of the service, so it was hard for me to tie the bug to customer value. I had to ask a developer to work under extreme time pressure, only to stick myself with the testing of it. Less time to explore meant less confidence after the fix than before. But as a tester, it’s my responsibility to provide relevant information in a timely manner, even if that requires more than my average amount of bravery.

Mark Winteringham: The product is large. I cannot observe every change.

I’ve worked with people who insisted I caught everything. I didn’t. No one could. Be terrified, but also liberated.

Liz Keogh: Tell people failure is expected.

Everyone else goes into validation testing expecting success. I go in expecting failure. What if I told people we would fail? What if we planned for it? What if our deadlines accounted for failure?

Originally published on Medium.

Choose One Thing: Copenhagen Context 2017

All of the talks I attended at Copenhagen Context included this message: change one thing. Try something, see what happens, and go from there. It’s good advice, but it can be hard when it feels like everything needs changing at once.

Copenhagen, in context

In a lightning talk, Huib Schoots identified one thing every tester can change: practice test reporting.

Keith Klain told us how to kick off a change at an organization: choose one thing, and stop doing it. Stop fear-based management; consider how people are motivated and compensated. Stop focusing on your process; consider the business value of what you’re building. Stop assuming every problem is a technical problem; look inward when the information you provide isn’t valued. Keith picked on test managers who didn’t do much hands-on testing in particular, which was interesting to hear in a room full of test managers I work for. I heard Keith give this talk a few months ago, but there was enough to take away that I still have two pages of notes (different from the last time) full of new ideas to consider.

Jyothi Rangaiah spoke about gathering requirements. Rather than wait for other roles to write a story, Jyothi suggested one thing testers can change: start suggesting requirements. Go to the development architecture meeting. Set up a three amigos meeting. Consider all the things testers can do that aren’t listed in their job description, and you’ll start looking to hire passionate people who can fulfill more than just the requirements.

This is what 'prepared' looks like.

I spoke about being an introvert. This seemed to be the biggest takeaway from the talk, which luckily is something you can change:

When Ash Winter’s devops team wanted to upgrade all of their products at once, Ash suggested they choose one product and upgrade that first. Rather than shouldering the burden of testing all the products, a small safe-to-fail experiment would give the team enough information to decide if they should proceed. Ash’s talk was about infrastructure testing, and he noted that testers tend to identify infrastructure problems at the wrong level: in the application. If he had to choose one thing to look at to diagnose issues better, it would be production logs.

Helena Jeret-Mäe spoke about observing your process. She suggested changing one thing and taking notes. She described Jerry Weinberg’s MOI(J) model for teamwork, where the J is for jiggling, or changing one thing. To improve your process, Helena recommended slowing down and stepping away from your default. By challenging your interpretations and considering other explanations, you can more effectively deal with difficult situations.

Nancy Kelln told a great story about “Nancy-driven development,” or when she tried to change a testing process all at once. When people gave her advice and templates to direct her, she realized she had no credibility and she needed to change one thing instead of changing all the things.

I’m so grateful for the time I got to spend learning from the speakers and attendees at Copenhagen Context. It was a pleasure to spend time with my tester friends from around the world. It’s a testament to the hard work of the organizers that there were no duds of presentations, and I missed many worthwhile talks.

Back at the office this week, I resolved to change one thing: I want to get better at sharing knowledge with my colleagues. What’s the one thing you’re going to change?

Statue in Nørrebro, Copenhagen

Originally published on Medium.

Doubt builds trust.

I don’t trust people who always say yes. “Yes, I can finish that today.” “Yes, I tested it and it works and I didn’t find any bugs.” “Yes, I captured what you said in these notes.” “Yes, I assert I have accomplished all the things I listed on my resume.”

If a person consistently lived up to their word, I would be inclined to trust them. But as soon as they weren’t able to finish a task on time, or bugs were found after they tested it, or notes were lacking crucial details, my trust would wane.

I trust the person who expresses doubt. “I’m going to look into this complexity today, and I believe I’m on track to finish in this sprint if I don’t have to rely on another team.” “I’ve tested it on this environment with this kind of data, but it’s possible that a different data set will uncover something else.” “Here’s what I wrote down, is there anything you said that I missed?” Those are people I can trust.

Doubt builds trust.

This is not revolutionary, even within software testing. Michael Bolton taught me that safety language helps accurately communicate risks. It shifts the burden of “Is it done?” or “Does it work?” to the person asking the question. Fiona Charles has explored how to grow our tolerance for uncertainty. She suggests using uncertainty as a place to begin a conversation. Paul Holland taught me to tell a story not just about the status of the product, but about how I tested and how deep the testing was. I come back to these ideas when I’m not sure the questions my team is asking about my testing are the right ones.

Lately, I’ve been interviewing candidates to join various teams around Medidata. I get resumes full of buzzwords. Each skill is listed with equal confidence and importance. But not all candidates I meet live up to how they appear on paper. Some can’t provide a specific example, or aren’t sure how to explore a product when we’re pair testing. (Frequent listeners of the Judge John Hodgman podcast will know that specificity is the soul of narrative.)

I’ve watched candidates confidently proclaim, “No, nothing will happen when I click there” about a website they haven’t touched before. This erodes my trust in their curiosity and their ability to accurately report a bug.

I can’t recommend hiring someone who, when I say “Huh, I’d expect something to happen when you click there; let’s try it,” isn’t curious enough about what would happen to click there. I don’t expect someone new to a website to know how it works. I only expect them to distinguish between what they’ve explored and what they haven’t. I want them to be able to articulate the circumstances they’ve just experienced, realize that their memory or perception might be fallible, and be clear about what they haven’t yet been able to establish.

It is terrifying to be asked a question in an interview and respond “I’m not sure,” “I have to think about that some more,” or “I haven’t encountered that situation before.” I’ve said all of these things in interviews. It’s possible that I’ve lost out on jobs to other candidates who exaggerated their abilities. But the jobs I’ve been offered have been from places that value uncertainty and understand the current state of my skills. Colleagues know they can come to me for real answers because I will tell them when I don’t know.

Who would you trust more, the person with this resume? Or this one?

redisant/flickr

Originally published on Medium.

Romanian Testing Conference 2017

I’d heard good things about the Romanian Testing Conference from previous years, so I was excited and honored to speak at this year’s event. I was happy to catch up with people I knew, put a face to people I’d only seen on the internet, and meet the testers who worked on software I’ve both valued and loathed.

View from my hotel room in Cluj.

Goranka Bjedov spoke about scaling up the hardware in Facebook’s data centers. Her childhood in a power-rationed Yugoslavia makes her an ideal candidate to consider the balance between power and performance and rein in the Silicon Valley dream of just adding another server. (In California and around the world, resources are finite!) On the infrastructure side, Goranka knows she has to plan for features to be massively popular, even when they don’t seem like they’ll take off. Photo tagging and slideshow retrospective videos surpassed her expectations.

The things she was able to tell us about were as interesting as the things she wasn’t. She could tell us the measurements Facebook took on IBM machines a few years ago, but not more recently. IBM was using the data to price the machines! She could tell us how many machines could fit on a rack before the spinning shook it too hard, but she couldn’t tell us how many racks there were in each data center. The data centers use evaporative cooling, literally pools of water on top of the racks, rather than air conditioning, to save money. I left thinking “that was all interesting, but when am I going to have input on a data center?” It wasn’t until I got back to the office that I remembered we also host our own data centers. It might be sooner than I thought.

Adam Knight directed his testing at a new company after giving the business side a risk questionnaire, inspired by one a financial advisor gave him. Adam considered examples where humans didn’t accurately evaluate risk. This one stood out to me: Grouping several problems together into one problem underestimates risk.

Integrity is the most important quality we can demonstrate as testers.

While it was hard to write unbiased questions, Adam found some interesting results from the questionnaire. Everyone agreed they needed specialist testers, but they didn’t have any on staff at the time! They’re hiring. So is Medidata. We build software for clinical trials at Medidata. I think we have a very low tolerance for bugs in production, but more flexibility around scope and deadlines. Adam got me curious about whether our business leaders would agree.

Someone described me as 'relaxed' here.

I spoke next! Sanda and Raluca kept me calm before I went on. Stefan managed to capture photographic evidence of my face looking normal. I’m grateful for everyone who came and sat scattered among the rows by themselves, or was eager to engage with me afterwards. Here’s a previous but similar performance if you’re curious. I appreciated all the follow-up questions in front of the group and individually afterwards. In future performances, I need to do a better job of noting that no one’s an introvert all the time, and the extrovert/introvert distinction isn’t going to be useful for everyone.

I tried to hide for a few minutes because facing the hundreds of people in the lunchroom was too overwhelming after just coming off stage. Unfortunately the interview crew found me! Note for next year: hide in my room and not in the lobby. (Also: find out why people were taking so many desserts.) I missed the beginning of Keith Klain’s talk because I eat a lot. (Get over it.) Apparently, it began with a joke about him being a money launderer that I both originated and plan to perpetuate. The hundreds of Romanians did not appear to be amused. According to sources familiar with the matter, they rarely do.

Keith noted that testers who can’t connect a bug to the business impact in production are doing it wrong. Publicly traded American companies have 10-K filings outlining their business risks. Both Adam and Keith wanted testers to ask the business what quality meant to them. You can become more efficient by doing less testing if you are transparent about what you’re testing and avoid repeating tests.

Testing that adds a lot of value is hard. Good testing raises more questions than it answers.

Carmen Sighiartau spoke about testing from a developer’s perspective. Until she worked with a tester that caught things she didn’t, she saw testing as a crappy job anyone could do. Like Keith, she saw a lot of redundant testing that didn’t provide value. Now, she wants to work with testers because she sees the unique perspective they can bring to the table. Her growth as a developer reminded me to care about my craft and think about my work without running on auto-pilot.

Nothing in life is mandatory.

Carmen encouraged us to interview candidates for how they work on a team. It’s easier to teach technical skills than social skills. Huib Schoots expanded on how to recognize professional testers on his blog. When I’m interviewing testers to join one of the teams I’m on, I try to ask one question with a straightforward answer I can verify (“What did you talk about with the previous interviewers?”) so I can make sure they describe a situation clearly. I also ask about teamwork or empathy (“Tell me about a time you had to deliver bad news.”)

Nicola Owen’s talk posed the question:

If you were leaving your job soon, would you test differently?

If you’re staying in your job, it can be acceptable for a bug report to need more clarification, or for a colleague to ask you where a document is. Once you’re gone, your abandoned colleagues can’t rely on those valuable conversations. Write things down and put them in an obvious place. Like Carmen, Nicola spoke of developers who’d only had negative experiences with testers. You can show your value by passing on your knowledge before you leave.

Harry Girlea spoke last to the entire crowd and really put us all to shame. I can’t imagine telling a bunch of adults about my favorite hobby when I was 12 years old. He handled the Q&A like a pro. When someone asked him how he balanced gaming with school, he said he would probably play too much, but his mom keeps him in check.

The organizers and volunteers at the conference made the logistics a breeze, which left lots of time for conferring. Thanks in particular to Rob Lambert, Ard Kramer, and Beren Van Daele for lending me their ears. I appreciate all the ideas and I hope I have the opportunity to join this wonderful group again in the future!

Originally published on Medium.