Surviving the Conference Social Scene

I’ve attended and spoken at conferences. I’ve participated in great sessions and made meaningful connections. And I’ve had terrible times, sitting through useless sessions and meeting no one. I empathized with this Twitter thread about going to a conference where you know no one.

I don’t think there’s a right answer for everyone or every conference. Here are some things that I’ve found helpful in surviving really intense days full of strangers:

  1. Listen to these smarty pants.
    Other people have a lot more public speaking and networking under their belts than I have. Benefit from it. Rob Lambert wrote The Blazingly Simple Guide to Surviving a Conference. It’s directed at speakers, but it’s useful for any conference attendee. How To Win Friends and Influence People has more advice about connecting with people quickly without seeming sleazy.
  2. Connect with people ahead of time.
    Let people know you’re going to the conference. Tell people which sessions you’re speaking at or attending so they can find you. Write to people who’ve said they’re going. Tell them you’re excited to see or meet them. Find out which speakers they think are valuable so you end up in better sessions.
  3. Take a day off before the conference. 
    Get enough sleep. Get the lay of the land. Give yourself a chance to distance yourself from the minutiae of your work inbox. Don’t talk to anyone for a day so you’re itching to jump into conversations when you arrive. Think about what you’re struggling with at work and how the people you’re about to meet might help you solve it. I spent one conference asking almost everyone I met “Can you think of a way to have a computer tell if a live audio stream is playing a radio show or an ad?” No one could, but even that was helpful information.
  4. Get adopted.
    This has been the luckiest and most fruitful thing for me at conferences: people adopt me. People who saw me ask a good question. People who noticed I was rolling my eyes. People who saw a speaker be mean to me. People who spent a whole day with me in a tutorial. (I’m looking at you Claire Moss, Diana Wendruff, Martin Hynie, Lars Sjödahl.) They found me during breaks and introduced me to the brilliant people they knew.
  5. Find people at a meal. 
    People respond to flattery. If you approach a speaker at lunch and say something like “Hi, I’m Elizabeth. I was just in your session about [whatever] and [this thing] really resonated with me. May I sit here?” most speakers will say yes. They’re already seated and eating, and you’ve made it clear that you were listening and that what they said had an impact. You can also join a group that already seems to know each other. Try just listening. You may find out which sessions they’re attending or which blogs they’re reading, or build up the courage to ask which bar they’re heading to later.
  6. Use Twitter to find after-hours conferring.
    When the conference is in a city or spread over a wider location, physically finding people can be harder. Check out the Twitter hashtag for the conference. Find someone you’d like to speak with again and check out their Tweets & Replies tab. This works especially well at Twitter-heavy conferences like TestBash and CAST. (Note: As a young lady, I don’t think I’m perceived as a creepy stalker. I’m not sure this would be the case for everyone.)
  7. Write about the people you met.
    Let people know that you enjoyed connecting with them, or follow up with people you couldn’t find again. Tweet at them, write them an email, or add them on LinkedIn. They may be a person you can reach out to with a question or a job listing. If nothing else, writing to them will help you both remember each other’s names.

I’m still learning how to get the most out of a conference experience. Sometimes I fail to follow the advice I’ve outlined above, but I forgive myself and keep trying.

Originally published on Medium.

Connections from TestBash Philadelphia

TestBash Philadelphia began on November 10th. On November 9th, I woke up and realized the election I witnessed the night before was not a nightmare or a figment of my imagination. I wondered how I could speak on a topic as trivial as introversion when there was important work to be done. I wondered how anyone could listen to what I had to say, or care.

It turned out TestBash was the reset I needed. Smart, supportive, open-minded people listened to me. I encountered new ideas that made me stop and think. Over dinner, Abby Bangser suggested setting up my deployments before setting up my development environment. I put down my fork and relived my past nine months on a project with an alternate ending. I had many moments like this.

I loved how similar ideas serendipitously popped up in different talks.

I’m grateful I got to connect with old friends, people I’d admired from afar (including Melissa Eaden and Stephen Janaway), and people I’d just met. I met fewer people at TestBash Philadelphia than I have at conferences in the past. Perhaps it’s because I announced to the entire room that I hate small talk and then went on a podcast to reinforce the point.

Thanks to Rosie Sherry, Richard Bradshaw, FringeArts, and everyone who helped make TestBash Philadelphia a success. All the presentations are available on the Ministry of Testing Dojo. I appreciate everyone who gave me feedback on my talk. I’m looking for resources about extroverts interacting with introverts before I present it again. If you come across something useful, pass it along.

We can’t change the outcome of the election. We can listen to each other. We can effect incremental change with receptive audiences. We can consider where there might be errors in our thinking. We can improve. We can amplify the voices we admire. We can move forward.

Originally published on Medium.

Some Takeaways from Let’s Test 2016

I had the great honor and privilege to speak at Let’s Test in Sweden this year. In the time since the conference, I’ve had a chance to reflect on the presentations and see what’s resonated with me back at work.

The Only Thing it Really Depends on is Money — Scott Barber

I wish all of my colleagues from the advertising agency where I work could have been in the room for this discussion. Every contract our company signs, every feature we develop, every time we choose a severity level for a bug, our decision comes down to the value we’re providing a client. With this in mind, software testers should aim to:

  • Deliver what the client wanted, not what they asked for.
  • Provide informative testing metrics. When a client asks for a metric that isn’t useful, provide them with a different metric that still provides enough information to evaluate progress and make decisions. One useful question is “What shouldn’t I do if I were to demo this to the client today?”
  • See what people are using the most in the product. Make those things better.
  • Bundle the cost of testing with the cost of the product. Nobody wants to pay separately for testing.
  • Remember that it doesn’t matter if a product is fast if nobody buys it.

But the one thing I think about every time that I write a bug or ping a colleague is this:

Testing as an isolated activity has no inherent value. Nobody pays for testing. People pay for the information testing provides.

It’s useless if I find a bug and I only tell the other testers, or the developers don’t understand the bug I wrote, or the severity isn’t properly evaluated. The bug fix won’t find its way back to the client to make the product better. To keep the most important issues top of mind, schedule a periodic bug triaging meeting with the product owner.

These other thoughts that came up during the discussion weren’t directly related to the topic of the presentation:

  • Use the few extra minutes after scrum to define some terms with your product owner.
  • For job satisfaction, you want to be a trusted advisor. You want important people asking you interesting questions.

Context Eats Process for Breakfast — Patrick Prill

Patrick walked us through a couple of exercises about describing your morning breakfast routine:

  1. Describe the perfect cup of coffee.
  2. Draw how you make toast.

Like some other participants, I had no idea what made a perfect cup of coffee. My perfect cup of coffee is tea. Here’s my version of toast:

  1. Break off a piece of baguette.
  2. Put it in the toaster oven for five minutes, until it’s a darker color.
  3. Serve on a plate.
  4. Butter is too hard to draw.

I drew my toast with a few steps involving a baguette and a toaster oven. Most others had more steps that involved money, stores, butter, sliced bread, and a conventional toaster.

Patrick used our descriptions and drawings as a springboard to distinguish tacit (unstated, assumed) from explicit (stated, known) knowledge. By asking questions to clarify the assignment and our audience, we could have made the task more explicit and come closer to the tacit expectations.

Just as with breakfast, testers need to be able to question assumptions and accurately describe their processes in the course of their work, such as when they:

  • Describe steps to reproduce a bug.
  • Write a training manual or other technical documentation.
  • Code a test script.

Providing too little context confuses the audience. Providing too much context distracts them. Finding a balance is key.

The Human Factor: Exploring Social Engineering — Dan Billing

Dan’s presentation changed how I want to test and generally live my life. I thought I was comfortable with how much personal information I shared, both on the internet and in person. Dan had us try to find out as much as we could about someone else in the room by Googling them and showering them with flattery. I was able to get a lot out of people, probably enough information to guess a password.

Before the end of the presentation, I was completely paranoid about what every salesperson, person around the office, or person near my calendar or phone could be learning about me. But Dan shared a deeply personal story with us. He trusted us. So after destroying my faith in humanity, Dan helped restore it. Thanks Dan!

Dan shared his slides and more information about social engineering here.

Gaining Consciousness — Fiona Charles

Fiona’s closing keynote centered around a paired exercise. One person was the business owner, explaining their context to the tester. The other person was a tester, asking questions about the context. After a few minutes of brainstorming, the pair compared notes.

Ideally, the business owner would have given the tester all the right details to answer the tester’s questions. But no pair had a complete matching set of information. Overwhelmingly:

  • Testers did not ask about the business model.
  • Business owners didn’t provide information about the users or politics of the people on the project.

So testers and the business owners rarely start on the same page. To get more in sync, Fiona recommended having a periodic sanity check of your test strategy with the business owner. We need a space to say “Here’s what I’m thinking about. What do you think?”

Other presentations

I’m still thinking and talking about all the presentations I attended. Thanks to Rob Sabourin, Anne DeSimone, Bolette Stubbe Teglbjærg, Andreas Cederholm, Christopher Lebond, Helena Jeret-Mäe, Erik Brickarp, Rob Bowyer, Martin Hynie, and Damian Synadinos. I’m enormously grateful to all the presenters at Let’s Test, to the conference organizers for giving me the opportunity to speak, and to everyone I talked to over those three days. Learning more about your journeys is what keeps me in this profession.

Originally published on Medium.

CAST 2015: Should Testers Code?

The debate-style “Should Testers Code?” presentation at CAST 2015 was one of the best-attended talks of the conference. And with good reason: this question is everywhere. But after more than an hour of Henrik Andersson defending testers who don’t code and Jeff Morgan defending testers who do code, I was convinced. Everyone should stop asking this question.

Yes. Testers should know how to code.

At CAST, Jeff Morgan boiled it down to this: testers who can code are more flexible. There is a larger, more diverse market for their skills. Elisabeth Hendrickson found this by counting job descriptions. Testers who can code serve their teams in more ways and jump in to solve problems and ask questions that only a developer can. Rob Lambert argues that you need coding or some other niche to get a testing job. Paul Gerrard notes that adding more skills to your repertoire can only add to your knowledge, not subtract. A skilled tester can put on many hats — user, business analyst, client — and adding developer to that mix can help.

Testers who can code are treated with respect by their developers.

Marlena Compton worries that individuals with less power have been pushed into testing rather than development. Michael Bolton notes that a tester’s empathy grows when they are able to gain a greater insight into the software environment and the problems developers face.

Testers are constantly asking themselves if the task they’re performing is providing a higher value than some other task. A tester who can code has one more tool in their tool belt to help eliminate bottlenecks along the way. When a tester has the engineering skills to craft an automated suite of checks, they’re often able to provide more value to their team than a tester checking the same boxes manually.

For all these reasons, they make more money.

More interesting work, more respect, and more money: What more do you want?

Originally published at on September 23, 2015 and duplicated on Medium.

CAST 2015: How I’m Using Reason And Argument in My Testing

Scott Allman and Thomas Vaniotis condensed an introductory logic course into an hour-long presentation at CAST this year. Their focus on deductive reasoning was a great template for how to write a solid bug report or how to find the crux of an issue when talking to a colleague. Scott and Thomas’s statements are in bold, and my takeaways for how I’m applying them to my work follow below:

Assume your opponent is attempting to construct a valid argument.
Assume the developer read the ticket, implemented the feature in a way that made sense to them, and pushed the code to the testing environment. What could you be missing? Have you downloaded the most recent build or cleared your cache? Do you need to be logged in? Are you on the right page?

When you’re trying to prove a premise is invalid, provide evidence.
If a developer tells you a feature works on their machine, attach a screenshot or a log file of an instance when the feature did not work on your machine. Include relevant environment information and steps to reproduce to determine which premises you don’t share.

What kind of argument would someone construct to disagree with you?
If you’re writing a bug that says something’s taking too long, say how long it should take and why. If you’re writing a bug that says something is the wrong color, cite the style guide or use the WebAIM contrast checker to prove the item is not accessible to color blind people.
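The math behind tools like the WebAIM checker is the WCAG 2 contrast formula, which you can also compute yourself. Here’s a minimal Python sketch (my own illustration, not part of the presentation); WCAG AA requires a ratio of at least 4.5:1 for normal-size text:

```python
def srgb_to_linear(channel):
    """Convert an 8-bit sRGB channel (0-255) to its linear value, per WCAG 2."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (r, g, b) tuple of 8-bit values."""
    r, g, b = (srgb_to_linear(channel) for channel in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """WCAG contrast ratio, always >= 1. Black on white gives the maximum, 21:1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)
```

A ratio below 4.5 is concrete evidence to cite in the bug report, instead of arguing that a color merely looks wrong.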

Use as few premises as possible so your argument and conclusion shine through.
Look at the steps to reproduce you’ve included in your bug report. Is there anything you can remove? Are there any crucial steps your developer may not have taken that you did?

I’ve never seen such an engaging presentation where the presenters were reading off pieces of paper. The mind map below includes more of what I enjoyed about the presentation and about testing software.

In the open season after the session, Scott and Thomas went into other types of reasoning (inductive, for example) that testers use when investigating software. Most follow-up questions were about soft skills. Scott and Thomas suggested that examples would be better received than lingo-heavy accusations.

Originally published at on August 24, 2015 and duplicated on Medium.

STAREast 2014: Days 3 & 4

The second half of STAREast was in classic conference style: keynotes, lectures with brief Q&As, and scurvy-inducing food. Thomas Cagley gave me great tools for personality analysis. Jason Arbon gave me his book to read and the most actionable ideas for breaking our app and prioritizing our bugs. Florin Ursu gave me the most advice about how to document my testing, integrate myself into my team’s process, and have my team take ownership of the quality of our products. Playing dice games and other puzzles with James Bach, Michael Bolton, and Griffin Jones got me thinking about my strengths and weaknesses as a tester.

Below are my key takeaways from all the sessions I attended.

Randy Rice’s keynote about key testing concepts
It doesn’t matter how good your tests are if you’re testing the wrong version. Take the time to clear your cache, install the new build, or go to the developer’s branch.

Zeger Van Hese’s keynote about testing focus and distraction
Making lots of decisions drains mental energy and causes the next decision to be more difficult. Rather than list all the things you are going to do this month, list the things you’re not going to do so you can forget about them.

Bob Galen’s session about Agile testing
Calling a project “Agile” to compensate for a lack of requirements will not make it go faster. Make sure you’re building the right thing by frequently communicating with the stakeholders during the development process. Leave enough time in the process for refactoring and bug fixing.

Erik van Veenendaal’s session about risk-based testing
Features people use all the time are more important than rarely-used features. Figuring out where bugs are more likely to arise can help focus your testing, be it inexperienced team members, distributed team members, or new technology. Not all bugs are equally risky; you can start to order them by their relative importance. Items identified as high impact and high likelihood of occurring must be tested and should include a definition of “done.”
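One simple way to order bugs by relative importance is an impact × likelihood score. A sketch in Python, with made-up feature names and ratings (not from the session):

```python
# Hypothetical features rated for impact and likelihood of failure (1-5 each).
features = [
    {"name": "checkout", "impact": 5, "likelihood": 4},
    {"name": "search", "impact": 4, "likelihood": 4},
    {"name": "profile photo upload", "impact": 2, "likelihood": 3},
]

# Sort by risk score so the riskiest items get tested first.
by_risk = sorted(features, key=lambda f: f["impact"] * f["likelihood"], reverse=True)
```

The scores are crude, but they force the conversation about which features must be tested and which can wait.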

Thomas Cagley’s session about cognitive biases
The zero-risk bias causes us to reduce small risks down to zero rather than mitigating the highest risks first. The illusion of control causes us to overestimate our influence in external events. The illusion of transparency causes us to overestimate how well others understand us, especially on long-standing teams. Noting the biases you and your team exhibit can help you avoid being manipulated by them.

Theresa Lanowitz’s keynote about extreme automation
Do public relations for your testing within your organization. Take advantage of the debacle to gain parity between development and testing. Provide a high-quality experience on the customer’s desired platform so they don’t leave for a competitor.

Jason Arbon’s session about the secrets of mobile app testing
The best way to crash an app is to open and close it a bunch of times, change its orientation, and remove permissions from the phone’s settings instead of the app’s settings. Change your description in the App Store to tell your users when you’re fixing bugs. Look at App Store reviews to decide what you should be testing today. Make sure your crash SDKs are installed from the first build. There is no correlation between the number of shipments and the quality of the app. Teams that actually fix their bugs have higher quality apps.

Florin Ursu’s session about lightweight testing documentation
It’s hard to find where you’ve missed something when there’s a lot of documentation. Presenting a mind map to stakeholders helps them take ownership of the process and keeps you from being the gatekeeper to production. Mind maps help focus on the big picture so you don’t get bogged down in individual test cases. Add attachments, progress clocks, happy/sad faces, links to tickets, or notes for a more complete document of your testing.

Lloyd Roden’s session about challenges in gathering requirements
Bad requirements can be non-existent, in flux, too few, too numerous, too vague, or too detailed. Error states and performance expectations don’t need specific requirements because you know it when you see it. Too much detail in requirements squashes creativity and leads to useless testing. Find out what features are used the most often and test there the most.

Kamini Dandapani’s session about testing on production at eBay
Load balancers, caching, and monitoring tools will be more robust on production. Manufacturing data in test environments doesn’t account for legacy requirements. Track your progress by measuring the time from when a bug is first reported to when a fix is available on production.
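That metric is easy to compute once you record both dates; a minimal sketch (my own, with invented dates):

```python
from datetime import date

def days_to_fix(reported, fixed_on_production):
    """Days from a bug's first report to a fix being live on production."""
    return (fixed_on_production - reported).days

# Hypothetical example: reported May 1st, fix deployed May 15th.
turnaround = days_to_fix(date(2014, 5, 1), date(2014, 5, 15))
```

Tracking this number over time shows whether your team is getting faster or slower at turning bug reports into shipped fixes.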

Originally published at on September 9, 2014 and duplicated on Medium.

STAREast 2014: Days 1 & 2

Michael Bolton and James Bach are friends with the same people: Cem Kaner, Doug Hoffman, Paul Holland, and James’s brother Jon. Their full-day tutorials at STAREast 2014 covered some of the same material I was exposed to at CAST 2013 and the BBST Foundations course I just finished. Luckily, Michael and James were good enough speakers that the techniques still felt fresh and motivating. I would have tweeted the following if this conference provided WiFi beyond the registration desk.

On prioritizing and finding bugs:

  • Your clients are your boss, the development team, and the customer, in that order.
  • Usability problems are also testability problems because they allow bugs to hide.
  • Create a test plan before looking at the specification to generate more ideas.
  • Describing your entire test plan allows others to participate in your thinking.
  • The slowest test you can do is the test you don’t need to do.
  • Testers should look for value in the software, not bugs.
  • Memorizing types of heuristics and techniques will help you internalize them.
  • Automated tests are like a check engine light; they only tell you where more investigation is needed.
  • No amount of testing proves that the product always works.

On reporting bugs:

  • Report bugs concisely.
  • Treat boundary requirements as rumors.
  • Separate observation from inference with safety language (i.e. “It appears to be broken” instead of “It’s broken”).
  • Some things are so important or so embedded in the culture (tacit knowledge) that we don’t need to write them down (explicit knowledge).
  • Testers should report bugs and issues. Bugs threaten the value of a product to someone who matters. Issues threaten the value of the testing or business.
  • Testing reports should include the status of the product, how you tested it, and how good that testing was.

Important questions to ask without accusing:

  • Can I ask you lots of questions?
  • Is there any more information?
  • Do you have any particular concerns?
  • Is there a problem here?
  • Can I help you with tasks you don’t like so you have more time to answer my questions?

On the testing profession:

  • Testing is learning about a product through experimentation. And creating the conditions to make that happen. And building credibility with developers.
  • Testing stops when our client has enough information to make a shipping decision.
  • You will never have a complete specification.
  • In Agile, everyone should be willing to help each other answer questions, not abandon their roles or expertise.
  • It is emotionally draining to constantly be fighting with someone who can make your life difficult.
  • Testing is the opposite of Hollywood: The older you are, the better it gets.
  • Testers can’t fear doubt or complexity.
  • A tester’s main job is not to be fooled.
  • Bureaucracy is what people do when everyone has forgotten why they’re doing it.
  • Our job is to tell developers their babies are ugly.
  • Music and testing are a performance. Sheet music : music :: documents : testing.
  • Apple makes you forget its products don’t work. Microsoft keeps reminding you.

Originally published at on May 7, 2014 and duplicated on Medium.

Writing Clear and Effective Bug Reports

You’re looking at an existing product and you think you’ve found a bug. You want to get the bug fixed, so you collect the necessary information and get it into the hands of someone who can do something about it. Now you need a clear and effective bug report.

Clear Bug Reports

A clear bug report includes (1) what the feature is, (2) how you’re expecting the feature to work, and (3) what the feature is doing at the moment.

1. What the feature is: For the front end of the website, the admin view of the CMS, or an API point, a URL can be sufficient. If it’s a certain part of a page, call out the title of the section and include a screenshot. If it’s a visual thing rather than a data thing, try it on more than one browser or mobile device. Specify the environment where you saw the error.

2. How you’re expecting the feature to work: This turns out to be where miscommunication most often occurs. It’s also the easiest section to leave out of a ticket. It will feel like you’re writing exposition that everyone already knows. Unfortunately, you can’t read the minds of designers, developers, or users, nor can they read yours. Developers probably won’t remember what you had in mind if you only mentioned it at a scrum. Write your expectations down so everyone can refer back to them and debate them together.

3. What the feature is doing at the moment: Screenshots are great. If a hard error is returned, include the stack trace or a link to the stack trace. If it’s working in one place and not in another, take a screenshot of the two URLs side-by-side. If your screenshot includes features you’re not addressing, take a smaller view or draw an arrow to the point you’re talking about. For transitions and scrolling issues, record a screen capture or call a second team member over to make sure the way you’re describing the experience makes sense. If it takes more than visiting the URL or mobile app screen to see what you’re talking about, include how you’re able to reproduce it and how often it occurs if it doesn’t happen every time.
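The three parts above map naturally onto a template. As a sketch (the field labels and example values here are my own invention, not a standard):

```python
def bug_description(feature, expected, actual):
    """Combine the three parts of a clear bug report into one description."""
    return (
        f"What the feature is: {feature}\n"
        f"Expected behavior: {expected}\n"
        f"Actual behavior: {actual}"
    )

report = bug_description(
    feature="Login form at /login (Chrome, staging environment)",
    expected="Submitting valid credentials redirects to the dashboard",
    actual="Submitting valid credentials returns a 500 error (stack trace attached)",
)
```

If you can’t fill in one of the three parts, that’s a sign the report isn’t ready to file yet.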

These guidelines apply to any user, internal or external, trying to communicate with the development team. If you’re a software tester, all of the above is contained in the description and attachments of the ticket. Filling out the rest of the fields makes the difference between a ticket getting the appropriate attention and languishing in the backlog.

Effective Bug Reports

Before you decide to create a new ticket in JIRA, see what other tickets already exist. Find other tickets about the same feature. Find the ticket where the same thing happened but in a different environment. Find the ticket that was closed because no one could reproduce it. Find the ticket that’s already open that a developer wrote and you didn’t understand until you saw the bug yourself. Collect all these tickets and link them to the new ticket you wrote. Explain how your issue is different from these existing issues so no one closes your ticket as a duplicate.

In the JIRA at my current job, we use a limited number of Projects (Web, Mobile Apps, and a few particular internal departments) and Issue Types (Epics to group tickets within projects, Bugs for most other things). Choosing the Issue Type as New Feature or Task can give a project manager a better idea of how long a ticket will take, but since Bug is the default we end up using that most often. We completely ignore the Due Date and Component fields. We only use Affects Version and Fix Version for the mobile apps, where there are clearly delineated versions.

The Summary is best when it addresses (1), (2), and (3) as described above. Given the implicit character limit of the width of an Agile board (70–80 characters), it makes more sense to address (1) and either (2), (3), or the specific environment where you were able to reproduce the bug. If possible, include a salient keyword so it’s easy to refer to the bug at meetings and search for it in JIRA.

Set the Priority as the lowest possible option unless a project manager or a business owner suggests otherwise. Software testers can help everyone understand the pros and cons of prioritizing bugs, but we’re not the only ones making those decisions. Bother the project managers, not the developers, if your bug isn’t getting the attention it deserves.

If there’s a project currently in development and it’s clear a particular developer’s work caused the bug, make that developer the Assignee. If it’s not clear which developer caused the bug, make the lead developer the Assignee and add suspect developers as Watchers. If it’s not clear how the feature was intended to work or how the bug should be fixed, assign the ticket to the lead UX designer. If it’s not clear if the project is being worked on, assign the ticket to the project manager. Add yourself as a Watcher to all tickets so JIRA will email you when someone goes rogue from the workflow.

While Environment seems like it should be used for the browser version, OS version, or model of phone, that information gets lost unless it’s included in both the Summary and the Description. I use Environment for an important but too often forgotten part of the software development lifecycle: listing the people who were affected by the bug so you know who to email when you fix it. It’s in a location on the ticket that’s both easy for project managers to find and easy for developers to ignore.

See the Clear Bug Reports section at the top for what you need to include in the Description and Attachments. If there’s more than one attachment, change the filename to what differentiates them from each other (before and after, browser version, date, production vs. test server) before you upload them.

Add the ticket to the current Sprint if the priority has been set higher than the default and the project manager requests it. If there’s an upcoming sprint with a theme that the bug falls into, put it there. Otherwise leave it in the backlog for the project and product managers to prioritize. Include the ticket in an Epic if the Project designation is too broad or the ticket will get lost in the backlog without it.

JIRA automatically assigns a unique ticket number to each new ticket using the Project slug and the next available integer; ticket URLs follow the pattern {permalink_domain}/browse/{project_slug}-{integer}. When tickets are moved between projects, old URLs redirect to new ones.

Originally published at on February 24, 2014 and duplicated on Medium.

EuroStar Webinar: Delivering Unwelcome Messages

A big part of my job as Quality Assurance Manager is delivering bad news. The EuroStar webinar “I think we have an issue — Delivering Unwelcome Messages” hosted by Fiona Charles on Tuesday reinforced the communication conventions we should practice when delivering bad news but don’t always consciously consider. We have to (1) deliver the bad news (2) at the right time (3) to the right person/people (4) with facts rather than emotion.

1. Deliver the bad news.

Fiona began with a counterexample of what happens when the first step — actually delivering the bad news — is ignored.

My favorite slide

When the king of Sweden was building a fancy ship and nobody dared to tell him that the basic safety test failed (the top-heavy design wouldn’t allow for people to run back and forth without tipping), the ship was launched anyway and sank almost immediately.

2. Deliver the bad news at the right time.

The right time to deliver bad news is at a meeting with your bad news on the agenda. If it’s really bad news, make it the only thing on the agenda. And schedule the meeting sooner rather than later. Make sure the recipients of the bad news can hear you. Get a conference room, or at least cut the multi-tasking.

People don’t like hearing bad news.

3. Deliver the bad news to the right person.

Next, deliver the bad news to the decision makers. As the software tester, you are not the only decision maker. If you don’t interact with the decision makers, speak to someone who can. Confirm the workflow before problems arise so you know who to call when the time comes. If the bad news is big, bring an ally to your meeting. Consider whether the recipient of the bad news will believe you. A person you don’t interact with every day, or haven’t met in person, would rather ignore your bad news than deal with it.

4. Deliver facts, not emotions.

Present the recipient of the bad news with facts so they believe you. Stick to those facts rather than giving your opinion. If you do give your opinion, make it clear that it’s separate from the facts. If you need to gather more facts, give them a “let me get back to you on that.” Explain the problem without assigning blame to anyone. Other people have feelings. Do not embarrass them or they won’t want to listen when you have more bad news.

In summary, be sure to (1) deliver the bad news (2) at the right time (3) to the right person/people (4) with facts rather than emotion. If the recipient believes your information is valuable, you’ve done your duty.

Many thanks to Fiona Charles for hosting the webinar. Check out the EuroStarConferences website for the complete slides and video archive of the webinar.

Originally published at on February 17, 2014 and duplicated on Medium.

My Biggest JIRA Pet Peeve

The never-ending to do list in my department is managed by the web app JIRA, built by Atlassian. Each JIRA ticket is one feature we’re building, one question we need to look into, or one problem we need to solve. As the one QA manager supporting a thirteen-person development, project management, and design team along with a six-person data news team and over one hundred internal producers, I write a lot of tickets. Every ticket is assigned to a person. When a ticket is created or edited, the Reporter (most often me) and the Assignee get an email. Once you add Watchers to the ticket, they also get an email.

I want to add Watchers when I create a ticket in JIRA. I want my project managers to get an email when I find a bug so I don’t have to chat them the URL. I want my UX designers to get an email so I make sure the behavior I’m expecting is the behavior they’re expecting. I want to copy more than one developer when I don’t know who created the bug. I want to copy the lead developer so he knows how his developers are spending their time. I don’t want to create a ticket, click through to the ticket before the green notification disappears from the top of my window, add the Watchers, and then change the Priority or the Summary so all relevant parties know this new ticket exists.

Others agree with me. There are 232 Watchers and 417 Votes on this Atlassian ticket requesting the feature. Every six months, Atlassian reevaluates the tickets in the backlog and lets us know that this ticket won’t be addressed in the next twelve months. As a Watcher, I get an email about it.

Originally published at on February 10, 2014 and duplicated on Medium.