How do I: write a good bug ticket

Recently, I was working with the business development team and some of our other marketing staff who were testing a new application we’re working with on the product side. As you would expect with any release, there were some problems that confused, annoyed or otherwise sidetracked people from being able to use the product effectively.

In the past, everyone throughout the regional office haphazardly emailed their reactions to me or one of my colleagues, and we would vet their responses and work with the Dev Team to tidy up the problems. This has rarely proven efficient, and even more rarely have the underlying flaws actually been addressed. Out of frustration, I once attempted to implement a system to streamline the workflow. It never really caught on.

For me as a Product Manager, though, it wasn’t always this way. Back when the bane of a developer’s existence was working with IE6, a colleague introduced me to a bug monitoring template that was highly effective. It worked for a number of reasons, but they all boil down to clarity.

Developers were no longer frustrated by being unable to replicate bugs that testers couldn’t remember how they produced in the first place. That let developers home in on the fix more quickly, and instead of dedicating resources to just fixing the thing called a bug, they could see patterns across use cases and resolve the actual underlying issue.

I immediately began drafting the template from what I remembered, tweaking and tailoring it to the team’s needs and experience here, briefly trained everyone on it, and finally tested its use against our most recent release.

Just under two weeks later, the results are fairly strong.

The whole process requires a couple of components to work, the first of which is always a team interested in a better process.

Producing buy-in from any team on any process is an entire post in and of itself, but my short version was this: I found one of the least willing testers and asked what the maximum effort was that they would put in, then talked to the Dev Team’s workflow manager to determine the minimum information necessary to complete a task effectively. Although I had a hunch on both (and in the end it was correct), the conversations went a long way toward producing a process its users felt they had contributed to, as opposed to having a finished one foisted on them.

The result was the template below. We made one version with examples and extra instructions, and a pared-down one for users already comfortable with the format.

Here’s the detailed form example:

      WHAT YOU WERE USING:
      Please tell us more about the device you are using. If you are unsure of your device specifications, on most devices you can go to “Settings” > “About” and copy the information about Model, Operating System, and internet connection.
      An example might look like:
      HTC 1X; Android 4.0.4; AT&T LTE 4G (wifi disabled)
      Please tell us more about the app you were running when this happened – this includes whether you were testing a WebApp through the device’s web browser. If you aren’t sure which version you were using, on many devices you can go to “Settings” > “Apps”, click on the app you used, and copy the name of the app and the version number.
      An example might look like:
      Native Internet App 4.1.22
      Was this your first session using the application ever? Y/N
      WHEN DID IT HAPPEN:
      Date and time of your experience so we can check the logs.
      WHAT HAPPENED:
      Please tell us, as best you can, what you were doing before your experience went awry and how it went wrong. Provide as much detail as you feel comfortable explaining.
      An example might look like:
      Opened the app. From the Home Screen tried to swipe the page as instructed. It didn’t swipe to anywhere any direction. Tapped the button at the bottom of the page. App crashed. Didn’t bother to reopen.
      Do you have a screenshot of the problem? Y/N
      A video? Y/N
      Were you able to make it happen multiple times? Y/N/DK
      EXPECTED CORRECT BEHAVIOR:
      What would you have liked to have happened instead? Be honest, there’s no wrong answer here.
      From the above example, this might look like:
      I expect when I swipe it goes somewhere obviously. And when I tap it goes to the page I was tapping to and not closing the app
      WHO ARE YOU – just in case we need to contact you to get the fix right!

You’ll notice a couple of key points that seemed to help foster adoption of the feedback form. The biggest is taking the pressure off the end user in the test. If you make testers feel like they are in the wrong, the way Steve Jobs told iPhone users their poor reception was a function of their own hands (and not a faulty antenna design), you discourage them from participating in the feedback loop. The last two questions in particular are aimed directly at that. Secondly, even though you’re trying to be thorough, also be brief so the form isn’t intimidating to complete. The doc has five main sections and only about ten pieces of vital information total, and we encourage users, when they can, to skip most of the doc and just attach a video showing us what they were doing. Surprisingly enough, we received quite a few private YouTube links and attached video files.
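If you wanted to capture submissions in code, the whole form reduces to a small, flat record. Here’s a minimal sketch in Python; the class and field names are my own illustration, not anything from our actual tooling:

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class BugReport:
          """One submission of the feedback form; all names are illustrative."""
          # WHAT YOU WERE USING
          device: str                   # e.g. "HTC 1X; Android 4.0.4; AT&T LTE 4G (wifi disabled)"
          app_version: str              # e.g. "Native Internet App 4.1.22"
          first_session: bool
          # WHEN DID IT HAPPEN
          occurred_at: str              # date and time, so the logs can be checked
          # WHAT HAPPENED
          description: str
          has_screenshot: bool
          has_video: bool
          reproducible: Optional[bool]  # None stands for "don't know"
          # EXPECTED CORRECT BEHAVIOR
          expected: str
          # WHO ARE YOU
          reporter: str                 # contact info, in case we need to follow up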

Next, we set up a system to collect this feedback. Again, which system you use will depend on the team’s comfort level and your company’s resources.

For this test we set up a form on Google Docs that collects the information into a spreadsheet, also within Google Docs. The sheet has two extra fields beyond what is in the form. The first is a verification field, so we can keep track of which bugs have been reviewed and whether someone from the Product Team or our QA Team was able to replicate the bug. The second is a reference field to our internal project management system, so we know a ticket was created and assigned to Dev for review.
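For anyone who wants to script against the sheet, a minimal sketch follows. It assumes the third-party gspread library, a service-account credentials file, and a spreadsheet named “Bug Feedback” whose two extra column headers are “Verified” and “Ticket Ref” – all of those names are placeholders for illustration, not our real setup:

      import gspread

      # Service-account credentials; the filename is a placeholder.
      gc = gspread.service_account(filename="credentials.json")
      ws = gc.open("Bug Feedback").sheet1  # assumed spreadsheet name

      # Each row is one form submission; "Verified" and "Ticket Ref" are the
      # two extra columns maintained by hand alongside the form's own fields.
      for i, row in enumerate(ws.get_all_records(), start=2):  # row 1 is headers
          if not row.get("Verified"):
              print(f"Row {i} still needs Product/QA review")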

As I noted, review of the document is split between Product and QA, and we do it daily. These daily reviews are mostly mechanical: transferring info from the doc to the workflow manager for assignment. Understanding the broader implications of the document, spotting trends within the feedback, and so on was tacked onto our Dev Team’s morning meeting schedule; we’ve taken part of one meeting each week so far to review the doc. It’s not a problem-solving or strategy meeting, though. Its only purpose is to spot trends, if any, and get everyone thinking more holistically based on user feedback.
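The mechanical half of that daily pass could equally be scripted. A hedged sketch, continuing from the ws sheet handle in the previous snippet; create_ticket is a hypothetical stand-in for whatever your project management system’s API exposes:

      TICKET_REF_COL = 12  # 1-based index of the "Ticket Ref" column (illustrative)

      def create_ticket(row: dict) -> str:
          """Stub standing in for the project management system's API."""
          return "PM-0000"  # a real version would return the new ticket's ID

      def needs_ticket(row: dict) -> bool:
          """Verified by Product/QA, but no Dev ticket referenced yet."""
          return bool(row.get("Verified")) and not row.get("Ticket Ref")

      for i, row in enumerate(ws.get_all_records(), start=2):  # row 1 is headers
          if needs_ticket(row):
              ws.update_cell(i, TICKET_REF_COL, create_ticket(row))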

Since we’re still iterating on the process, we haven’t formalized a strategy session for dealing with identified trends or aligning them to the product roadmap (for example, one common user issue involved a feature we had scheduled for a later iteration, with the upfront knowledge that the incomplete feature could be confusing). This is the plan for the next step of the experiment, and it probably has more to do with a variant of DSDM-style project feedback loop management than with collecting the right bug info anyhow.

The other aspect that has not been pulled in yet is aligning with Customer Care to collect their feedback. Currently the product is in beta, so it’s not widely distributed or being handled by our Care team. Furthermore, CC already has a short form for bug and feature-suggestion reporting that is somewhat functional for now. It’s not that I anticipate they would resist a new one; rather, whatever we design needs to combine their needs with the learnings from this experiment. We can probably create a hybrid of the two forms fairly quickly, along with a short training manual for coaxing this information out of participatory users during their care sessions. It’s just something we’ll undertake in the near future, not right now.

The two biggest takeaways I’ve found so far:

1. Developers were less frustrated by the feedback because they could tackle it head on more quickly. They had more information than ever to identify what was going on and, potentially, how to resolve it. In many ways they have less interaction with the testers, but they get much more out of the little they have, which saves them time and energy.

Are there still moments of, “well, I can’t make it happen again, it’s not the app’s fault”? Sure, there always will be, but far fewer than before.

2. Users are giving more feedback. Because they have structure, it is easier for them to say what’s wrong. One tester even said filling out the form feels like less effort than imagining what to write in an email, so he was inclined to keep submitting (it helps that the form remembers certain inputs, leaving the user only a couple of fields to fill out after the first time). Also, because they aren’t made to feel like idiots, they feel encouraged to share rather than discouraged by the process. I’d venture this is due, in part, to the fact that they aren’t being badgered for more info by socially awkward programmers after sending feedback.
