How to create a good bug report
One ticket, one bug
To track bugs efficiently, and to allow engineers to address them properly, include only one issue per ticket. You might be tempted to bundle several seemingly related issues into a single report to save time, screen captures, and so on, but this is not recommended. If an issue persists or reappears after a regression run, it quickly becomes hard to trace which part of the report is fixed and which part still requires work from the engineering team. If there is only one issue per ticket, that ticket can be quickly reassigned to the engineers for evaluation.
Give it a meaningful, descriptive name
When an engineer or a tester is presented with the list of bugs, the first thing they will look at is the name. It is important that the bug has a descriptive, meaningful name that summarizes exactly what you are observing. It is also conventional to include where you observed the issue (environment, platform, browser, device).
Examples of good naming:
Desktop - Chrome [latest version] - Homepage > Log in > User is not able to input their password
Mobile - Safari [ios11] - Checkout Flow - Paypal modal stretches out of view
Examples of poor naming:
The button is not working
Image is missing
When I click on the title the field disappears
The first group of titles gives anyone a quick idea of what is going on and where, so when an engineer has to look into the bug, or a tester has to retest it after it has been addressed, they immediately know what they will be working on.
Include steps to reproduce the issue
It is widely accepted to write anywhere between 1 and 10 steps; just make sure to keep them as accurate and brief as possible. Assume that the person reading the steps to reproduce has never tested in their life and needs as much detail as possible. Engineers and testers alike need to reproduce the noted behavior that originated the bug report (or attempt to reproduce it to confirm a fix).
Example of a good set of steps to reproduce:
On Windows, on Chrome v. 62:
1. Go to http://www.powr.io
2. On page load, click on the "Log In" button
3. When the login modal shows up, input your valid username and password
4. Click on the "Log in" button within the modal
Expected behavior: The user is redirected to their dashboard.
Current behavior: The user is not redirected, and the console logs an error every time the button is clicked.
Example of a poor set of steps to reproduce:
The user tries to log in but nothing happens. I tried three times.
Again, it is not hard to spot the differences between these examples, but it is a good exercise to notice how much more information the first report includes: the platform and browser, the location where the issue was noticed, the interactions the tester carried out to establish there was a bug, the fact that the tester used valid credentials, and the console error that confirmed there was definitely a problem. Finally, an expected behavior and a current behavior were listed, to give whoever reads the ticket a notion of the tester's criteria on the issue.
In the poor example, even though we know the tester was testing the login because it says so, we do not know whether the issue happens on the live site or on staging, on a mobile device or on a PC, whether the tester carried out the correct flow, or whether, for example, they were trying to log in by hitting the Enter key (which would probably constitute a valid bug of its own).
Always describe the steps as if you were going to forget them completely (you likely will if you are testing many apps), so that when you come back to the ticket you immediately know what to do to verify the issue was fixed.
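Well-written reproduction steps also translate almost directly into an automated regression check. Below is a minimal sketch using Playwright in TypeScript; the selectors, credentials, and dashboard URL pattern are hypothetical placeholders for illustration, not the real powr.io markup.

import { test, expect } from '@playwright/test';

// Executable version of the reproduction steps above.
test('user is redirected to their dashboard after logging in', async ({ page }) => {
  await page.goto('http://www.powr.io');                      // step 1: open the site
  await page.click('text=Log In');                            // step 2: open the login modal
  await page.fill('input[name="username"]', 'valid-user');    // step 3: valid credentials
  await page.fill('input[name="password"]', 'valid-password');
  await page.click('.modal >> text=Log in');                  // step 4: submit from the modal
  await expect(page).toHaveURL(/dashboard/);                  // expected behavior
});

A failing assertion here documents the current behavior just as the ticket does, and the same script can be rerun later to confirm the fix.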
Include as much support material as possible
Images, screen captures, screen recordings, and error logs are all perfect examples of support material that will help you convey the exact behavior you observed. Images and screen captures can be annotated to highlight exactly where you see a problem (like a typo, a broken element, or overflowing content), and screen recordings can document a series of steps so you have a record of how the flow went and where it encountered the issue. Finally, if you are able to obtain any error logs, these usually help the engineers narrow down what could be causing the observed bug.
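Error logs in particular can often be collected automatically. As a minimal sketch, again using Playwright (the URL is a placeholder and the reproduction steps are elided), browser console errors and uncaught page exceptions can be recorded during a run and pasted straight into the ticket:

import { test } from '@playwright/test';

test('collect console errors while reproducing the bug', async ({ page }) => {
  const errors: string[] = [];
  page.on('console', msg => {
    if (msg.type() === 'error') errors.push(msg.text());  // console.error output
  });
  page.on('pageerror', err => errors.push(err.message));  // uncaught exceptions

  await page.goto('http://www.powr.io');
  // ...carry out the steps to reproduce here...

  console.log(errors.join('\n'));  // attach this output to the bug report
});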