
Severity, Priority, and Risk

Posted: Dec 30, 2018 12:05 am
by Starbuck
I'm looking for a way to assess priority based on risk. Risk in this case is defined by complexity, difficulty, resource availability, and other factors. When we know the risk associated with a ticket, at least as judged by someone qualified to assign a value, we are in a better position to determine the priority. Similarly, when looking at the risk alongside the priority, the priority makes more sense; it's more justified. We can look at the risk and assess whether the priority is appropriate.

For example:
- A bug is given a high priority by the user. It's then assigned a high risk factor by the developer, perhaps because a lot of code needs to be changed and it will take months to test properly. So the manager gives the ticket a low internal priority. While the user priority is high, it's now more obvious why we aren't working on it compared to other high-priority tickets. The discrepancy becomes a matter of value to the company, the relationship with the client, etc.
- In another ticket, a feature request is low priority to the reporter, and has been assigned a very low risk. Maybe it's just a matter of clarifying a few words in the docs. It costs almost nothing to implement, so the manager says let's do it now, and gives it a high priority.
- Compare that to another feature request, also low priority to the user, to write a new chapter in the doc. That's high risk with many resources required over a long period of time. At least we have a metric to recognize that both of these tickets are low priority enhancements but one of them has a higher "near-term" priority or approval over the other.
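To picture how those three tickets would shake out, here's a rough sketch. This is purely illustrative: the ticket data, the 1-100 risk scale, and the scoring rule are all invented for the example, not anything MantisBT provides out of the box.

```python
# Hypothetical tickets modeled on the three examples above.
# user_priority: 1 (low) .. 5 (high), set by the reporter
# risk: 1 (trivial) .. 100 (months of work), set by the developer
tickets = [
    {"id": 101, "summary": "crash bug, big rewrite", "user_priority": 5, "risk": 90},
    {"id": 102, "summary": "clarify a few doc words", "user_priority": 1, "risk": 2},
    {"id": 103, "summary": "write new doc chapter",   "user_priority": 1, "risk": 70},
]

def internal_priority(ticket):
    # Invented rule: cheap work floats up, expensive work sinks down,
    # regardless of what the reporter asked for.
    return ticket["user_priority"] / ticket["risk"]

for t in sorted(tickets, key=internal_priority, reverse=True):
    print(t["id"], t["summary"], round(internal_priority(t), 3))
```

With that rule the doc clarification (102) jumps to the front, the high-risk crash fix (101) is visibly deferred despite its high user priority, and the doc chapter (103) waits, which is exactly the kind of justification I'm after.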

Why not use Severity?

That's probably the right answer. Severity is documented as a multiplier for developer performance. One could argue that the same metric used for performance should also help establish priority. The more difficult a task is, the more risk it presents relative to other tasks when prioritizing. In other words, with all other factors like benefit to the company being equal, many managers will choose the lowest-cost item.

The user can determine severity, as in "the program is crashing", but I personally don't think that's a good guide for a developer-reporting multiplier. The user might say the issue is a crash when it's not. The developer might downgrade a severity from Major to Minor - and if that took a lot of insight, maybe they should still get the multiplier for a major-complexity issue.

I use severity to indicate why a problem is an issue: this bug causes a crash; this bug is blocking other activity. But it's possible for a bug to be both Crashing AND Minor, or a Feature AND a Major Rewrite.


The user's priority is not the same as the product owner's, so sometimes we need two types of priority. Overwriting the user's priority is not appropriate: to the client an issue is still high priority even if the company says it's low.

Similarly, the user's determination of severity can be very wrong. The user might think some change is Minor when it's actually Major. So again, we might want two values for severity, one for the reporter and one for the handlers.

There is still this concept of risk that needs to be quantified. I think Bugzilla has this built-in.

So how do MantisBT admins configure this?

Do you add one or more custom fields that help to assess this concept of risk? Maybe a value from 1-100?
How about an enum or checkbox where the values are multipliers?
Do you do this outside of the tracker?
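To make the multiplier idea concrete, here's roughly what I have in mind. The field name, the enum values, and the multiplier numbers are all invented for illustration; this is not an existing MantisBT configuration.

```python
# Hypothetical mapping of an enum custom field ("Risk") to cost multipliers.
# The levels and numbers are made up; an admin would tune these.
RISK_MULTIPLIER = {
    "trivial": 0.25,
    "low":     0.5,
    "medium":  1.0,
    "high":    2.0,
    "extreme": 4.0,
}

def weighted_cost(base_estimate_days, risk_level):
    """Scale a raw effort estimate by the ticket's risk multiplier."""
    return base_estimate_days * RISK_MULTIPLIER[risk_level]

print(weighted_cost(10, "high"))  # 20.0
```

The same lookup could live in a custom field, a report query, or a spreadsheet outside the tracker; the point is just that the multiplier turns a raw estimate into a comparable cost figure for prioritization.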