What is Prioritisation?

Product management works closely with engineering teams to build digital products that engage the world. A major part of this relationship is figuring out what needs to be built first and which features or functionality users will need to keep them engaged.

In product management, these decisions are made through prioritisation: determining not only what goes in first, but also what doesn’t. Several frameworks help us make these decisions.


Prioritisation Frameworks

1. The MoSCoW Method

MoSCoW analysis is one of the most widely used frameworks in Agile project management for prioritising features in product development.

It’s a particularly useful tool for communicating to stakeholders the reasons for choosing a particular feature set.

The name is an acronym of four prioritisation categories: Must have, Should have, Could have, and Won’t have.

Must have

‘Must have’ represents the features you absolutely should not launch without: the features that solve the core problem, without which the product cannot be released.

The reasons can be legal, safety-related, or business-critical. Any slippage here can seriously harm the overall product experience and, in turn, the business.

A feature should be categorised as Must have only if the product’s success depends on it.

Should have

‘Should have’ is the next category: features that can create a good impact on the end user but might not affect the overall success of the product.

Could have

The line between ‘Should have’ and ‘Could have’ features is very thin. ‘Could have’ features are mostly ones that can be included if additional resources or time are in hand; their impact on the end customer is minimal.

Won’t have

The features in this category are enhancements or add-ons that might improve the end-user experience but will not disappoint users if left out. They can be taken up in later versions of the product cycle.

In any case, this prioritisation technique helps stakeholders agree on what goes in now and what is deferred to the next release, giving greater clarity and helping manage expectations.
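To make the categories concrete, a backlog tagged with MoSCoW priorities could be grouped like this. This is a minimal sketch; the feature names below are purely illustrative and not from any real backlog:

```python
from collections import defaultdict

# MoSCoW categories in release-priority order.
PRIORITY = ["Must have", "Should have", "Could have", "Won't have"]

# Hypothetical backlog: (feature, category) pairs.
backlog = [
    ("User login",        "Must have"),
    ("Password reset",    "Must have"),
    ("Export to CSV",     "Should have"),
    ("Dark theme",        "Could have"),
    ("AI chat assistant", "Won't have"),
]

# Group features by category so stakeholders can see at a glance
# what ships now and what is deferred.
grouped = defaultdict(list)
for feature, category in backlog:
    grouped[category].append(feature)

for category in PRIORITY:
    print(f"{category}: {grouped[category]}")
```

The grouped view is what gets discussed with stakeholders: everything under Must have defines the minimum releasable product, and everything else becomes an explicit negotiation.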

2. RICE Scoring

Another key prioritisation methodology is the RICE scoring system, which again uses four factors to help assess priority: Reach, Impact, Confidence, and Effort.

Reach

Here the customer is in focus: estimate how many people the feature will reach if released, over an agreed time period (a day, week, month, or quarter). This number should be backed by data collected from various sources.

Impact

Here we analyse the impact on the customer, which can range from the customer being delighted to the customer getting annoyed and eventually abandoning the product.

While there is no scientific method to measure impact, one approach is to rate a feature from 1 to 5, where 5 is massive impact and 1 is minimal impact.

Confidence

Confidence is another parameter that cannot be measured scientifically. Sometimes a manager must go by instinct about certain aspects of product development, and confidence is recorded as a percentage. It is the prerogative of the manager and the team to raise the confidence percentage when they feel a feature must be included even without data to back it up. It’s a risk every manager must take at some point.

Generally, anything above 75% is considered a high confidence score, and anything below 50% is pretty much unqualified.

Effort

Here we estimate the effort that goes into UI/UX, system design, coding, testing, and deployment. The entire engineering team is involved in this part; all risks are considered and a figure is arrived at. This is also the phase where team size and work effort per member are calculated.

The more time allotted to a project, the higher the reach, impact, and confidence will need to be to make it worth the effort.

Calculating a RICE Score

We now have a number for each of the four parameters. To calculate the score, multiply Reach by Impact, then by Confidence, and divide by Effort.

The final score represents ‘impact per time’: the higher the number, the closer you are to high impact for low effort.
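The calculation above can be sketched in a few lines of Python. The feature names, reach numbers, and effort figures here are hypothetical, purely to show the arithmetic:

```python
# Hypothetical backlog items:
# reach = users affected per quarter, impact on a 1-5 scale,
# confidence as a fraction, effort in person-weeks.
features = {
    "dark_mode":       {"reach": 4000, "impact": 2, "confidence": 0.80, "effort": 3},
    "sso_login":       {"reach": 1500, "impact": 4, "confidence": 0.95, "effort": 8},
    "emoji_reactions": {"reach": 6000, "impact": 1, "confidence": 0.50, "effort": 2},
}

def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach * Impact * Confidence) / Effort."""
    return (reach * impact * confidence) / effort

# Rank the backlog from highest to lowest score.
ranked = sorted(features.items(),
                key=lambda item: rice_score(**item[1]),
                reverse=True)

for name, f in ranked:
    print(f"{name}: {rice_score(**f):.1f}")
```

Note how the division by effort works: a feature with huge reach but heavy engineering cost can still rank below a cheap, well-understood one.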

3. Kano Model

The Kano model classifies features by how they affect customer satisfaction, sorting them into three main buckets: Delighters, Performance features, and Basic features.

Delighters:

The features that customers feel are a great value add: features that are more than expected. These will make the product stand out from the competition.

Performance features:

Performance features are the ones that make the product work better: things users feel will enhance their experience.

Basic features:

These are features that the user is expecting from the product – a solution to their problem. A user might otherwise not use the product without these features.

The main idea behind the Kano model is that the better your features cover these three buckets, the higher your level of customer satisfaction will be.

To find out how customers value certain features, we use questionnaires asking how their experience of the product would change with or without them.
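In practice, each feature is typically probed with a pair of questions: a functional one (“How would you feel if the product had this feature?”) and a dysfunctional one (“How would you feel if it did not?”). The sketch below maps an answer pair to one of the buckets. This is a deliberately condensed version of the full Kano evaluation table, and the answer labels are assumptions for illustration:

```python
def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano bucket.

    Simplified mapping (the real Kano evaluation table has more
    answer options and extra buckets such as Reverse/Questionable):
    - Performance: liked when present, disliked when absent.
    - Delighter:   liked when present, not missed when absent.
    - Basic:       merely expected when present, disliked when absent.
    """
    if functional == "like" and dysfunctional == "dislike":
        return "Performance"
    if functional == "like":
        return "Delighter"
    if dysfunctional == "dislike":
        return "Basic"
    return "Indifferent"

# A respondent who likes having the feature and dislikes its
# absence sees it as a Performance feature.
print(classify("like", "dislike"))    # Performance
print(classify("like", "neutral"))    # Delighter
print(classify("expect", "dislike"))  # Basic
```

Aggregating these classifications across many respondents shows which bucket a feature most often lands in, which is what feeds the prioritisation decision.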

Reassessment is an essential part of the product life cycle. As time goes by and technology advances, features that were Delighters at some point might become Basic features. It is therefore imperative to carry out regular product assessments and competitive analysis to keep the product relevant in the market.

Which Model Should I Use?

Choosing a prioritisation framework is tough! Customer-centric decisions are the primary focus of the Kano model, but it can take time to carry out all the questionnaires and surveys needed for accurate data.

RICE is another popular scoring system that takes a calculated approach to prioritisation, although certain parameters, such as Confidence, are still based on instinct.

MoSCoW focuses on what matters to both customers and stakeholders, which is particularly useful for product managers who struggle with managing stakeholder expectations. The system can be easily explained to clients and non-technical stakeholders, but the temptation to put every feature into the ‘Must have’ and ‘Should have’ buckets always lingers.

Of course, these aren’t the only three methods out there. We can choose the one that suits our needs, or come up with something new.

How do we gather necessary data?

  • Website polls
  • Email lists
  • Testing the product on users
  • Social networks
  • Google Marketing Platform’s consumer surveys
  • Industry reports
  • Competition analysis


Look out for our New Product Development white-paper where we will use some of these concepts.