You have identified a list of risks on your project. Now what? Without a way to compare them against each other, every risk feels equally urgent and nothing gets prioritised. Scoring is the step that transforms a list of worries into an ordered set of priorities your team can act on.
Risk scoring uses two dimensions: probability (how likely is this to happen?) and impact (how bad would it be if it did?). Multiply them together and you get a single number that tells you where each risk sits relative to every other risk on your register.
This guide explains exactly how to assess each dimension, avoid common scoring pitfalls, and use the resulting scores to drive better decisions.
The two dimensions
Probability: how likely is it?
Probability measures the chance that a risk will actually materialise during your project. Score it on a 1 to 5 scale:
| Score | Label | What it means in practice |
|---|---|---|
| 1 | Rare | No precedent; would require extraordinary circumstances |
| 2 | Unlikely | Conceivable but not expected based on experience |
| 3 | Possible | Has happened on similar projects or could reasonably occur |
| 4 | Likely | More probable than not; would not be surprising |
| 5 | Almost Certain | Expected to happen; the team would be surprised if it did not |
When assessing probability, draw on three sources: historical data from past projects (the most reliable), expert judgement from team members with relevant experience, and external information (industry statistics, supplier track records, weather data).
Resist the temptation to score everything as a 3. Teams default to the middle when they are unsure, which clusters all risks together and defeats the purpose of scoring. Push yourself: is it really "Possible," or is it actually "Unlikely" or "Likely"? The difference matters for prioritisation.
Impact: how bad would it be?
Impact measures the severity of consequences if the risk happens. Score it on the same 1 to 5 scale, but calibrate the definitions to your specific project:
| Score | Label | Schedule | Cost (% of budget) | Quality/Safety |
|---|---|---|---|---|
| 1 | Negligible | Days | Under 1% | Barely noticeable |
| 2 | Minor | 1 to 2 weeks | 1 to 3% | Minor rework |
| 3 | Moderate | 2 to 6 weeks | 3 to 7% | Noticeable quality reduction |
| 4 | Major | 1 to 3 months | 7 to 15% | Significant rework or injury |
| 5 | Catastrophic | 3+ months or cancellation | 15%+ | Project failure or fatality |
When a risk affects multiple dimensions (which is common), score against the dimension where the impact is highest. A supplier going bankrupt might be Minor on quality but Major on schedule. Score it as 4.
The key is to define these thresholds before you start scoring. What counts as "Moderate" on a four-week sprint is very different from "Moderate" on a two-year construction programme. Get the team aligned on the scale before you apply it.
The formula: probability × impact
Multiply the two scores and you get a risk score from 1 to 25. This is deliberately simple. There are more sophisticated quantitative methods (Monte Carlo simulation, expected monetary value analysis), but for most project risk management the 5×5 qualitative approach provides the right balance of rigour and practicality.
For a detailed visual walkthrough of how these scores map to the full 5×5 risk matrix and its colour-coded levels, see our dedicated matrix guide.
The score ranges:
| Score | Level | Response |
|---|---|---|
| 1 to 5 | Low | Monitor periodically |
| 6 to 11 | Medium | Document a response plan; review fortnightly |
| 12 to 19 | High | Active mitigation plan with assigned owner; review weekly |
| 20 to 25 | Critical | Immediate action; escalate to senior leadership |
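If you keep your register in a script or a lightweight tool rather than a spreadsheet, the calculation and the banding fit in a few lines. Here is a minimal sketch in Python; the function names are ours, and the thresholds simply mirror the table above.

```python
def risk_score(probability: int, impact: int) -> int:
    """Multiply probability and impact (each 1 to 5) to get a score from 1 to 25."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("probability and impact must each be between 1 and 5")
    return probability * impact


def risk_level(score: int) -> str:
    """Map a score onto the Low / Medium / High / Critical bands above."""
    if score <= 5:
        return "Low"
    if score <= 11:
        return "Medium"
    if score <= 19:
        return "High"
    return "Critical"


print(risk_score(4, 4), risk_level(16))  # 16 High
```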
Worked examples
Let's score three risks from different project types to see how the framework works in practice.
Example 1: Key developer resigns during migration
You are midway through a database migration. One of the two developers who understands the legacy system has been interviewing at other companies.
Probability: 4 (Likely). There are concrete signals that this person may leave.
Impact: 4 (Major). The migration would stall for weeks while someone gets up to speed.
Score: 4 × 4 = 16 (High).
This warrants an immediate mitigation plan: documenting the migration process, cross-training the second developer, and possibly accelerating the most critical migration tasks while both developers are still available.
Example 2: Rain delays outdoor event setup
You are organising a corporate event in a marquee during October. Setup requires two dry days.
Probability: 4 (Likely). October weather is unpredictable and rain is common.
Impact: 2 (Minor). You can adjust the setup schedule by starting a day earlier, and the marquee has a weatherproof installation process.
Score: 4 × 2 = 8 (Medium).
Despite the high probability, the low impact keeps this in the Medium range. The mitigation is straightforward: build a weather buffer into the setup schedule and confirm the supplier's wet-weather process.
Example 3: Regulatory change invalidates product design
You are developing a medical device and there are rumours of updated EU regulations that could affect your product classification.
Probability: 2 (Unlikely). Regulatory changes of this magnitude are infrequent and slow-moving.
Impact: 5 (Catastrophic). If the classification changes, the entire approval process restarts, adding 12+ months to the timeline.
Score: 2 × 5 = 10 (Medium).
This is a case where the number alone does not tell the full story. A score of 10 is Medium, but the catastrophic potential impact means you should still monitor this closely and have a contingency plan. Some teams apply a rule: any risk with an impact of 5 gets treated as at least High, regardless of probability.
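If your team adopts that rule, it is a small override on top of the banding logic. A self-contained sketch (again, the names are illustrative, not any particular tool's API):

```python
def risk_level_with_override(probability: int, impact: int) -> str:
    """Band probability x impact, but treat any impact-5 risk as at least High."""
    score = probability * impact
    if score <= 5:
        level = "Low"
    elif score <= 11:
        level = "Medium"
    elif score <= 19:
        level = "High"
    else:
        level = "Critical"
    # Override: catastrophic impact is never left at Low or Medium
    if impact == 5 and level in ("Low", "Medium"):
        return "High"
    return level


print(risk_level_with_override(2, 5))  # "High", even though 2 x 5 = 10 bands as Medium
```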
Scoring as a team
Risk scoring should never be one person working alone with a spreadsheet. The most valuable part of the scoring process is the conversation it generates.
When you sit down with your team and someone scores a risk as Probability 2 while another person scores it as Probability 4, that disagreement is gold. It usually means one person has information the other does not. The site manager knows the ground conditions are worse than the drawings suggest. The tech lead knows the API documentation is outdated. The procurement manager knows the supplier has been missing deadlines on other projects.
A practical approach for team scoring sessions:
- Present each risk and give everyone 30 seconds to write down their probability and impact scores independently (this prevents anchoring to the first person who speaks).
- Reveal the scores simultaneously.
- Where scores differ by more than one point, discuss. The goal is not consensus for its own sake but shared understanding.
- Record the agreed score and a brief note on the reasoning.
This process typically takes 2 to 3 minutes per risk. For a register of 15 risks, that is under an hour.
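If you collect the independent votes digitally (a shared form works well), a tiny helper can tell you which risks actually need the discussion. A sketch, assuming you store one vote per person; the role names are made up:

```python
def needs_discussion(votes: dict[str, int], threshold: int = 1) -> bool:
    """True when individual scores differ by more than `threshold` points."""
    return max(votes.values()) - min(votes.values()) > threshold


probability_votes = {"site manager": 4, "tech lead": 2, "project manager": 3}
if needs_discussion(probability_votes):
    print("Scores diverge by more than one point - discuss before recording a value")
```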
Residual scoring: measuring the effect of your mitigations
Your initial score (the inherent risk score) represents the risk before any mitigations. After you put treatment plans in place, you can rescore to get the residual risk score.
For the developer resignation example above:
Inherent: 4 × 4 = 16 (High).
After cross-training and documentation: Probability stays at 4 (the person may still leave), but Impact drops to 2 (Minor, because the team can now continue without them).
Residual: 4 × 2 = 8 (Medium).
Tracking both scores does two useful things. It shows stakeholders the value of your mitigation investment (the risk dropped from High to Medium because of the actions you took). And it shows you where mitigations are not working: if the residual score is still High, your treatment plan needs rethinking.
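If you keep inherent and residual scores on one record, the before-and-after comparison is trivial to report. A minimal sketch, with field names of our own choosing:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Risk:
    name: str
    inherent_probability: int
    inherent_impact: int
    residual_probability: Optional[int] = None  # unset until mitigations are scored
    residual_impact: Optional[int] = None

    def inherent_score(self) -> int:
        return self.inherent_probability * self.inherent_impact

    def residual_score(self) -> Optional[int]:
        if self.residual_probability is None or self.residual_impact is None:
            return None
        return self.residual_probability * self.residual_impact


risk = Risk("Key developer resigns", 4, 4, residual_probability=4, residual_impact=2)
print(risk.inherent_score(), risk.residual_score())  # 16 8
```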
Common scoring mistakes
Letting impact bias probability. "This would be really bad, so it must be likely." These are separate assessments. A meteorite hitting your office would be catastrophic but extremely rare. Keep the two dimensions independent.
Scoring based on gut feel alone. Where data exists, use it. If your last three projects all experienced supplier delays, that risk is not "Possible" but "Likely" or "Almost Certain." Historical evidence beats intuition.
Never updating scores. Scores are snapshots, not permanent labels. A risk scored as Unlikely at project start might become Likely three months in based on new information. Rescore at every review.
Ignoring compound risks. Sometimes two Medium risks together create a much bigger problem than either one alone. A supplier delay (Medium) combined with a design change (Medium) could cascade into a project-threatening situation. Watch for risks that interact.
Precision theatre. Spending twenty minutes debating whether a risk is a 3 or a 4 is wasted effort. The scale is intentionally coarse. If you cannot decide between two adjacent scores, pick the higher one and move on. The action you take will be the same either way.
Score risks without the spreadsheet formulas. In Riskjar, set probability and impact and the score calculates and colour-codes automatically. Your heat map updates in real time. Try it free.