Risk-Based Testing
You'll never have time to test everything. That's not a failure — it's reality. The question isn't "what should we test?" It's "what should we test first?"
Risk-based testing gives you the answer.
What Is Risk?
In testing, risk has two dimensions:
Likelihood: How probable is it that this feature has bugs? New code, complex logic, areas with a history of bugs, and code written under time pressure are all higher risk.
Impact: If there IS a bug, how bad is it? A bug in the payment system is catastrophic. A bug in the "About Us" page formatting is annoying but survivable.
Risk = Likelihood × Impact
A feature with high likelihood AND high impact gets tested first and most thoroughly. A feature with low likelihood AND low impact might get tested last — or not at all.
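The formula can be made concrete by mapping each dimension to an ordinal scale. A minimal sketch, where the 1–3 likelihood and 1–4 impact mappings are illustrative assumptions rather than an industry standard:

```python
# Sketch: Risk = Likelihood x Impact on ordinal scales.
# The numeric mappings below are illustrative assumptions.
LIKELIHOOD = {"Low": 1, "Medium": 2, "High": 3}
IMPACT = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

def risk_score(likelihood: str, impact: str) -> int:
    """Multiply the two ordinal ratings to get a comparable risk score."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

print(risk_score("Medium", "Critical"))  # payment processing -> 8
print(risk_score("Low", "Low"))          # footer links -> 1
```

The exact numbers matter less than the ranking they produce: a score of 8 versus 1 tells you where to spend your testing time first.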
Building a Risk Matrix
For each feature or area, rate both dimensions:
| Feature | Likelihood | Impact | Risk Level | Testing Priority |
|---|---|---|---|---|
| Payment processing | Medium | Critical | High | Extensive testing |
| User registration | High (new code) | High | High | Extensive testing |
| Search results | Low | Medium | Medium | Standard testing |
| Footer links | Low | Low | Low | Quick check |
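The matrix above is just data, so the prioritization can be automated. A sketch that ranks the same four features by descending risk score (the numeric ratings are assumed mappings of the Low/Medium/High/Critical labels, not a standard):

```python
# Sketch: rank the features from the matrix above by risk score.
# Numeric ratings (likelihood 1-3, impact 1-4) are illustrative assumptions.
features = [
    # (name, likelihood, impact)
    ("Payment processing", 2, 4),   # Medium, Critical
    ("User registration", 3, 3),    # High, High
    ("Search results", 1, 2),       # Low, Medium
    ("Footer links", 1, 1),         # Low, Low
]

# Sort highest risk first; this is your testing order.
ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)
for name, likelihood, impact in ranked:
    print(f"{name}: risk {likelihood * impact}")
```

Keeping the matrix in a sortable form also makes it easy to re-rank when a feature's likelihood changes, for example after a round of code changes.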
Risk Indicators
How do you assess likelihood? Look for these signals:
- New code: Never been tested in production. Higher risk.
- Changed code: Modifications can introduce regressions. Higher risk.
- Complex code: More paths = more places for bugs. Higher risk.
- Historical bugs: If it broke before, it'll break again. Higher risk.
- Integration points: Where two systems meet, bugs breed. Higher risk.
- Time pressure: Code written in a rush has more bugs. Higher risk.
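The checklist above can serve as a rough likelihood heuristic: count how many indicators apply and bucket the result. Treating each indicator as one point of equal weight, and the bucket thresholds themselves, are illustrative assumptions:

```python
# Sketch: checklist-based likelihood heuristic.
# Equal weighting and the bucket thresholds are illustrative assumptions.
INDICATORS = [
    "new_code", "changed_code", "complex_code",
    "historical_bugs", "integration_point", "time_pressure",
]

def likelihood_rating(present: set) -> str:
    """Count how many known risk indicators apply and bucket the result."""
    count = len(present & set(INDICATORS))
    if count >= 4:
        return "High"
    if count >= 2:
        return "Medium"
    return "Low"

print(likelihood_rating({"new_code", "time_pressure"}))  # -> Medium
```

In practice you would tune the weights (a bug-prone history usually counts for more than mild complexity), but even a crude count forces the conversation about which signals apply.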
Making the Hard Call
Risk-based testing means accepting that some things won't be tested. A tester who says "we should test everything" is not being thorough — they're being unrealistic.
The real skill is making informed trade-offs and communicating them clearly: "We've thoroughly tested payment and registration. We've done basic testing on search and profiles. We have not tested the admin dashboard because no changes were made. Here are the risks of shipping with this level of coverage."
That's a professional quality assessment. It's what separates testers from test executors.
The key takeaway: You can't test everything, so test what matters most. Risk-based testing is how you make that decision defensibly.
Knowledge Check

Q: A feature has LOW likelihood of bugs but CRITICAL impact if a bug exists. How should you prioritize testing it?

A: Risk = Likelihood × Impact. Even when likelihood is low, critical impact (e.g., payment processing, security) means the risk is still high. A rare but catastrophic bug in payments is more important than a frequent cosmetic bug in the footer.

Q: What is the risk formula used in risk-based testing?

A: Risk = Likelihood × Impact. Likelihood measures how probable it is that a feature has bugs (new code, complexity, history). Impact measures how bad the consequences are if a bug exists (financial loss, safety, user trust).

Q: Which of these is NOT a risk indicator that increases the likelihood of bugs?

A: Stability with no changes is actually a LOW risk indicator. High-risk indicators include time pressure, bug history, new or changed code, complexity, and integration points. A feature that has been unchanged and stable is among the lowest risk.