The Seven Testing Principles
There are seven principles that the testing community considers foundational. They show up on certification exams, but more importantly, they reflect hard-won lessons from decades of software projects going wrong.
Let's go through each one — not as exam material, but as real ideas you'll use every day.
1. Testing Shows the Presence of Defects, Not Their Absence
You can find 100 bugs and fix them all. That doesn't mean there are zero bugs left. Testing can prove software has problems. It can never prove it doesn't.
This matters because stakeholders sometimes ask "is it bug-free?" The honest answer is always "no, but here's what we know."
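The idea is easy to see in code. Below is a small sketch (the `average` function and its tests are hypothetical examples, not from any real suite): every test passes, yet a defect remains on an input nobody checked.

```python
def average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    return sum(numbers) / len(numbers)  # crashes on an empty list

def test_average_basic():
    assert average([2, 4, 6]) == 4

def test_average_single():
    assert average([5]) == 5

# Both tests pass -- but average([]) raises ZeroDivisionError.
# A green suite proves these specific checks pass, nothing more.
```

A 100% green run here would honestly report "the cases we thought of work," which is exactly the distinction Principle 1 draws.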
2. Exhaustive Testing is Impossible
A simple login form with a 50-character username and a 50-character password has more possible input combinations than there are atoms in the observable universe. You can't test them all.
Instead, you use risk-based testing — focus on the inputs most likely to fail or most costly if they do. This is where test design techniques like boundary values and equivalence partitioning come in (we'll cover those soon).
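To make the scale concrete, here's a back-of-the-envelope check plus a minimal boundary-value helper. This is a sketch: the 95-character printable alphabet and the `boundary_values` function are illustrative assumptions, not a standard API.

```python
# Why exhaustive testing is impossible: count the login-form inputs.
PRINTABLE = 95                                      # printable ASCII chars (assumption)
combinations = PRINTABLE ** 50 * PRINTABLE ** 50    # username x password
atoms_in_universe = 10 ** 80                        # common rough estimate
assert combinations > atoms_in_universe

# Instead of brute force, boundary-value analysis picks a handful of
# high-risk inputs around each edge of a valid range.
def boundary_values(min_len, max_len):
    """Lengths just inside and just outside the valid range."""
    return [min_len - 1, min_len, min_len + 1,
            max_len - 1, max_len, max_len + 1]

# e.g. a username limited to 1..50 characters:
print(boundary_values(1, 50))  # six test lengths instead of 95**50 inputs
```

Six carefully chosen lengths replace an astronomically large input space, which is the whole point of risk-based test design.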
3. Early Testing Saves Time and Money
A bug found during design might cost $1 to fix; the same bug found in production can cost $100 or more. The exact multiplier varies by study, but the pattern is well documented across the industry: the later you find a defect, the more it costs to fix.
"Shift-left" means moving testing earlier in the development process. Review requirements. Question designs. Write tests before code. The earlier you find a problem, the cheaper it is.
4. Defects Cluster Together
If you find three bugs in one module, there are probably more hiding nearby. This is the 80/20 rule applied to software: a small number of modules contain most of the defects.
Smart testers track where bugs appear and focus their effort there. If the checkout flow has been problematic, test it harder than the "About Us" page.
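Tracking clusters doesn't require fancy tooling. A minimal sketch (the module names and counts are made up for illustration):

```python
from collections import Counter

# One entry per bug report, tagged with the module it was found in.
bug_reports = [
    "checkout", "checkout", "checkout", "payments",
    "checkout", "payments", "search", "about",
]

clusters = Counter(bug_reports)
for module, count in clusters.most_common():
    print(f"{module}: {count} defects")
# "checkout" dominates -- by Principle 4, it deserves the most test effort.
```

Even this crude tally surfaces the 80/20 pattern and tells you where to aim the next testing session.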
5. The Pesticide Paradox
If you spray the same pesticide over and over, the surviving bugs become resistant. Same with testing: if you run the same tests every time, they'll stop finding new defects.
This is why you need to regularly review and update your test cases. Add new ones. Try different approaches. Rotate exploratory testing sessions across different features.
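One cheap way to rotate the "pesticide" is to supplement fixed cases with randomized inputs. A sketch, assuming a hypothetical `slugify` helper; dedicated property-based testing libraries such as Hypothesis do this far more thoroughly:

```python
import random

def slugify(text):
    """Lowercase a string and replace spaces with hyphens."""
    return text.strip().lower().replace(" ", "-")

def test_slugify_fixed():
    # The same static case every run -- it will stop finding new bugs.
    assert slugify("Hello World") == "hello-world"

def test_slugify_randomized():
    # Fresh inputs every run: the properties under test stay constant,
    # but the data varies, so new defects can still surface.
    for _ in range(100):
        words = ["".join(random.choices("abcxyz", k=random.randint(1, 8)))
                 for _ in range(random.randint(1, 4))]
        text = " ".join(words)
        assert " " not in slugify(text)             # no spaces survive
        assert slugify(text) == slugify(text.upper())  # case-insensitive
```

The fixed test is the pesticide that bugs have already adapted to; the randomized one keeps changing the formula.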
6. Testing is Context-Dependent
Testing a pacemaker is not the same as testing a social media app. Testing a startup MVP is not the same as testing banking software.
There are no universal "best practices" — only practices that are good in a specific context. The amount of testing, the techniques you use, and the risks you accept all depend on what you're testing and who it's for.
7. Absence-of-Errors is a Fallacy
Finding and fixing every bug means nothing if the software doesn't do what users actually need. A perfectly bug-free product that solves the wrong problem is still a failure.
Testing isn't just about finding defects. It's about answering the question: "Does this software serve its purpose?"
The key takeaway: These principles aren't rules to memorize — they're thinking tools. Apply them when making decisions about what to test, how much to test, and when to stop.
Your automated test suite passes 100% of its tests. Based on Principle 1, what can you conclude?
Principle 1 states that testing shows the presence of defects, not their absence. A passing test suite only means those specific checks passed — there could still be untested scenarios with bugs.
What is the Pesticide Paradox in testing?
Running the same tests repeatedly will eventually stop finding new defects, just like pests become resistant to the same pesticide. You must regularly update and diversify your test cases.
You're testing a medical device and a social media app. According to the seven principles, should you test them the same way?
Principle 6 (Testing is Context-Dependent) tells us there are no universal best practices. The risk, regulatory requirements, and consequences of failure differ dramatically between a medical device and a social media app.