Exhaustive testing (testing every case from the input domain) is not practical in most software engineering projects: the input domain is effectively infinite, resources are limited, and the interactions between inputs are complex. We therefore make assumptions and apply test techniques based on those assumptions, reducing the cost, time, and effort of testing while still maintaining a high level of confidence that the software behaves as expected.
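To get a feel for the scale, here is a rough back-of-the-envelope sketch for a function with just two 32-bit integer parameters; the rate of one billion test executions per second is an optimistic assumption:
combinations = (2 ** 32) ** 2          # every pair of 32-bit integer values: 2**64
tests_per_second = 1_000_000_000       # optimistic assumption: one billion executions per second
seconds = combinations / tests_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{combinations} combinations, roughly {years:.0f} years to run them all")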
Assumption 0: Inputs are independent
The value of one input doesn’t depend on the value of another input.
Consider the following example:
a = math.sqrt(b)
my_fn(a, b)
Here, a is not chosen independently; it depends on b:
- If you choose b = 9, then a must be 3.
- You can't freely mix values for a and b.
If inputs are actually dependent, testing them as if they were independent may miss a few combinations, but this is usually not a big risk.
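To make the snippet above concrete, here is a small runnable sketch; my_fn and its precondition are made up for illustration, but they show why a and b cannot be chosen independently:
import math

def my_fn(a, b):
    # Hypothetical function that relies on the caller passing a == sqrt(b)
    assert a == math.sqrt(b), "precondition violated: a must equal sqrt(b)"
    return a + b

b = 9
a = math.sqrt(b)     # a depends on b, so here a must be 3.0
print(my_fn(a, b))   # 12.0
my_fn(5, 9)          # raises AssertionError: a = 5 was picked independently of b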
Assumption 1: Validity of inputs
Case 1: Normal test design
If invalid inputs cannot happen, we say the test design is normal. This usually happens when the type system or language constraints prevent invalid input. Consider the example below:
def concat(a: str, b: str) -> str:
return a + b
print(concat('a', 'b')) # ✅ Valid
- a and b must be strings.
- If you're using a language or environment with strong static typing and type checking, then passing, say, an int to a is not allowed: the invalid inputs are blocked at compile time or at runtime. For this function, we assume only valid string inputs will be passed.
This simplifies testing — we only need to test normal behaviour, not invalid inputs.
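A minimal sketch of what normal test design could look like for concat (the function is repeated here so the snippet runs on its own); only valid string inputs are exercised, because the type system is trusted to rule everything else out:
def concat(a: str, b: str) -> str:   # as defined above
    return a + b

def test_concat_normal():
    # Normal test design: valid string inputs only, including the empty string
    assert concat("a", "b") == "ab"
    assert concat("", "hello") == "hello"
    assert concat("hello", "") == "hello"

test_concat_normal()
print("normal tests passed")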
Case 2: Robust test design
If invalid inputs can still occur, then we need robust test design.
This means testing not only correct inputs, but also edge cases, invalid inputs, and error handling.
Consider the example below. This function returns True if the age is 18 or older, and False otherwise.
def is_adult(age: int) -> bool:
return age >= 18
Here, even though age is typed as an int, that doesn’t guarantee the input is:
- in a reasonable range (a negative value is still a valid int), or
- logically valid (nothing stops a caller from passing the string "twenty")
So, invalid inputs are possible, and we need to test how the function handles them. Robust test design checks how the program handles bad or unexpected inputs, helping catch more bugs.
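A sketch of what robust tests for is_adult might look like. The expected behaviour for out-of-range or non-integer input is not specified above, so the expectations below (silently returning False, raising TypeError) simply describe what the current implementation does and would need to be checked against the real specification:
def is_adult(age: int) -> bool:   # as defined above
    return age >= 18

def test_is_adult_robust():
    # Boundary values around the 18 threshold
    assert is_adult(18) is True
    assert is_adult(17) is False

    # Out-of-range input: the current code silently returns False;
    # the specification might instead require a ValueError here.
    assert is_adult(-5) is False

    # Logically invalid input: comparing str with int raises TypeError,
    # so the caller gets a crash rather than a controlled error.
    try:
        is_adult("twenty")
    except TypeError:
        pass
    else:
        raise AssertionError("expected TypeError for a non-integer age")

test_is_adult_robust()
print("robust tests passed")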
Assumption 2: Complex faults due to parameter interaction
Weak assumption (Only simple faults)
Do you assume that only one input (a single parameter) causes a bug at a time?
If yes → Weak Assumption. You only need to test:
- each parameter individually
- on the assumption that no unexpected bugs arise from combinations of parameters
The weak assumption makes testing easier and faster. However, you might miss bugs caused by interactions between inputs.
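A minimal sketch of one-at-a-time test selection under the weak assumption; the parameters and values are hypothetical. Each value is exercised at least once while the other parameters stay at a default:
# Hypothetical parameters and their candidate values
params = {
    "param_a": [1, 2, 3],
    "param_b": ["x", "y"],
    "param_c": [True, False],
}

defaults = {name: values[0] for name, values in params.items()}

test_cases = []
for name, values in params.items():
    for value in values:
        case = dict(defaults)
        case[name] = value       # vary only this parameter
        if case not in test_cases:
            test_cases.append(case)

for case in test_cases:
    print(case)
print(len(test_cases), "one-at-a-time cases, versus", 3 * 2 * 2, "full combinations")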
Strong assumption (Complex faults can occur)
Do you assume bugs might occur when two or more parameters interact?
If yes → Strong Assumption. You have to test combinations of parameter values. You need to do:
- Pairwise testing
- Combinatorial testing
- More test cases to cover different interactions between parameters
The strong assumption is more thorough. However, it is more time-consuming, so we look for efficient sets of combinations rather than testing every possible one.
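A brute-force sketch of combination testing using itertools.product with the same hypothetical parameters; a real pairwise (all-pairs) tool would pick a smaller covering set, but full enumeration shows what "testing interactions" means:
import itertools

params = {
    "param_a": [1, 2, 3],
    "param_b": ["x", "y"],
    "param_c": [True, False],
}

names = list(params)
# Strong assumption: a fault may only appear for particular combinations,
# so every combination of values becomes a test case.
combined_cases = [dict(zip(names, combo)) for combo in itertools.product(*params.values())]

for case in combined_cases:
    print(case)
print(len(combined_cases), "combined cases")   # 3 * 2 * 2 = 12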
Example
Imagine you are testing a function that handles pizza orders:
def order_pizza(size, crust, topping):
if size == "small" and crust == "stuffed":
return "Error: Stuffed crust not available for small size"
return f"{size} pizza with {crust} crust and {topping}"
Testing under the Weak Assumption
You would test inputs one at a time, assuming each alone could cause a bug:
- size = small, medium, large → ✅ Test passed
- crust = thin, thick, stuffed → ✅ Test passed
- topping = pepperoni, cheese → ✅ Test passed
Everything seems fine individually. But there is a bug that only shows up with certain combinations:
Testing under the Strong Assumption
When you try:
order_pizza("small", "stuffed", "pepperoni")
Now there is an error: "Error: Stuffed crust not available for small size". That bug won't show up if you only tested:
- all sizes with a normal crust
- all crusts with medium or large sizes
This is a complex fault due to the interaction between size and crust.
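A sketch that drives the order_pizza function above over every combination of the example values; it is this combination-level enumeration, rather than the one-at-a-time tests, that surfaces the small + stuffed fault:
import itertools

sizes = ["small", "medium", "large"]
crusts = ["thin", "thick", "stuffed"]
toppings = ["pepperoni", "cheese"]

# Assumes order_pizza from the example above is in scope
for size, crust, topping in itertools.product(sizes, crusts, toppings):
    result = order_pizza(size, crust, topping)
    if result.startswith("Error"):
        print(f"fault found: size={size}, crust={crust}, topping={topping} -> {result}")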