Every defective unit has a price tag, and it grows the further that defect travels. The Cost of Poor Quality (COPQ) captures everything you spend because something wasn't built right the first time. For most electronics manufacturers, COPQ runs between 15% and 25% of revenue. Most of it is invisible.
## What COPQ Includes
COPQ splits into two categories: internal failures (caught before shipping) and external failures (caught by the customer).
### Internal Failure Costs
These happen inside your factory. They're painful but controllable.
| Cost Type | Example | Typical Impact |
|---|---|---|
| Scrap | PCB with tombstoned components sent to recycling | Full material + labor cost lost |
| Rework | Re-soldering a BGA after X-ray finds voids | Labor + equipment time + retest |
| Retesting | Unit fails functional test, gets retested after fix | Station time blocked, throughput drops |
| Downgrading | Unit doesn't meet Grade A spec, sold as Grade B | Revenue loss per unit |
| Failure analysis | Engineering time to diagnose root cause | Hours of skilled labor diverted |
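The categories in the table above can be rolled into a single per-run figure. A minimal sketch of that arithmetic, where every rate and dollar amount is a made-up illustration rather than a benchmark:

```python
# Illustrative internal-failure cost model; all rates and costs are hypothetical.
def internal_failure_cost(units_built, scrap_rate, rework_rate,
                          unit_material_cost, rework_labor_cost):
    """Rough internal COPQ for one production run (scrap + rework only)."""
    scrap_cost = units_built * scrap_rate * unit_material_cost
    rework_cost = units_built * rework_rate * rework_labor_cost
    return scrap_cost + rework_cost

# 10,000-unit run: 2% scrapped at $40 material each, 5% reworked at $15 labor each
cost = internal_failure_cost(10_000, 0.02, 0.05, 40.0, 15.0)
print(f"${cost:,.0f}")  # $15,500
```

Even this crude model makes the lever visible: halving the scrap rate saves more than halving the rework rate, because scrap forfeits the full material cost.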
### External Failure Costs
These happen after the product ships. They're where the real damage lives.
| Cost Type | Example | Typical Impact |
|---|---|---|
| Warranty claims | Customer returns a dead power supply | Replacement unit + shipping + processing |
| Field service | Technician dispatched to replace a failed module | Travel + labor + downtime penalty |
| Recalls | Batch of units with defective firmware update | Logistics + PR + regulatory reporting |
| Customer churn | Customer switches to competitor after repeated issues | Lifetime revenue lost |
| Brand damage | Negative reviews and lost referrals | Hard to quantify, slow to recover |
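Summing both tables gives COPQ as a share of revenue, the 15-25% figure from the introduction. A hypothetical annual roll-up (every number below is invented for illustration):

```python
# Hypothetical annual quality-cost roll-up; every figure is illustrative.
internal = {"scrap": 180_000, "rework": 120_000, "retest": 45_000}
external = {"warranty": 310_000, "field_service": 95_000, "churn_estimate": 250_000}

revenue = 5_000_000
copq = sum(internal.values()) + sum(external.values())
copq_pct = 100 * copq / revenue
print(f"COPQ: ${copq:,} ({copq_pct:.1f}% of revenue)")  # COPQ: $1,000,000 (20.0% of revenue)
```

Note that in this sketch the external bucket is nearly twice the internal one, which is typical of the asymmetry the next section quantifies.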
## The 1-10-100 Rule
This rule, which originated in the quality management literature, assigns a cost multiplier to the stage at which a defect is caught:
- $1 for prevention. Design the test, validate the process, set the limits. A well-written OpenHTF test with proper measurement limits costs almost nothing per unit to run.
- $10 for detection. Catch the defect on the line. You've already spent material and labor, but you can rework or scrap before shipping. This is where test stations earn their keep.
- $100 for failure. The defect reaches the customer. Now you're paying for returns, field service, warranty processing, and the trust you can't invoice for.
The ratios vary by industry. In medical devices, external failure costs can be 1000x prevention costs when you factor in regulatory consequences. In consumer electronics, it's closer to the 1-10-100 model. The principle is always the same: catch it earlier, pay less.
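Applied to a concrete batch, the multipliers show why pushing detection upstream pays. A back-of-envelope sketch using the 1-10-100 ratios; the run size, defect rate, and $1 base cost are assumptions for illustration:

```python
# Expected cost of one defect class at each stage, per the 1-10-100 rule.
# Assumed: 10,000-unit run, 1% defect rate, $1 base prevention cost per defect.
STAGE_MULTIPLIER = {"prevention": 1, "detection": 10, "field_failure": 100}

def defect_cost(units, defect_rate, base_cost, stage):
    return units * defect_rate * base_cost * STAGE_MULTIPLIER[stage]

for stage in STAGE_MULTIPLIER:
    print(f"{stage}: ${defect_cost(10_000, 0.01, 1.0, stage):,.0f}")
# prevention: $100
# detection: $1,000
# field_failure: $10,000
```

The same 100 defects cost $100 to design out, $1,000 to catch on the line, and $10,000 once they reach customers.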
## Where Test Coverage Fits
Test coverage is your primary tool for converting $100 problems into $10 problems, and $10 problems into $1 investments.
### Coverage Gaps Cost Money
Consider a Bluetooth speaker manufacturer running only a final functional test. They check audio output and pairing. What they don't catch:
- Cold solder joints that pass at room temperature but fail after thermal cycling
- Battery cells with slightly low capacity that die early in the field
- Antenna impedance mismatch that causes range issues in certain orientations
Each gap is a field failure waiting to happen. Adding targeted tests at earlier stages closes these gaps.
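The battery gap, for instance, closes with a capacity screen at cell incoming inspection, before the cell is ever soldered into a unit. A minimal stand-in for that limit check, assuming a 500 mAh nominal cell and a 95% acceptance floor (both numbers are hypothetical, not from any datasheet):

```python
# Hypothetical incoming-cell screen: reject cells whose measured capacity
# falls below 95% of a 500 mAh nominal before they reach assembly.
NOMINAL_MAH = 500.0
MIN_RATIO = 0.95

def cell_passes(measured_mah: float) -> bool:
    return measured_mah >= NOMINAL_MAH * MIN_RATIO

print(cell_passes(490.0))  # True: within 5% of nominal
print(cell_passes(470.0))  # False: would likely die early in the field
```

In production this check would live as a measurement with `in_range` limits in a test like the ones below, so every rejected cell is logged with its actual reading.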
### Structuring Tests to Catch Defects Early
A well-structured test strategy pushes detection upstream. Here's what that looks like for an IoT sensor module:
```python
# Incoming inspection: catch component issues before assembly
import openhtf as htf
from tofupilot.openhtf import TofuPilot


@htf.measures(
    htf.Measurement("crystal_frequency")
        .in_range(minimum=31.990, maximum=32.010),
    htf.Measurement("sensor_ic_id_register")
        .equals(0xB5),
    htf.Measurement("flash_chip_capacity_mb")
        .equals(16),
)
def incoming_component_check(test):
    test.measurements.crystal_frequency = 32.001
    test.measurements.sensor_ic_id_register = 0xB5
    test.measurements.flash_chip_capacity_mb = 16


def main():
    test = htf.Test(incoming_component_check)
    with TofuPilot(test):
        test.execute(test_start=lambda: "IOT-2024-4401")


if __name__ == "__main__":
    main()
```

```python
# Post-assembly test: validate solder quality and basic function
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot


@htf.measures(
    htf.Measurement("supply_current_sleep")
        .in_range(maximum=0.000050)
        .with_units(units.AMPERE),
    htf.Measurement("supply_current_active")
        .in_range(minimum=0.015, maximum=0.035)
        .with_units(units.AMPERE),
    htf.Measurement("i2c_sensor_ack")
        .equals(True),
    htf.Measurement("spi_flash_read_write_ok")
        .equals(True),
)
def post_assembly_validation(test):
    test.measurements.supply_current_sleep = 0.000028
    test.measurements.supply_current_active = 0.0243
    test.measurements.i2c_sensor_ack = True
    test.measurements.spi_flash_read_write_ok = True


def main():
    test = htf.Test(post_assembly_validation)
    with TofuPilot(test):
        test.execute(test_start=lambda: "IOT-2024-4401")


if __name__ == "__main__":
    main()
```

```python
# Final calibration and functional test
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot


@htf.measures(
    htf.Measurement("temperature_accuracy")
        .in_range(minimum=-0.5, maximum=0.5)
        .with_units(units.DEGREE_CELSIUS),
    htf.Measurement("humidity_accuracy_pct")
        .in_range(minimum=-3.0, maximum=3.0),
    htf.Measurement("ble_rssi_at_1m")
        .in_range(minimum=-55, maximum=-35),
    htf.Measurement("battery_voltage")
        .in_range(minimum=3.0, maximum=4.2)
        .with_units(units.VOLT),
    htf.Measurement("ota_update_success")
        .equals(True),
)
def final_calibration_test(test):
    test.measurements.temperature_accuracy = 0.12
    test.measurements.humidity_accuracy_pct = -1.4
    test.measurements.ble_rssi_at_1m = -42
    test.measurements.battery_voltage = 3.85
    test.measurements.ota_update_success = True


def main():
    test = htf.Test(final_calibration_test)
    with TofuPilot(test):
        test.execute(test_start=lambda: "IOT-2024-4401")


if __name__ == "__main__":
    main()
```

Three test stages, each catching a different class of defect. Incoming inspection catches bad components before you solder them (the cheapest fix). Post-assembly catches process defects before calibration effort is wasted. Final test catches system-level issues before shipping.
## Using TofuPilot to Track Quality Costs
You can't reduce COPQ without measuring it. TofuPilot gives you the data foundation:
- FPY per procedure shows your first-pass yield at each test stage. A drop in FPY at post-assembly means your process is generating rework. That's internal failure cost climbing.
- Failure Pareto analysis ranks which measurements fail most often. This tells you where to invest prevention dollars. If 60% of failures are solder-related, that's a clear signal to improve your reflow profile or paste deposition.
- Yield trends over time reveal whether quality is improving or degrading. A slow downward trend in FPY is COPQ increasing before anyone notices in the financial reports.
- Unit traceability connects field failures back to test data. When a customer returns a unit, you can pull its complete test history and see whether it passed marginally or had anomalies that were within spec but near the edge.
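The first two metrics above are simple enough to reproduce from raw run records. A simplified sketch of FPY and a failure Pareto over a flat list of runs; the `(serial, passed, failed_measurement)` record shape is an assumption for illustration, not TofuPilot's actual schema:

```python
from collections import Counter

# Hypothetical run records: (serial_number, passed, failed_measurement or None)
runs = [
    ("SN001", True,  None),
    ("SN002", False, "supply_current_active"),
    ("SN003", True,  None),
    ("SN004", False, "i2c_sensor_ack"),
    ("SN005", False, "supply_current_active"),
]

# FPY counts only each unit's first attempt, so retests don't inflate yield.
first_attempt = {}
for serial, passed, _failure in runs:
    first_attempt.setdefault(serial, passed)

fpy = 100 * sum(first_attempt.values()) / len(first_attempt)
pareto = Counter(f for _, ok, f in runs if not ok).most_common()

print(f"FPY: {fpy:.0f}%")  # FPY: 40%
print(pareto)              # [('supply_current_active', 2), ('i2c_sensor_ack', 1)]
```

The Pareto immediately names the measurement to attack first: two of three failures here are the same active-current limit.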
## Reducing COPQ in Practice
The path from high COPQ to low COPQ follows a predictable pattern:
- Instrument your process. Add tests at each major manufacturing step. Upload everything to TofuPilot with consistent serial numbers and measurement names.
- Identify the biggest losses. Use failure Pareto to find the top 3 failure modes. These typically account for 60-80% of your internal failures.
- Push detection upstream. If final test catches a defect, ask whether an earlier test could have caught it before more value was added to the unit.
- Tighten limits proactively. Use Cpk data from TofuPilot to identify measurements that are technically passing but drifting toward limits. Tightening process controls before failures occur is prevention, the cheapest category.
- Measure the result. Track FPY improvement over time. Every percentage point of FPY improvement translates directly to lower scrap, less rework, and fewer field failures.
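Step 4 leans on Cpk, the process capability index: Cpk = min(USL - mean, mean - LSL) / (3 * sigma). A self-contained sketch of that calculation, fed with invented active-current readings that sit safely inside the 0.015-0.035 A limits from the post-assembly test but are drifting toward the upper one:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process capability index: min(USL - mean, mean - LSL) / (3 * sigma)."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical supply_current_active readings, in amperes
readings = [0.030, 0.031, 0.032, 0.031, 0.033, 0.032]
print(round(cpk(readings, lsl=0.015, usl=0.035), 2))  # 1.11
```

Every reading passes, yet a Cpk near 1.1 (below the common 1.33 target) flags a process drifting toward its upper limit: exactly the "technically passing" signal step 4 describes, and the moment to spend prevention dollars instead of failure dollars.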