Retesting eats 10-30% of test station capacity on a typical electronics production line. Every retest cycle burns station time, operator attention, and fixture wear, all without producing a single new unit. Cutting your retest rate is one of the fastest ways to increase throughput without buying more equipment.
Why Retesting Is So Expensive
The direct cost of a retest seems small: run the test again, maybe it passes. But the hidden costs add up fast.
Station throughput drops. A station with a 15% retest rate effectively loses 15% of its capacity. For a station running 500 units per shift, that's 75 slots spent on units you already tested once.
Rework creates quality risk. Every rework cycle (desolder, replace, resolder) introduces thermal stress, pad damage, and handling risk. Reworked units fail at higher rates downstream.
Data gets noisy. When operators retest units without tracking serial numbers, your yield metrics become unreliable. You can't tell whether FPY is 85% or 95% because first attempts and retests are mixed together.
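The FPY-versus-output-yield distinction is easy to compute once runs are keyed by serial number. Here is a minimal sketch, assuming a hypothetical record format of (serial, passed) tuples in test order:

```python
from collections import OrderedDict

def first_pass_yield(runs):
    """FPY: count only the first attempt per serial number."""
    first = OrderedDict()
    for serial, passed in runs:
        if serial not in first:
            first[serial] = passed
    return sum(first.values()) / len(first)

def output_yield(runs):
    """Fraction of units that eventually passed on any attempt."""
    best = {}
    for serial, passed in runs:
        best[serial] = best.get(serial, False) or passed
    return sum(best.values()) / len(best)

runs = [
    ("SN-001", True),
    ("SN-002", False), ("SN-002", True),   # passed on retest
    ("SN-003", False), ("SN-003", False),  # repeated failure
    ("SN-004", True),
]
print(first_pass_yield(runs))  # 0.5  (2 of 4 passed first try)
print(output_yield(runs))      # 0.75 (3 of 4 eventually passed)
```

Without serial tracking, only the second number is visible, which is exactly how an 85% FPY gets reported as 95% yield.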
Root Causes of Excessive Retesting
Before you can fix retesting, you need to know why it's happening. The causes usually fall into four categories.
Loose or Missing Test Limits
Tests without proper limits can't distinguish real failures from marginal passes. If a voltage rail is specified at 3.3V +/- 5%, but your test has no limits, operators make judgment calls about what "looks okay."
Flaky Test Fixtures
Contact resistance on pogo pins, worn alignment features, and intermittent cable connections cause random failures. The unit is fine. The fixture isn't. Operators know this, so they retest.
Operator-Initiated Re-runs
Without clear pass/fail criteria, operators develop habits: "if it fails on voltage, just run it again." This masks both fixture problems and real defects.
Environmental Sensitivity
Temperature, humidity, and power supply variation can push measurements across limits. If the same unit measures differently at 8 AM versus 2 PM, you have an environment problem.
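One way to spot this kind of drift is to group run results by hour of day. A minimal sketch, assuming hypothetical (timestamp, passed) records pulled from your run logs:

```python
from collections import defaultdict
from datetime import datetime

def pass_rate_by_hour(runs):
    """Group run records by hour of day and compute pass rates.

    A large spread between morning and afternoon pass rates
    hints at an environment problem rather than a DUT problem.
    """
    buckets = defaultdict(lambda: [0, 0])  # hour -> [passes, total]
    for ts, passed in runs:
        bucket = buckets[ts.hour]
        bucket[0] += int(passed)
        bucket[1] += 1
    return {hour: passes / total for hour, (passes, total) in sorted(buckets.items())}

runs = [
    (datetime(2026, 1, 5, 8, 10), True),
    (datetime(2026, 1, 5, 8, 25), True),
    (datetime(2026, 1, 5, 8, 40), True),
    (datetime(2026, 1, 5, 14, 5), True),
    (datetime(2026, 1, 5, 14, 20), False),
    (datetime(2026, 1, 5, 14, 35), False),
]
print(pass_rate_by_hour(runs))  # 8 AM passes 3/3, 2 PM passes 1/3
```

A flat profile across the day rules out the environment; a step at lunchtime or shift change points straight at it.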
OpenHTF Patterns That Reduce Retesting
Good test code prevents unnecessary retests by catching fixture issues early and setting proper limits.
Validate the Fixture Before Testing the DUT
Add a fixture validation phase at the start of your test sequence. If the fixture fails, the run aborts before logging a false failure against the unit.
```python
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot


@htf.measures(
    htf.Measurement("fixture_contact_resistance")
    .with_units(units.OHM)
    .in_range(maximum=0.5),
    htf.Measurement("fixture_supply_voltage")
    .with_units(units.VOLT)
    .in_range(minimum=11.8, maximum=12.2),
)
def validate_fixture(test):
    # Check pogo pin contact and power supply before the DUT test
    test.measurements.fixture_contact_resistance = 0.12
    test.measurements.fixture_supply_voltage = 12.01


@htf.measures(
    htf.Measurement("output_voltage")
    .with_units(units.VOLT)
    .in_range(minimum=3.135, maximum=3.465),
    htf.Measurement("ripple_mV")
    .in_range(maximum=50),
)
def test_power_output(test):
    test.measurements.output_voltage = 3.29
    test.measurements.ripple_mV = 18


@htf.measures(
    htf.Measurement("signal_integrity_dB")
    .in_range(minimum=-3.0, maximum=3.0),
)
def test_signal_path(test):
    test.measurements.signal_integrity_dB = 0.4


def main():
    test = htf.Test(validate_fixture, test_power_output, test_signal_path)
    with TofuPilot(test):
        test.execute(test_start=lambda: "SN-2026-01105")


if __name__ == "__main__":
    main()
```

The validate_fixture phase runs first. If contact resistance is too high or the supply voltage is out of range, the test fails immediately. The failure is attributed to the fixture, not the DUT, and no false failure is recorded against the serial number.
Set Measurement Limits Based on Process Capability
Don't guess limits. Set them from your Cpk data. If your process produces output voltage with a mean of 3.30V and a standard deviation of 0.03V, your 3-sigma range is 3.21V to 3.39V.
Your test limits should be wider than 3-sigma (to avoid false failures) but within spec (to catch real defects). A good starting point is the spec limits from the component datasheet.
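The arithmetic above can be sketched in a few lines. This uses the numbers from the text (mean 3.30 V, sigma 0.03 V) and an assumed spec of 3.0-3.6 V:

```python
def cpk(mean, sigma, lsl, usl):
    """Process capability index: distance from the mean to the
    nearest spec limit, in units of 3 sigma."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Numbers from the text: mean 3.30 V, sigma 0.03 V; spec assumed 3.0-3.6 V
mean, sigma = 3.30, 0.03
lsl, usl = 3.0, 3.6

print(cpk(mean, sigma, lsl, usl))            # about 3.33: comfortable margin
print((mean - 3 * sigma, mean + 3 * sigma))  # roughly (3.21, 3.39): the 3-sigma range
```

With a Cpk well above 1.33, limits at the spec boundaries will catch real defects without generating false failures; a Cpk near 1.0 means the limits and the process are fighting each other, and retests follow.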
```python
import openhtf as htf
from openhtf.util import units

# Limits derived from a Cpk study: spec is 3.0-3.6V, process sigma is 0.03V
# Test limits set at spec boundaries to catch real defects
@htf.measures(
    htf.Measurement("regulated_output")
    .with_units(units.VOLT)
    .in_range(minimum=3.0, maximum=3.6),
    htf.Measurement("load_regulation_pct")
    .in_range(minimum=-2.0, maximum=2.0),
    htf.Measurement("thermal_shutdown_temp")
    .with_units(units.DEGREE_CELSIUS)
    .in_range(minimum=145, maximum=155),
)
def test_regulator(test):
    test.measurements.regulated_output = 3.31
    test.measurements.load_regulation_pct = 0.8
    test.measurements.thermal_shutdown_temp = 150
```

How TofuPilot Tracks Retest Rates
TofuPilot links every test run to a serial number, so it knows when a unit is being tested for the second, third, or tenth time.
The dashboard shows you:
- Retest rate by station. If one station has 3x the retest rate of another running the same test, the station needs maintenance.
- Unit history. For any serial number, see every test attempt in sequence. Spot patterns like "fails once, passes on immediate retest" (fixture issue) versus "fails repeatedly on the same measurement" (real defect).
- FPY vs. output yield gap. A large gap between FPY and output yield means you're relying heavily on retesting to hit your yield target. That gap is your retest cost, made visible.
- Failure Pareto. See which test phases cause the most failures. If one phase accounts for 60% of retests, fixing that phase gives you the biggest return.
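The fixture-versus-defect pattern described above lends itself to a simple heuristic. A sketch, assuming a unit's history is a list of pass/fail booleans in attempt order (the function name and labels are illustrative, not a TofuPilot API):

```python
def classify_unit(attempts):
    """Heuristic from the patterns above: attempts is a list of
    booleans, one per test attempt on a serial number, in order."""
    if all(attempts):
        return "clean pass"
    if not any(attempts):
        return "likely real defect"      # fails repeatedly
    if not attempts[0] and attempts[1]:
        return "likely fixture issue"    # fails once, passes on immediate retest
    return "needs review"

print(classify_unit([True]))                 # clean pass
print(classify_unit([False, True]))          # likely fixture issue
print(classify_unit([False, False, False]))  # likely real defect
```

Running this over every serial number in the top failing phase tells you whether to book fixture maintenance or start a defect investigation.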
A Practical Reduction Playbook
- Measure your current retest rate. Check TofuPilot's FPY dashboard. The gap between FPY and output yield is your retest cost.
- Identify the top failure phase. Use the failure Pareto chart. Focus on the single biggest contributor first.
- Classify failures. For the top phase, check whether failures are fixture-related (pass on immediate retest) or DUT-related (consistent failure).
- Fix fixtures first. Fixture failures are the cheapest to fix. Replace worn pogo pins, tighten alignment, and add fixture validation phases.
- Review test limits. Use measurement histograms and Cpk charts in TofuPilot to check whether limits match process capability. Tighten limits that are too loose. Widen limits, within spec, where Cpk shows the process can't hit them.
- Repeat monthly. Retest rate creeps up as fixtures wear and process conditions change. Make it a recurring review.
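For the monthly review, the per-station retest rate in step 1 is cheap to recompute from raw run logs. A minimal sketch, assuming a hypothetical list of (station, serial) tuples in time order:

```python
from collections import defaultdict

def retest_rate_by_station(runs):
    """Share of runs that are retests, per station.

    Any run after the first for a given serial counts as a retest.
    """
    seen = set()
    totals = defaultdict(int)
    retests = defaultdict(int)
    for station, serial in runs:
        totals[station] += 1
        if serial in seen:
            retests[station] += 1
        seen.add(serial)
    return {station: retests[station] / totals[station] for station in totals}

runs = [
    ("ICT-1", "SN-001"), ("ICT-1", "SN-002"), ("ICT-1", "SN-002"),
    ("ICT-2", "SN-003"), ("ICT-2", "SN-004"),
]
print(retest_rate_by_station(runs))  # ICT-1 retests 1 of 3 runs; ICT-2 none
```

Trend this number month over month; a slow climb on one station is the signature of fixture wear long before the station fails outright.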