
The Hidden Cost of Retesting in Manufacturing

Learn how retesting inflates costs in manufacturing, identify root causes, and use TofuPilot's unit history to track and reduce retest rates.

Julien Buteau
Intermediate · 10 min read · March 14, 2026

Every retest burns station time, operator attention, and margin. Most teams don't track it because retesting feels like part of the process. It's not. It's rework that compounds silently.

This guide breaks down what retesting actually costs, where it comes from, and how to use TofuPilot's built-in unit history to track and reduce it.

What Retesting Actually Costs

A single retest doesn't look expensive. Multiply it by thousands of units per month and the numbers get uncomfortable.

| Cost category | Per-retest estimate | At 10% retest rate (10k units/mo) |
|---|---|---|
| Station time (test duration) | $0.50-$2.00 | $500-$2,000/mo |
| Operator handling | $0.30-$1.00 | $300-$1,000/mo |
| Failure analysis (if triggered) | $5-$50 | $5,000-$50,000/mo |
| Delayed shipment (opportunity cost) | Varies | Often the largest cost |
| Station capacity lost | 1 slot per retest | 1,000 slots/mo unavailable |

The direct cost of retesting is easy to underestimate. The indirect cost (capacity you can't use for new units) is usually worse. A test station running at 10% retest rate effectively loses 10% of its throughput.
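To make the arithmetic concrete, here is a minimal sketch of the cost model above. The function name and the default figures in the example are hypothetical; substitute your own per-retest cost and test duration.

```python
def monthly_retest_cost(volume, retest_rate, cost_per_retest, test_minutes):
    """Estimate monthly direct cost and lost station time from retests.

    All inputs are assumptions to replace with your own production figures.
    """
    retests = volume * retest_rate
    return {
        "retests": int(retests),
        "direct_cost_usd": retests * cost_per_retest,
        "station_hours_lost": retests * test_minutes / 60,
    }

# 10k units/mo at a 10% retest rate, $2 per retest, 3-minute test
print(monthly_retest_cost(10_000, 0.10, 2.0, 3))
```

At these assumed figures, 1,000 retests cost $2,000 and consume 50 station-hours a month, which matches the 10%-of-throughput point above.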

Common Causes of Excessive Retesting

Retesting isn't always a product quality problem. Often it's a test system problem.

| Root cause | Symptom | Fix |
|---|---|---|
| Flaky test infrastructure | Same unit passes on retry with no rework | Stabilize fixtures, tighten connections, add settling time |
| Overly tight limits | Marginal units fail, pass on retest | Widen limits using production Cpk data |
| Environmental sensitivity | Failures cluster at shift start or temp changes | Add environmental conditioning or guard bands |
| Operator error | Failures correlate with specific operators | Improve fixturing, reduce manual steps |
| Intermittent DUT defect | Unit fails randomly across multiple retests | Root cause at board level, check solder joints |
| Test software bugs | Specific phases fail inconsistently | Review phase timeouts, instrument communication |

If your retest rate is above 5%, start with the test system before blaming the product.

Setting Up Tests for Retest Tracking

TofuPilot tracks retests by matching runs to the same unit serial number and procedure. For this to work, your test script needs to identify units correctly.

tests/board_fct.py
import openhtf as htf
from tofupilot.openhtf import TofuPilot

# Minimal placeholder phases so the example runs; replace with real test logic.
def check_power_rails(test):
    return htf.PhaseResult.CONTINUE

def check_communication(test):
    return htf.PhaseResult.CONTINUE

def measure_current_draw(test):
    return htf.PhaseResult.CONTINUE

def main():
    test = htf.Test(
        check_power_rails,
        check_communication,
        measure_current_draw,
        procedure_id="FCT1",  # stable procedure identifier, reused across retests
        station_id="FCT-STATION-01",
    )

    with TofuPilot(test):
        # test_start returns the dut_id: use the unit's real serial number
        test.execute(test_start=lambda: "SN-2024-00421")

if __name__ == "__main__":
    main()

Two things matter for retest tracking:

  • procedure_id identifies the test procedure. Use a stable name that doesn't change between retests.
  • dut_id (returned by test_start) identifies the physical unit. Use the actual serial number, not a generated ID.

When the same dut_id appears multiple times for the same procedure_id, TofuPilot knows it's a retest.
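The matching logic can be approximated in a few lines: group runs by (dut_id, procedure_id) and flag any group with more than one run. A rough sketch, assuming run records are plain dicts (TofuPilot does this for you server-side):

```python
from collections import defaultdict

def find_retests(runs):
    """Return {(dut_id, procedure_id): [runs]} for units tested more than once."""
    groups = defaultdict(list)
    for run in runs:
        groups[(run["dut_id"], run["procedure_id"])].append(run)
    return {key: group for key, group in groups.items() if len(group) > 1}

runs = [
    {"dut_id": "SN-001", "procedure_id": "FCT1", "outcome": "FAIL"},
    {"dut_id": "SN-001", "procedure_id": "FCT1", "outcome": "PASS"},
    {"dut_id": "SN-002", "procedure_id": "FCT1", "outcome": "PASS"},
]
print(find_retests(runs))  # SN-001 was retested once
```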

Tracking Retests in TofuPilot

TofuPilot tracks every run per unit serial number. You don't need to build analytics scripts or query the API to compute retest rates.

Unit history page. Open any unit's history page to see all test attempts, ordered chronologically. A unit that was tested three times before passing shows all three runs with their outcomes. This tells you immediately whether a unit needed retesting and how many attempts it took.

Analytics tab. Use the Analytics tab to monitor first-pass yield (FPY) trends. FPY inversely correlates with retest rate: if your FPY is 92%, roughly 8% of units needed at least one retest. Tracking FPY over time shows whether your retest problem is improving or getting worse.
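The FPY/retest relationship is easy to verify on raw run data: only each unit's first attempt counts toward FPY. A sketch, assuming each run record carries a timestamp and an outcome:

```python
def first_pass_yield(runs):
    """FPY = fraction of units whose *first* run passed."""
    first_outcomes = {}
    for run in sorted(runs, key=lambda r: r["started_at"]):
        key = (run["dut_id"], run["procedure_id"])
        first_outcomes.setdefault(key, run["outcome"])
    passed = sum(1 for outcome in first_outcomes.values() if outcome == "PASS")
    return passed / len(first_outcomes)

runs = [
    {"dut_id": "SN-001", "procedure_id": "FCT1", "started_at": 1, "outcome": "FAIL"},
    {"dut_id": "SN-001", "procedure_id": "FCT1", "started_at": 2, "outcome": "PASS"},
    {"dut_id": "SN-002", "procedure_id": "FCT1", "started_at": 1, "outcome": "PASS"},
]
print(first_pass_yield(runs))  # 0.5: SN-001 needed a retest
```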

Filtering by procedure and station. You can filter analytics by procedure and station to isolate retest patterns. If one station has noticeably lower FPY than others running the same procedure, you've found a test infrastructure problem, not a product problem.

Retest Rate vs Cost Impact

Here's how retest rate maps to real production impact at different scales.

| Monthly volume | Retest rate | Retests/mo | Est. cost/mo (@ $2/retest) | Station capacity lost |
|---|---|---|---|---|
| 1,000 | 2% | 20 | $40 | Negligible |
| 1,000 | 10% | 100 | $200 | ~2 hours |
| 10,000 | 2% | 200 | $400 | ~1 day |
| 10,000 | 10% | 1,000 | $2,000 | ~5 days |
| 100,000 | 5% | 5,000 | $10,000 | ~25 days |
| 100,000 | 10% | 10,000 | $20,000 | ~50 days |

At high volume, even a small retest rate improvement frees up meaningful station capacity. Going from 10% to 5% at 100k units/month recovers 25 days of station time.

How to Reduce Retest Rate

Reducing retest rate is almost always higher ROI than buying more stations.

1. Stabilize the test environment. Most retests at low maturity come from the test system, not the product. Check fixture contact resistance, instrument warm-up, cable integrity, and software timeouts. If a unit passes on retry without rework, the problem is your test, not the DUT.

2. Use Cpk to set limits. Limits derived from datasheets are often tighter than necessary. Pull measurement distributions from TofuPilot's analytics, calculate Cpk, and widen limits where you have margin. A Cpk of 1.33 or higher means the process is well within spec.
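Cpk can be computed directly from production measurements pulled from your analytics. A minimal sketch using the standard formula Cpk = min(USL − μ, μ − LSL) / 3σ; the sample values are illustrative:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process capability index against lower/upper spec limits."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Rail voltage samples sitting well inside 3.0-3.6 V limits
samples = [3.29, 3.30, 3.31, 3.30, 3.28, 3.32]
print(round(cpk(samples, lsl=3.0, usl=3.6), 2))  # ~7.07: ample margin
```

A result well above 1.33 indicates room to hold or even widen the limits; a result below it means the limits are biting into normal process variation.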

3. Add marginal bands. Configure marginal limits in your test to flag units that pass but are close to the boundary. These are your future retests. Catching them early lets you investigate before the limit becomes a yield problem.
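The underlying idea is a guard band just inside each limit (OpenHTF's `in_range` validator supports marginal bounds in recent versions). A standalone sketch of the classification logic, with a hypothetical 10% guard fraction:

```python
def classify(value, lower, upper, guard_fraction=0.10):
    """PASS/MARGINAL/FAIL against limits, with a guard band near each edge.

    guard_fraction is a hypothetical knob: here, 10% of the limit window.
    """
    if not (lower <= value <= upper):
        return "FAIL"
    guard = (upper - lower) * guard_fraction
    if value < lower + guard or value > upper - guard:
        return "MARGINAL"
    return "PASS"

print(classify(3.30, 3.0, 3.6))  # comfortably inside the limits: PASS
print(classify(3.05, 3.0, 3.6))  # passes, but inside the guard band: MARGINAL
```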

4. Root-cause the top failures. Use TofuPilot's measurement analytics to identify which phases fail most often. Focus on the top 3. Pareto analysis almost always reveals that a small number of failure modes drive most retests.
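Ranking failing phases is a few lines with `collections.Counter`. A sketch assuming each failed run records which phase failed (field names are illustrative):

```python
from collections import Counter

def failure_pareto(runs, top_n=3):
    """Most common failing phases across failed runs, highest count first."""
    counts = Counter(
        run["failed_phase"] for run in runs if run["outcome"] == "FAIL"
    )
    return counts.most_common(top_n)

runs = [
    {"outcome": "FAIL", "failed_phase": "check_power_rails"},
    {"outcome": "FAIL", "failed_phase": "check_power_rails"},
    {"outcome": "FAIL", "failed_phase": "measure_current_draw"},
    {"outcome": "PASS", "failed_phase": None},
]
print(failure_pareto(runs))  # check_power_rails dominates
```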

5. Track retest rate as a KPI. Make FPY (or its inverse, retest rate) a weekly metric. Review it in TofuPilot's Analytics tab. Teams that track retest rate consistently reduce it. Teams that don't, don't.
