Test Data & Analytics

Calculate True First Pass Yield

Learn the FPY formula, how it differs from rework yield and rolled throughput yield, and how TofuPilot tracks it automatically.

Julien Buteau
Beginner · 5 min read · March 14, 2026

First pass yield (FPY) is the percentage of units that pass testing on their first attempt, without rework or retesting. It's the single most important metric for understanding your production quality, and most teams measure it wrong.

The FPY Formula

FPY = (units passing on first test) / (total unique units tested)

If you test 200 boards today and 180 pass on the first attempt, your FPY is 90%. The 20 that failed, even if they pass after rework and retest, don't count toward FPY.

This distinction matters. A line reporting 98% "yield" might actually have 85% FPY with the rest passing only after rework. Those numbers tell very different stories about process health.
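The arithmetic behind that contrast is simple once first attempts are separated from eventual passes; here is a minimal sketch (the counts are made up for illustration):

```python
def first_pass_yield(first_pass_count: int, total_units: int) -> float:
    """FPY = units passing on their first test / total unique units tested."""
    if total_units == 0:
        raise ValueError("no units tested")
    return first_pass_count / total_units

def output_yield(total_passing: int, total_units: int) -> float:
    """Output yield counts units that eventually pass, including after rework."""
    return total_passing / total_units

# 200 boards: 180 pass on the first attempt, 15 more pass after rework.
print(f"FPY:    {first_pass_yield(180, 200):.1%}")  # 90.0%
print(f"Output: {output_yield(195, 200):.1%}")      # 97.5%
```

The same production day yields two very different numbers; only the first one measures process capability.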

FPY vs. Other Yield Metrics

Several yield metrics exist, and confusing them leads to bad decisions.

| Metric | Formula | What it measures |
| --- | --- | --- |
| First pass yield (FPY) | Pass on 1st attempt / Total units | Process capability at the test station |
| Rework yield | Pass after rework / Total reworked | Effectiveness of your rework process |
| Output yield | Total passing (including retests) / Total units | Final throughput; hides rework cost |
| Rolled throughput yield (RTY) | FPY₁ × FPY₂ × … × FPYₙ | Probability a unit passes all stations without rework |

Output yield is the number most teams report. It looks good on slides. But it hides the cost of rework, the strain on test capacity, and the risk of shipping marginal units.

RTY is the most demanding metric. If you have three test stations, each with 95% FPY, your RTY is 0.95 × 0.95 × 0.95 ≈ 85.7%. That means only about 86 out of every 100 units flow through your entire line without touching rework.
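In code, RTY is just the product of the per-station FPYs:

```python
import math

def rolled_throughput_yield(station_fpys: list[float]) -> float:
    """RTY: probability a unit passes every station on its first attempt."""
    return math.prod(station_fpys)

# Three stations, each at 95% FPY
rty = rolled_throughput_yield([0.95, 0.95, 0.95])
print(f"RTY: {rty:.1%}")  # 85.7%
```

Adding a fourth station at 95% drops RTY to about 81.5%, which is why long lines need very high per-station FPY.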

Writing Tests That Produce Accurate FPY Data

FPY accuracy depends on your test scripts producing clean, structured results. Every run needs a serial number so TofuPilot can distinguish first attempts from retests of the same unit.

board_functional_test.py
```python
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot

@htf.measures(
    htf.Measurement("supply_voltage")
    .with_units(units.VOLT)
    .in_range(minimum=4.75, maximum=5.25),
    htf.Measurement("clock_frequency")
    .in_range(minimum=7.99, maximum=8.01),  # MHz
    htf.Measurement("standby_current")
    .in_range(maximum=50),  # mA
)
def test_power_and_clock(test):
    # Hardcoded values for illustration; replace with real instrument readings.
    test.measurements.supply_voltage = 5.03
    test.measurements.clock_frequency = 8.002
    test.measurements.standby_current = 32

@htf.measures(
    htf.Measurement("comms_loopback_pass"),
)
def test_communications(test):
    test.measurements.comms_loopback_pass = True

def main():
    test = htf.Test(test_power_and_clock, test_communications)
    with TofuPilot(test):
        # test_start returns the DUT serial number; in production, read it
        # from a barcode scanner or operator prompt instead of hardcoding it.
        test.execute(test_start=lambda: "SN-2026-00842")

if __name__ == "__main__":
    main()
```

When this test runs, TofuPilot records the serial number, all measurements with their limits, and the pass/fail outcome. If SN-2026-00842 fails and gets retested later, TofuPilot knows it's a retest, not a new unit.

How TofuPilot Tracks FPY Automatically

You don't need to compute FPY yourself. TofuPilot's analytics dashboard calculates it in real time from your test data.

What the dashboard provides:

  • FPY trend over time. See daily, weekly, or monthly FPY for any product or station. Spot drops the day they happen, not weeks later in a quarterly review.
  • FPY by station. Compare stations running the same test. If Station 3 has 88% FPY while Station 1 and 2 are at 96%, the problem is the station, not the product.
  • FPY by product revision. Track whether a new board revision actually improved yield or made it worse.
  • Unit history. For any serial number, see every test attempt. Understand whether failures cluster on specific units or spread evenly.

Common FPY Pitfalls

Counting retests as first attempts. If your system doesn't track serial numbers, every run looks like a first attempt on a new unit, and the resulting number is distorted: retests that pass inflate the numerator, while repeated failures of the same unit inflate the denominator. Either way, it no longer measures first-pass performance.
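One way to guard against this, assuming each run is logged as a (serial, timestamp, passed) record, is to keep only the earliest run per serial before computing FPY. A sketch with made-up records:

```python
from datetime import datetime

# Hypothetical run log: (serial, timestamp, passed)
runs = [
    ("SN-001", datetime(2026, 3, 14, 9, 0), False),
    ("SN-001", datetime(2026, 3, 14, 11, 30), True),  # retest after rework
    ("SN-002", datetime(2026, 3, 14, 9, 5), True),
    ("SN-003", datetime(2026, 3, 14, 9, 10), True),
]

def true_fpy(runs):
    first = {}
    for serial, ts, passed in sorted(runs, key=lambda r: r[1]):
        first.setdefault(serial, passed)  # keep only the earliest attempt
    return sum(first.values()) / len(first)

naive = sum(p for _, _, p in runs) / len(runs)  # treats every run as a unit
print(f"naive: {naive:.1%}, true FPY: {true_fpy(runs):.1%}")  # 75.0%, 66.7%
```

Here the naive run-level number reads 75% while only two of three unique units actually passed first time.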

Excluding known-bad units. Some teams exclude units that "obviously" failed due to fixture issues or operator error. This inflates FPY and hides real problems.

Measuring too late. FPY at final test captures problems from every upstream process. Measuring FPY at each station (and computing RTY) gives you much better isolation.

Setting limits too tight. If your 3-sigma process capability doesn't fit inside your test limits, you'll see chronic low FPY even with a good process. Check your Cpk before blaming the line.
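A quick Cpk check against your limits can be run directly on measurement samples. This sketch assumes roughly normally distributed data and reuses the 4.75–5.25 V supply-voltage limits from the example above; the readings are invented:

```python
import statistics

def cpk(samples: list[float], lsl: float, usl: float) -> float:
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Supply-voltage samples vs. the 4.75-5.25 V test limits
readings = [5.01, 4.99, 5.02, 5.00, 4.98, 5.03, 5.00, 4.97]
print(f"Cpk: {cpk(readings, lsl=4.75, usl=5.25):.2f}")  # Cpk: 4.17
```

A Cpk of at least 1.33 is a common rule of thumb; values near or below 1.0 predict chronic test failures even when the process itself is stable.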
