First pass yield (FPY) is the percentage of units that pass all tests on the first attempt, without rework or retesting. It's the single most important metric for manufacturing test efficiency. A low FPY means wasted time, wasted parts, and hidden quality problems. A high FPY means your process works.
How to Calculate First Pass Yield
The formula is simple:

FPY = Units passed on first attempt / Total units tested

| Variable | Meaning |
|---|---|
| FPY | First pass yield (0 to 1, or 0% to 100%) |
| Units passed | Units that passed all tests on the first run |
| Total units tested | All units that entered the test process |
If you tested 1,000 PCBAs and 950 passed on the first attempt, your FPY is 95%.
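In code, the calculation is just a ratio. A minimal sketch (the function name is illustrative):

```python
def first_pass_yield(passed_first_attempt: int, total_tested: int) -> float:
    """FPY as a fraction between 0 and 1."""
    if total_tested == 0:
        raise ValueError("total_tested must be positive")
    return passed_first_attempt / total_tested

# 950 of 1,000 PCBAs passed on the first attempt
print(f"{first_pass_yield(950, 1000):.1%}")  # 95.0%
```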
For a multi-step process with N stations, the rolled throughput yield (RTY) multiplies each station's FPY:
| Station | FPY |
|---|---|
| ICT | 98% |
| Functional Test | 95% |
| Final Assembly | 99% |
| RTY | 98% x 95% x 99% = 92.2% |
RTY shows the real probability that a unit passes the entire line without rework. Even when individual stations look healthy, the rolled yield often tells a different story.
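Because RTY is a straight product of the station FPYs, it's easy to compute for any number of stations. A sketch using the three-station example above (the function name is illustrative):

```python
from math import prod

def rolled_throughput_yield(station_fpys: list[float]) -> float:
    """RTY: probability a unit passes every station on the first attempt."""
    return prod(station_fpys)

# ICT, Functional Test, Final Assembly from the table above
rty = rolled_throughput_yield([0.98, 0.95, 0.99])
print(f"{rty:.1%}")  # 92.2%
```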
Why FPY Matters More Than You Think
The cost of retesting
Every failed unit costs more than the retest itself. Here's what actually happens when a unit fails:
- Operator time to remove, label, and log the failure
- Diagnostic time to identify the root cause
- Rework cost (soldering, component replacement, firmware reflash)
- Retest cost (the unit goes through the station again)
- Throughput loss (the station was occupied by a unit that should've passed)
A test station running at 95% FPY wastes roughly 5% of its capacity on retests. At scale, that's a full station's worth of throughput lost. Most teams underestimate this because they don't track the total cost per failure.
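One way to estimate the capacity overhead: if each attempt passes with probability p and failed units are retested until they pass, the expected number of test cycles per unit is 1/p (the mean of a geometric distribution). A sketch under that simplifying assumption, which ignores scrap and rework time:

```python
def station_overhead(fpy: float) -> float:
    """Extra test cycles per unit, assuming retest-until-pass
    with the same pass probability on every attempt."""
    expected_attempts = 1.0 / fpy  # mean of a geometric distribution
    return expected_attempts - 1.0

print(f"{station_overhead(0.95):.1%}")  # ~5.3% extra station time
```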
FPY benchmarks by industry
| Industry | Typical FPY | World-class FPY |
|---|---|---|
| Consumer electronics | 95-98% | >99% |
| Automotive electronics | 97-99% | >99.5% |
| Medical devices | 90-95% | >98% |
| Aerospace / defense | 85-95% | >97% |
| IoT / sensors | 93-97% | >99% |
These numbers vary by product complexity, but they give you a reference point. If you're below the typical range, there's likely a process issue worth investigating.
Common Causes of Low FPY
Before you can improve FPY, you need to know what's failing and why. The most common causes in electronics manufacturing:
Test-related (false failures)
- Limits set too tight. Initial limits from the datasheet don't account for real production variation. A limit that rejects 3% of units might be catching normal variation, not defects.
- Fixture contact issues. Worn probes, misaligned pins, or contaminated contacts cause intermittent failures that pass on retest.
- Environmental sensitivity. Temperature drift in the test station causes measurements to shift during a production run.
Process-related (real failures)
- Solder defects. Cold joints, bridges, insufficient solder. The #1 source of functional test failures in SMT production.
- Component placement errors. Wrong orientation, tombstoning, shifted components.
- Supplier variation. A new component lot with slightly different characteristics triggers marginal failures.
The first step is separating false failures from real ones. If a unit fails then passes on retest with no rework, it was probably a false failure. Tracking retest pass rates helps you quantify this.
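The retest pass rate can be computed directly from run history. A sketch, assuming each run record carries a serial number, a pass/fail flag, and a reworked flag (the field names and data layout are illustrative, not a TofuPilot API):

```python
from collections import defaultdict

def retest_pass_rate(runs: list[dict]) -> float:
    """Fraction of first-attempt failures that later passed
    with no rework -- likely false failures."""
    by_serial = defaultdict(list)
    for run in runs:  # runs assumed ordered by time
        by_serial[run["serial"]].append(run)

    failed_first, passed_no_rework = 0, 0
    for attempts in by_serial.values():
        if attempts[0]["passed"]:
            continue
        failed_first += 1
        if any(a["passed"] and not a["reworked"] for a in attempts[1:]):
            passed_no_rework += 1
    return passed_no_rework / failed_first if failed_first else 0.0

runs = [
    {"serial": "SN1", "passed": False, "reworked": False},
    {"serial": "SN1", "passed": True, "reworked": False},  # false failure
    {"serial": "SN2", "passed": False, "reworked": False},
    {"serial": "SN2", "passed": True, "reworked": True},   # real defect
    {"serial": "SN3", "passed": True, "reworked": False},
]
print(retest_pass_rate(runs))  # 0.5
```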
How to Track FPY with TofuPilot
TofuPilot calculates FPY automatically from your test data. Every test run you upload includes a pass/fail result, and TofuPilot aggregates these into FPY metrics by procedure, time period, and station.
Basic test script
Here's a minimal OpenHTF test that logs results to TofuPilot. FPY tracking starts automatically once you have runs flowing in.
```python
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot


@htf.measures(
    htf.Measurement("supply_voltage_3v3")
    .in_range(3.2, 3.4)
    .with_units(units.VOLT),
    htf.Measurement("supply_current_idle")
    .in_range(0.01, 0.15)
    .with_units(units.AMPERE),
)
def test_power_rails(test):
    """Verify 3.3V rail voltage and idle current draw."""
    # Replace with actual instrument readings
    test.measurements.supply_voltage_3v3 = 3.31
    test.measurements.supply_current_idle = 0.042


@htf.measures(
    htf.Measurement("firmware_version").equals("2.1.0"),
    htf.Measurement("self_test_result").equals("PASS"),
)
def test_firmware(test):
    """Check firmware version and run DUT self-test."""
    # Replace with actual DUT communication
    test.measurements.firmware_version = "2.1.0"
    test.measurements.self_test_result = "PASS"


def main():
    test = htf.Test(
        test_power_rails,
        test_firmware,
        procedure_id="FCT-001",
        part_number="PCBA-2024-A",
    )
    with TofuPilot(test):
        test.execute(test_start=lambda: input("Scan serial number: "))


if __name__ == "__main__":
    main()
```

Each run is logged with its serial number, measurements, limits, and pass/fail status. TofuPilot uses this data to compute FPY in real time.
What you get automatically
Once test runs flow into TofuPilot, the procedure analytics page shows:
- FPY over time with daily, weekly, and monthly views
- FPY by station to compare performance across test stations
- Failure Pareto showing which measurements cause the most failures
- Control charts for each measurement with 3-sigma limits
- Cpk values showing process capability relative to your test limits
No extra code needed. These analytics are computed from the measurements and limits you already define in your test script.
How to Improve FPY
Step 1: Find the top failure modes
Open the procedure analytics in TofuPilot and sort failures by frequency. The Pareto chart shows which measurements cause the most failures. Focus on the top 3. In most production lines, 2-3 failure modes account for 80% of all failures.
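The same Pareto ranking can be reproduced offline from raw failure records, which is useful for a quick sanity check. A sketch, assuming you have a flat list of the measurement names that failed (the data layout is illustrative):

```python
from collections import Counter

def failure_pareto(failed_measurements, top_n=3):
    """Rank failure modes by frequency with cumulative share."""
    counts = Counter(failed_measurements)
    total = sum(counts.values())
    result, cumulative = [], 0
    for name, count in counts.most_common(top_n):
        cumulative += count
        result.append((name, count, cumulative / total))
    return result

failures = (["supply_voltage_3v3"] * 42 + ["self_test_result"] * 31
            + ["supply_current_idle"] * 12 + ["firmware_version"] * 5)
for name, count, share in failure_pareto(failures):
    print(f"{name}: {count} failures, {share:.0%} cumulative")
```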
Step 2: Separate false failures from real ones
For each top failure mode, check the retest behavior:
| Retest Behavior | Likely Cause | Action |
|---|---|---|
| Passes on retest, no rework | False failure (contact, noise, timing) | Fix the test, not the product |
| Passes on retest after rework | Real defect caught correctly | Improve upstream process |
| Fails again on retest | Consistent defect | Likely design or component issue |
False failures are the easiest wins. Tightening fixture maintenance schedules or adding measurement averaging can recover 1-3% FPY overnight.
Step 3: Refine test limits with production data
Initial limits from the datasheet are a starting point. After running 500+ units, use TofuPilot's control charts to see the actual distribution of each measurement. Set limits at mean +/- 3 sigma from your production data, constrained by the datasheet spec.
This catches two problems:
- Limits too tight reject good units (false failures, lower FPY)
- Limits too loose miss defective units (escapes, field failures)
TofuPilot's control charts show both the current limits and the 3-sigma values, so you can see where they diverge.
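The limit-refinement rule above can be sketched in code: compute mean +/- 3 sigma from production measurements, then clamp to the datasheet spec so the limits never widen past it (a simplified model; function and variable names are illustrative):

```python
import statistics

def refine_limits(measurements, spec_low, spec_high, k=3.0):
    """Mean +/- k*sigma from production data, clamped to the
    datasheet spec so limits never widen past it."""
    mean = statistics.fmean(measurements)
    sigma = statistics.stdev(measurements)
    low = max(mean - k * sigma, spec_low)
    high = min(mean + k * sigma, spec_high)
    return low, high

# Example: 3.3V rail readings vs a 3.2-3.4V datasheet spec
readings = [3.29, 3.30, 3.31, 3.31, 3.32, 3.30, 3.33, 3.31]
print(refine_limits(readings, 3.2, 3.4))
```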
Step 4: Monitor trends
FPY isn't static. It changes when you switch component suppliers, adjust solder paste profiles, or deploy new firmware. Set up a weekly review of FPY trends in TofuPilot. A sudden drop usually points to a specific event you can trace.
FPY vs Other Quality Metrics
| Metric | What It Measures | When to Use |
|---|---|---|
| FPY | % passing on first attempt | Overall test efficiency |
| RTY | Cumulative yield across all stations | End-to-end process health |
| Cpk | Process capability vs spec limits | Individual measurement stability |
| DPMO | Defects per million opportunities | Six Sigma programs, supplier comparison |
| OEE | Overall equipment effectiveness | Station utilization analysis |
FPY is the simplest to track and the most actionable. Start here. Add Cpk when you need to analyze specific measurements. Use RTY when you have multiple test stations in sequence.