TofuPilot associates every test run with a named station. By encoding line, factory, and shift into the station name and run metadata, you get a single query surface that spans every production line and factory without custom tooling.
## Why Multi-Line Traceability Matters
When a PCBA defect appears in the field, you need to answer three questions fast:
- Which production line built the affected units?
- Which shift was running at the time?
- Is the failure isolated to one station or systemic across a line?
Without structured metadata on each run, answering these questions means cross-referencing operator logs, shift schedules, and test CSV exports by hand. TofuPilot solves this by making station identity and run metadata first-class fields on every test record.
| Without structured metadata | With TofuPilot station and run metadata |
|---|---|
| Manual log cross-referencing | Single dashboard filter |
| Shift data in spreadsheets | Encoded in run at test time |
| Per-line export files | Unified API query |
| Yield comparison in Excel | Built-in per-station yield chart |
## Prerequisites

- Python 3.9+
- OpenHTF installed (`pip install openhtf`)
- TofuPilot client installed (`pip install tofupilot`)
- A TofuPilot account with at least one procedure created
## Setting Up Station Identity in TofuPilot
Each physical test station maps to a named station in TofuPilot. The naming convention carries all the traceability context you need.
### Station Naming Convention
Use a structured name that encodes factory, line, and station number:
```
{FACTORY}-{LINE}-FCT{STATION_NUMBER}
```
Examples:
| Station name | Factory | Line | Station |
|---|---|---|---|
| `SZX-L1-FCT01` | Shenzhen | Line 1 | FCT station 1 |
| `SZX-L2-FCT01` | Shenzhen | Line 2 | FCT station 1 |
| `TXL-L1-FCT01` | Toulouse | Line 1 | FCT station 1 |
| `TXL-L1-FCT02` | Toulouse | Line 1 | FCT station 2 |
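Because the convention is strict, station names can be parsed back into their components for grouping and validation. A minimal sketch (the regex and helper names are illustrative, not part of TofuPilot):

```python
import re
from typing import NamedTuple

class StationIdentity(NamedTuple):
    factory: str
    line: str
    station: int

# Pattern for the {FACTORY}-{LINE}-FCT{STATION_NUMBER} convention,
# e.g. "SZX-L1-FCT01". Factory codes assumed to be 2-4 uppercase letters.
STATION_RE = re.compile(r"^(?P<factory>[A-Z]{2,4})-(?P<line>L\d+)-FCT(?P<station>\d{2})$")

def parse_station_id(name: str) -> StationIdentity:
    """Split a station name into its traceability components."""
    match = STATION_RE.fullmatch(name)
    if match is None:
        raise ValueError(f"Station name {name!r} does not follow FACTORY-LINE-FCTnn")
    return StationIdentity(
        factory=match.group("factory"),
        line=match.group("line"),
        station=int(match.group("station")),
    )
```

Running the same validator at station provisioning time catches typos before a misnamed station pollutes the dashboard.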
### Environment-Based Station Configuration
Store the station identity in environment variables on each machine, not in the test script:
```bash
TOFUPILOT_API_KEY=tp_station_xxxxxxxxxxxxx
STATION_ID=SZX-L1-FCT01
FACTORY=SZX
LINE=L1
```

Load them in your test script at runtime:
```python
import os

STATION_ID = os.environ["STATION_ID"]
FACTORY = os.environ["FACTORY"]
LINE = os.environ["LINE"]
```

This approach means the same test script deploys unchanged to every station; only the environment differs.
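A missing variable should stop the test before any unit is processed, not surface later as a run with blank metadata. One way to wrap the lookups with a fail-fast check (the helper name is illustrative):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, failing fast if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"Environment variable {name} is not set on this station; "
            "check its provisioning before running tests."
        )
    return value

# In a config.py module, resolve everything once at import time:
# STATION_ID = require_env("STATION_ID")
# FACTORY = require_env("FACTORY")
# LINE = require_env("LINE")
```

Putting these lookups in a small `config` module keeps the test script itself free of environment handling.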
## Tagging Runs with Line, Factory, and Shift Metadata

### Determining the Current Shift
```python
from datetime import datetime

def get_current_shift() -> str:
    """Return shift label based on local wall clock."""
    hour = datetime.now().hour
    if 6 <= hour < 14:
        return "morning"
    elif 14 <= hour < 22:
        return "afternoon"
    else:
        return "night"
```

### Full OpenHTF Test with Station Metadata
This example tests a PCBA power supply board. The station identity, line, factory, and shift are injected at test setup time and attached to every run.
```python
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot

import config  # station identity resolved from environment variables
from shift import get_current_shift


class PowerSupplyPlug(htf.plugs.BasePlug):
    """Controls the bench PSU over USB-serial."""

    def setUp(self):
        import serial
        self._port = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)

    def set_voltage(self, volts: float, channel: int = 1):
        self._port.write(f":APPL CH{channel},{volts},1.0\n".encode())

    def measure_voltage(self, channel: int = 1) -> float:
        self._port.write(f":MEAS:VOLT? CH{channel}\n".encode())
        return float(self._port.readline().strip())

    def measure_current(self, channel: int = 1) -> float:
        self._port.write(f":MEAS:CURR? CH{channel}\n".encode())
        return float(self._port.readline().strip())

    def tearDown(self):
        self._port.close()


@htf.plug(psu=PowerSupplyPlug)
@htf.measures(
    htf.Measurement("rail_3v3")
    .in_range(minimum=3.235, maximum=3.365)
    .with_units(units.VOLT)
    .doc("3.3 V rail under 100 mA load"),
    htf.Measurement("rail_5v0")
    .in_range(minimum=4.900, maximum=5.100)
    .with_units(units.VOLT)
    .doc("5.0 V rail under 200 mA load"),
)
def test_power_rails(test, psu):
    psu.set_voltage(3.3, channel=1)
    test.measurements.rail_3v3 = psu.measure_voltage(channel=1)
    psu.set_voltage(5.0, channel=2)
    test.measurements.rail_5v0 = psu.measure_voltage(channel=2)


@htf.plug(psu=PowerSupplyPlug)
@htf.measures(
    htf.Measurement("idle_current")
    .in_range(minimum=0, maximum=0.350)
    .with_units(units.AMPERE)
    .doc("Total board current draw at idle"),
)
def test_idle_current(test, psu):
    psu.set_voltage(5.0, channel=2)
    test.measurements.idle_current = psu.measure_current(channel=2)


def main():
    serial_number = input("Scan DUT serial number: ").strip()
    test = htf.Test(
        test_power_rails,
        test_idle_current,
        procedure_id="PCBA-FCT-001",
        # Extra keyword arguments become test-record metadata, so every
        # uploaded run carries the station's traceability context.
        station_id=config.STATION_ID,
        factory=config.FACTORY,
        line=config.LINE,
        shift=get_current_shift(),
    )
    with TofuPilot(test):
        test.execute(test_start=lambda: serial_number)


if __name__ == "__main__":
    main()
```

Every run in TofuPilot carries the station identity as queryable metadata alongside the standard pass/fail result and measurements.
## Comparing Results Across Lines in TofuPilot
With runs uploading from all lines, TofuPilot's filtering and analytics give you direct comparison:
- FPY by station shows yield for each station grouped by line prefix. If Line B consistently trails Line A, the problem is systemic to that line.
- Measurement histograms reveal whether one line's values are shifted or have wider spread. A shifted mean suggests calibration offset.
- Failure Pareto by station shows which specific tests fail more often on each station. If one station accounts for most failures, start investigating that fixture.
- Trend charts show whether yield gaps are constant, growing, or appeared suddenly after a change.
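The FPY comparison above can also be approximated offline on exported run records. A sketch, assuming each exported run is a dict with a `station_id` and a boolean `passed` field (a hypothetical export shape, not the TofuPilot API):

```python
from collections import defaultdict

def fpy_by_line(runs: list[dict]) -> dict[str, float]:
    """First-pass yield per line, grouping stations by their line prefix.

    Assumes each run dict has a 'station_id' like 'SZX-L1-FCT01' and a
    boolean 'passed' field (an assumed export shape for illustration).
    """
    totals: dict[str, int] = defaultdict(int)
    passes: dict[str, int] = defaultdict(int)
    for run in runs:
        factory, line, _ = run["station_id"].split("-", 2)
        key = f"{factory}-{line}"  # e.g. 'SZX-L1'
        totals[key] += 1
        if run["passed"]:
            passes[key] += 1
    return {key: passes[key] / totals[key] for key in totals}
```

Because the line prefix is embedded in every station name, no extra join against a station registry is needed.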
## Investigating Yield Differences
When you find a yield gap between lines, narrow down the cause systematically:
- Check measurement distributions. If one line's values are offset, it's likely calibration or equipment. If they're wider, it's process variation.
- Check by time of day. Yield drops on night shifts point to operator training or environmental changes.
- Check individual stations. Sometimes the "line" problem is actually one bad station dragging down the average.
- Check by component lot. If you track lot numbers as metadata, filter by lot to see if specific batches drive the difference.
A station with FPY below 95% while neighboring stations are at 97-98% typically indicates a fixture contact issue, cable degradation, or calibration drift.
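That "one bad station" pattern can be flagged automatically: compare each station's FPY against the median of its line and report any station that trails by more than a chosen gap. A sketch (the 2-point threshold is an illustrative default, not a TofuPilot setting):

```python
from statistics import median

def flag_outlier_stations(fpy_by_station: dict[str, float],
                          gap: float = 0.02) -> list[str]:
    """Flag stations whose FPY trails their line's median by more than `gap`.

    Keys are station names like 'SZX-L1-FCT01'; values are FPY fractions.
    """
    by_line: dict[str, list[float]] = {}
    for station, fpy in fpy_by_station.items():
        line_key = station.rsplit("-", 1)[0]  # 'SZX-L1-FCT01' -> 'SZX-L1'
        by_line.setdefault(line_key, []).append(fpy)
    medians = {line: median(values) for line, values in by_line.items()}
    return [
        station
        for station, fpy in fpy_by_station.items()
        if medians[station.rsplit("-", 1)[0]] - fpy > gap
    ]
```

Comparing against the line median rather than a fixed target keeps the check meaningful even when a whole line runs at a different baseline yield.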
## Deployment Checklist
| Step | Action |
|---|---|
| Station naming | Follow the `{FACTORY}-{LINE}-FCT{N}` convention |
| API keys | One key per station, stored in environment |
| Procedure ID | Same ID across all lines for unified yield view |
| Dashboard | Verify station names appear on first run before full rollout |
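For the environment step, one provisioning sketch is a per-station env file sourced by the shell (or loaded by a systemd unit via `EnvironmentFile=`); the filename and location here are illustrative:

```shell
# Example per-station environment file; path and filename are illustrative.
cat > station.env <<'EOF'
TOFUPILOT_API_KEY=tp_station_xxxxxxxxxxxxx
STATION_ID=SZX-L1-FCT01
FACTORY=SZX
LINE=L1
EOF

set -a              # auto-export every variable assigned while sourcing
source station.env
set +a
```

With this in place, the test script's `os.environ` lookups resolve without any per-station edits to the code.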