Scaling & Monitoring

Monitor Test Stations Across Factories

Track test station performance across multiple factories from a single dashboard. Compare yield, throughput, and measurement drift by site and station.

Julien Buteau
Intermediate · 6 min read · March 14, 2026

When the same test procedure runs on dozens of stations across multiple factories, you need a single place to see what's happening everywhere. TofuPilot gives you cross-factory visibility without building custom dashboards or aggregating CSVs.

Why Multi-Site Monitoring Matters

A test that passes 98% at your Shenzhen factory and 94% at your Guadalajara factory tells you something is wrong. The problem could be equipment calibration, operator training, environmental conditions, or component lot variation. You can't fix what you can't see.

Station-level data also helps you spot individual machines drifting before they start producing false passes or unnecessary failures.

Tag Each Station with Metadata

OpenHTF lets you attach metadata to every test run. Use station_id to identify the physical station and add factory or site information so TofuPilot can group results.

test_board.py
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot

@htf.measures(
    htf.Measurement("supply_voltage")
    .in_range(minimum=4.9, maximum=5.1)
    .with_units(units.VOLT)
)
def test_supply_voltage(test):
    voltage = 5.03  # Read from instrument
    test.measurements.supply_voltage = voltage

def main():
    test = htf.Test(
        test_supply_voltage,
        station_id="SZ-L3-ICT-02",
    )
    with TofuPilot(test):
        test.execute(test_start=lambda: "PCB-2026-00451")

if __name__ == "__main__":
    main()

The station_id value should be unique and descriptive. A good convention is {SITE}-{LINE}-{FUNCTION}-{NUMBER}, like SZ-L3-ICT-02 for Shenzhen, Line 3, In-Circuit Test, Station 2.

Standardize Station Naming Across Sites

Consistent naming is critical. If one factory uses ICT_01 and another uses ict-station-1, you can't filter or compare them reliably.

Define a naming convention before deployment:

| Field          | Format                  | Example      |
| -------------- | ----------------------- | ------------ |
| Site code      | 2–3 letter abbreviation | SZ, GDL, AUS |
| Line number    | L + number              | L1, L3       |
| Test type      | Standard abbreviation   | ICT, FCT, EOL |
| Station number | Zero-padded number      | 01, 02, 12   |

This gives you station IDs like SZ-L3-ICT-02 or GDL-L1-FCT-05 that are readable and sortable.
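The convention can also be enforced in code before deployment. A minimal sketch, assuming the helper names below (they are illustrative, not part of OpenHTF or TofuPilot):

```python
import re

# Pattern for the {SITE}-{LINE}-{FUNCTION}-{NUMBER} convention:
# 2-3 letter site code, L + line number, test-type abbreviation,
# zero-padded two-digit station number.
STATION_ID_PATTERN = re.compile(r"^[A-Z]{2,3}-L\d+-[A-Z]{2,3}-\d{2}$")

def make_station_id(site: str, line: int, function: str, number: int) -> str:
    """Build a station ID following the convention, e.g. SZ-L3-ICT-02."""
    return f"{site.upper()}-L{line}-{function.upper()}-{number:02d}"

def validate_station_id(station_id: str) -> bool:
    """Return True if the station ID matches the naming convention."""
    return STATION_ID_PATTERN.match(station_id) is not None
```

Running the validator in CI or at station startup catches names like `ict-station-1` before they pollute your data.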

Run the Same Procedure Everywhere

Multi-site monitoring only works if every factory runs the same test procedure. Version-control your test scripts in Git and deploy them to all sites from a single repository.

test_final.py
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot

@htf.measures(
    htf.Measurement("output_current")
    .in_range(minimum=0.95, maximum=1.05)
    .with_units(units.AMPERE),
    htf.Measurement("firmware_checksum")
    .equals("a3f8c2d1")
)
def test_output_and_firmware(test):
    test.measurements.output_current = 1.01  # Read from load
    test.measurements.firmware_checksum = "a3f8c2d1"  # Read from DUT

def main():
    test = htf.Test(
        test_output_and_firmware,
        station_id="AUS-L2-FCT-01",
    )
    with TofuPilot(test):
        test.execute(test_start=lambda: "UNIT-88210")

if __name__ == "__main__":
    main()

When every station uploads results from the same procedure, TofuPilot automatically groups them. You can then filter by station, site, or time range to compare performance.
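One way to keep the deployed script byte-identical across sites is to read the station ID from site-local configuration instead of hardcoding it. A sketch, assuming a `STATION_ID` environment variable set once during station provisioning (the variable name is an assumption; any site-local config mechanism works):

```python
import os

def resolve_station_id(default: str = "UNKNOWN-STATION") -> str:
    """Read the station ID from a per-site environment variable.

    STATION_ID is an assumed variable name, set during station
    provisioning, so the test script itself never changes per site.
    """
    return os.environ.get("STATION_ID", default)
```

The result can then be passed as `station_id=resolve_station_id()` when constructing the test, and a `UNKNOWN-STATION` fallback showing up in the dashboard flags a misconfigured station immediately.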

View Cross-Factory Data in TofuPilot

Once runs are uploading from multiple sites, TofuPilot's station dashboard shows:

  • Per-station yield so you can spot underperforming machines instantly
  • Measurement distributions by station to catch calibration drift before it causes failures
  • Throughput by station to identify bottlenecks and downtime
  • Failure Pareto by site to see whether failure modes differ between factories

Filter by station ID prefix (like SZ- or GDL-) to compare entire factories side by side. Drill into individual stations to investigate anomalies.
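The same prefix-based comparison can be reproduced offline from exported run data. A minimal sketch, with a hypothetical record shape of `(station_id, passed)` tuples (TofuPilot's dashboard computes this for you):

```python
from collections import defaultdict

# Hypothetical exported run records: (station_id, passed) tuples.
runs = [
    ("SZ-L3-ICT-01", True), ("SZ-L3-ICT-01", True),
    ("SZ-L3-ICT-02", False), ("SZ-L3-ICT-02", True),
    ("GDL-L1-FCT-05", True), ("GDL-L1-FCT-05", False),
]

def yield_by_site(runs):
    """Group runs by the site prefix of the station ID and compute yield."""
    totals = defaultdict(lambda: [0, 0])  # site -> [passes, total]
    for station_id, passed in runs:
        site = station_id.split("-", 1)[0]  # "SZ-L3-ICT-01" -> "SZ"
        totals[site][0] += int(passed)
        totals[site][1] += 1
    return {site: passes / total for site, (passes, total) in totals.items()}
```

Because the convention puts the site code first, a single string split is enough to roll stations up into factories.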

Handle Station Offline Detection

A station that stops reporting is a problem. It might be down for maintenance, or it might be running tests that aren't uploading. TofuPilot tracks when each station last reported, so gaps in data are visible immediately.

Build a habit of checking the station overview at the start of each shift. If a station hasn't reported in the expected window, investigate before production continues.
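That shift-start check can also be scripted against run timestamps you export. A sketch, where the two-hour threshold and the data shape are assumptions to adapt to your test cadence:

```python
from datetime import datetime, timedelta

def find_stale_stations(last_reported, now, max_gap=timedelta(hours=2)):
    """Return station IDs whose most recent report is older than max_gap.

    last_reported maps station_id -> datetime of the latest uploaded run.
    The 2-hour default is illustrative; choose one matching how often
    each station is expected to report during production.
    """
    return sorted(
        station for station, ts in last_reported.items()
        if now - ts > max_gap
    )
```

Running this on a schedule and alerting on a non-empty result turns the manual habit into an automatic check.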
