Scaling & Monitoring

Multi-Site Test Data Management with TofuPilot

Learn how to manage hardware test data across multiple manufacturing sites and test locations using TofuPilot's centralized platform.

Julien Buteau
Advanced · 10 min read · March 14, 2026

When you test hardware at multiple locations, data silos are inevitable. Factory A has its own test system. The contract manufacturer has another. Your lab uses something else entirely. TofuPilot centralizes test data from every site into one platform, so you can compare quality across locations.

The Multi-Site Problem

Companies scale production by adding sites: an in-house prototype lab, a contract manufacturer for volume production, a second CM for geographic diversification, field test stations at customer sites.

Each site typically has:

  • Different test equipment and fixtures
  • Different data storage (local databases, CSV files, proprietary systems)
  • Different reporting formats
  • Different levels of data granularity

The result: you can't answer basic questions like "Is the CM's yield the same as our in-house line?" without weeks of manual data gathering and normalization.

Centralizing Test Data

Step 1: Standardize Procedure IDs

Use the same procedure ID at every site for the same test. This is the key to cross-site comparison.

site_a_test.py
from tofupilot import TofuPilotClient

client = TofuPilotClient()

# Same procedure ID used at every site
client.create_run(
    procedure_id="FINAL-FUNCTIONAL-V3",
    unit_under_test={
        "serial_number": "UNIT-A-0042",
        "part_number": "PROD-100-R4",
    },
    run_passed=True,
    steps=[{
        "name": "Power Rail Check",
        "step_type": "measurement",
        "status": True,
        "measurements": [{
            "name": "vcc_3v3",
            "value": 3.31,
            "unit": "V",
            "limit_low": 3.25,
            "limit_high": 3.35,
        }],
    }],
)

Step 2: Tag Runs with Site Information

Use station metadata, sub-unit tracking, or a serial-number convention to identify which site produced each run. In the example below, the site is encoded in the serial number prefix (UNIT-B-…).

site_b_test.py
from tofupilot import TofuPilotClient

client = TofuPilotClient()

client.create_run(
    procedure_id="FINAL-FUNCTIONAL-V3",
    unit_under_test={
        "serial_number": "UNIT-B-1087",
        "part_number": "PROD-100-R4",
    },
    run_passed=True,
    steps=[{
        "name": "Power Rail Check",
        "step_type": "measurement",
        "status": True,
        "measurements": [{
            "name": "vcc_3v3",
            "value": 3.29,
            "unit": "V",
            "limit_low": 3.25,
            "limit_high": 3.35,
        }],
    }],
)

Step 3: Compare Sites in the Dashboard

With standardized procedure IDs, TofuPilot's station comparison view works across sites. Filter by station to see:

Metric              Site A (In-house)    Site B (CM)
FPY                 98.2%                95.1%
Avg vcc_3v3         3.31 V               3.29 V
Std dev vcc_3v3     0.012 V              0.028 V
Cycle time          42 s                 55 s

This table tells a story: Site B has lower yield, lower average voltage, higher variance, and longer cycle time. The higher variance suggests a fixture or process control issue at the CM.
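To sanity-check numbers like these outside the dashboard, per-site FPY and measurement spread can be computed from exported run records. A minimal sketch, assuming a simplified record shape of (site, passed, vcc_3v3 reading); the data values are illustrative:

```python
from statistics import mean, stdev

# Simplified run records: (site tag, run passed, vcc_3v3 reading in V).
runs = [
    ("site-a", True, 3.31), ("site-a", True, 3.30), ("site-a", False, 3.24),
    ("site-b", True, 3.29), ("site-b", False, 3.22), ("site-b", True, 3.33),
]

def site_metrics(runs, site):
    """Compute first-pass yield and vcc_3v3 statistics for one site."""
    site_runs = [r for r in runs if r[0] == site]
    fpy = sum(1 for _, passed, _ in site_runs if passed) / len(site_runs)
    values = [v for _, _, v in site_runs]
    return {"fpy": fpy, "avg": mean(values), "stdev": stdev(values)}

print(site_metrics(runs, "site-a"))
print(site_metrics(runs, "site-b"))
```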

Cross-Site Quality Governance

Set Consistent Limits

Use the same measurement limits at every site. TofuPilot enforces limits at the procedure level, so a unit that passes at Site A will also pass at Site B (assuming the measurements are accurate).
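One way to keep limits identical across sites is to define them once in a shared module that every site's test script imports, rather than hard-coding them per station. A sketch under that assumption; the module name, limits table, and helper are illustrative, not part of the TofuPilot API:

```python
# shared_limits.py -- single source of truth for measurement limits,
# versioned alongside the procedure ID. Values are illustrative.
LIMITS = {
    "FINAL-FUNCTIONAL-V3": {
        "vcc_3v3": {"unit": "V", "limit_low": 3.25, "limit_high": 3.35},
    },
}

def measurement(procedure_id: str, name: str, value: float) -> dict:
    """Build a measurement dict with the centrally defined limits applied."""
    spec = LIMITS[procedure_id][name]
    return {"name": name, "value": value, **spec}

m = measurement("FINAL-FUNCTIONAL-V3", "vcc_3v3", 3.31)
print(m)
```

Every site then builds its measurement payloads through this helper, so a limit change is a single commit instead of an update at each station.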

Monitor Site-to-Site Differences

Check these metrics regularly across sites:

  1. FPY gap: If one site's yield is consistently lower, investigate
  2. Measurement distribution overlap: Measurement histograms from different sites should overlap. If they don't, the sites are producing or testing differently
  3. Failure mode differences: Different top failure modes at different sites suggest different process issues
  4. Cycle time variation: Significantly different cycle times may indicate different test coverage or equipment issues
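The distribution-overlap check in point 2 can be approximated numerically: bin both sites' measurements into a shared histogram and sum the per-bin minima of the normalized counts. This overlap coefficient is a common statistical sketch, not a TofuPilot feature; the sample data is illustrative:

```python
def histogram_overlap(a, b, bins=10):
    """Overlap coefficient of two samples: 1.0 = identical histograms,
    0.0 = no shared bins. Bins span the combined range of both samples."""
    lo, hi = min(a + b), max(a + b)
    width = (hi - lo) / bins or 1.0  # guard against all-equal samples
    def normalized_counts(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [c / len(xs) for c in counts]
    return sum(min(pa, pb)
               for pa, pb in zip(normalized_counts(a), normalized_counts(b)))

site_a = [3.31, 3.30, 3.32, 3.31, 3.30]  # tight cluster
site_b = [3.29, 3.27, 3.33, 3.25, 3.31]  # wider spread
print(histogram_overlap(site_a, site_b))
```

A low coefficient between two sites running the same procedure is a signal to compare fixtures and calibration before comparing yield.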

Handle CM Data Integration

Contract manufacturers often use their own test systems. Options for getting their data into TofuPilot:

  1. Direct integration: CM installs TofuPilot client on their test stations. Cleanest solution, real-time data.
  2. Batch upload: CM exports CSV/JSON files, you upload via API. Works when the CM can't install software on their stations.
  3. API relay: CM pushes data to an intermediary that forwards to TofuPilot. Good for CMs with existing data systems.

cm_batch_upload.py
import csv
from tofupilot import TofuPilotClient

client = TofuPilotClient()

# Upload CM data from their export file
with open("cm_test_results.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        vcc = float(row["vcc_3v3"])
        client.create_run(
            procedure_id="FINAL-FUNCTIONAL-V3",
            unit_under_test={"serial_number": row["serial"]},
            run_passed=row["result"] == "PASS",
            steps=[{
                "name": "Power Rail Check",
                "step_type": "measurement",
                # Check both limits, not just the low one
                "status": 3.25 <= vcc <= 3.35,
                "measurements": [{
                    "name": "vcc_3v3",
                    "value": vcc,
                    "unit": "V",
                    "limit_low": 3.25,
                    "limit_high": 3.35,
                }],
            }],
        )

Fleet Telemetry Across Sites

For companies deploying products globally (robotics fleets, industrial equipment, medical devices), TofuPilot provides a single view of test data regardless of where the unit was tested.

Search by serial number to see a unit's full history: manufactured and tested at Site A, field-tested at the customer's location, returned and retested at Site B. Every measurement, every result, every timestamp in one place.

This is especially valuable for warranty analysis. When a unit comes back for repair, you can compare its original production test data with its current state to understand what degraded.
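That warranty comparison can be sketched as a diff between a unit's original production measurements and its return-test readings, flagging anything that drifted beyond a tolerance. The flat measurement dicts and threshold below are simplified assumptions for illustration, not the TofuPilot data model:

```python
# Original production readings vs. readings after the unit was returned.
# Measurement names and values are illustrative.
production = {"vcc_3v3": 3.31, "battery_mv": 4180.0}
return_test = {"vcc_3v3": 3.30, "battery_mv": 3890.0}

def degradation_report(before, after, threshold_pct=2.0):
    """List measurements that drifted more than threshold_pct percent."""
    drifted = []
    for name, old in before.items():
        new = after.get(name)
        if new is None:  # measurement not repeated at return test
            continue
        pct = (new - old) / old * 100
        if abs(pct) > threshold_pct:
            drifted.append((name, round(pct, 1)))
    return drifted

print(degradation_report(production, return_test))
```

Here the 3.3 V rail is stable while the battery reading dropped about 7%, pointing the repair investigation at the battery rather than the power supply.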
