Most hardware teams start tracking test results in Excel or Google Sheets. It works for a while, then it doesn't. You lose real-time visibility, can't handle concurrent access, and manual entry introduces errors that compound over time.
TofuPilot replaces that workflow with structured, automated test data collection. You keep writing tests in Python, and TofuPilot handles storage, analytics, and traceability.
## The Spreadsheet Pattern
A typical Excel test log looks something like this:
| Serial Number | Date | Operator | Result | Voltage (V) | Current (A) | Firmware | Notes |
|---|---|---|---|---|---|---|---|
| SN-001 | 2025-01-15 | Alice | PASS | 3.31 | 0.52 | v2.1 | |
| SN-002 | 2025-01-15 | Bob | FAIL | 3.58 | 0.89 | v2.1 | Over current limit |
| SN-003 | 2025-01-16 | Alice | PASS | 3.29 | 0.48 | v2.1 | |
This format has real problems at scale:
- No concurrent access. Two operators can't log results simultaneously without risking overwrites or merge conflicts.
- No validation. Nothing stops someone from entering "PSAS" instead of "PASS" or putting voltage in the current column.
- No analytics. Calculating FPY, Cpk, or failure trends means writing fragile formulas or pivot tables that break when the sheet structure changes.
- No history. When someone edits a cell, the original value is gone. You have no audit trail.
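To make the analytics gap concrete, here's what even a basic metric like first-pass yield costs to compute by hand from a sheet export. A minimal sketch using the rows from the table above (inlined as CSV for illustration):

```python
import csv
from io import StringIO

# First-pass yield: units passing on their first recorded run / total units tested.
rows = list(csv.DictReader(StringIO(
    "Serial Number,Result\n"
    "SN-001,PASS\n"
    "SN-002,FAIL\n"
    "SN-003,PASS\n"
)))

# Keep only each unit's first result; retests don't count toward FPY.
first_results = {}
for row in rows:
    first_results.setdefault(row["Serial Number"], row["Result"])

fpy = sum(r == "PASS" for r in first_results.values()) / len(first_results)
print(f"FPY: {fpy:.1%}")  # 2 of 3 first runs pass -> 66.7%
```

Every such script is one more fragile artifact that breaks when a column is renamed or a typo like "PSAS" slips into the Result column.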
## The OpenHTF + TofuPilot Equivalent
Here's the same check expressed as an OpenHTF test with the TofuPilot integration. Every run is automatically structured, validated, and stored.
```python
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot


@htf.measures(
    htf.Measurement("voltage").with_units(units.VOLT).in_range(3.1, 3.5),
    htf.Measurement("current").with_units(units.AMPERE).in_range(maximum=0.7),
)
def power_supply_check(test):
    voltage = 3.31  # Read from your instrument
    current = 0.52
    test.measurements.voltage = voltage
    test.measurements.current = current


def main():
    test = htf.Test(power_supply_check)
    with TofuPilot(test):
        test.execute(test_start=lambda: "SN-001")


if __name__ == "__main__":
    main()
```

Each run automatically captures the serial number, pass/fail outcome, every measurement with its limits, timestamps, and the test station identity. No manual entry. No copy-paste errors.
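The `in_range` validators do the checking the spreadsheet never did. As a rough sketch of that behavior in plain Python (not OpenHTF's actual implementation), here is the limit logic applied to the failing SN-002 row from the table above:

```python
def in_range(value, minimum=None, maximum=None):
    """Mimic a min/max limit check; None means unbounded on that side."""
    if minimum is not None and value < minimum:
        return False
    if maximum is not None and value > maximum:
        return False
    return True

# SN-002 from the spreadsheet: 3.58 V is above the 3.5 V upper limit,
# and 0.89 A exceeds the 0.7 A maximum -- both measurements fail.
voltage_ok = in_range(3.58, minimum=3.1, maximum=3.5)
current_ok = in_range(0.89, maximum=0.7)
print(voltage_ok, current_ok)  # False False
```

With OpenHTF, a measurement outside its limits fails the phase, and the run is recorded as FAIL without anyone typing a result into a cell.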
## What Changes When You Switch
| Capability | Excel / Google Sheets | TofuPilot |
|---|---|---|
| Data entry | Manual, error-prone | Automatic from test code |
| Concurrent access | File locks, merge conflicts | Multi-user, multi-station by default |
| Measurement validation | None | Limits enforced at test time |
| FPY and yield trends | Manual formulas | Built-in dashboard, real-time |
| Cpk and SPC | Requires custom macros | Automatic control charts |
| Failure analysis | Manual filtering | Failure Pareto, drill-down by station |
| Audit trail | No history | Full revision history per run |
| Search and filter | Ctrl+F | Filter by serial, station, date, outcome, batch |
| API access | None | REST API for integrations |
## Keeping Your Existing Data
If you have historical test data in spreadsheets that you want to preserve, you can import it through TofuPilot's REST API. Structure each row as a test run with measurements and POST it to the API.
```python
import csv

from tofupilot import TofuPilotClient

client = TofuPilotClient()

with open("test_results.csv") as f:
    reader = csv.DictReader(f)
    for row in reader:
        client.create_run(
            procedure_id="power-board-test",
            unit_under_test={"serial_number": row["Serial Number"]},
            run_passed=row["Result"] == "PASS",
            steps=[
                {
                    "name": "power_supply_check",
                    "step_passed": row["Result"] == "PASS",
                    "measurements": [
                        {
                            "name": "voltage",
                            "measured_value": float(row["Voltage (V)"]),
                            "unit": "V",
                            "lower_limit": 3.1,
                            "upper_limit": 3.5,
                        },
                        {
                            "name": "current",
                            "measured_value": float(row["Current (A)"]),
                            "unit": "A",
                            "upper_limit": 0.7,
                        },
                    ],
                }
            ],
        )
```

Run this once to backfill your history, then switch all new tests to the OpenHTF workflow.
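Spreadsheet data usually needs cleaning before import: the typos and misplaced values the sheet never caught will otherwise crash at `float(...)` or upload bad runs. A hedged pre-flight sketch, assuming the column names from the table above (adjust to your export):

```python
VALID_RESULTS = {"PASS", "FAIL"}

def validate_row(row):
    """Return a list of problems with one CSV row; an empty list means importable."""
    problems = []
    if not row.get("Serial Number", "").strip():
        problems.append("missing serial number")
    if row.get("Result") not in VALID_RESULTS:
        problems.append(f"bad result value: {row.get('Result')!r}")
    for col in ("Voltage (V)", "Current (A)"):
        try:
            float(row.get(col, ""))
        except ValueError:
            problems.append(f"non-numeric {col}: {row.get(col)!r}")
    return problems

# A row with the "PSAS" typo mentioned earlier gets flagged instead of imported.
row = {"Serial Number": "SN-004", "Result": "PSAS",
       "Voltage (V)": "3.30", "Current (A)": "0.51"}
print(validate_row(row))  # ["bad result value: 'PSAS'"]
```

Skip or fix flagged rows before calling `create_run`, so your backfilled history is at least as clean as your new data.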
## What You Get in the Dashboard
Once your tests report to TofuPilot, open the dashboard at tofupilot.app. You'll find:
- FPY trends across stations and time periods, calculated automatically.
- Measurement histograms with Cpk and control charts for every measurement you define.
- Failure Pareto showing which measurements fail most often and on which stations.
- Station throughput so you can see which lines are running and which are idle.
- Full traceability per serial number, with every test run, measurement, and revision linked.
These are the analytics you'd otherwise build with pivot tables, VBA macros, or custom scripts. They update in real time as new runs come in.
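For reference, the Cpk shown per measurement follows the standard process-capability formula: the distance from the sample mean to the nearest limit, in units of three standard deviations. A minimal sketch using the voltage limits from the test above and illustrative sample readings:

```python
import statistics

def cpk(samples, lower_limit, upper_limit):
    """Process capability index: min distance from mean to a limit / 3 sigma."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return min(upper_limit - mean, mean - lower_limit) / (3 * sigma)

# Illustrative voltage readings, all well inside the 3.1-3.5 V limits.
voltages = [3.31, 3.29, 3.33, 3.30, 3.28, 3.32]
print(round(cpk(voltages, 3.1, 3.5), 2))  # ~3.47: a very capable process
```

A Cpk above 1.33 is a common rule of thumb for a capable process; values near or below 1.0 mean the distribution is crowding a limit even if every unit still passes.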
## Running Both Systems in Parallel
You don't have to switch overnight. A practical migration path:
1. Pick one test station and add the TofuPilot integration to its OpenHTF tests.
2. Run both systems for a week. Keep logging to the spreadsheet while TofuPilot collects the same data automatically.
3. Compare results to build confidence that nothing is lost.
4. Roll out to the remaining stations once you're satisfied.
The spreadsheet stays as a backup until you're ready to retire it. No data is at risk during the transition.
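The comparison step can be as simple as diffing the sheet against an export of TofuPilot runs by serial number and outcome. A hypothetical sketch, where `load_outcomes` reads whichever CSV exports you have on hand (the demo below uses inline dicts so it runs as-is):

```python
import csv

def load_outcomes(path, serial_col, result_col):
    """Map serial number -> recorded outcome from a CSV export."""
    with open(path) as f:
        return {row[serial_col]: row[result_col] for row in csv.DictReader(f)}

def compare(sheet, tofu):
    """Report serials missing from TofuPilot and serials whose outcomes differ."""
    missing = sorted(sheet.keys() - tofu.keys())
    mismatched = sorted(sn for sn in sheet.keys() & tofu.keys()
                        if sheet[sn] != tofu[sn])
    return missing, mismatched

# Inline sample data; in practice, build both dicts with load_outcomes().
sheet = {"SN-001": "PASS", "SN-002": "FAIL", "SN-003": "PASS"}
tofu = {"SN-001": "PASS", "SN-002": "FAIL"}
print(compare(sheet, tofu))  # (['SN-003'], [])
```

Zero missing runs and zero mismatches for a week of parallel operation is a concrete, checkable exit criterion for the migration.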