Auditors want to see that every unit was tested, what was measured, what the limits were, and whether it passed. TofuPilot captures all of this automatically when you structure your OpenHTF tests correctly.
## What Auditors Need to See
Regardless of the standard (ISO 9001, ISO 13485, IATF 16949, AS9100), audit requirements for test records share the same core elements.
| Element | Why Auditors Ask for It |
|---|---|
| Serial number (DUT ID) | Proves traceability to a specific unit |
| Timestamp | Proves when the test happened |
| Station ID | Proves which equipment was used |
| Operator | Proves who ran the test |
| Measurements with limits | Proves acceptance criteria were applied |
| Pass/fail verdict | Proves disposition was recorded |
| Firmware or software version | Proves configuration at time of test |
If your test records are missing any of these, you'll get a finding. TofuPilot stores all of them when you include them in your test run.
## Writing an Audit-Ready Test
A good test record starts with a well-structured OpenHTF test. Include every measurement with explicit limits, and pass metadata through your test run.
```python
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot


@htf.measures(
    htf.Measurement("input_resistance")
    .in_range(minimum=950, maximum=1050)
    .with_units(units.OHM),
    htf.Measurement("leakage_current")
    .in_range(maximum=0.000005)
    .with_units(units.AMPERE),
    htf.Measurement("dielectric_strength_pass").equals(True),
)
def electrical_safety_test(test):
    test.measurements.input_resistance = 1002
    test.measurements.leakage_current = 0.0000013
    test.measurements.dielectric_strength_pass = True


@htf.measures(
    htf.Measurement("output_power")
    .in_range(minimum=48.0, maximum=52.0)
    .with_units(units.WATT),
    htf.Measurement("efficiency_percent").in_range(minimum=89.0),
)
def performance_test(test):
    test.measurements.output_power = 50.1
    test.measurements.efficiency_percent = 92.4


def main():
    test = htf.Test(
        electrical_safety_test,
        performance_test,
        station_id="STATION-EOL-02",
    )
    with TofuPilot(test):
        test.execute(test_start=lambda: "PSU-2026-00512")


if __name__ == "__main__":
    main()
```

Every measurement gets stored with its value, limits, and verdict in TofuPilot. The serial number links the record to the unit's full history.
## What Makes a Good Test Record
Three things separate a record that satisfies auditors from one that raises questions.
**Consistent measurement names.** Use the same name for the same measurement everywhere. If `leakage_current` is called `leak_curr` on one station and `leakage_current` on another, you can't demonstrate consistent process monitoring. Pick one name and stick with it.
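One way to keep names consistent is a shared definitions module that every station script imports. The sketch below is hypothetical (the module and helper names are ours, not OpenHTF or TofuPilot features); station scripts would build their `htf.Measurement` objects from this single source of truth:

```python
# measurement_specs.py -- hypothetical shared module, imported by every
# station's test script so names and limits have one source of truth.
MEASUREMENT_SPECS = {
    "input_resistance": {"minimum": 950, "maximum": 1050, "units": "ohm"},
    "leakage_current": {"maximum": 0.000005, "units": "A"},
}


def check_name(name: str) -> str:
    """Fail fast if a station script uses an unknown measurement name."""
    if name not in MEASUREMENT_SPECS:
        raise KeyError(
            f"Unknown measurement {name!r}; expected one of "
            f"{sorted(MEASUREMENT_SPECS)}"
        )
    return name
```

A station script would then declare `htf.Measurement(check_name("leakage_current"))`, so a typo like `leak_curr` fails when the script loads instead of fragmenting your records across stations.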
**Explicit limits on every measurement.** A measured value without limits is just data. Auditors want to see that you defined acceptance criteria before production, not after. Always use `.in_range()` or `.equals()` on every measurement.
**Meaningful metadata.** Include the operator, station, firmware version, and any other context your quality system defines. When an auditor asks "who tested unit X on what equipment with what software?", you need an answer.
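In OpenHTF, this context can be attached as keyword metadata on `htf.Test`, the same way `station_id` is passed in the example above. The keys below beyond `station_id` are illustrative choices for your quality system, not fixed TofuPilot fields:

```python
# Illustrative metadata; key names (other than station_id) are your
# quality system's choice, not fixed TofuPilot fields.
test_metadata = {
    "station_id": "STATION-EOL-02",
    "operator": "j.martin",            # matches training records
    "firmware_version": "2.4.1",       # matches release records
    "procedure_id": "TP-PSU-EOL-004",  # your QMS document number
}

# Passed through to the run record, e.g.:
# test = htf.Test(electrical_safety_test, performance_test, **test_metadata)
```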
## Viewing and Exporting Records
Once your tests run, TofuPilot stores the complete record. You can access it in three ways.
**Run detail page.** Search for any unit by serial number in TofuPilot's dashboard. The run detail page shows every measurement, its value, its limits, the verdict, timestamp, station, and all metadata. This is your primary view during an audit walkthrough.

**CSV export.** Export run data from the dashboard for offline review or to attach to audit documentation. The export includes all measurements, limits, and metadata.

**API access.** Pull run records programmatically through TofuPilot's REST API at https://tofupilot.app for integration with your QMS or document control system. This is useful if your quality team needs to generate batch records or certificates of conformance.
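A minimal sketch of an authenticated request, assuming a bearer-token scheme and a hypothetical endpoint path (check TofuPilot's API documentation for the real routes and authentication header):

```python
import json  # used in the commented-out fetch below
import urllib.request

API_KEY = "your-api-key"  # placeholder

# Hypothetical endpoint path and query parameter -- consult the real
# TofuPilot API reference before using this in production.
url = "https://tofupilot.app/api/v1/runs?serial_number=PSU-2026-00512"

request = urllib.request.Request(
    url,
    headers={"Authorization": f"Bearer {API_KEY}"},
)

# The actual fetch is left commented out so this sketch has no side effects:
# with urllib.request.urlopen(request) as response:
#     runs = json.load(response)
```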
## Metadata Checklist
Before going into production, verify that your tests capture everything your quality system requires. Here's a starting checklist.
- DUT serial number (unique per unit)
- Test station ID (matches calibration records)
- Operator ID (matches training records)
- Firmware/software version (matches release records)
- All measurements have limits defined
- Measurement names are consistent across stations
- Test procedure name or ID is included
- Work order or batch number (if applicable)
If you're missing any of these, add them to your measurement definitions before production starts. It's much harder to backfill traceability data after the fact.
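The checklist is easy to automate. A small guard like the hypothetical helper below (not a TofuPilot feature) can run in CI or at station startup and refuse to test units until the metadata is complete:

```python
# Hypothetical pre-production guard; adjust REQUIRED_METADATA to match
# what your quality system mandates.
REQUIRED_METADATA = {
    "dut_serial",
    "station_id",
    "operator_id",
    "firmware_version",
    "procedure_id",
}


def missing_metadata(metadata: dict) -> set:
    """Return the required keys absent from a run's metadata."""
    return REQUIRED_METADATA - metadata.keys()


complete = {
    "dut_serial": "PSU-2026-00512",
    "station_id": "STATION-EOL-02",
    "operator_id": "j.martin",
    "firmware_version": "2.4.1",
    "procedure_id": "TP-PSU-EOL-004",
}
assert missing_metadata(complete) == set()

incomplete = {"dut_serial": "PSU-2026-00513", "station_id": "STATION-EOL-02"}
assert missing_metadata(incomplete) == {
    "operator_id", "firmware_version", "procedure_id"
}
```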