# What Is ATE with TofuPilot
Automated test equipment (ATE) is a system that tests a device without manual intervention. It applies stimuli, measures responses, and makes pass/fail decisions based on predefined limits. This guide covers what ATE involves, how modern Python-based ATE compares to traditional systems, and how to log ATE results with TofuPilot.
## What ATE Includes
An ATE system has four layers:
| Layer | Purpose | Examples |
|---|---|---|
| Test executive | Sequences test steps, manages flow | OpenHTF, NI TestStand, custom scripts |
| Instruments | Apply stimuli and measure responses | DMM, oscilloscope, power supply, signal generator |
| Fixture | Connects instruments to the DUT | Bed-of-nails, pogo pins, cable harness |
| Software | Controls instruments, records data | Python + PyVISA, LabVIEW, C# |
The device under test (DUT) or unit under test (UUT) sits in the fixture. The test executive runs the sequence. Instruments measure. Software records.
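The pass/fail decision in the software layer comes down to a limit comparison on a measured value. Here is a minimal sketch with the instrument calls stubbed out; the function names, limits, and the hardcoded current reading are illustrative, not part of any real driver API:

```python
# Minimal sketch of the software layer's job. Instrument I/O is
# stubbed out; names and values are illustrative only.

def apply_stimulus() -> None:
    """Stand-in for commanding a power supply to energize the DUT."""
    pass  # a real station would send SCPI commands here

def measure_response() -> float:
    """Stand-in for reading the DUT's supply current from a DMM."""
    return 101.3  # a real station would query the instrument here

def run_step(minimum: float, maximum: float) -> tuple[float, bool]:
    """Apply stimulus, measure, and make the pass/fail decision."""
    apply_stimulus()
    value = measure_response()
    return value, minimum <= value <= maximum

value, passed = run_step(minimum=90, maximum=110)
print(value, passed)  # 101.3 True
```

A test executive such as OpenHTF wraps exactly this loop: it runs each step, records the value, and compares it against predefined limits.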
## Traditional ATE vs Python-Based ATE
| Aspect | Traditional (NI/Keysight) | Python-Based |
|---|---|---|
| Test executive | NI TestStand ($3-5K/seat) | OpenHTF (free, open source) |
| Instrument control | LabVIEW, proprietary drivers | PyVISA, SCPI, open drivers |
| Data storage | Local database, proprietary format | TofuPilot (cloud or self-hosted) |
| Version control | Difficult with binary files | Git-native (Python scripts) |
| Platform | Windows only | Windows, Linux, macOS |
| Deployment | Manual install per station | pip install, Docker, CI/CD |
| Cost per station | $5-20K in software licenses | $0 in software licenses |
Python-based ATE uses the same instruments and fixtures. The difference is the software stack. You replace proprietary test executives and data systems with open-source tools and TofuPilot.
## Prerequisites
- Python 3.10+
- OpenHTF installed (`pip install openhtf`)
- TofuPilot Python SDK installed (`pip install tofupilot`)
## Step 1: Define the Test Sequence
Each test step becomes an OpenHTF phase. The test executive runs them in order, collects measurements, and determines pass/fail.
```python
import openhtf as htf
from openhtf.util import units


@htf.measures(
    htf.Measurement("supply_current_mA")
    .in_range(minimum=90, maximum=110)
    .with_units(units.MILLIAMPERE),
)
def phase_power_up(test):
    """Apply power and measure supply current."""
    test.measurements.supply_current_mA = 101.3


@htf.measures(
    htf.Measurement("output_frequency_Hz")
    .in_range(minimum=999000, maximum=1001000)
    .with_units(units.HERTZ),
)
def phase_frequency_check(test):
    """Measure output frequency against specification."""
    test.measurements.output_frequency_Hz = 1000250


@htf.measures(
    htf.Measurement("self_test_result").equals("PASS"),
)
def phase_self_test(test):
    """Command the DUT to run its built-in self-test."""
    test.measurements.self_test_result = "PASS"
```

## Step 2: Connect to TofuPilot
TofuPilot replaces the proprietary database that traditional ATE systems use. Every test run uploads automatically with measurements, limits, and pass/fail status.
```python
from tofupilot.openhtf import TofuPilot

test = htf.Test(
    phase_power_up,
    phase_frequency_check,
    phase_self_test,
)

with TofuPilot(test):
    test.execute(test_start=lambda: input("Scan DUT serial: "))
```

## Step 3: Track ATE Performance
TofuPilot tracks ATE results automatically. Open the Analytics tab to see:
- First pass yield per test procedure and station
- Measurement distributions with limit overlays
- Failure Pareto showing which test steps fail most
- Station throughput (units per hour)
- Station comparison to catch fixture degradation or instrument drift
This data replaces the custom reports that traditional ATE software generates. It's available across all stations in real time.
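First pass yield counts only the first run of each serial number, so retests do not inflate the metric. A minimal sketch of the computation, assuming run records arrive as `(serial, passed)` pairs in time order (the data shape is illustrative, not TofuPilot's API):

```python
def first_pass_yield(runs: list[tuple[str, bool]]) -> float:
    """FPY = units passing on their first attempt / units tested."""
    first_results: dict[str, bool] = {}
    for serial, passed in runs:
        first_results.setdefault(serial, passed)  # keep only the first run
    return sum(first_results.values()) / len(first_results)

runs = [
    ("SN001", True),
    ("SN002", False),
    ("SN002", True),   # retest pass does not count toward FPY
    ("SN003", True),
]
print(first_pass_yield(runs))  # 2 of 3 units passed on the first attempt
```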
## ATE Architecture Patterns
| Pattern | Stations | Best For |
|---|---|---|
| Single station, single DUT | 1 | Prototyping, low volume |
| Single station, multi-DUT | 1 | Parallel testing, higher throughput |
| Multi-station, shared fixtures | 2-10 | Medium volume production |
| Multi-station, line integration | 10-100 | High volume, MES integration |
TofuPilot supports all patterns. Each station runs its own test script and uploads results independently. The dashboard aggregates data across stations, lines, and factories.
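For the single-station, multi-DUT pattern, one way to picture the concurrency is one test sequence per fixture slot running in parallel. This is an illustrative sketch of that shape only, with the per-DUT sequence stubbed out; it is not OpenHTF's or TofuPilot's multi-DUT mechanism:

```python
from concurrent.futures import ThreadPoolExecutor

def run_sequence(serial: str) -> tuple[str, bool]:
    """Stand-in for executing the full test sequence on one DUT slot."""
    return serial, True  # a real slot would run its test script here

# One worker per fixture slot; results come back in submission order.
serials = ["SN001", "SN002", "SN003", "SN004"]
with ThreadPoolExecutor(max_workers=len(serials)) as pool:
    results = list(pool.map(run_sequence, serials))
print(results)
```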