Test Data Visualization for Hardware Teams
Numbers in a spreadsheet don't show trends. Charts do. TofuPilot turns your raw test measurements into interactive dashboards that surface yield problems, measurement drift, and station issues at a glance.
Why Visualize Test Data
Hardware test data has three properties that make visualization essential:
- Volume. A production line running 500 units/day across 5 test procedures generates 2,500 test runs daily. You can't read that in a table.
- Patterns. Drift, clustering, bimodal distributions, and outliers are invisible in raw numbers but obvious in a chart.
- Context. A measurement of 3.31V means nothing alone. Plotted against the last 10,000 readings with spec limits overlaid, it tells a story.
Dashboard Views in TofuPilot
Procedure Dashboard
Every test procedure in TofuPilot gets an automatic dashboard showing:
| Widget | What it shows |
|---|---|
| FPY trend | First-pass yield over time (daily, weekly, monthly) |
| Run history | Recent runs with pass/fail status and timing |
| Failure Pareto | Top failure modes ranked by frequency |
| Measurement histograms | Distribution of each measurement across all runs |
| Measurement trends | Each measurement plotted over time with limits |
No setup required. Upload test data and the dashboards populate automatically.
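The FPY numbers behind these widgets are simple ratios: first-attempt passes divided by total runs per period. As a minimal sketch (over hypothetical run records, not TofuPilot's API), daily first-pass yield can be computed like this:

```python
from collections import defaultdict
from datetime import date

# Hypothetical run records: (serial, test date, passed on first attempt)
runs = [
    ("SN001", date(2024, 5, 1), True),
    ("SN002", date(2024, 5, 1), False),
    ("SN003", date(2024, 5, 1), True),
    ("SN004", date(2024, 5, 2), True),
]

totals = defaultdict(int)
passes = defaultdict(int)
for serial, day, passed in runs:
    totals[day] += 1
    if passed:
        passes[day] += 1

# Daily FPY = first-attempt passes / total runs that day
fpy_by_day = {day: passes[day] / totals[day] for day in sorted(totals)}
print(fpy_by_day)
```

The same grouping works per week or per month by bucketing the date differently.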
Unit History
Search by serial number to see every test a unit has been through. This view shows:
- All test procedures the unit has completed
- Pass/fail status for each procedure
- Full measurement data for every run
- Timeline of when each test was executed
Station Comparison
Compare performance across test stations. This is the fastest way to find station-specific issues:
- FPY by station
- Measurement distributions by station
- Cycle time by station
- Failure modes by station
If Station 3 has 5% lower yield than the others, the station comparison view makes it obvious.
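The comparison itself is just per-station FPY plus a gap check against the best performer. A minimal sketch, using hypothetical pass/fail results rather than TofuPilot data:

```python
# Hypothetical pass/fail results grouped by station (illustrative only)
results = {
    "station-1": [True, True, True, True, False, True, True, True, True, True],
    "station-2": [True, True, True, True, True, True, True, True, False, True],
    "station-3": [True, True, False, True, False, True, True, True, True, False],
}

# Per-station FPY: fraction of runs that passed
fpy = {station: sum(r) / len(r) for station, r in results.items()}

# Flag any station more than 5 points below the best performer
best = max(fpy.values())
suspect = [station for station, y in fpy.items() if best - y > 0.05]
print(fpy, suspect)
```

With enough runs per station, a flagged station is a strong hint of a fixture, calibration, or operator issue local to that station.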
Building Effective Test Dashboards
Start with FPY
First-pass yield is the single most important metric for a production test operation. It tells you what percentage of units pass all tests on the first attempt.
Track FPY at three levels:
- Overall FPY across all procedures. This is your headline number.
- FPY per procedure. Which test is causing the most failures?
- FPY per station. Is one station dragging down the overall number?
Add Measurement Distributions
For each critical measurement, check the histogram. A healthy process looks like a tight bell curve centered well within the spec limits.
Warning signs in the histogram:
- Skewed distribution: Process is biased toward one limit
- Wide distribution: High variance, process not well controlled
- Bimodal distribution: Two peaks, suggesting mixed populations (different component lots, different operators, different stations)
- Flat distribution: No central tendency, process is essentially random within limits
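A numeric companion to eyeballing the histogram is a process-capability index such as Cpk, which measures how many "3-sigma widths" fit between the process mean and the nearest spec limit. This is a standard SPC formula, not a TofuPilot feature; the readings and limits below are hypothetical:

```python
import statistics

# Hypothetical voltage readings and spec limits
readings = [3.29, 3.31, 3.30, 3.32, 3.28, 3.30, 3.31, 3.29, 3.30, 3.31]
lsl, usl = 3.20, 3.40  # lower / upper spec limits

mu = statistics.mean(readings)
sigma = statistics.stdev(readings)

# Cpk: distance from the mean to the nearest limit, in units of 3 sigma.
# A tight bell curve centered in the limits gives a high Cpk; 1.33 is a
# common minimum target for a capable process.
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
print(round(mu, 3), round(cpk, 2))
```

A skewed or off-center distribution shows up as one limit dominating the `min()`, and a wide distribution shows up directly as a larger `sigma`.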
Use Trend Charts for Drift Detection
Plot critical measurements over time. The trend chart shows each individual reading as a data point, with spec limits drawn as horizontal lines.
Look for:
- Gradual slope: Measurement drifting toward a limit (fixture wear, calibration drift)
- Step change: Sudden shift in the baseline (new component lot, process change)
- Increasing scatter: Variance growing over time (loss of process control)
- Periodic pattern: Oscillation tied to time of day or production cycle
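A gradual slope can be quantified with a least-squares fit of reading against sample index. The sketch below uses synthetic data drifting by 1 mV per unit tested; it is an illustration of the technique, not a TofuPilot API:

```python
def trend_slope(ys):
    """Least-squares slope of readings over their sample index."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

# Hypothetical readings drifting upward by 1 mV per unit tested
drifting = [3.300 + 0.001 * i for i in range(50)]
print(trend_slope(drifting))  # ≈ 0.001 V per unit
```

Knowing the slope and the distance to the spec limit gives a rough estimate of how many units remain before the drift starts producing failures.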
Failure Pareto
The failure Pareto chart ranks failure modes by frequency. Focus improvement efforts on the top bar: fixing the #1 failure mode usually improves overall yield more than fixing #3, #4, and #5 combined.
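Building the ranking is a straightforward frequency count with a running cumulative share. A minimal sketch over a hypothetical failure-mode log (the mode names are illustrative):

```python
from collections import Counter

# Hypothetical failure-mode log (illustrative data)
failures = (
    ["torque_out_of_spec"] * 12
    + ["overcurrent"] * 5
    + ["comm_timeout"] * 3
    + ["vibration_high"] * 2
    + ["label_misread"] * 1
)

# Rank modes by frequency, most common first
pareto = Counter(failures).most_common()

total = len(failures)
cumulative = 0
for mode, count in pareto:
    cumulative += count
    print(f"{mode:20s} {count:3d}  {cumulative / total:5.0%} cumulative")
```

In this synthetic log the top mode alone accounts for over half of all failures, which is exactly the pattern the Pareto view is designed to surface.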
Code: Exporting Data for Custom Visualizations
TofuPilot's built-in dashboards cover most needs. For custom analysis, export data via the API.
```python
from tofupilot import TofuPilotClient
import csv

client = TofuPilotClient()

# Pull the most recent runs for one procedure
runs = client.get_runs(
    procedure_id="MOTOR-PERFORMANCE",
    limit=1000,
)

# Flatten the peak_torque measurement from each run into a CSV row
with open("motor_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["serial", "date", "peak_torque_nm", "status"])
    for run in runs:
        for step in run.get("steps", []):
            for m in step.get("measurements", []):
                if m["name"] == "peak_torque":
                    writer.writerow([
                        run["unit_under_test"]["serial_number"],
                        run["created_at"],
                        m["value"],
                        "pass" if run["run_passed"] else "fail",
                    ])
```

Dashboard Anti-Patterns
| Anti-pattern | Why it's bad | What to do instead |
|---|---|---|
| Tracking only pass/fail | Misses drift and anomalies | Track individual measurements |
| Weekly summary reports | Too slow to catch issues | Use live dashboards |
| Separate dashboards per station | Can't compare across stations | Use station comparison view |
| No spec limits on charts | Can't assess margin | Always overlay limits |
| Too many metrics on one screen | Information overload | Focus on FPY + top 5 measurements |