Test Data & Analytics

Test Data Visualization for Hardware Teams

Learn how to build dashboards and visualizations for hardware test data using TofuPilot's built-in analytics and charting tools.

Julien Buteau
Beginner · 9 min read · March 14, 2026


Numbers in a spreadsheet don't show trends. Charts do. TofuPilot turns your raw test measurements into interactive dashboards that surface yield problems, measurement drift, and station issues at a glance.

Why Visualize Test Data

Hardware test data has three properties that make visualization essential:

  1. Volume. A production line running 500 units/day across 5 test procedures generates 2,500 test runs daily. You can't read that in a table.
  2. Patterns. Drift, clustering, bimodal distributions, and outliers are invisible in raw numbers but obvious in a chart.
  3. Context. A measurement of 3.31V means nothing alone. Plotted against the last 10,000 readings with spec limits overlaid, it tells a story.

Dashboard Views in TofuPilot

Procedure Dashboard

Every test procedure in TofuPilot gets an automatic dashboard showing:

Widget                 | What it shows
FPY trend              | First-pass yield over time (daily, weekly, monthly)
Run history            | Recent runs with pass/fail status and timing
Failure pareto         | Top failure modes ranked by frequency
Measurement histograms | Distribution of each measurement across all runs
Measurement trends     | Each measurement plotted over time with limits

No setup required. Upload test data and the dashboards populate automatically.

Unit History

Search by serial number to see every test a unit has been through. This view shows:

  • All test procedures the unit has completed
  • Pass/fail status for each procedure
  • Full measurement data for every run
  • Timeline of when each test was executed

Station Comparison

Compare performance across test stations. This is the fastest way to find station-specific issues:

  • FPY by station
  • Measurement distributions by station
  • Cycle time by station
  • Failure modes by station

If Station 3 has 5% lower yield than the others, the station comparison view makes it obvious.
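The same comparison can be reproduced offline from exported run data. A minimal sketch, assuming each run record carries a `station_id` and a boolean `run_passed` (field names may differ in your export):

```python
from collections import defaultdict

def fpy_by_station(runs):
    """Yield per station: passed runs / total runs."""
    tally = defaultdict(lambda: [0, 0])  # station -> [passed, total]
    for run in runs:
        counts = tally[run["station_id"]]
        counts[1] += 1
        if run["run_passed"]:
            counts[0] += 1
    return {station: passed / total for station, (passed, total) in tally.items()}

# Hypothetical data: Station 3 produces 5 failures in 20 runs
runs = (
    [{"station_id": "STATION-1", "run_passed": i != 0} for i in range(20)]
    + [{"station_id": "STATION-3", "run_passed": i >= 5} for i in range(20)]
)
print(fpy_by_station(runs))  # {'STATION-1': 0.95, 'STATION-3': 0.75}
```

A 20-point gap like this one is hard to miss once the runs are grouped by station rather than pooled.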

Building Effective Test Dashboards

Start with FPY

First-pass yield is the single most important metric for a production test operation. It tells you what percentage of units pass all tests on the first attempt.

Track FPY at three levels:

  1. Overall FPY across all procedures. This is your headline number.
  2. FPY per procedure. Which test is causing the most failures?
  3. FPY per station. Is one station dragging down the overall number?
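The subtlety in FPY is that retests must not count: only each unit's first attempt matters. A minimal sketch, assuming run records with a `serial` and a boolean `passed`, sorted oldest-first:

```python
def first_pass_yield(runs):
    """Fraction of units whose first run passed (retests ignored)."""
    first_result = {}
    for run in runs:  # oldest first, so setdefault keeps the first attempt
        first_result.setdefault(run["serial"], run["passed"])
    return sum(first_result.values()) / len(first_result)

# Hypothetical: unit B fails, then passes on retest -- still a first-pass failure
runs = [
    {"serial": "A", "passed": True},
    {"serial": "B", "passed": False},
    {"serial": "B", "passed": True},   # retest
    {"serial": "C", "passed": True},
]
print(first_pass_yield(runs))  # 2/3, not 3/4
```

Computing yield over all runs instead of first attempts would report 3/4 here and quietly hide the rework.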

Add Measurement Distributions

For each critical measurement, check the histogram. A healthy process looks like a tight bell curve centered well within the spec limits.

Warning signs in the histogram:

  • Skewed distribution: Process is biased toward one limit
  • Wide distribution: High variance, process not well controlled
  • Bimodal distribution: Two peaks, suggesting mixed populations (different component lots, different operators, different stations)
  • Flat distribution: No central tendency, process is essentially random within limits
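Two of these warning signs are easy to check numerically before you even open the histogram. A rough sketch (the thresholds are illustrative, not standard values), assuming lower and upper spec limits `lsl`/`usl`:

```python
import statistics

def histogram_flags(values, lsl, usl):
    """Flag a wide or off-center distribution relative to the spec window."""
    mean = statistics.fmean(values)
    spread = 6 * statistics.stdev(values)  # ~99.7% of a normal process
    window = usl - lsl
    flags = []
    if spread > 0.75 * window:             # wide: spread eats most of the window
        flags.append("wide")
    if abs(mean - (lsl + usl) / 2) > window / 6:  # skewed toward one limit
        flags.append("skewed")
    return flags

# Hypothetical 3.3 V rail with 3.0-3.6 V limits
healthy = [3.29, 3.30, 3.31, 3.30, 3.29, 3.31]
biased  = [3.49, 3.50, 3.51, 3.50, 3.49, 3.51]
print(histogram_flags(healthy, 3.0, 3.6))  # []
print(histogram_flags(biased, 3.0, 3.6))   # ['skewed']
```

Bimodality and flatness are harder to flag with a single statistic; for those, look at the histogram itself.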

Use Trend Charts for Drift Detection

Plot critical measurements over time. The trend chart shows each individual reading as a data point, with spec limits drawn as horizontal lines.

Look for:

  • Gradual slope: Measurement drifting toward a limit (fixture wear, calibration drift)
  • Step change: Sudden shift in the baseline (new component lot, process change)
  • Increasing scatter: Variance growing over time (loss of process control)
  • Periodic pattern: Oscillation tied to time of day or production cycle

Failure Pareto

The failure pareto chart ranks failure modes by frequency. Focus improvement efforts on the top bar: fixing the #1 failure mode usually improves overall yield more than fixing #3, #4, and #5 combined.
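From exported failure data, the ranking itself is a few lines. A sketch assuming each failed run records a `failure_mode` string (a hypothetical field name):

```python
from collections import Counter

def failure_pareto(failed_runs):
    """Failure modes ranked by frequency, most common first."""
    return Counter(run["failure_mode"] for run in failed_runs).most_common()

# Hypothetical failure log
failed_runs = [{"failure_mode": m} for m in
               ["overcurrent"] * 12 + ["torque_low"] * 4 +
               ["comms_timeout"] * 3 + ["firmware_hash"] * 2 + ["leak"]]
pareto = failure_pareto(failed_runs)
print(pareto[0])  # ('overcurrent', 12): beats modes #3+#4+#5 combined (3+2+1=6)
```

Here the top mode accounts for 12 of 22 failures, so eliminating it alone would cut the failure count roughly in half.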

Code: Exporting Data for Custom Visualizations

TofuPilot's built-in dashboards cover most needs. For custom analysis, export data via the API.

export_measurements.py
from tofupilot import TofuPilotClient
import csv

client = TofuPilotClient()

# Fetch the most recent 1,000 runs for one procedure
runs = client.get_runs(
    procedure_id="MOTOR-PERFORMANCE",
    limit=1000,
)

with open("motor_data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["serial", "date", "torque_nm", "current_a", "status"])
    for run in runs:
        # Collect the measurements of interest across all steps
        # ("peak_torque" and "peak_current" are example measurement names)
        values = {}
        for step in run.get("steps", []):
            for m in step.get("measurements", []):
                if m["name"] in ("peak_torque", "peak_current"):
                    values[m["name"]] = m["value"]
        writer.writerow([
            run["unit_under_test"]["serial_number"],
            run["created_at"],
            values.get("peak_torque"),
            values.get("peak_current"),
            "pass" if run["run_passed"] else "fail",
        ])

Dashboard Anti-Patterns

Anti-pattern                    | Why it's bad                  | What to do instead
Tracking only pass/fail         | Misses drift and anomalies    | Track individual measurements
Weekly summary reports          | Too slow to catch issues      | Use live dashboards
Separate dashboards per station | Can't compare across stations | Use station comparison view
No spec limits on charts        | Can't assess margin           | Always overlay limits
Too many metrics on one screen  | Information overload          | Focus on FPY + top 5 measurements
