
Anomaly Detection in Hardware Test Data

Learn how to detect anomalies in hardware test measurements using TofuPilot's analytics and automated limit checking.

Julien Buteau
Intermediate · 10 min read · March 14, 2026


A unit passes all its limits but something looks wrong. The voltage is 2% higher than usual. The calibration took twice as long. These subtle anomalies are invisible in pass/fail reports but obvious in measurement trends. TofuPilot surfaces them automatically.

Why Pass/Fail Isn't Enough

Spec limits define the acceptable range. But "within spec" doesn't mean "normal." A power supply output that's been stable at 3.30V for 10,000 units and suddenly reads 3.34V is still within the 3.25V-3.35V spec, but it's anomalous. Something changed.

Catching these anomalies early prevents escapes. The unit that reads 3.34V today might read 3.36V tomorrow, after it's shipped.
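To make "within spec but anomalous" concrete, the sketch below compares a new reading against a historical baseline using a z-score. The readings and the 3-sigma threshold are illustrative assumptions, not TofuPilot behavior:

```python
import statistics

# Hypothetical history: a power supply output stable around 3.30 V
history = [3.30, 3.31, 3.29, 3.30, 3.31, 3.30, 3.29, 3.31, 3.30, 3.30]
new_reading = 3.34  # within the 3.25-3.35 V spec, but unusual

mean = statistics.mean(history)
std = statistics.stdev(history)
z = (new_reading - mean) / std

# Flag readings more than 3 standard deviations from the baseline
if abs(z) > 3:
    print(f"Anomalous: {new_reading} V is {z:.1f} sigma above the baseline")
```

A pass/fail check would accept this unit silently; the z-score makes the deviation from history visible.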

Types of Hardware Test Anomalies

Anomaly Type         | Example                                       | Risk
---------------------|-----------------------------------------------|---------------------------
Drift                | Measurement gradually shifting toward a limit | Eventual field failure
Step change          | Measurement jumping to a new baseline         | Process or tooling change
Increased variance   | Measurement scatter widening                  | Loss of process control
Outlier              | Single unit far from the distribution         | Component defect
Bimodal distribution | Two clusters instead of one                   | Mixed component lots
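A step change like the one in the table can be detected by comparing consecutive window means against the local noise. This is a minimal sketch on synthetic data; the window size and the 3x-noise threshold are illustrative choices, not a TofuPilot feature:

```python
import numpy as np

# Synthetic current readings (hypothetical): baseline 2.10 A for the
# first 20 units, then an abrupt step to 2.25 A
rng = np.random.default_rng(0)
readings = np.concatenate([np.full(20, 2.10), np.full(20, 2.25)])
readings = readings + rng.normal(0, 0.01, 40)

window = 10
steps_found = []
for i in range(window, len(readings) - window + 1, window):
    before = readings[i - window:i]
    after = readings[i:i + window]
    # Flag a step when consecutive window means differ by >3x the local noise
    if abs(after.mean() - before.mean()) > 3 * before.std(ddof=1):
        steps_found.append(i)
        print(f"Step change near sample {i}: "
              f"{before.mean():.2f} A -> {after.mean():.2f} A")
```

Both window means stay comfortably inside a wide spec limit, which is exactly why pass/fail checks miss this class of anomaly.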

Setting Up Anomaly Detection in TofuPilot

Step 1: Define Measurements with Limits

Every measurement you upload should include limits. TofuPilot uses these for pass/fail gating, but they also define the baseline for anomaly detection.

measurements_with_limits.py
from tofupilot import TofuPilotClient

client = TofuPilotClient()

client.create_run(
    procedure_id="SENSOR-CALIBRATION",
    unit_under_test={"serial_number": "SENS-7821"},
    steps=[{
        "name": "Zero Offset",
        "step_type": "measurement",
        "status": True,
        "measurements": [{
            "name": "zero_offset_mv",
            "value": 0.12,
            "unit": "mV",
            "limit_low": -1.0,
            "limit_high": 1.0,
        }],
    }, {
        "name": "Sensitivity",
        "step_type": "measurement",
        "status": True,
        "measurements": [{
            "name": "sensitivity_mv_per_g",
            "value": 100.3,
            "unit": "mV/g",
            "limit_low": 95.0,
            "limit_high": 105.0,
        }],
    }],
)

Step 2: Monitor Measurement Distributions

Open the measurement view for any procedure in TofuPilot. The histogram shows the distribution of values across all runs.

A healthy process looks like a tight normal distribution centered well within the spec limits. Warning signs:

  • Distribution shifting: The center of the histogram is moving toward one limit
  • Distribution widening: The tails are getting closer to the limits
  • Multiple peaks: Two or more clusters in the histogram
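These warning signs can be quantified by comparing a recent production window against the historical baseline. The sketch below uses synthetic data; the shift and widening thresholds are illustrative assumptions, not TofuPilot defaults:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical baseline: 500 historical sensitivity readings, sigma = 0.5
baseline = rng.normal(100.0, 0.5, 500)
# Recent production: same center, but doubled scatter (widening)
recent = rng.normal(100.0, 1.0, 50)

# Center shift in units of baseline standard deviations
shift = abs(recent.mean() - baseline.mean()) / baseline.std(ddof=1)
# Ratio of recent scatter to baseline scatter
widening = recent.std(ddof=1) / baseline.std(ddof=1)

if shift > 1.0:
    print("Distribution shifting: center moved relative to baseline")
if widening > 1.5:
    print("Distribution widening: scatter increased relative to baseline")
```

Here only the widening check fires, mirroring the "tails getting closer to the limits" warning sign above.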

Step 3: Set Warning Thresholds

Don't wait for a measurement to hit its spec limit. Set warning thresholds at a percentage of the spec range.

Threshold                 | Level    | Action
--------------------------|----------|--------------------
Within 70% of spec range  | Normal   | No action
70-90% of spec range      | Warning  | Investigate trend
90-100% of spec range     | Critical | Stop and root-cause
Beyond spec range         | Fail     | Unit rejected
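One way to implement these bands in your own tooling is to measure how far a reading sits from the spec-range center, as a fraction of the half-range. This helper is hypothetical (not a TofuPilot API), and treating "percentage of spec range" as distance from center is an interpretive assumption:

```python
def warning_level(value, limit_low, limit_high):
    """Classify a reading by how much of the spec range it consumes.

    Usage is 0.0 at the spec-range center and 1.0 exactly at a limit.
    """
    center = (limit_high + limit_low) / 2
    half_range = (limit_high - limit_low) / 2
    usage = abs(value - center) / half_range
    if usage > 1.0:
        return "fail"
    if usage >= 0.9:
        return "critical"
    if usage >= 0.7:
        return "warning"
    return "normal"

print(warning_level(3.30, 3.25, 3.35))  # dead center of spec
print(warning_level(3.34, 3.25, 3.35))  # 80% of the half-range
```

A reading of 3.34 V passes the spec check but lands in the warning band, prompting a trend investigation before it ever becomes a failure.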

Step 4: Track Trends Over Time

The measurement trend view in TofuPilot shows each reading plotted over time. Use this to spot:

  • Slow drift: A linear trend heading toward a limit. Often caused by fixture wear, calibration drift, or environmental changes.
  • Step changes: An abrupt shift in the baseline. Often caused by a new component lot, process change, or equipment swap.
  • Periodic patterns: Readings that cycle with time of day, day of week, or production batch. Often caused by environmental factors (temperature, humidity) or shift-dependent operator procedures.
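Slow drift, the first pattern above, can be quantified by fitting a line to the trend and extrapolating when it will cross a limit. A sketch on simulated data; the drift rate, noise level, and limit are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical zero-offset readings drifting upward over 200 units,
# e.g. from fixture wear
n = 200
drift_per_unit = 0.002  # mV per unit (simulated)
values = 0.1 + drift_per_unit * np.arange(n) + rng.normal(0, 0.05, n)

# Least-squares line through the trend
slope, intercept = np.polyfit(np.arange(n), values, 1)

limit_high = 1.0  # mV, from the calibration spec
if slope > 0:
    # Extrapolate: units remaining before the fitted line crosses the limit
    units_to_limit = (limit_high - (intercept + slope * n)) / slope
    print(f"Drift of {slope * 1000:.2f} uV/unit; "
          f"limit crossed in ~{units_to_limit:.0f} units at this rate")
```

Extrapolating a fitted trend turns "the readings look like they're creeping up" into a concrete deadline for intervention.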

Automated Data Checks

TofuPilot checks every incoming measurement against its defined limits in real time. Failed measurements are flagged immediately, and the run is marked as failed.

For anomaly detection beyond simple limit checks, export measurement data via the API and apply statistical methods:

cpk_analysis.py
import numpy as np

# Example values (in practice, fetch these from the TofuPilot API)
values = [3.30, 3.31, 3.29, 3.30, 3.32, 3.31, 3.30, 3.34, 3.29, 3.31]
usl = 3.35  # Upper spec limit
lsl = 3.25  # Lower spec limit

mean = np.mean(values)
std = np.std(values, ddof=1)

cpk = min((usl - mean) / (3 * std), (mean - lsl) / (3 * std))
print(f"Cpk: {cpk:.2f}")

if cpk < 1.33:
    print("Process capability below target. Investigate.")

Real-World Example: Detecting a Supplier Quality Escape

A robotics company tests motor controllers on their production line. Every unit measures motor current draw at three load points. For 6 months, the 50% load measurement averaged 2.1A with a standard deviation of 0.05A.

In week 27, the average shifted to 2.25A. Still within the 1.5A-3.0A spec limit, so no units failed. But TofuPilot's trend view showed the step change clearly.

Investigation revealed a new motor driver IC lot with slightly different gate threshold voltages. The company worked with their supplier to tighten the incoming spec before the drift could cause field issues.

Without measurement trending, this would have been invisible until units started failing in the field.
