
Migrate from Custom Test Databases

Replace your custom PostgreSQL, SQLite, or Access test database with structured test data collection using OpenHTF and TofuPilot.

Julien Buteau
Intermediate · 9 min read · March 14, 2026

Many hardware teams build custom databases for test results: PostgreSQL, SQLite, Microsoft Access. They work well at first. Then someone asks for FPY trends across stations, or you need multi-site access, or an auditor wants full revision history, and the maintenance burden outweighs the original simplicity.

TofuPilot replaces that custom infrastructure with a managed platform that handles storage, analytics, and traceability out of the box.

Common Custom DB Patterns

Most homegrown test databases follow one of a few schemas. Here's a typical one:

typical_test_schema.sql
-- The pattern most teams converge on
CREATE TABLE test_runs (
    id SERIAL PRIMARY KEY,
    serial_number VARCHAR(50),
    test_name VARCHAR(100),
    result VARCHAR(10),  -- 'PASS' or 'FAIL'
    operator VARCHAR(50),
    station VARCHAR(50),
    started_at TIMESTAMP,
    duration_seconds FLOAT
);

CREATE TABLE measurements (
    id SERIAL PRIMARY KEY,
    run_id INTEGER REFERENCES test_runs(id),
    name VARCHAR(100),
    value FLOAT,
    unit VARCHAR(20),
    lower_limit FLOAT,
    upper_limit FLOAT,
    passed BOOLEAN
);

This structure captures the basics. But it misses things that matter at scale:

  • No built-in analytics. FPY, Cpk, and control charts require custom queries and visualization code that someone has to maintain.
  • No audit trail. If someone updates a row, the original value is gone unless you've built trigger-based history tables.
  • No multi-site access. A local PostgreSQL or SQLite file doesn't serve teams across locations without infrastructure work.
  • Schema drift. Each test station might insert slightly different data depending on who wrote the INSERT statement.
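To make the first point concrete, here is the kind of analytics code a team ends up writing and maintaining against a schema like the one above. This is a sketch of a first-pass-yield calculation (a hypothetical helper; it assumes one row per run, keyed by serial number and start time as in the test_runs table):

```python
def first_pass_yield(runs):
    """FPY = units that passed their first recorded attempt / total units.

    `runs` is an iterable of (serial_number, started_at, result) tuples,
    mirroring rows from a test_runs table like the one above.
    """
    first_attempt = {}
    for serial, started_at, result in sorted(runs, key=lambda r: r[1]):
        # Keep only the earliest attempt per unit
        first_attempt.setdefault(serial, result)
    if not first_attempt:
        return 0.0
    passed = sum(1 for r in first_attempt.values() if r == "PASS")
    return passed / len(first_attempt)

# Example: SN-1 failed first and passed on retest; SN-2 passed first try.
runs = [
    ("SN-1", "2026-01-01T09:00", "FAIL"),
    ("SN-1", "2026-01-01T10:00", "PASS"),
    ("SN-2", "2026-01-01T09:30", "PASS"),
]
print(first_pass_yield(runs))  # → 0.5
```

This is only FPY for one station. Add per-station grouping, time windows, Cpk, and a plotting layer, and the "simple" database has quietly become an analytics product someone has to maintain.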

TestStand Database Users

If you're using NI TestStand's built-in database logger, you have a variation of this problem. TestStand writes to SQL Server, Oracle, or Access using a fixed schema with UUT_RESULT, STEP_RESULT, PROP_RESULT, and type-specific tables (PROP_NUMERICLIMIT, PROP_NUMERIC, PROP_STRINGVALUE). Querying it requires 5-6 table JOINs, and adding custom metadata means modifying the Process Model's database mapping.

The same migration pattern applies: replace the database INSERT (or TestStand's database logger) with OpenHTF measurements and let TofuPilot handle storage. If you're migrating from TestStand specifically, see the dedicated guide on migrating from NI TestStand to Python.

Replacing Custom Inserts with OpenHTF

If your current workflow looks like "run test script, INSERT results into database," here's the OpenHTF + TofuPilot equivalent. Instead of managing the database, you define measurements as part of the test and let TofuPilot handle storage.

A typical custom database insert might look like this:

old_custom_insert.py
# What you're replacing: manual DB inserts after each test
import psycopg2

conn = psycopg2.connect("dbname=testdata user=testeng")
cur = conn.cursor()

cur.execute(
    "INSERT INTO test_runs (serial_number, test_name, result, station) "
    "VALUES (%s, %s, %s, %s) RETURNING id",
    ("SN-5001", "thermal_cycle", "PASS", "STATION-3"),
)
run_id = cur.fetchone()[0]

cur.execute(
    "INSERT INTO measurements (run_id, name, value, unit, lower_limit, upper_limit, passed) "
    "VALUES (%s, %s, %s, %s, %s, %s, %s)",
    (run_id, "peak_temp", 84.2, "C", 70.0, 90.0, True),
)

conn.commit()
conn.close()

Here's the same test in OpenHTF with TofuPilot:

thermal_cycle_test.py
import openhtf as htf
from openhtf.util import units
from tofupilot.openhtf import TofuPilot

@htf.measures(
    htf.Measurement("peak_temp")
        .with_units(units.DEGREE_CELSIUS)
        .in_range(70.0, 90.0),
    htf.Measurement("settling_time")
        .with_units(units.SECOND)
        .in_range(maximum=30.0),
    htf.Measurement("thermal_resistance")
        .in_range(0.5, 2.0),
)
def thermal_cycle(test):
    test.measurements.peak_temp = 84.2
    test.measurements.settling_time = 18.7
    test.measurements.thermal_resistance = 1.1

def main():
    test = htf.Test(thermal_cycle)
    with TofuPilot(test):
        test.execute(test_start=lambda: "SN-5001")

if __name__ == "__main__":
    main()

No database connection code. No SQL. No schema maintenance. Measurements are defined with their limits in the test itself, so every run is validated and structured identically.
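Conceptually, each in_range declaration compiles down to a validator that runs when the measurement is set. The sketch below shows that logic in plain Python; it is a simplified stand-in for illustration, not OpenHTF's actual implementation:

```python
def in_range(minimum=None, maximum=None):
    """Simplified stand-in for an in_range measurement validator."""
    def validate(value):
        if minimum is not None and value < minimum:
            return False
        if maximum is not None and value > maximum:
            return False
        return True
    return validate

# The limits declared in the @htf.measures decorator above
peak_temp_ok = in_range(70.0, 90.0)
settling_ok = in_range(maximum=30.0)

print(peak_temp_ok(84.2))  # → True  (within 70-90)
print(settling_ok(42.0))   # → False (exceeds the 30 s cap; the phase would fail)
```

Because the limits live next to the measurement definition, a station can't silently log an out-of-spec value as a pass the way a raw INSERT can.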

Migrating Historical Data

You have years of test data in your custom database. TofuPilot's REST API lets you import it with full timestamp preservation.

import_from_custom_db.py
import psycopg2
from tofupilot import TofuPilotClient

client = TofuPilotClient()
conn = psycopg2.connect("dbname=testdata user=testeng")
cur = conn.cursor()

# Fetch runs with their measurements
cur.execute("""
    SELECT r.serial_number, r.test_name, r.result, r.started_at, r.duration_seconds,
           m.name, m.value, m.unit, m.lower_limit, m.upper_limit
    FROM test_runs r
    JOIN measurements m ON m.run_id = r.id
    ORDER BY r.id
""")

current_key = None
current_run = None

def flush(run):
    # Submit one fully accumulated run to TofuPilot
    client.create_run(
        procedure_id=run["procedure_id"],
        unit_under_test={"serial_number": run["serial_number"]},
        run_passed=run["passed"],
        started_at=run["started_at"],
        duration=run["duration"],
        steps=run["steps"],
    )

for row in cur:
    serial, test_name, result, started_at, duration, m_name, m_val, m_unit, m_low, m_high = row

    key = (serial, started_at)
    if key != current_key:
        # New run: flush the previous one before starting fresh
        if current_run is not None:
            flush(current_run)
        current_key = key
        current_run = {
            "procedure_id": test_name,
            "serial_number": serial,
            "passed": result == "PASS",
            "started_at": started_at,
            "duration": duration,
            "steps": [
                {"name": test_name, "step_passed": result == "PASS", "measurements": []}
            ],
        }

    current_run["steps"][0]["measurements"].append({
        "name": m_name,
        "measured_value": m_val,
        "unit": m_unit,
        "lower_limit": m_low,
        "upper_limit": m_high,
    })

# Flush the last run, which the loop never reaches
if current_run is not None:
    flush(current_run)

conn.close()

Adapt the query to match your schema. The key is mapping your tables to TofuPilot's structure: each row in test_runs becomes a run, each row in measurements becomes a measurement within a step.
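For reference, the payload that the script above builds for a single run has this shape (values are illustrative; the field names follow the create_run call in the script):

```python
# One imported run, as accumulated before the create_run call
run_payload = {
    "procedure_id": "thermal_cycle",
    "unit_under_test": {"serial_number": "SN-5001"},
    "run_passed": True,
    "started_at": "2024-03-01T09:14:00Z",
    "duration": 312.5,
    "steps": [
        {
            "name": "thermal_cycle",
            "step_passed": True,
            "measurements": [
                {
                    "name": "peak_temp",
                    "measured_value": 84.2,
                    "unit": "C",
                    "lower_limit": 70.0,
                    "upper_limit": 90.0,
                },
            ],
        },
    ],
}
```

If your schema splits a run into multiple named steps, append one entry per step to the steps list instead of grouping everything under the test name.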

What Replaces Your Custom Queries

If you've built custom SQL queries or Python scripts for analytics, TofuPilot's dashboard replaces them.

  Your custom query / script                      TofuPilot built-in feature
  SELECT COUNT(*) ... GROUP BY result for yield   FPY trends, updated in real time
  Statistical process control scripts             Cpk and control charts per measurement
  WHERE result = 'FAIL' GROUP BY measurement      Failure Pareto with drill-down
  GROUP BY station for throughput                 Station throughput dashboard
  WHERE serial_number = 'X' for traceability      Serial number search with full history
  Custom reporting scripts                        Exportable reports and REST API

TofuPilot tracks all of this automatically. Open the Analytics tab to see FPY, Cpk, and failure analysis for any procedure.

Running Both Systems During Transition

You can run your custom database and TofuPilot in parallel to validate the migration:

  1. Add TofuPilot to one test station while keeping your existing database inserts active. Both systems receive the same results.
  2. Compare data for a week. Check that measurement values, pass/fail counts, and timestamps match between your database and TofuPilot.
  3. Remove the custom database inserts from that station once you're confident. Move to the next station.
  4. Keep your old database read-only as an archive. You can always query it for historical verification.
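During step 1, the dual-write can be as simple as putting both destinations behind one function, so that removing the legacy insert later is a one-line change. This sketch uses stand-in sinks (in a real station they would be your psycopg2 INSERT and the TofuPilot upload):

```python
def record_result(result, sinks):
    """Fan one test result out to every active sink.

    During the transition, `sinks` holds both the legacy database writer
    and the TofuPilot writer; after cutover, drop the legacy entry.
    """
    for sink in sinks:
        sink(result)

# Stand-ins for the real writers (legacy DB insert / TofuPilot client call)
legacy_rows, tofupilot_rows = [], []

sinks = [legacy_rows.append, tofupilot_rows.append]
record_result({"serial_number": "SN-5001", "result": "PASS"}, sinks)

print(legacy_rows == tofupilot_rows)  # → True: both systems saw the same result
```

The comparison in step 2 then reduces to diffing the two sinks' contents for the validation week.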

The custom database doesn't need to go away immediately. It just stops being the system of record once TofuPilot takes over.
