🧪 Quality Assurance

Testing Methodology

137 end-to-end tests validate that every generated SQL query matches Prisma's output exactly.

  • 137 E2E tests: every query validated
  • 100% parity coverage: byte-for-byte match
  • 2 database engines: PostgreSQL & SQLite
  • 3 ORM benchmarks: Prisma v6/v7, Drizzle

How We Validate Correctness

Every test follows a rigorous five-step validation process to ensure the generated SQL produces results identical to Prisma's.

1. Generate SQL from Prisma Query

Parse Prisma query arguments and generate equivalent SQL using the same models and schema. The generated SQL uses parameterized queries for security and performance.
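As a rough sketch of the idea (this is a hypothetical, simplified helper for flat equality filters only, not the project's actual generateSQL()), values are turned into $1, $2, ... bind parameters instead of being inlined into the SQL text:

```typescript
// Hypothetical sketch: translate a flat Prisma-style `where` object into a
// parameterized WHERE clause, returning the SQL text and its bind parameters.
type Where = Record<string, string | number>;

function buildWhere(where: Where): { sql: string; params: (string | number)[] } {
  const params: (string | number)[] = [];
  const clauses = Object.entries(where).map(([field, value]) => {
    params.push(value);                       // value travels as a bind parameter
    return `"${field}" = $${params.length}`;  // $1, $2, ... placeholder style
  });
  return { sql: clauses.length ? `WHERE ${clauses.join(' AND ')}` : '', params };
}

const q = buildWhere({ status: 'DONE', priority: 1 });
console.log(q.sql);    // WHERE "status" = $1 AND "priority" = $2
console.log(q.params); // [ 'DONE', 1 ]
```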

2. Execute Both Queries in Parallel

Run the generated SQL directly via postgres.js or better-sqlite3, and execute the same query through Prisma. Both hit the same database state.
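In outline, that looks like the sketch below; the runner functions are stand-ins for the real driver and Prisma calls:

```typescript
// Stand-ins for the two execution paths; the real suite calls postgres.js /
// better-sqlite3 for the raw SQL and PrismaClient for the ORM query.
async function runGeneratedSql(): Promise<object[]> {
  return [{ id: 1, name: 'Acme' }];
}
async function runPrismaQuery(): Promise<object[]> {
  return [{ id: 1, name: 'Acme' }];
}

// Promise.all starts both reads against the same database state, so no
// writes can land between them and skew the comparison.
async function executeBoth() {
  const [sqlRows, prismaRows] = await Promise.all([
    runGeneratedSql(),
    runPrismaQuery(),
  ]);
  return { sqlRows, prismaRows };
}
```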

3. Normalize Results

Handle type differences (BigInt vs Number, Decimal precision, Date serialization) and normalize object key ordering for fair comparison.

4. Deep Equality Check

Verify results match exactly: same number of rows, same field values, same nested relations, same ordering. Any mismatch fails the test.
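A minimal sketch of this check, assuming both results have already been normalized (key order included), might look like:

```typescript
// Compare two normalized result sets as canonical JSON; any difference in
// row count, field values, nesting, or ordering throws with both payloads.
function assertParity(prismaResult: unknown, sqlResult: unknown): void {
  const expected = JSON.stringify(prismaResult);
  const actual = JSON.stringify(sqlResult);
  if (expected !== actual) {
    throw new Error(`Parity mismatch\n  prisma: ${expected}\n  sql:    ${actual}`);
  }
}

assertParity([{ id: 1, name: 'Acme' }], [{ id: 1, name: 'Acme' }]); // passes
```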

5. Benchmark Performance

Measure execution time with 5 warmup runs, then average 5-50 iterations per test, adapted to query complexity. Compare against Prisma v6, Prisma v7, and Drizzle ORM.

Advanced Validation Techniques

Data Type Normalization

  • BigInt Conversion: JavaScript BigInt → Number for comparison
  • Decimal Handling: Prisma Decimal → Float with 10-digit precision
  • Date Normalization: All DateTime values → null (focus on data, not timestamps)
  • JSON Parsing: Automatic detection and parsing of JSON strings
  • Object Key Sorting: Alphabetical ordering for consistent comparison
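The rules above can be sketched as one recursive function. This is an illustrative sketch, not the suite's actual code, and it omits the Decimal rule (which needs Prisma's Decimal type):

```typescript
// Recursive normalization: BigInt -> Number, Date -> null, JSON-looking
// strings parsed, object keys sorted alphabetically for stable comparison.
function normalize(value: unknown): unknown {
  if (typeof value === 'bigint') return Number(value);   // BigInt conversion
  if (value instanceof Date) return null;                // Date normalization
  if (typeof value === 'string' && /^[[{]/.test(value)) {
    try { return normalize(JSON.parse(value)); } catch { /* plain string */ }
  }
  if (Array.isArray(value)) return value.map(normalize);
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.keys(value as object)
        .sort()                                          // alphabetical key order
        .map(k => [k, normalize((value as Record<string, unknown>)[k])] as [string, unknown]),
    );
  }
  return value;
}

const row = normalize({ b: BigInt(2), a: new Date(), c: '{"x":1}' });
console.log(JSON.stringify(row)); // {"a":null,"b":2,"c":{"x":1}}
```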

Performance Benchmarking

  • Warmup Phase: 5 iterations to prime caches and JIT
  • Adaptive Iterations: 5-50 runs based on query complexity
  • Isolated Measurement: Each query type measured independently
  • Multi-ORM Comparison: Prisma v6, v7, Drizzle, Generated SQL
  • SQL Generation Time: Separate timing for query generation overhead
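The warmup-then-measure loop reduces to a few lines; the iteration counts below mirror the text, while the function itself is a stand-in, not the suite's harness:

```typescript
// Time a query runner: 5 unmeasured warmup runs to prime caches and the JIT,
// then an adaptive number of measured iterations averaged into ms per run.
function benchmark(run: () => void, iterations: number): number {
  for (let i = 0; i < 5; i++) run();                 // warmup phase
  const start = performance.now();
  for (let i = 0; i < iterations; i++) run();        // measured phase
  return (performance.now() - start) / iterations;   // mean ms per iteration
}

let calls = 0;
const avgMs = benchmark(() => { calls += 1; }, 10);
// calls === 15 (5 warmup + 10 measured); avgMs is the measured-phase mean
```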

Comprehensive Test Coverage

Tests cover every Prisma read operation across multiple complexity levels

Query Operations

  • findMany with complex filters
  • findFirst with skip & pagination
  • findUnique by ID & unique fields
  • count with WHERE conditions
  • aggregate (sum, avg, min, max)
  • groupBy with HAVING clauses

Complex Scenarios

  • Nested includes (4 levels deep)
  • Relation filters (some/every/none)
  • Distinct with window functions
  • Cursor pagination
  • Select + include combined
  • Relation counts (_count)

Filter Types

  • Comparison (lt/lte/gt/gte)
  • Logical (AND/OR/NOT)
  • String ops (contains/startsWith)
  • NULL checks (is/isNot)
  • IN/NOT IN arrays
  • Case sensitivity modes
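Most of these filters reduce to a small operator table. The mapping below is a hypothetical illustration (the real generator also handles nesting, NOT, and case-sensitivity modes), using PostgreSQL-style placeholders:

```typescript
// Illustrative table mapping Prisma filter keywords to SQL fragments.
const OPS: Record<string, (col: string, p: string) => string> = {
  lt:         (col, p) => `${col} < ${p}`,
  lte:        (col, p) => `${col} <= ${p}`,
  gt:         (col, p) => `${col} > ${p}`,
  gte:        (col, p) => `${col} >= ${p}`,
  contains:   (col, p) => `${col} LIKE '%' || ${p} || '%'`,
  startsWith: (col, p) => `${col} LIKE ${p} || '%'`,
  in:         (col, p) => `${col} = ANY(${p})`,  // PostgreSQL array form
};

const clause = OPS.gte('"price"', '$1');
console.log(clause); // "price" >= $1
```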

PostgreSQL Testing

  • ILIKE case-insensitive searches
  • JSON/JSONB operations
  • Array field handling
  • Composite type support
  • Window function validation
  • Transaction isolation testing

SQLite Testing

  • LIKE pattern matching
  • JSON1 extension validation
  • Window function emulation
  • DISTINCT optimization
  • Subquery correlation
  • Text affinity handling
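One dialect difference called out above can serve as an example: PostgreSQL has ILIKE, while a portable generator can lower both sides for SQLite. The function below is illustrative, not the project's actual API:

```typescript
// Dialect-aware case-insensitive "contains": ILIKE on PostgreSQL,
// LOWER(...) LIKE LOWER(...) on SQLite.
type Dialect = 'postgres' | 'sqlite';

function insensitiveContains(dialect: Dialect, col: string, p: string): string {
  return dialect === 'postgres'
    ? `${col} ILIKE '%' || ${p} || '%'`
    : `LOWER(${col}) LIKE '%' || LOWER(${p}) || '%'`;
}

console.log(insensitiveContains('postgres', '"name"', '$1'));
// "name" ILIKE '%' || $1 || '%'
```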

Example Test Case

See how we validate a complex nested query with relation filters

tests/e2e/postgres.test.ts
it('nested relation filter', () =>
  runParityTest(
    db,
    benchmarkResults,
    'findMany nested relation',
    'Organization',
    {
      method: 'findMany',
      where: {
        projects: {
          some: {
            tasks: { some: { status: 'DONE' } }
          }
        }
      }
    },
    () => db.prisma.organization.findMany({
      where: {
        projects: {
          some: {
            tasks: { some: { status: 'DONE' } }
          }
        }
      },
      orderBy: { id: 'asc' }
    }),
  )
)

// runParityTest internally:
// 1. Calls generateSQL() with the args
// 2. Executes generated SQL directly
// 3. Executes Prisma query
// 4. Normalizes both results
// 5. Deep equality check - fails if any difference
// 6. Benchmarks execution time

What Happens During Test Execution

1. Query Generation (Microseconds)

The generateSQL() function parses Prisma args and creates parameterized SQL. This step is benchmarked separately to measure query generation overhead.

2. Parallel Execution (Milliseconds)

Both queries hit the same database state simultaneously using Promise.all(), ensuring fair comparison and identical data conditions.

3. Deep Normalization

Results undergo recursive normalization: BigInt→Number, Decimal→Float(10), Date→null, JSON parse, key sorting. This ensures byte-for-byte comparison accuracy.

4. Strict Equality

JSON stringify comparison with zero tolerance. Any mismatch in row count, field values, nested objects, or ordering fails the test with detailed diff output.

5. Performance Measurement

After validation, 5-50 iterations measure average execution time. Results include: Prisma v6, Prisma v7, Drizzle ORM, Generated SQL, and SQL generation overhead.

Multi-Version Validation

Every test runs against both Prisma v6 and v7 to ensure compatibility across versions:

Prisma v6 (6.16.3)

  • Direct PrismaClient usage
  • Legacy engine architecture
  • Baseline performance metrics

Prisma v7 (7.2.0)

  • Adapter-based architecture
  • @prisma/adapter-pg & @prisma/adapter-better-sqlite3
  • New engine optimizations

Automated Benchmark Reports

All benchmark results are automatically generated and stored as JSON files for complete transparency:

benchmark-results/v6-postgres-latest.json
benchmark-results/v7-postgres-latest.json
benchmark-results/v6-sqlite-latest.json
benchmark-results/v7-sqlite-latest.json

Each file contains: test name, Prisma execution time, generated SQL time, Drizzle time, speedup ratios, and ISO timestamp.
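A plausible shape for one entry, based on the fields listed above (property names and the sample numbers are hypothetical; the actual JSON in the repo may differ):

```typescript
// Hypothetical shape of one record in a benchmark-results JSON file.
interface BenchmarkEntry {
  testName: string;
  prismaMs: number;        // Prisma execution time
  generatedSqlMs: number;  // generated-SQL execution time
  drizzleMs: number;       // Drizzle execution time
  speedup: number;         // prismaMs / generatedSqlMs
  timestamp: string;       // ISO 8601
}

const entry: BenchmarkEntry = {
  testName: 'findMany nested relation',
  prismaMs: 4.2,
  generatedSqlMs: 1.1,
  drizzleMs: 2.0,
  speedup: 4.2 / 1.1,
  timestamp: new Date(0).toISOString(),
};
```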

Explore the Full Test Suite

All 137 tests are open source. Review the test code, benchmarks, and validation logic.