Testing
Lightpanda uses a multi-layered testing strategy to ensure correctness across its browser engine, CDP protocol implementation, and Web API surface. This page covers the three main testing approaches: unit tests, end-to-end tests, and Web Platform Tests (WPT).
Prerequisites: You should have Lightpanda built from source before running any tests.
Unit Tests
Unit tests are the primary way to verify individual components of the browser engine. Lightpanda uses Zig’s built-in test framework with a custom test runner that provides enhanced reporting, memory leak detection, and performance tracking.
Running Unit Tests
The simplest way to run all unit tests:
make test
Under the hood, this executes zig build test with Zig’s reference trace enabled. The custom test runner (src/test_runner.zig) replaces the default Zig test runner to provide:
- Color-coded pass/fail/skip output
- Memory leak detection per test
- Slowest test tracking (top 5)
- Optional metrics output in JSON format
Filtering Tests
You can run a subset of tests using the F make variable or the TEST_FILTER environment variable:
# Run only tests matching "server"
make test F="server"
# Run tests with a subfilter (filter#subfilter format)
TEST_FILTER="WebApi: #querySelector" zig build test
The filter matches against test names using substring search. The subfilter (after #) provides a second level of filtering, primarily used for WebApi HTML-based tests.
Environment Variables
The test runner responds to several environment variables:
| Variable | Default | Description |
|---|---|---|
| `TEST_FILTER` | (none) | Filter tests by name. Use `#` to separate filter and subfilter |
| `TEST_VERBOSE` | `true` | Show individual test names and durations |
| `TEST_FAIL_FIRST` | `false` | Stop on the first test failure |
| `METRICS` | `false` | Output allocation and timing metrics as JSON |
Test Infrastructure
The test framework in src/testing.zig provides several utilities beyond the standard library:
- expectEqual – A unified assertion that handles strings, structs, optionals, and tagged unions
- expectDelta – Numeric comparison within a tolerance
- expectJson – Deep JSON value comparison with detailed diff output
- htmlRunner – Runs HTML-based Web API test files against a local test server
- LogFilter – Temporarily suppresses specific log categories during tests that trigger expected errors
Test Lifecycle
Each test run follows a setup/teardown lifecycle:
- beforeAll – Initializes the application, HTTP client, browser instance, session, a local CDP server (port 9583), and a local HTTP test server (port 9582)
- Individual tests – Each test gets its own allocator instance for leak detection
- afterAll – Tears down servers, collects V8 heap statistics, and releases all resources
This lifecycle ensures that tests have a fully functional browser environment including JavaScript execution and network capabilities.
Memory Leak Detection
Every test is checked for memory leaks using Zig’s std.testing.allocator. The test runner also wraps allocations in a TrackingAllocator that records:
- Total allocation count
- Total reallocation count
- Total allocated bytes
When the METRICS environment variable is set to true, these statistics are output as JSON at the end of the test run, along with V8 peak memory usage.
[
{
"name": "browser",
"bench": {
"duration": 1234567890,
"alloc_nb": 500,
"realloc_nb": 50,
"alloc_size": 1048576
}
},
{
"name": "v8",
"bench": {
"duration": 1234567890,
"alloc_nb": 0,
"realloc_nb": 0,
"alloc_size": 8388608
}
}
]
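The metrics JSON can be post-processed with standard tools. A minimal sketch, assuming the output above has been captured to a file (the /tmp/metrics.json path is hypothetical, and its contents mirror the sample output):

```shell
# Summarize per-suite allocation totals from a captured metrics file.
# The file path is hypothetical; the JSON mirrors the sample above.
cat > /tmp/metrics.json <<'EOF'
[
  {"name": "browser", "bench": {"duration": 1234567890, "alloc_nb": 500, "realloc_nb": 50, "alloc_size": 1048576}},
  {"name": "v8", "bench": {"duration": 1234567890, "alloc_nb": 0, "realloc_nb": 0, "alloc_size": 8388608}}
]
EOF
python3 - <<'EOF'
import json
with open("/tmp/metrics.json") as f:
    for suite in json.load(f):
        print(f'{suite["name"]}: {suite["bench"]["alloc_size"]} bytes allocated')
EOF
# prints:
#   browser: 1048576 bytes allocated
#   v8: 8388608 bytes allocated
```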
Writing Unit Tests
Zig tests are declared inline within source files using the test keyword. Lightpanda tests typically use the shared testing utilities:
const testing = @import("testing.zig");
test "parse valid URL" {
const url = try Url.parse(testing.allocator, "https://example.com/path");
defer url.deinit(testing.allocator);
try testing.expectString("example.com", url.host);
}
For tests that require a full browser environment (DOM, JavaScript, network), use the shared test_session and test_browser fixtures initialized during beforeAll.
Web API HTML Tests
Many DOM and Web API tests are written as HTML files in src/browser/tests/. These tests use JavaScript assertions inside HTML pages served by the local test HTTP server. The htmlRunner function loads each HTML file, executes its JavaScript, and calls testing.assertOk() to verify the result.
Test files are organized by feature area:
src/browser/tests/
document/ # Document API tests
element/ # Element API tests
events/ # Event handling tests
css/ # CSS and style tests
custom_elements/ # Custom Elements tests
cdp/ # CDP-specific DOM tests
xhr/ # XMLHttpRequest tests
...
To run a specific category of HTML tests:
TEST_FILTER="WebApi: #querySelector" make test
End-to-End Tests
End-to-end tests verify that Lightpanda works correctly as a complete system, testing the full flow from CDP client connection through page rendering.
Prerequisites
- Clone the demo repository into ../demo relative to the browser repository
- Install the demo's Node.js dependencies
- Install Go version 1.24 or later
Running E2E Tests
make end2end
This launches the Go-based test runner from the demo repository, which connects to Lightpanda via CDP and exercises real-world automation scenarios.
Web Platform Tests (WPT)
Lightpanda is validated against the standardized Web Platform Tests, the same test suite used by major browsers to verify web standards compliance.
Lightpanda uses a fork of the WPT repository with a custom testharnessreport.js for integration with the Lightpanda browser. You can view any WPT test case in a standard browser at wpt.live.
Setting Up the WPT Server
1. Clone the Repository
git clone -b fork --depth=1 git@github.com:lightpanda-io/wpt.git
2. Configure Custom Hosts
From inside the wpt/ directory:
./wpt make-hosts-file | sudo tee -a /etc/hosts
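To confirm the entries landed, you can grep /etc/hosts for the web-platform.test domains that make-hosts-file emits (web-platform.test is the standard WPT test hostname):

```shell
# Check whether the WPT host aliases are present in /etc/hosts.
if grep -q "web-platform.test" /etc/hosts 2>/dev/null; then
  echo "WPT hosts configured"
else
  echo "WPT hosts missing"
fi
```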
3. Generate the Manifest
./wpt manifest
See the WPT setup guide for full details.
Running WPT Tests
Three terminal sessions are needed:
Terminal 1 – Start the WPT HTTP server:
cd wpt/
./wpt serve
Terminal 2 – Start a Lightpanda browser instance:
zig build run -- --insecure_disable_tls_host_verification
Terminal 3 – Run the WPT test runner:
cd ../demo/wptrunner && go run .
To run a specific test:
cd ../demo/wptrunner && go run . Node-childNodes.html
WPT Runner Options
| Option | Description |
|---|---|
| `--summary` | Show a summary of results |
| `--json` | Output results in JSON format |
| `--concurrency` | Set the concurrency limit for parallel test execution |
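The options above can be combined. A hypothetical invocation that prints a result summary while running tests in parallel (the limit of 4 is an arbitrary example value):

```shell
# Run the WPT suite with a summary and a concurrency limit of 4.
# Flag names are from the options table; the limit is an example.
cd ../demo/wptrunner && go run . --summary --concurrency 4
```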
Performance Tip
The full WPT suite takes a long time to run. Build in release mode for significantly faster execution:
zig build -Doptimize=ReleaseFast run
Test Architecture Overview
Each testing layer exercises Lightpanda at a different depth: unit tests call engine components directly, Web API HTML tests run JavaScript assertions through the local test servers, end-to-end tests drive the full browser over CDP, and WPT validates the Web API surface against the shared standards suite.
Troubleshooting
Tests fail to connect
If tests fail with connection errors, verify that no other process is using ports 9582 (test HTTP server) or 9583 (test CDP server). The beforeAll setup waits for both servers to be ready before running tests.
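One quick way to check whether those ports are free is to probe them with bash's /dev/tcp pseudo-device (a sketch; any port-probing tool such as lsof or ss works equally well):

```shell
# Probe the test server ports; a successful connect means another
# process is already listening there.
for port in 9582 9583; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port: busy"
  else
    echo "port $port: free"
  fi
done
```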
Memory leak reports
A test marked as “leaked” means it allocated memory through std.testing.allocator without freeing it. Check for missing defer statements or error paths that skip cleanup.
WPT tests timing out
Ensure the WPT HTTP server is running and the custom hosts are configured in /etc/hosts. Also verify that TLS host verification is disabled when starting the Lightpanda instance for WPT testing.
Related Topics
- Building from Source – Prerequisites and build instructions
- Architecture – Understanding Lightpanda’s internal structure
- CDP Protocol – CDP implementation details relevant to E2E testing
- Contributing – Guidelines for submitting test improvements