Lab - Ferret Test Runner
April 29, 2026 · View on GitHub
Lab is a flexible test runner designed specifically for Ferret scripts. It helps automate testing for web scraping, browser automation, API integration, and regression scenarios written in Ferret Query Language (FQL).
Notice: This branch contains the upcoming Lab for Ferret v2. For the stable v1 release, please visit CLI v1.
Perfect for:
- End-to-end web application testing
- Web scraping validation and monitoring
- API integration testing
- Browser automation testing
- Regression testing for web applications
Read the introductory blog post about Lab here.
Table of Contents
- Features
- Installation
- Quick Start
- Test Suites
- Advanced Usage
- Configuration Reference
- Architecture
- Development
- Best Practices
- Troubleshooting
- Contributing
- License
Features
Performance & Scalability
- Parallel execution - Run multiple tests concurrently for faster feedback
- Configurable concurrency - Control the number of simultaneous test executions
- Test retry mechanism - Automatically retry failed tests with customizable attempts
- Batch execution - Run tests multiple times with configurable intervals
Flexible Runtime Support
- Built-in Ferret runtime - Execute tests using the embedded Ferret engine
- Remote HTTP runtime - Connect to remote Ferret services over HTTP/HTTPS
- External binary runtime - Use custom Ferret CLI installations
- Multi-runtime testing - Test against different Ferret versions or runtime configurations
Multiple Source Types
- Local filesystem - Execute scripts from local directories
- Git repositories - Fetch and run tests directly from Git repositories over HTTP/HTTPS
- HTTP sources - Download and execute scripts from web URLs
- Glob pattern matching - Select multiple files using wildcard patterns
Static Content Serving
- Built-in HTTP server - Serve static files for testing web applications
- Dedicated serve command - Run the static file server without executing tests
- Multiple static endpoints - Host different directories at separate URLs
- Custom aliases - Name content endpoints for better organization
- Dynamic port allocation - Automatically find available ports
Rich Reporting & Monitoring
- Multiple output formats - Console and simple reporters are available
- Detailed test results - Execution metrics and timing information
- Wait conditions - Wait for external services before running tests
- Environment variable support - Configure tests via environment variables
Installation
Binary Downloads
Download the latest pre-built binaries from the releases page.
The recommended local install location is ~/.local/bin, which does not require elevated permissions.
Linux
curl -L https://github.com/MontFerret/lab/releases/latest/download/lab_linux_amd64.tar.gz | tar xz
mkdir -p ~/.local/bin
mv lab ~/.local/bin/
Make sure ~/.local/bin is in your PATH:
export PATH="$HOME/.local/bin:$PATH"
To make the change permanent, add the line above to your shell profile, such as ~/.bashrc, ~/.zshrc, or equivalent.
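For example, assuming a Bash shell:
# Persist the PATH update and reload the profile
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc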
macOS Intel
curl -L https://github.com/MontFerret/lab/releases/latest/download/lab_darwin_amd64.tar.gz | tar xz
mkdir -p ~/.local/bin
mv lab ~/.local/bin/
macOS Apple Silicon
curl -L https://github.com/MontFerret/lab/releases/latest/download/lab_darwin_arm64.tar.gz | tar xz
mkdir -p ~/.local/bin
mv lab ~/.local/bin/
Make sure ~/.local/bin is in your PATH:
export PATH="$HOME/.local/bin:$PATH"
System-wide install
If you prefer installing Lab system-wide, move the binary to /usr/local/bin:
sudo mv lab /usr/local/bin/
This step may require administrator privileges.
Windows
Download the .zip file from the releases page, extract lab.exe, and place it in a directory included in your PATH.
Install Script
The install script detects your operating system and architecture, downloads the matching binary, and installs it into ~/.local/bin by default.
curl -fsSL https://raw.githubusercontent.com/MontFerret/lab/main/install.sh | sh
To inspect the script before running it:
curl -fsSL https://raw.githubusercontent.com/MontFerret/lab/main/install.sh -o install.sh
less install.sh
sh install.sh
To install into a custom directory, set LAB_INSTALL_DIR:
LAB_INSTALL_DIR="$HOME/bin" sh install.sh
Installing into system directories, such as /usr/local/bin, may require elevated permissions.
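For example, to install system-wide with the script (the target directory here is just an illustration):
# Install into /usr/local/bin, which typically requires sudo
curl -fsSL https://raw.githubusercontent.com/MontFerret/lab/main/install.sh -o install.sh
sudo LAB_INSTALL_DIR=/usr/local/bin sh install.sh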
Docker
Run Lab in a container without installing it locally:
# Pull the latest image
docker pull montferret/lab:latest
# Run a simple test
docker run --rm -v $(pwd):/workspace montferret/lab:latest run /workspace/tests/
# With custom options
docker run --rm -v $(pwd):/workspace montferret/lab:latest \
run --concurrency=4 --reporter=simple /workspace/tests/
The container entrypoint is Lab-aware, so docker run <image> run ... and docker run <image> version invoke the Lab binary directly, while raw commands such as docker run <image> /bin/sh -c 'echo ok' still pass through to the container shell.
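For example, illustrating the behavior described above:
# Invokes the Lab binary directly
docker run --rm montferret/lab:latest version
# Bypasses Lab and runs a plain shell command inside the container
docker run --rm montferret/lab:latest /bin/sh -c 'echo ok'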
Docker Compose Example:
version: '3.8'
services:
lab:
image: montferret/lab:latest
volumes:
- ./tests:/workspace/tests
- ./static:/workspace/static
command: ["run", "--serve", "/workspace/static", "/workspace/tests/"]
Build from Source
For development or custom builds:
# Prerequisites: Go 1.23+ required
git clone https://github.com/MontFerret/lab.git
cd lab
go build -o lab .
# Or use the Makefile
make build
Verify Installation
lab version
lab --help
Quick Start
Basic Usage
The simplest way to run Ferret scripts with Lab:
The lab run command is required for script execution; root invocations such as lab tests/ or lab -f tests/ are not supported.
# Execute a single FQL script
lab run myscript.fql
# Run all FQL scripts in a directory
lab run myscripts/
# Run with increased concurrency
lab run --concurrency=4 myscripts/
# Run tests multiple times
lab run --times=3 myscript.fql
Use lab version --runtime=... when you want to inspect the version reported by a specific remote or binary runtime.
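For example (the runtime URL and binary path below are placeholders):
# Version of the built-in runtime
lab version
# Version reported by a remote HTTP runtime
lab version --runtime=https://ferret.example.com
# Version reported by an external Ferret binary
lab version --runtime=bin:/usr/local/bin/ferret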
Your First Test
Create a simple test file example.fql:
LET doc = DOCUMENT("https://www.github.com", {
driver: "cdp",
userAgent: "Lab Test Runner"
})
// Wait for page to load
WAIT_ELEMENT(doc, "header")
// Extract page title
LET title = doc.title
// Return result
RETURN {
url: doc.url,
title: title,
hasGitHubLogo: ELEMENT_EXISTS(doc, "[aria-label*='GitHub']")
}
Run it:
lab run example.fql
Using Chrome DevTools Protocol
For browser automation, you'll need a Chrome or Chromium instance running in headless mode:
# Start Chrome in headless mode in a separate terminal
google-chrome --headless --remote-debugging-port=9222
# Run your tests using the default CDP address
lab run --cdp=http://127.0.0.1:9222 browser-tests/
# Or use a custom CDP address
lab run --cdp=http://localhost:9223 browser-tests/
Example Output
$ lab run example.fql
✓ example.fql (1.23s)
  └─ Assertions: 1 passed, 0 failed
Tests: 1 passed, 0 failed
Time: 1.23s
Test Suites
Lab supports test suites defined in YAML format, enabling more complex scenarios with assertions, parameters, setup, and cleanup steps.
Basic Test Suite Structure
query:
text: |
LET doc = DOCUMENT("https://github.com/", { driver: "cdp" })
HOVER(doc, ".HeaderMenu-details")
CLICK(doc, ".HeaderMenu a")
WAIT_NAVIGATION(doc)
WAIT_ELEMENT(doc, '.IconNav')
FOR el IN ELEMENTS(doc, '.IconNav a')
RETURN TRIM(el.innerText)
assert:
text: RETURN T::NOT::EMPTY(@lab.data.query.result)
Save as github-test.yaml and run:
lab run github-test.yaml
Reference External Scripts
Keep your FQL scripts separate and reference them in test suites.
navigation.fql:
LET doc = DOCUMENT(@url, { driver: "cdp" })
WAIT_ELEMENT(doc, "body")
RETURN doc.title
suite.yaml:
query:
ref: ./scripts/navigation.fql
params:
url: "https://example.com"
assert:
text: |
RETURN T::NOT::EMPTY(@lab.data.query.result)
AND T::CONTAINS(@lab.data.query.result, "Example")
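Run the suite the same way as a single script:
lab run suite.yaml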
Complex Test Scenarios
name: "E-commerce User Journey"
description: "Test complete user purchase flow"
setup:
text: |
LET baseUrl = "https://demo-shop.example.com"
RETURN { baseUrl }
query:
text: |
LET doc = DOCUMENT(@lab.data.setup.result.baseUrl, { driver: "cdp" })
// Navigate to product
CLICK(doc, ".product-item:first-child a")
WAIT_NAVIGATION(doc)
// Add to cart
CLICK(doc, ".add-to-cart")
WAIT_ELEMENT(doc, ".cart-confirmation")
// Go to checkout
CLICK(doc, ".checkout-btn")
WAIT_NAVIGATION(doc)
RETURN {
currentUrl: doc.url,
cartItems: LENGTH(ELEMENTS(doc, ".cart-item")),
totalPrice: INNER_TEXT(doc, ".total-price")
}
assert:
text: |
LET result = @lab.data.query.result
RETURN T::CONTAINS(result.currentUrl, "checkout")
AND result.cartItems > 0
AND T::NOT::EMPTY(result.totalPrice)
cleanup:
text: |
// Clear cart or perform cleanup
RETURN "Cleanup completed"
Parameterized Tests
Create reusable test suites with parameters:
query:
text: |
LET doc = DOCUMENT(@testUrl, {
driver: "cdp",
timeout: @pageTimeout
})
WAIT_ELEMENT(doc, @selector)
RETURN {
title: doc.title,
elementExists: ELEMENT_EXISTS(doc, @selector)
}
assert:
text: |
LET result = @lab.data.query.result
RETURN result.elementExists == true
Run with parameters:
lab run --param=testUrl:"https://example.com" \
--param=pageTimeout:5000 \
--param=selector:"h1" \
test-suite.yaml
Data-Driven Testing
Use external data sources for broader test coverage:
query:
text: |
LET testData = [
{ url: "https://site1.com", expectedTitle: "Site 1" },
{ url: "https://site2.com", expectedTitle: "Site 2" }
]
FOR test IN testData
LET doc = DOCUMENT(test.url, { driver: "cdp" })
WAIT_ELEMENT(doc, "title")
RETURN {
url: test.url,
expectedTitle: test.expectedTitle,
actualTitle: doc.title,
matches: doc.title == test.expectedTitle
}
assert:
text: |
FOR result IN @lab.data.query.result
FILTER result.matches != true
RETURN false
RETURN true
Advanced Usage
File Resolution
Lab supports multiple source locations for maximum flexibility.
Local Files
# Single file
lab run /path/to/test.fql
# Directory with glob patterns
lab run "tests/**/*.fql"
lab run tests/integration/
# Multiple paths
lab run --files=tests/unit/ --files=tests/integration/ --files=scripts/smoke.fql
Git Repositories
Fetch and execute tests directly from Git repositories:
# HTTPS Git repository
lab run git+https://github.com/username/test-repo.git//tests/
# HTTP Git repository
lab run git+http://git.example.com/tests.git//integration/
# Specific branch or tag
lab run git+https://github.com/username/tests.git@v1.2.0//suite.yaml
# Private repositories using token-based authentication
lab run git+https://username:token@github.com/private/repo.git//tests/
HTTP Sources
Download scripts from web URLs:
# Direct script URL
lab run https://raw.githubusercontent.com/user/repo/main/test.fql
# Multiple HTTP sources
lab run https://example.com/tests/suite1.yaml https://example.com/tests/suite2.yaml
Static File Serving
Lab can serve local directories over HTTP during test execution. Served endpoints are available in FQL scripts under @lab.static.<alias>.
Standalone Static Server
Use lab serve when you want to expose local directories over HTTP without running a test suite. Positional entries are the primary syntax, and repeated --serve flags are also supported for symmetry with lab run.
# Serve a single directory
lab serve ./website
# Serve multiple directories with aliases
lab serve ./frontend@app ./mockdata@api
Basic Static Serving
# Serve files from ./website directory
lab run --serve ./website tests/
FQL script:
LET doc = DOCUMENT(@lab.static.website, { driver: "cdp" })
Multiple Static Endpoints
lab run --serve ./app --serve ./api-mocks tests/
FQL script:
LET appPage = DOCUMENT(@lab.static.app, { driver: "cdp" })
LET apiData = IO::NET::HTTP::GET(@lab.static["api-mocks"] + "/users.json")
Custom Static Aliases
lab run --serve ./frontend@app --serve ./mockdata@api tests/
FQL script:
LET homePage = DOCUMENT(@lab.static.app + "/index.html", { driver: "cdp" })
LET userData = IO::NET::HTTP::GET(@lab.static.api + "/user/123.json")
Advanced Static Server Example
lab run \
--serve ./dist@webapp \
--serve ./test-fixtures@fixtures \
--serve ./mock-apis@mocks \
--concurrency=3 \
tests/e2e/
Reachable Static URLs for Remote Runtimes
When Lab runs against a remote runtime or inside a containerized environment, use --serve-host to advertise a client-reachable host instead of loopback. Use --serve-bind when you need to control the listener interface explicitly.
lab run \
--runtime=https://ferret.example.com \
--serve ./dist@app \
--serve-host host.docker.internal \
--serve-bind 0.0.0.0 \
tests/
If --serve-host is set without --serve-bind, Lab automatically binds static servers to all interfaces: 0.0.0.0 for IPv4 and hostnames, or :: for IPv6 literals.
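For example, omitting --serve-bind here lets Lab bind the static server to 0.0.0.0 automatically (the hostnames are placeholders carried over from the example above):
lab run \
  --runtime=https://ferret.example.com \
  --serve ./dist@app \
  --serve-host host.docker.internal \
  tests/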
Remote Ferret Runtime
Lab can execute tests against remote Ferret instances instead of using the built-in runtime.
HTTP/HTTPS Runtime
# Connect to remote Ferret service
lab run --runtime=https://ferret.example.com/api tests/
# With custom headers and path
lab run \
--runtime=https://ferret.example.com \
--runtime-param=headers:'{"Authorization": "Bearer token123"}' \
--runtime-param=path:"/v1/execute" \
tests/
When the runtime URL already includes a path, Lab sends run requests to that exact path. The optional runtime-param=path value overrides the run endpoint only. lab version --runtime=... uses the runtime URL path and requests its sibling /info endpoint.
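For example, assuming a runtime URL that already includes a path (the URL is a placeholder):
# Run requests are sent to the exact path in the URL
lab run --runtime=https://ferret.example.com/api/run tests/
# lab version queries the sibling /info endpoint, here /api/info
lab version --runtime=https://ferret.example.com/api/run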
The HTTP runtime sends POST requests with:
{
"text": "FQL script content",
"params": {
"key": "value"
}
}
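For illustration, a rough curl equivalent of that request (the host, the /v1/execute path, and the JSON content type are assumptions for this sketch):
curl -X POST https://ferret.example.com/v1/execute \
  -H 'Content-Type: application/json' \
  -d '{"text": "RETURN 1 + 1", "params": {}}'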
External Binary Runtime
Use custom Ferret CLI installations:
# Use specific Ferret binary
lab run --runtime=bin:./custom-ferret tests/
# With runtime params forwarded as --param entries
lab run \
--runtime=bin:/usr/local/bin/ferret-v0.18 \
--runtime-param=timeout:30 \
tests/
# With raw binary flags
lab run \
--runtime=bin:/usr/local/bin/ferret \
--runtime-param='flags:["--timeout=60", "--verbose"]' \
tests/
Runtime Comparison Testing
Test against multiple runtime versions:
# Test with built-in runtime
lab run tests/ > builtin-results.txt
# Test with remote runtime
lab run --runtime=https://ferret-v0.17.example.com tests/ > remote-v0.17-results.txt
# Compare results
diff builtin-results.txt remote-v0.17-results.txt
Performance Optimization
Parallel Execution
# Run up to 8 tests simultaneously
lab run --concurrency=8 tests/
# Balance speed and resource usage
lab run --concurrency=4 --timeout=60 large-test-suite/
Test Repetition & Retry
# Run each test 3 times for reliability testing
lab run --times=3 tests/flaky/
# Retry failed tests up to 2 additional times
lab run --attempts=3 tests/
# Add delay between test cycles
lab run --times=5 --times-interval=10 stress-tests/
Conditional Execution
# Wait for services to be available before running tests
lab run \
--wait=http://127.0.0.1:9222/json/version \
--wait=postgres://localhost:5432/testdb \
--wait-timeout=30 \
tests/integration/
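A typical local workflow is to start headless Chrome in the background and let Lab wait until the CDP endpoint responds before running browser tests (the test path is a placeholder):
# Start Chrome in the background
google-chrome --headless --remote-debugging-port=9222 &
# Run tests once CDP is reachable
lab run \
  --wait=http://127.0.0.1:9222/json/version \
  --wait-timeout=30 \
  --cdp=http://127.0.0.1:9222 \
  tests/browser/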
Configuration Reference
Command Line Flags
These flags apply to lab run.
| Flag | Short | Environment Variable | Default | Description |
|---|---|---|---|---|
| --files | -f | LAB_FILES | - | Location of FQL script files to run |
| --timeout | -t | LAB_TIMEOUT | 30 | Test timeout in seconds |
| --cdp | - | LAB_CDP | http://127.0.0.1:9222 | Chrome DevTools Protocol address |
| --reporter | - | LAB_REPORTER | console | Output reporter: console, simple |
| --runtime | -r | LAB_RUNTIME | - | URL to remote Ferret runtime |
| --runtime-param | --rp | LAB_RUNTIME_PARAM | - | Parameters for remote runtime |
| --concurrency | -c | LAB_CONCURRENCY | 1 | Number of parallel test executions |
| --times | - | LAB_TIMES | 1 | Number of times to run each test |
| --attempts | -a | LAB_ATTEMPTS | 1 | Number of retry attempts for failed tests |
| --times-interval | - | LAB_TIMES_INTERVAL | 0 | Interval between test cycles in seconds |
| --serve | - | LAB_SERVE | - | Served directory mapping exposed over HTTP |
| --serve-bind | - | LAB_SERVE_BIND | - | Host to bind static servers to, without port |
| --serve-host | - | LAB_SERVE_HOST | - | Host to advertise for static server URLs, without port |
| --param | -p | LAB_PARAM | - | Query parameters for tests |
| --wait | -w | LAB_WAIT | - | Wait for resource availability |
| --wait-timeout | --wt | LAB_WAIT_TIMEOUT | 5 | Wait timeout in seconds |
| --wait-attempts | - | LAB_WAIT_ATTEMPTS | 5 | Number of wait attempts |
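Each flag can also be set through its environment variable, so the following two invocations are equivalent:
lab run --concurrency=4 --timeout=60 --reporter=simple tests/
LAB_CONCURRENCY=4 LAB_TIMEOUT=60 LAB_REPORTER=simple lab run tests/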
Environment Variables
Set environment variables for consistent configuration across environments:
# Basic configuration
export LAB_TIMEOUT=60
export LAB_CONCURRENCY=4
export LAB_REPORTER=simple
# CDP configuration
export LAB_CDP=http://chrome-headless:9222
# Runtime configuration
export LAB_RUNTIME=https://ferret-api.example.com
export LAB_RUNTIME_PARAM='headers:{"API-Key":"secret123"}'
# Run tests
lab run tests/
Configuration Examples
CI/CD Configuration
#!/bin/bash
# ci-test.sh
# Set CI-friendly defaults
export LAB_TIMEOUT=120
export LAB_CONCURRENCY=2
export LAB_REPORTER=simple
export LAB_ATTEMPTS=3
# Wait for services
lab run \
--wait=http://app:3000/health \
--wait=postgres://db:5432/testdb \
--wait-timeout=60 \
tests/integration/
Local Development
#!/bin/bash
# dev-test.sh
export LAB_CDP=http://localhost:9222
export LAB_TIMEOUT=30
export LAB_CONCURRENCY=1
# Serve local assets and run tests
lab run \
--serve ./dist@app \
--serve ./fixtures@data \
tests/dev/
Load Testing
#!/bin/bash
# load-test.sh
lab run \
--concurrency=20 \
--times=100 \
--times-interval=1 \
--timeout=10 \
tests/performance/
Runtime Parameters
Configure remote Ferret runtime behavior:
# HTTP runtime with custom headers
lab run \
--runtime=https://ferret.api.com \
--runtime-param='headers:{"Authorization":"Bearer token"}' \
--runtime-param='path:"/v2/execute"' \
tests/
# Binary runtime with custom flags
lab run \
--runtime=bin:/usr/local/bin/ferret \
--runtime-param='flags:["--timeout=60", "--verbose"]' \
tests/
For HTTP runtimes, path overrides the run endpoint only. For binary runtimes, flags is special and is appended directly to the external binary argv. All other runtime params are still passed as --param=name:value.
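As a rough sketch of what that means for the binary runtime (the argument order and how the script itself is handed to the binary are not specified here):
# Given this invocation:
lab run \
  --runtime=bin:/usr/local/bin/ferret \
  --runtime-param='flags:["--verbose"]' \
  --runtime-param=timeout:30 \
  tests/
# the external binary receives roughly:
#   /usr/local/bin/ferret --verbose --param=timeout:30 ...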
Architecture
System Overview
Lab is built with a modular architecture that separates source resolution, test orchestration, runtime execution, static serving, and reporting.
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  Test Sources   │    │   Test Runner   │    │     Ferret      │
│                 │    │                 │    │     Runtime     │
│ • File System   │───▶│ • Orchestration │───▶│                 │
│ • Git Repos     │    │ • Parallelism   │    │ • Built-in      │
│ • HTTP URLs     │    │ • Retry Logic   │    │ • Remote HTTP   │
└─────────────────┘    │ • Reporting     │    │ • External Bin  │
                       └─────────────────┘    └─────────────────┘
                                │
                                ▼
                       ┌─────────────────┐
                       │  Static Server  │
                       │                 │
                       │ • Served Dirs   │
                       │ • Static URLs   │
                       │ • Auto Ports    │
                       └─────────────────┘
Core Components
Sources (sources/)
Handles fetching test files from various locations:
- FileSystem Source - Local directory and file access with glob pattern support
- Git Source - Clone and fetch files from Git repositories over HTTP/HTTPS
- HTTP Source - Download scripts from web URLs
- Aggregate Source - Combine multiple source types
Runtime (runtime/)
Manages Ferret script execution:
- Built-in Runtime - Uses the embedded Ferret engine
- Remote Runtime - Communicates with remote Ferret services over HTTP
- Binary Runtime - Executes external Ferret CLI binaries
Test Runner (runner/)
Orchestrates test execution:
- Parallel Processing - Manages concurrent test execution
- Retry Mechanism - Handles failed test retries
- Resource Management - Controls timeouts and resource allocation
- Lifecycle Management - Handles setup, execution, assertion, and cleanup phases
Static Server (staticserver/)
Built-in HTTP server for local static assets:
- Multiple endpoints - Serve multiple directories simultaneously
- Dynamic ports - Allocate free ports automatically
- Alias support - Assign stable names to served directories
Reporters (reporters/)
Output formatting and result presentation:
- Console Reporter - Rich output for interactive use
- Simple Reporter - Plain text output suitable for CI/CD
Testing Framework (testing/)
Test suite definition and validation:
- YAML Parser - Parse test suite definitions
- Parameter Injection - Handle runtime parameters and data binding
- Assertion Engine - Validate test results
Execution Flow
- Input Processing - Parse command-line arguments and environment variables
- Source Resolution - Fetch test files from configured sources
- Static Server Initialization - Start HTTP servers for served directories, if needed
- Runtime Setup - Initialize Ferret runtime, either built-in, remote, or binary
- Test Discovery - Find and parse test files and suites
- Parallel Execution - Run tests according to concurrency settings
- Result Collection - Gather execution results and timing data
- Reporting - Format and output results via the selected reporter
- Cleanup - Stop static file servers and release resources
Design Principles
- Modularity - Each component has a focused responsibility
- Extensibility - New source types, runtimes, and reporters can be added without changing the whole system
- Performance - Parallel execution and resource-conscious orchestration
- Reliability - Retry support, wait conditions, and explicit timeouts
- Flexibility - Works with local, remote, and containerized Ferret runtimes
Development
Building from Source
Prerequisites:
- Go 1.23 or later
- Git
Build Steps:
# Clone the repository
git clone https://github.com/MontFerret/lab.git
cd lab
# Install development tools
make install-tools
# Build the project
make build
# Or manually
go build -o bin/lab -ldflags "-X main.version=dev" ./main.go
Development Workflow:
# Run tests
make test
# Or:
go test ./...
# Format code
make fmt
# Lint code
make lint
# Run all checks
make build
Testing Lab Itself
# Run unit tests
go test -v ./...
# Run specific package tests
go test -v ./sources/...
go test -v ./runtime/...
# Run tests with coverage
make cover
Project Structure
lab/
├── main.go          # Application entry point
├── cmd/             # CLI command implementations
├── staticserver/    # Static file server
├── reporters/       # Output formatters
├── runner/          # Test execution orchestration
├── runtime/         # Ferret runtime implementations
├── sources/         # Test file source handlers
├── testing/         # Test suite definitions
├── assets/          # Documentation assets
├── Dockerfile       # Container build definition
├── Makefile         # Build automation
└── README.md        # Project documentation
Adding New Features
New Source Type
- Implement the Source interface in sources/.
- Add URL scheme handling in sources/source.go.
- Add tests in sources/.
New Runtime
- Implement the Runtime interface in runtime/.
- Add runtime type detection in runtime/runtime.go.
- Add configuration handling.
New Reporter
- Implement the Reporter interface in reporters/.
- Register the reporter in CLI flags.
- Add output format tests.
Best Practices
Test Organization
Directory Structure
tests/
├── unit/              # Unit tests for individual components
│   ├── api/
│   └── ui/
├── integration/       # Integration tests
│   ├── user-flows/
│   └── data-validation/
├── e2e/               # End-to-end tests
│   ├── critical-path/
│   └── smoke/
├── fixtures/          # Test data and assets
│   ├── pages/
│   └── data/
└── scripts/           # Reusable FQL scripts
    ├── common/
    └── helpers/
Naming Conventions
- Use descriptive test names, such as user-registration-flow.yaml.
- Prefix test types when useful, such as smoke-, regression-, or load-.
- Use kebab-case for files, such as checkout-process.fql.
Test Suite Best Practices
name: "User Authentication Flow"
description: "Verify user login, logout, and session management"
setup:
text: |
// Clear any existing sessions
// Set up test data
query:
text: |
// Main test logic with clear comments
assert:
text: |
// Specific, meaningful assertions
cleanup:
text: |
// Clean up test data
Performance Optimization
Concurrency Guidelines
# Local development: low concurrency
lab run --concurrency=2 tests/
# CI environments: medium concurrency
lab run --concurrency=4 tests/
# Dedicated test infrastructure: higher concurrency
lab run --concurrency=8 tests/
Resource Management
- Use appropriate timeouts for different test types.
- Implement cleanup in test suites.
- Monitor memory usage with large test suites.
- Use the static server for shared static assets.
Test Efficiency
# Run faster tests first
lab run tests/smoke/ && lab run tests/integration/ && lab run tests/e2e/
# Use separate paths for different test categories
lab run --timeout=60 tests/critical/
lab run --timeout=300 --concurrency=1 tests/extended/
Security Considerations
- Never commit sensitive data in test files.
- Use environment variables for credentials.
- Sanitize test outputs that might contain secrets.
- Use separate test environments for security testing.
# Good: use environment variables
export TEST_API_KEY="your-key-here"
lab run --param=apiKey:$TEST_API_KEY tests/
# Bad: hardcode secrets in scripts
# LET apiKey = "secret-key-123"
Troubleshooting
Common Issues
Chrome/CDP Connection Issues
Error: Failed to connect to CDP at http://127.0.0.1:9222
Solutions:
- Start Chrome in headless mode:
  google-chrome --headless --remote-debugging-port=9222 --no-sandbox
- Check if Chrome is running:
  curl http://127.0.0.1:9222/json/version
- Use a custom CDP address:
  lab run --cdp=http://localhost:9223 tests/
Test Timeouts
Error: Test timed out after 30 seconds
Solutions:
- Increase timeout:
  lab run --timeout=60 tests/
- Optimize test scripts:
  // Add explicit waits
  WAIT_ELEMENT(doc, ".loading", { displayed: false })
  // Use shorter timeouts for quick checks
  WAIT_ELEMENT(doc, ".button", { timeout: 5000 })
Git Source Issues
Error: Failed to clone repository
Solutions:
- Check repository URL:
  git clone https://github.com/user/repo.git
- Use token-based authentication for private repositories:
  lab run git+https://username:token@github.com/private/repo.git//tests/
- Use SSH for private repositories:
  lab run git+ssh://git@github.com/private/repo.git//tests/
Static Server Port Conflicts
Error: failed to start static file server on port 8080
Solutions:
- Lab normally finds free ports automatically, but you can specify a port if needed:
  lab run --serve ./static@app:8081 tests/
- Check for port conflicts:
  netstat -tlnp | grep :8080
Performance Issues
High Memory Usage
- Reduce concurrency with --concurrency=2.
- Implement cleanup in tests.
- Use an external binary runtime for memory-intensive tests.
Slow Test Execution
- Enable parallel execution with --concurrency=4.
- Use the static server for local assets.
- Optimize FQL scripts.
- Profile tests to identify bottlenecks.
Debugging Tips
Verbose Output
# Enable detailed logging, if supported by the current build
export LOG_LEVEL=debug
lab run tests/
# Use the simple reporter for cleaner output
lab run --reporter=simple tests/
Test Individual Scripts
# Test one file at a time
lab run specific-test.fql
# Run with retries disabled
lab run --attempts=1 problematic-test.fql
Validate Test Syntax
# Test FQL syntax with Ferret CLI
ferret -q "RETURN 1"
Contributing
How to Contribute
We welcome contributions to Lab. To get started:
- Fork the repository on GitHub.
- Create a feature branch: git checkout -b feature/awesome-feature.
- Make your changes and add tests.
- Run the test suite: make test.
- Commit your changes: git commit -am 'Add awesome feature'.
- Push to the branch: git push origin feature/awesome-feature.
- Submit a pull request.
Development Guidelines
- Write tests for new features.
- Follow Go conventions and formatting with make fmt.
- Pass linting checks with make lint.
- Update documentation for user-facing changes.
- Keep commits atomic and write clear commit messages.
Reporting Issues
When reporting bugs, please include:
- Lab version from lab version
- Operating system and version
- Go version, if building from source
- Complete command that failed
- Full error message and stack trace
- Minimal reproduction case
Feature Requests
Before requesting features:
- Check existing issues and discussions.
- Describe the use case and problem being solved.
- Consider whether it fits Lab's scope.
- Share examples of the desired behavior when possible.
Development Process
- Discussion - Major features should be discussed in issues first.
- Implementation - Write code with tests and documentation.
- Review - Submit a pull request for code review.
- Testing - Ensure all CI checks pass.
- Merge - A maintainer will merge when ready.
License
Lab is licensed under the Apache License 2.0.
Happy Testing!
For more information about Ferret and FQL, visit the Ferret documentation.
Join the community on Discord for support and discussions.