# What is system-tests?

Having trouble? Reach out on Slack: #apm-shared-testing

System-tests is a black-box testing workbench for Datadog tracer libraries. It runs the same tests against every tracer implementation -- Java, Node.js, Python, PHP, Ruby, C++, .NET, Go, and Rust -- so shared features stay consistent across languages.

Key principles:

- Black-box testing -- only component interfaces are checked, no assumptions about internals. "Check that the car moves, regardless of the engine."
- Cross-language -- one test validates all tracer libraries.

## Quick start

You need bash, Docker (20.10+), and Python 3.12.

```bash
# 1. Build images for the language you want to test
./build.sh python          # or: java, nodejs, ruby, php, dotnet, golang

# 2. Run the tests
./run.sh                   # run all default tests
./run.sh SCENARIO_NAME     # run a specific scenario
./run.sh tests/test_smoke.py::Test_Class::test_method   # run a single test
```

Having trouble? Check the troubleshooting page.

To understand the test output, see test outcomes and the glossary.

## Documentation

All detailed documentation lives in the docs/ folder. Here is a guided reading order:

### Understand system-tests

| Topic | Description |
| ----- | ----------- |
| Architecture overview | Components, containers, data flow |
| Scenarios | End-to-end, parametric, SSI, K8s -- what each one tests |
| Weblogs | The test applications instrumented by tracers |
| Glossary | Definitions of pass, fail, xpass, xfail, etc. |

### Run tests

| Topic | Description |
| ----- | ----------- |
| Build | Build options, weblog variants, image names |
| Run | Run options, selecting tests, scenarios, timeouts |
| Logs | Understanding the logs folder structure |
| Test outcomes | Reading test results |
| Replay mode | Re-run tests without starting the containers |
| Custom tracer versions | Testing with local tracer builds |
| Troubleshooting | Common issues and how to fix them |

### Write and edit tests

| Topic | Description |
| ----- | ----------- |
| Add a new test | Step-by-step guide to adding tests |
| Add a new scenario | Creating new test scenarios |
| Enable / disable tests | Activating tests for a library version |
| Manifests | How test activation is declared per library |
| Skip tests | Decorators for conditional skipping (sketched below) |
| Features | Linking tests to the feature parity dashboard |
| Formatting | Linter and code style |
| Troubleshooting | Debugging tips for test development |
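
To make the activation machinery concrete, here is a minimal sketch of what a skippable test can look like. It follows the setup/test and decorator patterns these docs describe, but the feature key, version bound, and route are hypothetical, so treat it as an illustration rather than code from the suite:

```python
# Illustrative sketch only: the feature key, version bound, and route
# are hypothetical, not taken from the real test suite.
from utils import context, features, missing_feature, weblog


@features.some_feature_key  # hypothetical key; real keys live in utils/_features.py
class Test_Example:
    def setup_basic_request(self):
        # HTTP requests to the weblog are sent in setup_* methods,
        # before any test method runs
        self.r = weblog.get("/")  # hypothetical route

    @missing_feature(
        context.library < "python@9.99.0",  # hypothetical version bound
        reason="feature not yet shipped in the tracer",
    )
    def test_basic_request(self):
        assert self.r.status_code == 200
```

A skipped-but-run test reports as xfail or xpass rather than being silently dropped, which is how the outcome names in the glossary arise.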

### CI integration

| Topic | Description |
| ----- | ----------- |
| CI overview | Adding system-tests to your CI pipeline |
| GitHub Actions | GitHub Actions workflow details |
| System-tests CI | How system-tests' own CI works |

### Internals

| Topic | Description |
| ----- | ----------- |
| Internals overview | Deep-dive index for maintainers |
| End-to-end lifecycle | How e2e scenarios execute step by step |
| Parametric lifecycle | How parametric scenarios execute |
| Interface validation | API reference for validating intercepted traces (see the sketch below) |
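
As a taste of that validation API, here is a hedged sketch of asserting on the spans intercepted for one request. The `validate_spans`-style helper on `interfaces.library` and its signature are assumptions based on the pattern the interface validation doc describes; check that doc for the real API:

```python
# Sketch only: the validate_spans call and the validator contract are
# assumptions; consult the interface validation doc for the real API.
from utils import interfaces, weblog


class Test_SpanEmitted:
    def setup_span(self):
        self.r = weblog.get("/")  # hypothetical route

    def test_span(self):
        def validator(span):
            # return True once the expected span is observed (hypothetical check)
            return span.get("type") == "web"

        interfaces.library.validate_spans(self.r, validator=validator)
```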

### AI tooling

| Topic | Description |
| ----- | ----------- |
| AI integration guide | Built-in rules for AI-assisted development |

## Additional requirements

Specific scenarios may require additional tools:

- Kubernetes tests -- require Kind/Minikube for local K8s clusters. See K8s docs.
- AWS SSI tests -- require AWS credentials and Pulumi setup. See AWS SSI docs.

## Contributing

Before submitting a PR, always run the linter (`./format.sh`). Here are the most common types of contributions, ordered by frequency:

| What you want to do | Guide |
| ------------------- | ----- |
| Activate or deactivate a test for a library | Manifests, enable a test, skip tests |
| Add or edit a test | Add a new test, editing overview |
| Add or edit a scenario | Scenarios guide, scenarios overview |
| Add or edit a weblog | Weblog spec, build options |
| Other changes | Full editing docs, internals |

For testing against unmerged tracer changes, see `enable-test.md` and binaries.

## Ownership

For file ownership, see `.github/CODEOWNERS`.

Test ownership is defined using the `@features` decorator (see the feature doc and `utils/_features.py`).
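
For illustration, a minimal sketch of linking a test class to a feature; the feature key here is hypothetical, and the real keys are defined in `utils/_features.py`:

```python
from utils import features


@features.my_feature_key  # hypothetical: use a real key from utils/_features.py
class Test_MyFeature:
    """Every test in this class reports against one feature-parity entry."""
```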

## Need help?

Drop a message in #apm-shared-testing -- we're happy to help!