markdown-parser-benchmark

April 16, 2026

Benchmark of popular Markdown parsers in Node.js using tinybench.

Benchmark suites

AST/token parsing

Parser         Output
commonmark     AST
ironmark       AST (JSON)
markdown-it    Token array
litemarkup     AST

Parsing + HTML rendering

Parser         Output
commonmark     HTML string
ironmark       HTML string
markdown-it    HTML string
markdown-wasm  HTML string
marked         HTML string
micromark      HTML string
snarkdown      HTML string
litemarkup     HTML string

Usage

npm install
npm run benchmark

Corpus

The benchmark input corpus comes from markdown-dataset. The benchmark script loads base64-encoded markdown documents from that package, decodes them, and concatenates them into one input string.
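The decode-and-concatenate step can be sketched as follows (the documents here are inlined stand-ins; the actual export shape of markdown-dataset is not reproduced):

```javascript
// Stand-in for the base64-encoded documents shipped by markdown-dataset.
const docs = [
  Buffer.from('# Hello\n\nworld').toString('base64'),
  Buffer.from('* item one\n* item two').toString('base64'),
];

// Decode each base64 document and join them into a single input string,
// separated by blank lines so the documents remain distinct blocks.
const corpus = docs
  .map((b64) => Buffer.from(b64, 'base64').toString('utf8'))
  .join('\n\n');
```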

Run settings

Tune benchmark parameters in BENCHMARK_CONFIG at the top of benchmark.js:

  • files: number of markdown documents to use (Infinity = all)
  • rounds: number of benchmark rounds
  • timeMs: measurement time per task per round
  • warmupMs: warmup time per task per round
  • gcBetweenRounds: call global.gc() between rounds (requires node --expose-gc)
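Put together, the config object might look like this (field names are taken from the list above; the values are illustrative, not the project's actual defaults):

```javascript
// Illustrative shape of BENCHMARK_CONFIG; tune the values to taste.
const BENCHMARK_CONFIG = {
  files: Infinity,        // number of markdown documents to use (Infinity = all)
  rounds: 3,              // number of benchmark rounds
  timeMs: 1000,           // measurement time per task per round
  warmupMs: 200,          // warmup time per task per round
  gcBetweenRounds: false, // call global.gc() between rounds (needs --expose-gc)
};
```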

If gcBetweenRounds: true, run with:

node --expose-gc benchmark.js
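Since global.gc is only defined when Node is started with that flag, a defensive sketch of the between-rounds call (the helper name is hypothetical) could look like:

```javascript
// global.gc only exists under `node --expose-gc`; guard the call so the
// benchmark still runs (without forced collection) when the flag is absent.
function maybeGc() {
  if (typeof global.gc === 'function') {
    global.gc();
    return true;  // a collection was triggered
  }
  return false;   // flag not set; nothing to do
}
```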

Interpreting results

The benchmark runs two separate suites:

  1. AST/token parsing: Measure throughput of parsers that produce structured output (AST or token arrays).
  2. Parsing + HTML rendering: Measure throughput of parsers that render directly to HTML strings.

Both suites use the same input corpus and the same metrics, but their results are reported separately to avoid mixing incomparable output types.

Some libraries appear in both suites with different configurations: AST parsing in suite 1, HTML rendering in suite 2.

AI usage

Much of the code in this project was written with AI tools, then reviewed and edited by a human.

License

MIT