MoltenDB

May 12, 2026

🌋 A Universal Local-First Database in Pure Rust

Runs in the browser (WASM + OPFS) and on the server (Rust + disk).
Same query engine. Same append-only log + snapshot storage. Two environments.

Request only the fields you need — like GraphQL, but over a plain JSON API.

Warning

Versions starting with v1.0.0-rc1 are not backwards compatible with previous versions. We are actively working on improving performance and stability. Please review the changelog before upgrading.


🚀 Release Candidate (v1.0.0-rc) — The API is stable. Suitable for early production use. Minor breaking changes may occur before the final 1.0.0 release.

🌐 Building for the browser? The WebAssembly engine, TypeScript client, and React/Angular adapters live in the moltendb-web repository (MIT Licensed).


What is MoltenDB?

MoltenDB is a JSON document database written in Rust that compiles to both a native server binary and a WebAssembly module. The same query engine runs in your browser (via WASM + OPFS) and on your server (via a Rust binary + disk). Data written in the browser persists across page reloads and can optionally sync to the server.


What's new in v1.0.0-rc2

  • ~8× lower memory — documents are now stored as MessagePack bytes (Box<[u8]>) instead of serde_json::Value, dropping steady-state RSS for 1M docs from ~4 GB to ~500 MB.
  • Parallel queries — get_filtered, get_all, and scan_top_n use rayon across all CPU cores on native targets; filter + sort queries went from ~13s to ~1–2s on an 8-core machine.
  • Bounded sort heaps — sort-only paginated queries (scan_top_n) use per-worker heaps via rayon fold + reduce, eliminating the 1M-element intermediate allocation that caused ~7s latency (see the sketch below).
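
For intuition, here is a minimal, hypothetical sketch of that fold + reduce pattern over plain integers. MoltenDB's real scan_top_n sorts documents by arbitrary fields, so treat this as the shape of the technique, not the actual implementation:

use rayon::prelude::*;
use std::collections::BinaryHeap;

// Keep the n smallest values: each rayon worker folds into its own
// bounded max-heap, then the per-worker heaps are merged in reduce.
// No full-size intermediate vector is ever allocated.
fn top_n_smallest(values: &[i64], n: usize) -> Vec<i64> {
    fn keep(mut heap: BinaryHeap<i64>, v: i64, n: usize) -> BinaryHeap<i64> {
        if heap.len() < n {
            heap.push(v);
        } else if let Some(&top) = heap.peek() {
            if v < top {
                heap.pop();
                heap.push(v);
            }
        }
        heap
    }
    values
        .par_iter()
        .fold(BinaryHeap::new, |heap, &v| keep(heap, v, n))
        .reduce(BinaryHeap::new, |a, b| {
            b.into_iter().fold(a, |h, v| keep(h, v, n))
        })
        .into_sorted_vec()
}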

See CHANGELOG.md for the full list of changes.


Architecture

MoltenDB is structured as a Cargo Workspace with four independent crates. Each crate has a single, well-defined responsibility and can be used in isolation.

MoltenDB/
├── moltendb-core/     — pure engine: no HTTP, no auth, no JWT, no WASM bindings
├── moltendb-wasm/     — browser adapter: wasm-bindgen glue, WorkerDb, OPFS
├── moltendb-auth/     — identity layer: JWT, Argon2, UserStore
└── moltendb-server/   — network layer: Axum, TLS, CORS, CLI config

moltendb-core — The Engine

The heart of MoltenDB. Contains the in-memory DashMap store, the append-only WAL, all storage backends (disk, encrypted, OPFS), the query evaluator ($in, $gt, joins, field projection), and all handler and validation logic shared between the server and the WASM adapter.

Zero knowledge of HTTP, TCP, JWT, users, or WASM bindings. This crate compiles to:

  • A native rlib for embedding in other Rust projects
  • A cdylib for FFI (mobile, Tauri, etc.)

moltendb-wasm — The Browser Adapter

A thin cdylib crate that wraps moltendb-core and exposes it to JavaScript via wasm-bindgen. Contains WorkerDb — the WASM entry point used by the Web Worker — and all browser-specific glue (web-sys, js-sys, OPFS access). Built with wasm-pack build moltendb-wasm --target web.

JS initialisation uses a named static factory (not an async constructor, which produces invalid TypeScript):

// ✅ correct
const db = await WorkerDb.create("my_database");

// ❌ deprecated — do not use
const db = await new WorkerDb("my_database");

Keeping WASM bindings in a separate crate means moltendb-core and moltendb-server have a clean, WASM-free dependency tree.

Use it as an embedded database — add it to any Rust project with no HTTP overhead:

# Cargo.toml
[dependencies]
moltendb-core = "1.0.0-rc2"

use moltendb_core::engine::{Db, DbConfig};

// inside an async fn that can return errors (`?`):
let config = DbConfig {
    path: "./my_app.log".to_string(),
    sync_mode: true,
    ..Default::default()
};

let db = Db::open(config).await?;
db.insert_batch("users", vec![("u1".to_string(), serde_json::json!({ "name": "Alice" }))])?;
let user = db.get("users", "u1");

| Feature | Available in moltendb-core? | Available in moltendb-server? | Why? |
| --- | --- | --- | --- |
| MOLTENDB_DB_PATH | No (passed via DbConfig) | Yes | Engine needs a path; server provides the CLI flag. |
| MOLTENDB_HOST | No | Yes | Core has no network listener or HTTP logic. |
| MOLTENDB_PORT | No | Yes | Core has no network listener or HTTP logic. |
| MOLTENDB_ROOT_USER | No | Yes | Core doesn't handle API authentication. |
| MOLTENDB_JWT_SECRET | No | Yes | Server-side token security. |
| MOLTENDB_SYNC_MODE | No (passed via DbConfig) | Yes | Controls write flush behaviour (async or sync). |
| MOLTENDB_IN_MEMORY | No (passed via DbConfig) | Yes | Bypasses the WAL; all data lives in RAM only. |

Tip

When using the standalone moltendb-server binary, all flags and environment variables are available. The server acts as a thin wrapper that combines the engine, authentication, and networking layers. The distinction only matters if you are using moltendb-core as a library in your own Rust project.

How to configure moltendb-core directly

If you are building a custom application and importing moltendb-core, you don't use environment variables or CLI flags unless you implement them yourself. Instead, you initialize the database using the DbConfig struct:

use moltendb_core::engine::{Db, DbConfig};

#[tokio::main]
async fn main() {
    // Core doesn't know about MOLTENDB_PORT or MOLTENDB_ROOT_USER
    let config = DbConfig {
        path: "my_data.db".to_string(),
        sync_mode: true,
        ..Default::default()
    };

    let db = Db::open(config).await.unwrap();
    // Now you have a running database instance in your own app!
}

In summary: the server flags are just a user interface for the standalone binary. If you use the core package as a library, you are responsible for how you want to configure it.


moltendb-auth — The Identity Layer

Handles everything related to identity: Argon2 password hashing, JWT minting and validation (HMAC-SHA256), the UserStore, and scoped token delegation. Depends only on moltendb-core — it has no knowledge of HTTP routing or the server binary.

Single root user. One root user is configured at startup via --root-user / --root-password. There is no user management API — MoltenDB is designed to work alongside your own user table. Your backend validates credentials against your database, then calls POST /auth/delegate to mint a narrow-scoped JWT for the client. The root token never leaves your backend.

WASM excluded. The entire crate is gated with #![cfg(not(target_arch = "wasm32"))] — auth is irrelevant for local browser storage and adds no weight to the WASM bundle.

moltendb-server — The Network Layer

The runnable binary. Owns Axum routing, TLS termination, CORS policy, per-IP rate limiting, HTTP body size enforcement, and the CLI configuration (via clap). Parses incoming JSON requests and delegates to moltendb-core. Depends on both moltendb-core and moltendb-auth.


Deployment model: Run moltendb-server as a standalone HTTPS server, embed moltendb-core directly in your Rust application, or compile moltendb-core to WASM for browser-side local-first storage.

MoltenDB keeps the entire dataset in RAM (DashMap) — reads are pure hashmap lookups with no disk I/O. All data is loaded into memory at startup from the snapshot + WAL delta. RAM is the hard dataset size limit.

One of MoltenDB's core features is GraphQL-style field selection: every query lets you specify exactly which fields (including deeply nested ones) you want back. You never receive more data than you asked for — no over-fetching, no under-fetching, no separate schema to maintain.

What Actually Works Today

✅ Browser (WASM + OPFS)

  • Full document store running inside a Web Worker — zero main-thread blocking
  • Data persists across page reloads using the Origin Private File System (OPFS)
  • Manual compaction via POST /snapshot — no surprise I/O spikes during writes
  • @moltendb-web/core on NPM — bundles the WASM engine, Web Worker, and main-thread client into a single publishable artifact
  • @moltendb-web/query on NPM — type-safe, chainable query builder (CJS + ESM + .d.ts)
  • @moltendb-web/angular on NPM — official Angular wrapper for seamless integration
  • Point-in-Time Recovery Ready: Every write in the browser now includes a _t timestamp. While the recovery tool runs natively, browser logs can be exported and recovered to any millisecond using the native CLI.
  • ⚡ Try the Live Angular Demo
  • ⚡ Try the Live Browser WASM Demo on StackBlitz

✅ Server (Rust binary)

  • HTTPS-only server with TLS (cert + key required)
  • JWT authentication (POST /login → bearer token)
  • Per-IP sliding-window rate limiting (see the sketch after this list)
  • At-rest encryption with XChaCha20-Poly1305 (on by default, key from --encryption-key)
  • In-memory store: the entire dataset lives in RAM (DashMap) — reads are pure hashmap lookups with no disk I/O; RAM is the hard dataset size limit
  • Two write modes: async (50 ms flush, high throughput) and sync (flush-on-write, zero data loss)
  • Binary snapshots for fast startup (snapshot + delta replay, not full log replay)
  • Point-in-Time Recovery (PITR): Recover the database to any millisecond or log sequence number using the recover CLI command.
  • Snapshot Versioning: Historical snapshots are automatically moved to a /backup folder with Unix timestamps.
  • Post-Backup Hook: Automatically execute custom shell commands (e.g., S3 upload, Slack notify) after every successful snapshot.
  • Manual Snapshots: Trigger a snapshot on demand via the POST /snapshot endpoint.
  • WebSocket endpoint (/ws) for real-time push notifications — subscribe and receive change events on every write
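
The sliding-window limiter can be pictured with a minimal sketch like the one below. This is a simplification for illustration, not the code in rate_limit.rs; the struct and field names are hypothetical:

use std::collections::{HashMap, VecDeque};
use std::net::IpAddr;
use std::time::{Duration, Instant};

// Per-IP sliding window: keep the timestamps of recent requests,
// evict those older than the window, and reject once the cap is hit.
struct RateLimiter {
    window: Duration,     // e.g. 60 s (--rate-limit-window)
    max_requests: usize,  // e.g. 100 (--rate-limit-requests)
    hits: HashMap<IpAddr, VecDeque<Instant>>,
}

impl RateLimiter {
    fn allow(&mut self, ip: IpAddr) -> bool {
        let now = Instant::now();
        let q = self.hits.entry(ip).or_default();
        while q.front().map_or(false, |t| now.duration_since(*t) > self.window) {
            q.pop_front();
        }
        if q.len() < self.max_requests {
            q.push_back(now);
            true
        } else {
            false // caller responds with 429 Too Many Requests
        }
    }
}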

✅ Query Engine (shared between browser and server)

  • GraphQL-style field selection — request only the fields you need using fields (include) or excludedFields (exclude). Dot-notation works at any depth: "specs.display.features.refresh_rate" returns only that one nested value, not the whole document (see the sketch after this list).
  • WHERE clause with: $eq, $ne, $gt, $gte, $lt, $lte, $contains / $ct (strings and arrays), $in / $oneOf, $nin / $notIn — all string comparisons are case-insensitive
  • Field projection (fields) and field exclusion (excludedFields) — mutually exclusive, validated before any data is read
  • Pagination: count (limit) and offset
  • Cross-collection joins with dot-notation foreign keys
  • Snapshot Exports: Atomic, non-blocking binary snapshots for fast recovery and off-site backups.
  • JSON Schema Validation: High-speed consistency enforcement on a per-collection basis.
  • Optimistic Concurrency Control: Improved version conflict detection and 409 Conflict reporting.
  • Document versioning: every document automatically gets _v, createdAt, modifiedAt
  • Atomic Batch Transactions: WAL transaction markers (TX_BEGIN/TX_COMMIT) prevent partial write failures.
  • Conflict resolution: incoming writes with stale _v return a 409 Conflict error.
  • Inline reference embedding (extends): embed data from another collection at insert time
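
Dot-notation selection boils down to walking a JSON value segment by segment. A minimal sketch (hypothetical helper, not the engine's actual function) over serde_json::Value:

use serde_json::Value;

// Walk "a.b.c" through nested objects; None if any segment is missing.
fn select_path<'a>(doc: &'a Value, path: &str) -> Option<&'a Value> {
    path.split('.').try_fold(doc, |v, seg| v.get(seg))
}

// Usage: select_path(&doc, "specs.display.features.refresh_rate")
// returns Some(&Value) for just that one nested field.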

✅ Security

  • Passwords hashed with Argon2id
  • JWT tokens signed with HMAC-SHA256; root tokens carry *:*:* scope (24-hour expiry)
  • Scoped token delegation: root user mints narrow-permission JWTs for clients via POST /auth/delegate. Scope format: action:collection:document_key (e.g. read:laptops:lp1, write:users:*, read:*:*). Every endpoint enforces scopes — tokens missing the required scope receive 403 Forbidden.
  • Document-level access control: a token with read:laptops:lp1 can only read that one document. POST /get without a key filter automatically returns only the documents the token is permitted to see.
  • Only the root user can mint *:*:* (admin) tokens — non-root admin tokens cannot escalate their own privileges.
  • Token revocation (JTI blacklist): every JWT carries a unique jti (UUID). Compromised or leaked tokens can be immediately invalidated via DELETE /auth/tokens/:jti (admin-only) before their TTL expires. The revocation store is persisted to <db-path>.revocations.json and reloaded on server restart — revocations survive restarts (see the sketch after this list).
  • Credentials loaded from environment variables at startup (no hardcoded defaults in production)
  • Single root user: MoltenDB supports exactly one root user. Your own user table handles the rest — MoltenDB acts as a stateless delegation gateway, not an identity provider. The in-memory user store is ephemeral, but, as noted above, the revocation list is persisted and reloaded on every restart, so a revoked JWT remains revoked even after a crash.
  • Input validation: collection names, key names, field names, JSON depth (max 32), payload size (max 10 MB), batch size (max 1000 keys)
  • Security headers on every response: X-Content-Type-Options, X-Frame-Options, HSTS, CSP, etc.
  • Graceful shutdown: drains in-flight requests (up to 30 s), then awaits the async writer task to fully flush all buffered log entries before exit
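
Conceptually, the JTI blacklist is just a persisted set. A hypothetical simplification of the RevocationStore (real field names and serialization may differ):

use std::collections::HashSet;
use std::path::PathBuf;

// A revoked-token set that is written through to disk on every change,
// so revocations survive crashes and restarts.
struct RevocationStore {
    revoked: HashSet<String>, // jti values
    path: PathBuf,            // e.g. "<db-path>.revocations.json"
}

impl RevocationStore {
    fn revoke(&mut self, jti: &str) -> std::io::Result<()> {
        self.revoked.insert(jti.to_string());
        let json = serde_json::to_string(&self.revoked)?;
        std::fs::write(&self.path, json) // persist immediately
    }

    fn is_revoked(&self, jti: &str) -> bool {
        self.revoked.contains(jti)
    }
}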

✅ Developer Tooling

  • Interactive WASM Browser Demo — a complete, live environment to test raw JSON queries and the chainable builder directly in your browser.
  • Server Integration Test Suite (GitHub) — a browser-based testing environment to exercise the HTTP API and WebSocket endpoint against a live server using the TypeScript client. Includes an interactive Server Query Builder, a WebSocket tester, and a collection fetcher.
  • 57+ documented example requests in tests/requests.http
  • 80+ integration tests covering all query features, versioning, persistence, compaction, concurrency, and schema validation.
  • Rust stress-test examples (examples/) — generate 100 000 synthetic documents, bulk-insert via HTTP, and run 10 000–100 000 concurrent fetch requests with a full latency percentile report.

Getting Started

Prerequisites

  • Rust 1.85+ (rustup update stable)
  • Node.js 20+ (for the dev server and npm packages)
  • wasm-pack (only if building the browser package: cargo install wasm-pack)
  • A TLS certificate and key (for the server)

Install via Cargo (Easiest)

If you just want to run the standalone database server, install it directly from crates.io:

cargo install moltendb-server

Use the core engine as an embedded library

Add moltendb-core to your Cargo.toml to embed the engine directly — no HTTP server, no auth overhead:

[dependencies]
moltendb-core = "1.0.0-rc2"

Download Pre-built Binaries

Alternatively, download the pre-compiled binaries and self-signed certificates directly from the GitHub releases page.

Generate a self-signed certificate (development only)

openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes \
  -subj "/CN=localhost"

Build the WASM package

The WASM package targets moltendb-core only — no HTTP or auth deps are included:

wasm-pack build moltendb-wasm --target web

Run the server

# Set credentials (REQUIRED)
export MOLTENDB_ROOT_USER=myuser
export MOLTENDB_ROOT_PASSWORD=str0ng-p4ssw0rd
export MOLTENDB_JWT_SECRET=another-strong-secret

# Run the server binary
cargo run --release -p moltendb-server

# Or with CLI flags (equivalent)
cargo run --release -p moltendb-server -- \
  --root-user myuser \
  --root-password str0ng-p4ssw0rd \
  --jwt-secret another-strong-secret \
  --encryption-key my-encryption-password \
  --port 1538

# Verbose debug logging (optimizer, indexing, compaction details)
cargo run --release -p moltendb-server -- --debug

Run cargo run -p moltendb-server -- --help to see all available flags.

Quick Test with requests.http

If you want to quickly test the functionality with the requests.http file, start the server with the following credentials (via CLI flags or environment variables):

--root-user admin
--root-password admin123

Log in first, then replace the token in the requests.http file with the one returned by the login response.

Recovery & Maintenance

Take a manual snapshot

POST /snapshot
Authorization: Bearer <token>

Triggers an immediate compaction and saves a new snapshot.bin. The previous snapshot is moved to the /backup folder.

Point-in-Time Recovery (CLI)

To recover a database to a specific time (e.g., before a bug deleted data):

moltendb recover --log my_database.log --to-time 1713972000000 --out recovered.snapshot.bin

The resulting recovered.snapshot.bin can then be renamed to my_database.log.snapshot.bin to restore the state.


HTTP API

All endpoints except POST /login require an Authorization: Bearer <token> header. Every endpoint also enforces scopes — the token must carry the appropriate action:collection:key scope or the request is rejected with 403 Forbidden.
All endpoints return a consistent JSON envelope with a statusCode field:

{ "statusCode": 200, "count": 5, "status": "ok" }
{ "statusCode": 400, "error": "Unknown property: 'foo'. Check the API docs..." }
{ "statusCode": 404, "error": "No documents found" }

Authentication

POST /login
Content-Type: application/json

{ "username": "myuser", "password": "str0ng-p4ssw0rd" }

Returns { "token": "<jwt>" }. The root token carries *:*:* scope (full access).

Delegate a scoped token

The root user can mint narrow-permission JWTs for clients. Only the root user can call this endpoint.

POST /auth/delegate
Authorization: Bearer <root-token>
Content-Type: application/json

{
  "client_id": "laptop-service",
  "scopes": ["read:laptops:*", "write:laptops:*"],
  "ttl_secs": 3600
}

Returns { "token": "<scoped-jwt>", "client_id": "laptop-service", "scopes": [...] }.

Scope format: action:collection:document_key

| Scope | Meaning |
| --- | --- |
| read:laptops:lp1 | Read only document lp1 in laptops |
| read:laptops:* | Read any document in laptops |
| write:laptops:* | Write any document in laptops |
| delete:laptops:* | Delete any document in laptops |
| read:*:* | Read any document in any collection |
| *:*:* | Full admin — root only |
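
Wildcard matching on this format is straightforward. A hedged sketch of the check (the real logic lives in has_access() / key_matches() in moltendb-auth and may differ in detail):

// Does a granted scope such as "read:laptops:*" permit this request?
fn scope_allows(scope: &str, action: &str, collection: &str, key: &str) -> bool {
    let mut parts = scope.splitn(3, ':');
    match (parts.next(), parts.next(), parts.next()) {
        (Some(a), Some(c), Some(k)) => {
            (a == "*" || a == action)
                && (c == "*" || c == collection)
                && (k == "*" || k == key)
        }
        _ => false, // malformed scope grants nothing
    }
}

// scope_allows("read:laptops:*", "read", "laptops", "lp1") == true
// scope_allows("read:laptops:lp1", "write", "laptops", "lp1") == false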

Insert / Upsert

POST /set
Content-Type: application/json
Authorization: Bearer <token>

{
  "collection": "laptops",
  "data": {
    "lp1": { "brand": "Lenovo", "model": "ThinkPad X1 Carbon", "price": 1499, "in_stock": true }
  }
}

Pass data as an array to auto-generate UUIDv7 keys:

{ "collection": "laptops", "data": [{ "brand": "HP", "model": "Spectre x360", "price": 1599 }] }

Returns { "statusCode": 200, "status": "ok", "count": 1 }.

Every document automatically receives _v (version counter), createdAt, and modifiedAt fields managed by the engine.

Query

POST /get
Content-Type: application/json
Authorization: Bearer <token>

{
  "collection": "laptops",
  "where": { "brand": { "$in": ["Apple", "Dell"] }, "in_stock": true },
  "fields": ["brand", "model", "price"],
  "count": 10,
  "offset": 0
}

All query properties:

| Property | Type | Description |
| --- | --- | --- |
| collection | string | Required. The collection to query. |
| keys | string \| string[] | Fetch one or more documents by key. Returns the document directly for a single string; returns an array for an array of keys. |
| where | object | Filter documents. All conditions at the top level are ANDed together. |
| fields | string[] | GraphQL-style field selection. Return only these fields. Dot-notation selects nested fields. Mutually exclusive with excludedFields. |
| excludedFields | string[] | Return everything except these fields. Mutually exclusive with fields. |
| joins | object[] | Cross-collection joins. Each element is { "<name>": { "from": "<collection>", "on": "<foreign_key_field>", "fields": [...] } }. |
| sort | object[] | Sort results. Each spec is { "field": "<name>", "order": "asc" \| "desc" }. Multiple specs applied in priority order. |
| count | number | Maximum number of results to return (applied after filtering and sorting). |
| offset | number | Number of results to skip (for stable pagination, applied after sorting). |

Response shape: All multi-document queries return a JSON array where each element includes a _key field with the document ID. The only exception is a single-key lookup ("keys": "lp2") which returns the document directly.

Supported where operators:

| Operator | Aliases | Description |
| --- | --- | --- |
| $eq | $equals | Exact equality |
| $ne | $notEquals | Not equal |
| $gt | $greaterThan | Greater than (numeric) |
| $gte | | Greater than or equal |
| $lt | $lessThan | Less than (numeric) |
| $lte | | Less than or equal |
| $contains | $ct | Substring check (string, case-insensitive) or membership check (array) |
| $in | $oneOf | Field value is one of a list (string comparison is case-insensitive) |
| $nin | $notIn | Field value is not in a list |
| $or | | At least one of the sub-conditions must match (array of where-style objects) |
| $and | | All sub-conditions must match (array of where-style objects) |

Query examples:

// WHERE with multiple conditions (all must match — implicit AND)

{ "collection": "laptops", "where": { "brand": "Apple", "in_stock": true } }

// GraphQL-style field selection

{ "collection": "laptops", "fields": ["brand", "model", "price"] }

// Deep nested field selection

{ "collection": "laptops", "fields": ["brand", "specs.cpu.ghz", "specs.weight_kg"] }

// Field exclusion

{ "collection": "laptops", "excludedFields": ["memory_id", "display_id"] }

// Sort by price descending, then brand ascending

{ "collection": "laptops", "sort": [{ "field": "price", "order": "desc" }, { "field": "brand", "order": "asc" }] }

// Pagination — second page of 3

{ "collection": "laptops", "sort": [{ "field": "price", "order": "asc" }], "offset": 3, "count": 3 }

// $in — brand is one of a list

{ "collection": "laptops", "where": { "brand": { "$in": ["Apple", "Dell", "Razer"] } } }

// $contains on an array field

{ "collection": "laptops", "where": { "tags": { "$contains": "gaming" } } }

// $or — match documents where brand is Apple OR price is below 1000

{ "collection": "laptops", "where": { "$or": [{ "brand": "Apple" }, { "price": { "$lt": 1000 } }] } }

// $and — match documents where brand is Apple AND price is below 2000

{ "collection": "laptops", "where": { "$and": [{ "brand": "Apple" }, { "price": { "$lt": 2000 } }] } }

Cross-collection join

POST /get
Content-Type: application/json
Authorization: Bearer <token>

{
  "collection": "laptops",
  "fields": ["brand", "model", "price"],
  "joins": [
    {  
      "ram": { 
        "from": "memory", 
        "on": "memory_id", 
        "fields": ["capacity_gb", "type"] 
      }
    },
    { 
      "screen": { 
        "from": "display",
        "on": "display_id", 
        "fields": ["size_inch", "panel", "refresh_hz"]
      }
    }
  ]
}

The on field is read from the parent document using dot-notation and used to look up a document in the target collection. The result is embedded under the alias key. fields is optional — omit it to return the full joined document.

Note: Joins are resolved at query time — the joined data is fetched live on every request. For a snapshot embedded at insert time, use extends (see below).

Inline reference embedding (extends)

The extends key embeds data from another collection directly into the stored document at insert time — no join needed on reads.

POST /set
Content-Type: application/json
Authorization: Bearer <token>

{
  "collection": "laptops",
  "data": {
    "lp7": {
      "brand": "MSI",
      "model": "Titan GT77",
      "price": 3299,
      "extends": {
        "ram":    "memory.mem4",
        "screen": "display.dsp3"
      }
    }
  }
}

Each value in extends is a "collection.key" reference. The engine fetches the referenced document and embeds it under the alias key. The extends key itself is removed from the stored document.
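
The resolution step can be sketched roughly as follows. This is an illustrative simplification (the fetch closure and function name are hypothetical), not the code in process_set.rs:

use serde_json::{Map, Value};

// Replace each "collection.key" reference under `extends` with the
// referenced document, embedded under its alias; drop the extends key.
fn resolve_extends(
    doc: &mut Map<String, Value>,
    fetch: impl Fn(&str, &str) -> Option<Value>,
) {
    if let Some(Value::Object(refs)) = doc.remove("extends") {
        for (alias, reference) in refs {
            if let Some((coll, key)) = reference.as_str().and_then(|s| s.split_once('.')) {
                if let Some(embedded) = fetch(coll, key) {
                    doc.insert(alias, embedded); // e.g. "ram": { ...memory.mem4... }
                }
            }
        }
    }
}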

When to use extends vs joins:

| | extends | joins |
| --- | --- | --- |
| Resolved at | Insert time (once) | Query time (every request) |
| Data freshness | Snapshot — may become stale | Always live |
| Read cost | O(1) — data already embedded | O(1) per join per document |
| Use when | Data rarely changes, fast reads matter | Data changes frequently, freshness matters |

Patch / merge

POST /update
Content-Type: application/json
Authorization: Bearer <token>

{
  "collection": "laptops",
  "data": { "lp4": { "in_stock": true, "price": 1749 } }
}

Only the fields in data are changed. All other fields are preserved. _v is incremented automatically; createdAt cannot be overwritten.
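
The optimistic lock on _v reduces to a small comparison at write time. A hedged sketch, assuming an update carrying a stale _v is rejected (the exact semantics live in update.rs):

// If the caller sends _v and it doesn't match the stored version,
// the write is rejected with 409 Conflict; otherwise _v is bumped.
fn next_version(stored_v: u64, incoming_v: Option<u64>) -> Result<u64, &'static str> {
    match incoming_v {
        Some(v) if v != stored_v => Err("409 Conflict: stale _v"),
        _ => Ok(stored_v + 1),
    }
}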

Delete

POST /delete
Content-Type: application/json
Authorization: Bearer <token>

{ "collection": "laptops", "keys": "lp6" }              // single key
{ "collection": "laptops", "keys": ["lp4", "lp5"] }     // batch
{ "collection": "laptops", "drop": true }               // drop entire collection

Paginated collection fetch

GET /collections/laptops?limit=100&offset=0
Authorization: Bearer <token>

Returns all documents in the collection, with optional pagination.


Query Builder (JavaScript / TypeScript)

The @moltendb-web/query package provides a type-safe, chainable API that works with both the HTTP server and the WASM engine.

npm install @moltendb-web/query

import { MoltenDBClient, WorkerTransport, HttpTransport } from '@moltendb-web/query';

// WASM (browser)
const client = new MoltenDBClient(new WorkerTransport(worker));

// HTTP server
const client = new MoltenDBClient(new HttpTransport('https://localhost:1538', token));

// GET — chainable query
const results = await client.collection('laptops')
  .get()
  .where({ brand: 'Apple', in_stock: true })
  .fields(['brand', 'model', 'price'])
  .joins([{ 
    screen: { 
      from: 'display', on: 'display_id', fields: ['panel', 'refresh_hz'] 
    }
  }])
  .sort([{ field: 'price', order: 'asc' }])
  .count(5)
  .exec();

// SET — insert / upsert
await client.collection('laptops')
  .set({ lp1: { brand: 'Lenovo', model: 'ThinkPad X1', price: 1499 } })
  .exec();

// UPDATE — partial patch
await client.collection('laptops')
  .update({ lp4: { price: 1749, in_stock: true } })
  .exec();

// DELETE
await client.collection('laptops').delete().keys('lp6').exec();
await client.collection('laptops').delete().drop().exec();

Each operation class only exposes the methods that are valid for that operation — invalid method chains are caught at compile time in TypeScript.


WebSocket (Real-time Push)

The WebSocket endpoint is exclusively for real-time push notifications. All CRUD operations must go through the HTTP endpoints.

wss://localhost:1538/ws

Protocol:

  1. The first message must be { "action": "AUTH", "token": "<jwt>" }. The connection is closed immediately if authentication fails, with one of the following structured error codes:

    | error code | Cause |
    | --- | --- |
    | invalid_message | First frame was not valid JSON or not a text frame |
    | invalid_action | First message was not an AUTH action |
    | missing_token | AUTH frame had no token field |
    | invalid_token | JWT verification failed (expired, wrong secret, malformed) |
    | token_revoked | Token has been revoked via DELETE /auth/tokens/:jti |
  2. After authentication, the server pushes a change event on every write for collections the token's scopes allow read access to. Events for other collections are silently filtered out. Admin tokens (*:*:*) receive all events.

    { "event": "change", "collection": "laptops", "key": "lp2", "new_v": 3 }
    
    { "event": "change", "collection": "laptops", "key": "lp6", "new_v": null }
    
    { "event": "change", "collection": "laptops", "key": "*",   "new_v": null }
    
    • new_v is the document's _v after the write, or null for deletes/drops
    • key: "*" means the entire collection was dropped
  3. Clients fetch fresh data via HTTP after receiving a notification.
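
A minimal Rust client following this protocol might look like the sketch below (assuming the tokio-tungstenite and futures-util crates; with the self-signed dev certificate you would additionally need to relax TLS verification):

use futures_util::{SinkExt, StreamExt};
use tokio_tungstenite::{connect_async, tungstenite::Message};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let (mut ws, _) = connect_async("wss://localhost:1538/ws").await?;

    // Step 1: the first frame must be the AUTH action.
    ws.send(Message::Text(r#"{"action":"AUTH","token":"<jwt>"}"#.into())).await?;

    // Step 2: receive change events for collections the token can read.
    while let Some(frame) = ws.next().await {
        if let Message::Text(event) = frame? {
            // Step 3: fetch fresh data over HTTP in response to the event.
            println!("change event: {event}");
        }
    }
    Ok(())
}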

Revocation on open connections: If a token is revoked while a WebSocket connection is already open, the server will detect this within 30 seconds, send a token_revoked error, and close the connection.

See src/ws_test/websocket-test.html for an interactive tester.


Telemetry

Health check

Public endpoint — no authentication required. Use it as a liveness probe in Docker / Kubernetes.

GET /system/health

Response:

{ "status": "ok", "message": "MoltenDB is running" }

Metrics

Admin-only endpoint. Returns a structured snapshot of server uptime, process memory, host hardware, and live database internals. All values are raw integers — formatting is left to the client (MoltenDB Studio / dashboards).

GET /system/metrics
Authorization: Bearer <admin-token>

Response:

{
  "uptime_seconds": 14200,
  "process": {
    "memory_used_bytes": 20017152
  },
  "host": {
    "memory": {
      "total_bytes": 34070192128,
      "used_bytes": 17026154496,
      "free_bytes": 17044037632
    },
    "disks": [
      {
        "mount": "C:\\",
        "total_bytes": 1022645760000,
        "used_bytes": 616695963648,
        "available_bytes": 405949796352
      }
    ]
  },
  "database": {
    "hot_keys_count": 14523,
    "wal_size_bytes": 8450122,
    "storage_mode": "async"
  }
}

| Field | Description |
| --- | --- |
| uptime_seconds | Seconds since the server started |
| process.memory_used_bytes | RAM consumed by the MoltenDB process |
| host.memory | Total / used / free RAM on the host machine |
| host.disks | Per-disk total, used, and available bytes |
| database.hot_keys_count | Total number of documents currently held in RAM |
| database.wal_size_bytes | Current size of the WAL / storage file on disk |
| database.storage_mode | async, sync, or in-memory |

Returns 403 Forbidden if the token does not have admin (*:*:*) scope.


Configuration Reference

All options can be set via CLI flags or environment variables. CLI flags take priority.

Note

If you are running the moltendb-server binary, you can use all flags listed below. The separation between "Networking/Auth" and "Database Engine" is only relevant for developers embedding moltendb-core as a library.

Networking & Authentication (Server-only)

| Flag | Env var | Default | Description |
| --- | --- | --- | --- |
| --cert | MOLTENDB_TLS_CERT | cert.pem | TLS certificate |
| --host | MOLTENDB_HOST | 0.0.0.0 | IP address to bind to. Use 127.0.0.1 for localhost-only, 0.0.0.0 for all interfaces (required for Docker) |
| --cors-origin | MOLTENDB_CORS_ORIGIN | * ⚠️ | Allowed CORS origin(s) |
| --jwt-secret | MOLTENDB_JWT_SECRET | REQUIRED 🔥 | JWT signing secret |
| --key | MOLTENDB_TLS_KEY | key.pem | TLS private key |
| --port | MOLTENDB_PORT | 1538 | TCP port |
| --root-password | MOLTENDB_ROOT_PASSWORD | REQUIRED 🔥 | Root password |
| --root-user | MOLTENDB_ROOT_USER | REQUIRED 🔥 | Root username |
| --debug | MOLTENDB_DEBUG | false | Enable verbose debug logging |
| --dev-mode | MOLTENDB_DEV_MODE | false | Run over plain HTTP/WS instead of HTTPS/WSS. Ignores --cert and --key. ⚠️ NEVER use in production |

Database Engine Flags (passed to moltendb-core)

| Flag | Env var | Default | Description |
| --- | --- | --- | --- |
| --db-path | MOLTENDB_DB_PATH | my_database.log | Log file path |
| --disable-encryption | MOLTENDB_DISABLE_ENCRYPTION | false | Store data as plain JSON |
| --encryption-key | MOLTENDB_ENCRYPTION_KEY | built-in default ⚠️ | At-rest encryption password |
| --max-body-size | MOLTENDB_MAX_BODY_SIZE | 10485760 | Maximum request body size in bytes |
| --max-keys-per-request | MOLTENDB_MAX_KEYS_PER_REQUEST | 1000 | Maximum number of keys allowed per JSON request |
| --post-backup-script | MOLTENDB_POST_BACKUP_SCRIPT | None | Path to a script file to run after backup |
| --rate-limit-requests | MOLTENDB_RATE_LIMIT_REQS | 100 | Max requests per IP per window |
| --rate-limit-window | MOLTENDB_RATE_LIMIT_WINDOW | 60 | Window size in seconds |
| --in-memory | MOLTENDB_IN_MEMORY | false | Run entirely in RAM — no WAL, no disk I/O. All data is lost on exit. Ideal for ephemeral caches and CI environments |
| --write-mode | MOLTENDB_WRITE_MODE | async | async or sync — controls flush behaviour for the single log file |

🔒 Security Considerations

Executing external scripts carries inherent risks. MoltenDB mitigates some of these by:

  • Positional Arguments: The snapshot path is passed as a sanitized argument, not injected into a command string.
  • Explicit Paths: On Windows, scripts in the current directory require the ./ prefix (e.g., --post-backup-script "./my_hook.ps1").

Beyond that, hardening the hook environment is up to you:

  1. Docker Isolation: Run MoltenDB in a container to isolate the host filesystem and network. Use a minimal base image.
  2. Principle of Least Privilege: Run the MoltenDB process under a dedicated service account with access only to its data directory. Ensure only the MoltenDB service user can read the hook script files.
  3. Absolute Paths: Always use absolute paths for your scripts to avoid "command not found" errors or potential path hijacking.
  4. Sandboxing: Use seccomp or AppArmor/SELinux on Linux to restrict the types of processes MoltenDB can spawn.
  5. Script Hardening: Ensure your hook scripts have restricted permissions (e.g., chmod 700) and do not contain hardcoded secrets. Use environment variables for API keys.

โš ๏ธ = insecure default, must be overridden in production. The server prints a warning at startup for each one that is not set.

๐Ÿ”ฅ = mandatory requirement. The server will not start if these are missing.


Storage Modes

MoltenDB has three storage modes. Choose based on your durability requirements:

| Mode | Flag | Best for |
| --- | --- | --- |
| async (default) | --write-mode async | Max throughput, up to 50 ms data loss on crash |
| sync | --write-mode sync | Zero data loss per write, lower throughput |
| in-memory | --in-memory | Ephemeral caches, CI, session stores |

Async (default)

Single append-only log file (my_database.log). Writes are buffered in memory and flushed to disk every 50 ms — up to 50 ms of data can be lost on a hard crash. Highest write throughput. Call POST /snapshot to compact manually — a binary snapshot is written so the next startup only replays the delta, not the full log.
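
The shape of this mode (a channel in front of a background flush task, as the description of async_storage.rs suggests) can be sketched like this; names and buffering details are illustrative, not the actual code:

use tokio::io::AsyncWriteExt;
use tokio::sync::mpsc;
use tokio::time::{interval, Duration};

// Writers push log lines into a channel; a background task appends
// them to the file and flushes on a 50 ms tick. A hard crash can
// therefore lose at most ~50 ms of buffered writes.
async fn run_async_writer(
    mut rx: mpsc::UnboundedReceiver<String>,
    mut log: tokio::fs::File,
) -> std::io::Result<()> {
    let mut tick = interval(Duration::from_millis(50));
    loop {
        tokio::select! {
            Some(line) = rx.recv() => {
                log.write_all(line.as_bytes()).await?;
                log.write_all(b"\n").await?;
            }
            _ = tick.tick() => {
                log.flush().await?;
            }
        }
    }
}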

Sync (--write-mode sync)

Same single-file layout as async, but every write blocks until the OS confirms the data is on disk. Zero data loss on crash. Lower throughput than async. Use this when losing even 50 ms of writes is unacceptable (financial records, audit logs).

In-Memory (--in-memory)

Bypasses the WAL and all disk I/O entirely. All data lives exclusively in the RAM DashMap — no log file is created or written. This turns MoltenDB into a pure in-process cache with the full query engine (filters, joins, pub/sub) on top. Compaction and revocation-file persistence are automatically skipped. A startup warning is printed to make the ephemeral nature explicit.

⚠️ All data is lost when the server exits. Use this mode for ephemeral caches, session stores, CI test environments, or any scenario where durability is not required.

Write modes summary

  • async (default): writes are buffered in memory and flushed every 50 ms. Up to 50 ms of data loss on a hard crash. Highest throughput.
  • sync: every write blocks until the OS confirms the data. Zero data loss on crash. Lower throughput.

Snapshots, Compaction & Data Safety

What happens during compaction

Compaction runs on demand when you call POST /snapshot. It performs four steps (sketched in code after this list):

  1. Writes the complete current in-memory state to a temp snapshot file — the live snapshot is untouched at this point.
  2. Moves the existing snapshot to backup/<name>.snapshot.bin.<unix_timestamp>.bak — the old snapshot is never deleted.
  3. Atomically renames the temp file to the live snapshot — a single OS rename, so there is no window where neither file exists.
  4. Resets the live log to empty — but all data is already captured in the new snapshot before this happens.
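
Steps 2 and 3 hinge on filesystem renames. A hedged sketch of the rotation (function and argument names are hypothetical; the real logic lives in snapshot.rs):

use std::fs;
use std::path::Path;
use std::time::{SystemTime, UNIX_EPOCH};

// Promote a fully-written temp snapshot to the live snapshot,
// moving the previous snapshot into backup/ first.
fn rotate_snapshot(live: &Path, tmp: &Path, backup_dir: &Path) -> std::io::Result<()> {
    if live.exists() {
        fs::create_dir_all(backup_dir)?;
        let ts = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
        let name = format!("{}.{}.bak", live.file_name().unwrap().to_string_lossy(), ts);
        fs::rename(live, backup_dir.join(name))?; // step 2: old snapshot kept, never deleted
    }
    fs::rename(tmp, live)?; // step 3: atomic promotion
    Ok(()) // step 4 (truncating the live log) happens after this returns
}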

Is any data lost during compaction?

No. The new snapshot is a full state dump — it contains every document that existed at compaction time, including documents first inserted many compactions ago. There is no snapshot chain to traverse; each snapshot is self-contained.

Compaction 1:  snapshot_1 = { doc_A, doc_B }
Compaction 2:  snapshot_2 = { doc_A, doc_B, doc_C }   ← doc_A still here
Compaction 3:  snapshot_3 = { doc_A, doc_B, doc_C, doc_D }  ← doc_A still here

Data is only gone if it was explicitly deleted or overwritten before the compaction ran.

What the backup/ folder contains

Every compaction moves the previous snapshot to backup/ as a .bak file. These are point-in-time copies of the full database state. They are:

  • Not loaded at startup — only the current snapshot is used.
  • Not pruned automatically — they accumulate indefinitely. Clean them up manually or add a retention policy.
  • Useful for manual point-in-time recovery via the recover CLI command.

How large snapshots are loaded at startup

At startup, stream_into_state reads the snapshot file and applies each entry directly into the DashMap as it is read — there is no intermediate buffer. Peak RAM usage at startup is approximately 1× the snapshot file size (just the DashMap being built).

The snapshot is a full state dump — it contains every document that existed at compaction time. On startup, only the delta (log lines written after the last snapshot) needs to be replayed.


How the Log Works

MoltenDB uses an append-only log format — every insert, update, and delete is a new JSON line:

{"cmd":"INSERT","collection":"laptops","key":"lp1","value":{"brand":"Lenovo","model":"ThinkPad X1 Carbon","price":1499,"_v":1,"createdAt":"2026-03-09T13:51:05Z","modifiedAt":"2026-03-09T13:51:05Z"}}
{"cmd":"DELETE","collection":"laptops","key":"lp6","value":null}
{"cmd":"DROP","collection":"laptops","key":"_","value":null}

With encryption enabled (the default), each line is an opaque ENC entry:

{"cmd":"ENC","collection":"_","key":"_","value":"base64encodedciphertext..."}

On startup, the log is replayed top-to-bottom to rebuild the in-memory state. After compaction, only the current state is kept — dead entries are removed.
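
Replay is a single pass over the file, applying each line to the in-memory maps. A hedged sketch of the idea (the nested-DashMap layout and function name here are illustrative, and ENC entries would be decrypted before this step):

use dashmap::DashMap;
use serde_json::Value;
use std::io::{BufRead, BufReader};

// collection -> (key -> document)
type State = DashMap<String, DashMap<String, Value>>;

fn replay_log(file: std::fs::File, state: &State) -> std::io::Result<()> {
    for line in BufReader::new(file).lines() {
        let entry: Value = serde_json::from_str(&line?)?;
        let coll = entry["collection"].as_str().unwrap_or("_").to_string();
        let key = entry["key"].as_str().unwrap_or("_").to_string();
        match entry["cmd"].as_str() {
            // An INSERT line carries the full document, so later lines
            // for the same key simply overwrite earlier ones.
            Some("INSERT") => {
                state.entry(coll).or_default().insert(key, entry["value"].clone());
            }
            Some("DELETE") => {
                if let Some(c) = state.get(&coll) {
                    c.remove(&key);
                }
            }
            Some("DROP") => {
                state.remove(&coll);
            }
            _ => {}
        }
    }
    Ok(())
}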


Testing

# Run the full integration test suite (56 tests)
cargo test -p moltendb-server --test integration

# Run with verbose output
cargo test -p moltendb-server --test integration -- --nocapture

# Run the 100 000-entry stress test (insert + log replay verification)
cargo test -p moltendb-server --test stress -- --nocapture

The test suite covers: SET, GET, field selection, WHERE (all 9 operators, case-insensitive string matching), sort, pagination, joins, update, delete, versioning, extends, validation, persistence, compaction, and concurrency (8 threads × 100 docs).

Stress & Performance Tools

Three Rust example binaries are provided for real-world load testing against a live server:

# 1. Generate 100 000 synthetic documents (writes tests/stress_data.json + stress_keys.json)
cargo run -p moltendb-server --example generate_stress_data

# 2. Bulk-insert the dataset into the running server
cargo run -p moltendb-server --example stress_insert

# 3. Fire 10 000 concurrent fetch requests and print a latency report
cargo run -p moltendb-server --example stress_fetch

# Tune concurrency (default 10 000) and collection name via env vars
STRESS_CONCURRENCY=50000 STRESS_COLLECTION=stress cargo run -p moltendb-server --example stress_fetch

The fetch report includes min / mean / p50 / p75 / p90 / p95 / p99 / p99.9 / max latency and sustained throughput (req/s). In a typical local debug build, MoltenDB sustains 4 000–8 000 req/s for pure in-memory reads.


Project Structure

MoltenDB is a Cargo Workspace. Each crate lives in its own directory:

MoltenDB/
├── Cargo.toml                        — workspace root
│
├── moltendb-core/                    — pure engine crate (no HTTP, no auth)
│   └── src/
│       ├── lib.rs                    — crate root
│       ├── query.rs                  — query AST evaluator ($eq, $in, $regex, $contains, $or, $and, …)
│       ├── validation.rs             — collection name / document depth / size guards
│       ├── engine/
│       │   ├── mod.rs                — Db struct, thin delegation layer
│       │   ├── open.rs               — Db::open() — native startup (disk / encrypted)
│       │   ├── open_wasm.rs          — Db::open_wasm() — WASM/OPFS startup
│       │   ├── config.rs             — DbConfig (path, encryption key, storage options)
│       │   ├── schema.rs             — JSON Schema validation per collection
│       │   ├── types.rs              — LogEntry, DbError, DocumentState, RecordPointer
│       │   ├── operations/           — all engine operations (one file per operation)
│       │   │   ├── mod.rs            — re-exports: get, get_all, insert, update, delete, …
│       │   │   ├── common.rs         — shared helpers (now_iso())
│       │   │   ├── read.rs           — get (batch, Vec<String> → HashMap), get_all
│       │   │   ├── insert.rs         — insert (batch, versioning, schema validation, WAL)
│       │   │   ├── update.rs         — update (partial patch, _v optimistic lock, WAL)
│       │   │   ├── delete.rs         — delete (batch, Vec<String>), delete_collection
│       │   │   ├── compact.rs        — compact (build log entries, call compact_with_hook)
│       │   │   └── recover.rs        — recover_to (PITR restore from backup snapshot)
│       │   └── storage/
│       │       ├── mod.rs            — StorageBackend trait, apply_entry, startup WAL replay
│       │       ├── disk/             — disk storage (split module)
│       │       │   ├── mod.rs        — re-exports: AsyncDiskStorage, SyncDiskStorage, helpers
│       │       │   ├── async_storage.rs — MPSC channel + background Tokio flush task
│       │       │   ├── sync_storage.rs  — Mutex-guarded BufWriter, immediate flush
│       │       │   ├── log.rs        — stream_log_entries, read_log_from_disk
│       │       │   └── snapshot.rs   — write_snapshot, load_snapshot, atomic rename, backup rotation
│       │       ├── memory.rs         — InMemoryStorage (ephemeral, no disk)
│       │       ├── encrypted.rs      — XChaCha20-Poly1305 + Argon2id encryption wrapper
│       │       └── wasm.rs           — OpfsStorage (browser OPFS backend)
│       └── handlers/
│           ├── mod.rs
│           ├── process_get.rs        — GET handler (query, field selection, joins, pagination)
│           ├── process_set.rs        — SET handler (insert/upsert, extends resolution)
│           ├── process_update.rs     — UPDATE handler (partial merge, $unset)
│           ├── process_delete.rs     — DELETE handler (single, batch, drop)
│           ├── process_snapshot.rs   — SNAPSHOT handler (PITR trigger)
│           └── process_schema.rs     — SCHEMA handler (define / update collection schema)
│
├── moltendb-auth/                    — identity crate (JWT, Argon2, scoped delegation) — excluded from WASM
│   └── src/
│       └── lib.rs                    — Claims (jti, scopes), has_access(), key_matches(),
│                                       create_scoped_token(), RevocationStore,
│                                       UserStore, DelegateRequest/Response,
│                                       auth_middleware (JWT validation + revocation check)
│
├── moltendb-server/                  — network crate (Axum, TLS, CLI, rate limiting)
│   ├── src/
│   │   ├── main.rs                   — server entry point, router wiring, CLI config, background tasks
│   │   ├── lib.rs                    — library root (re-exports for integration tests)
│   │   ├── route_handlers.rs         — all HTTP handlers (login, delegate, revoke, set, get, update,
│   │   │                               delete, snapshot, schema, REST get/collection)
│   │   ├── ws.rs                     — WebSocket upgrade, per-connection authenticated push
│   │   ├── server.rs                 — TLS config loader, graceful shutdown signal
│   │   └── rate_limit.rs             — per-IP sliding window rate limiter
│   ├── tests/
│   │   └── integration.rs            — integration test suite
│   └── examples/
│       ├── generate_stress_data.rs   — generates 100 000 synthetic documents
│       ├── stress_insert.rs          — bulk-inserts the dataset into a live server
│       └── stress_fetch.rs           — fires concurrent GET requests, reports latency percentiles
│
├── moltendb-wasm/                    — WASM crate (browser / Node.js bundle)
│   └── src/
│       └── lib.rs                    — wasm-bindgen entry point, OPFS-backed Db
│
├── tests/
│   ├── requests_1_reads.http         — GET / query / field-selection examples
│   ├── requests_2_joins.http         — join query examples
│   ├── requests_3_mutations.http     — SET / UPDATE / DELETE examples
│   ├── requests_4_security.http      — auth / JWT / rate-limit examples
│   ├── requests_5_schemas.http       — schema definition examples
│   ├── requests_6_auth_telemetry.http — delegation / revocation / telemetry examples
│   ├── requests_7_in_memory.http     — in-memory mode examples
│   └── stress_fetch.http             — stress-test request file
├── pkg/                              — generated WASM package (wasm-pack output)
└── assets/
    └── logo.png

Horizontal Scaling

MoltenDB is currently a single-node, embedded database. Its state lives in DashMap in memory, backed by an append-only log on disk. There is no built-in concept of nodes, replication, or sharding.

Single-node throughput

| Operation | Throughput | Bottleneck |
| --- | --- | --- |
| Reads (get, get_all) | 100k–500k+ req/s | None — pure lock-free DashMap lookups |
| Writes (insert, delete, update) | 10k–50k req/s | Sequential log writer (one Mutex-guarded append) |

Reads are fully parallel and scale with CPU cores. Writes are bounded by disk I/O on the log writer.

Scaling options

Option 1 — Read replicas (easiest, read-heavy workloads)

One primary node accepts all writes. One or more replica nodes tail the primary's log and replay entries via the same apply_entry path used at startup. Reads are distributed across replicas; writes always go to the primary.

MoltenDB already has most of the building blocks: the append-only log is the source of truth, stream_into_state / apply_entry already replay log entries into RAM state, and the WebSocket broadcast could be repurposed to stream log entries to replicas.

What needs to be added: a replication protocol (push log entries from primary → replicas), a read_only flag on replicas, and a load balancer to route reads to replicas and writes to the primary.

Option 2 — Sharding (write-heavy workloads)

Split collections across nodes — each node owns a subset of the data. Requires a shard map and a coordinator or client-side routing layer. Most complex option but gives true write scalability.

Option 3 — Active-active (high availability)

Multiple nodes accept writes independently and sync with each other. Requires conflict resolution. MoltenDB already has conflict detection logic (_v optimistic locking), but full multi-master is a significant undertaking.

Read replicas are the most natural first step given the existing architecture. A single node with read replicas will scale very far before sharding becomes necessary — the single node already handles hundreds of thousands of reads per second.


What's Next? (The Roadmap)

MoltenDB is currently in RC Stage. The core engine is stable, fast, and feature-rich.

1. Scaling & Ecosystem

  • Mobile Native Modules: Compiling the exact same Rust core to run natively on iOS and Android (via FFI/JNI). This will bring blazing-fast, local-first embedded databases to React Native and Flutter.
  • Language Clients: Official transport drivers for Python, Go, and Swift.
  • Data Portability: Built-in, zero-friction utilities to export your entire database to standard JSON and CSV formats. No vendor lock-in.

2. Distributed Systems & Core

  • Robust Sync: Two-way browser ↔ server delta sync with automatic conflict resolution (server-wins on _v collision).
  • Hardened Analytics: The COUNT/SUM/AVG/MIN/MAX analytics engine exists in the codebase but is currently under development and not ready for production use. Expanding and rigorously testing it, accompanied by a comprehensive, interactive live demo, is a key roadmap item.

3. Security, Tooling & Polish

  • MoltenDB Studio (Premium): A paid, official GUI dashboard to visually manage your databases, inspect collections, and execute queries without touching the CLI.

What's NOT on the Roadmap (The Anti-Goals)

Keeping a project fast and lightweight means being very strict about what not to build. Here are a few things I have intentionally decided to leave out of MoltenDB:

  • Natural Language Queries (NLQ): I know AI and "chat-to-query" interfaces are the hot trend right now, and it feels like every database is bolting them on. However, MoltenDB is fundamentally designed to be lean, predictable, and exceptionally fast. Adding NLQ or embedding a vector engine would completely destroy the lightweight footprint of the WASM build and the native binary. While I might explore building an NLQ adapter as a completely separate middleware package down the road, it will never be baked into the core engine.
  • Heavy Data Transformations (map, flat, flatMap): The query engine is highly optimized to retrieve your data (with precise field selection) as quickly as possible. Baking complex array manipulations or heavy map/reduce operations into the fetch pipeline adds unnecessary overhead to the core engine. It is much faster and cleaner to let the database be a database, and handle those specific data transformations in your application layer (JavaScript/Rust) after the data is returned.

License

MoltenDB is licensed under the Business Source License 1.1.

  • Free for personal use and organisations with annual revenue under $5 million USD.
  • Not permitted to offer MoltenDB as a hosted/managed service (Database-as-a-Service) without a commercial license.
  • Converts to MIT automatically 3 years after each version's release date.

For commercial licensing enquiries: admin@moltendb.dev