The Kubo config file is a JSON document located at $IPFS_PATH/config. It
is read once at node instantiation, either for an offline command, or when
starting the daemon. Commands that execute on a running daemon do not read the
config file at runtime.
Multiaddr or array of multiaddrs describing the addresses to serve
the local Kubo RPC API (/api/v0).
Supported Transports:
tcp/ip{4,6} - /ipN/.../tcp/...
unix - /unix/path/to/socket
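For illustration, a minimal config snippet (the value is an example; the stock default binds the RPC API to a loopback TCP multiaddr such as the one shown):

```json
{
  "Addresses": {
    "API": ["/ip4/127.0.0.1/tcp/5001"]
  }
}
```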
Caution
NEVER EXPOSE UNPROTECTED ADMIN RPC TO LAN OR THE PUBLIC INTERNET
The RPC API grants admin-level access to your Kubo IPFS node, including
configuration and secret key management.
By default, it is bound to localhost for security reasons. Exposing it to LAN
or the public internet is highly risky: it is similar to exposing a SQL database or
backend service without authentication middleware.
If you need secure access to a subset of RPC, secure it with API.Authorizations or custom auth middleware running in front of the localhost-only RPC port defined here.
If you are looking for an interface designed for browsers and public internet, use Addresses.Gateway port instead.
Multiaddr or array of multiaddrs describing the address to serve
the local HTTP gateway (/ipfs, /ipns) on.
Supported Transports:
tcp/ip{4,6} - /ipN/.../tcp/...
unix - /unix/path/to/socket
Caution
SECURITY CONSIDERATIONS FOR GATEWAY EXPOSURE
By default, the gateway is bound to localhost for security. If you bind to 0.0.0.0
or a public IP, anyone with access can trigger retrieval of arbitrary CIDs, causing
bandwidth usage and potential exposure to malicious content. Limit remote fetching
with Gateway.NoFetch, and consider firewall rules, authentication,
and Gateway.PublicGateways before exposing the gateway publicly.
See Security section for network exposure considerations.
An array of multiaddrs describing which addresses to listen on for p2p swarm
connections.
Supported Transports:
tcp/ip{4,6} - /ipN/.../tcp/...
websocket - /ipN/.../tcp/.../ws
quicv1 (RFC9000) - /ipN/.../udp/.../quic-v1 - can share the same two tuple with /quic-v1/webtransport
webtransport - /ipN/.../udp/.../quic-v1/webtransport - can share the same two tuple with /quic-v1
Important
Make sure your firewall rules allow incoming connections on both TCP and UDP ports defined here.
See Security section for network exposure considerations.
Note that quic (Draft-29) used to be supported with the format /ipN/.../udp/.../quic, but has since been removed.
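As a rough sketch (ports and address families are illustrative, not a statement of your node's defaults), a Swarm listener set covering the transports above could look like the following. Note the TCP and UDP listeners share port 4001, matching the firewall note above:

```json
{
  "Addresses": {
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport"
    ]
  }
}
```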
An array of multiaddrs (exact matches or /ipcidr/ netmasks). Kubo does not
announce these addresses and strips them from libp2p identify, the DHT
self-record, and the signed peer record. Matching entries in
Addresses.Announce and
Addresses.AppendAnnounce are removed as well.
This is the publish-side filter: it controls what other peers learn about
this node's addresses. It does not affect what this node dials. For the
dial-side filter see Swarm.AddrFilters. The
server profile typically populates both fields together
so that a range is neither advertised nor dialed.
Tip
The server profile populates this field with a set of
private, local-only, and non-globally-reachable prefixes (RFC 1918 private,
RFC 6598 CGNAT, ULA, link-local, and others). See the
server profile section for the full list and for
optional entries operators may add manually.
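For example (a sketch; the prefix list applied by the server profile is longer), filtering an RFC 1918 range and a single loopback address could look like:

```json
{
  "Addresses": {
    "NoAnnounce": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/127.0.0.1"
    ]
  }
}
```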
The API.Authorizations field defines user-based access restrictions for the
Kubo RPC API, which is located at
Addresses.API under /api/v0 paths.
By default, the admin-level RPC API is accessible without restrictions, as it is only
exposed on 127.0.0.1 and safeguarded with an Origin check and implicit
CORS headers that
block random websites from accessing the RPC.
When entries are defined in API.Authorizations, RPC requests will be declined
unless a corresponding secret is present in the HTTP Authorization header,
and the requested path is included in the AllowedPaths list for that specific
secret.
Caution
NEVER EXPOSE UNPROTECTED ADMIN RPC TO LAN OR THE PUBLIC INTERNET
The RPC API is vast. It grants admin-level access to your Kubo IPFS node, including
configuration and secret key management.
If you need secure access to a subset of RPC, make sure you understand the risk, block everything by default and allow basic auth access with API.Authorizations or custom auth middleware running in front of the localhost-only port defined in Addresses.API.
If you are looking for an interface designed for browsers and public internet, use Addresses.Gateway port instead.
Default: null
Type: object[string -> object] (user name -> authorization object, see below)
For example, to limit RPC access to Alice (access to the id and MFS files commands with HTTP Basic Auth)
and Bob (full access with a Bearer token):
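(A sketch; the AuthSecret values are placeholders, and the basic: and bearer: prefixes illustrate the supported secret formats.)

```json
{
  "API": {
    "Authorizations": {
      "Alice": {
        "AuthSecret": "basic:alice:password123",
        "AllowedPaths": ["/api/v0/id", "/api/v0/files"]
      },
      "Bob": {
        "AuthSecret": "bearer:mysecrettoken",
        "AllowedPaths": ["/api/v0"]
      }
    }
  }
}
```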
The AllowedPaths field is an array of strings containing allowed RPC path
prefixes. Users authorized with the related AuthSecret will only be able to
access paths prefixed by the specified prefixes.
For instance:
If set to ["/api/v0"], the user will have access to the complete RPC API.
If set to ["/api/v0/id", "/api/v0/files"], the user will only have access
to the id command and all MFS commands under files.
Note that access to /api/v0/version is always permitted, to allow clients to perform a
version check and ensure compatibility.
Contains the configuration options for the libp2p's AutoNAT service. The AutoNAT service
helps other nodes on the network determine if they're publicly reachable from
the rest of the internet.
When unset (default), the AutoNAT service defaults to enabled. Otherwise, this
field can take one of two values:
enabled - Enable the V1+V2 service (unless the node determines that it,
itself, isn't reachable by the public internet).
legacy-v1 - DEPRECATED Same as enabled, but only the V1 service is enabled. Used for testing
during the few releases in which we transition to V2; it will be removed in the future.
disabled - Disable the service.
Additional modes may be added in the future.
Important
We are in the process of rolling out AutoNAT V2.
Right now, by default, a publicly dialable Kubo provides both V1 and V2 service to other peers,
and V1 is still used by Kubo for Autorelay feature. In a future release we will remove V1 and switch all features to use V2.
When set, this option configures the AutoNAT service's throttling behavior. By
default, Kubo will rate-limit the number of NAT checks performed for other
nodes to 30 per minute, and 3 per peer.
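A sketch of an explicit throttle configuration mirroring the defaults described above (the field names GlobalLimit, PeerLimit, and Interval are assumptions about the throttle options; verify against your Kubo version):

```json
{
  "AutoNAT": {
    "Throttle": {
      "GlobalLimit": 30,
      "PeerLimit": 3,
      "Interval": "1m"
    }
  }
}
```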
The AutoConf feature enables Kubo nodes to automatically fetch and apply network configuration from a remote JSON endpoint. This system allows dynamic configuration updates for bootstrap peers, DNS resolvers, delegated routing, and IPNS publishing endpoints without requiring manual updates to each node's local config.
AutoConf works by using special "auto" placeholder values in configuration fields. When Kubo encounters these placeholders, it fetches the latest configuration from the specified URL and resolves the placeholders with the appropriate values at runtime. The original configuration file remains unchanged - "auto" values are preserved in the JSON and only resolved in memory during node operation.
AutoConf supports path-based routing URLs that automatically enable specific routing operations based on the URL path. This allows precise control over which HTTP Routing V1 endpoints are used for different operations:
Supported paths:
/routing/v1/providers - Enables provider record lookups only
/routing/v1/peers - Enables peer routing lookups only
/routing/v1/ipns - Enables IPNS record operations only
No path - Enables all routing operations (backward compatibility)
Endpoints in the autoconf manifest are grouped by node configuration type:
mainnet-for-nodes-with-dht: Mainnet nodes with DHT enabled (typically only need additional provider lookups)
mainnet-for-nodes-without-dht: Mainnet nodes without DHT (need comprehensive routing services)
mainnet-for-ipns-publishers-with-http: Mainnet nodes that publish IPNS records via HTTP
This design enables efficient, selective routing where each endpoint URL automatically determines its capabilities based on the path, while maintaining semantic grouping by node configuration type.
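As an illustration of path-based routing URLs (the endpoint hostname below is a placeholder, and Routing.DelegatedRouters is shown only as one place where such URLs can appear), a node could restrict one endpoint to provider lookups and use another only for IPNS:

```json
{
  "Routing": {
    "DelegatedRouters": [
      "https://delegated-routing.example.net/routing/v1/providers",
      "https://delegated-routing.example.net/routing/v1/ipns"
    ]
  }
}
```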
Controls whether the AutoConf system is active. When enabled, Kubo will fetch configuration from the specified URL and resolve "auto" placeholders at runtime. When disabled, any "auto" values in the configuration will cause daemon startup to fail with validation errors.
This provides a safety mechanism to ensure nodes don't start with unresolved placeholders when AutoConf is intentionally disabled.
Specifies the HTTP(S) URL from which to fetch the autoconf JSON. The endpoint should return a JSON document containing Bootstrap peers, DNS resolvers, delegated routing endpoints, and IPNS publishing endpoints that will replace "auto" placeholders in the local configuration.
The URL must serve a JSON document matching the AutoConf schema. Kubo validates all multiaddr and URL values before caching to ensure they are properly formatted.
When not specified in the configuration, the default mainnet URL is used automatically.
Note
The public good autoconf manifest at conf.ipfs-mainnet.org is provided by the team at Shipyard.
Default: "https://conf.ipfs-mainnet.org/autoconf.json" (when not specified)
Specifies how frequently Kubo should refresh autoconf data. This controls both how often cached autoconf data is considered fresh and how frequently the background service checks for new configuration updates.
When a new configuration version is detected during background updates, Kubo logs an ERROR message informing the user that a node restart is required to apply the changes to any "auto" entries in their configuration.
FOR TESTING ONLY - Allows skipping TLS certificate verification when fetching autoconf from HTTPS URLs. This should never be enabled in production as it makes the configuration fetching vulnerable to man-in-the-middle attacks.
The AutoTLS feature enables publicly reachable Kubo nodes (those dialable from the public
internet) to automatically obtain a wildcard TLS certificate for a DNS name
unique to their PeerID at *.[PeerID].libp2p.direct. This enables direct
libp2p connections and retrieval of IPFS content from a browser's Secure Context
using transports such as Secure WebSockets,
without requiring the user to do any manual domain registration or certificate configuration.
Under the hood, the p2p-forge client uses the public utility service at libp2p.direct as an ACME DNS-01 Challenge
broker, enabling a peer to obtain a wildcard TLS certificate tied to the public key of its PeerID.
By default, the certificates are requested from Let's Encrypt. Origin and rationale for this project can be found in community.letsencrypt.org discussion.
Enables the AutoTLS feature to provide DNS and TLS support for libp2p Secure WebSocket over a /tcp port,
to allow JS clients running in web browser Secure Context to connect to Kubo directly.
When activated, together with AutoTLS.AutoWSS (default) or manually including a /tcp/{port}/tls/sni/*.libp2p.direct/ws multiaddr in Addresses.Swarm
(with SNI suffix matching AutoTLS.DomainSuffix), Kubo retrieves a trusted PKI TLS certificate for *.{peerid}.libp2p.direct and configures the /ws listener to use it.
Note:
This feature requires a publicly reachable node. If behind NAT, manual port forwarding or UPnP (Swarm.DisableNatPortMap=false) is required.
The first time AutoTLS is used, it may take 5-15 minutes + AutoTLS.RegistrationDelay before the /ws listener is added. Be patient.
Avoid manual configuration. AutoTLS.AutoWSS=true should automatically add a /ws listener to existing, firewall-forwarded /tcp ports.
To troubleshoot, use GOLOG_LOG_LEVEL="error,autotls=debug" for detailed logs, or GOLOG_LOG_LEVEL="error,autotls=info" for quieter output.
Certificates are stored in $IPFS_PATH/p2p-forge-certs; deleting this directory and restarting the daemon forces a certificate rotation.
For now, the TLS cert applies solely to /ws libp2p WebSocket connections, not the HTTP Gateway, which still needs a separate reverse proxy TLS setup with a custom domain.
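For reference, a minimal sketch for opting in explicitly in the config file (as noted under RegistrationDelay below, an explicit AutoTLS.Enabled=true also skips the default registration delay):

```json
{
  "AutoTLS": {
    "Enabled": true
  }
}
```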
Optional. Controls if Kubo should add /tls/sni/*.libp2p.direct/ws listener to every pre-existing /tcp port IFF no explicit /ws is defined in Addresses.Swarm already.
Optional. Controls if final AutoTLS listeners are announced under shorter /dnsX/A-B-C-D.peerid.libp2p.direct/tcp/4001/tls/ws addresses instead of fully resolved /ip4/A.B.C.D/tcp/4001/tls/sni/A-B-C-D.peerid.libp2p.direct/ws.
The main use for AutoTLS is allowing connectivity from Secure Context in a web browser, and DNS lookup needs to happen there anyway, making /dnsX a more compact, more interoperable option without obvious downside.
Optional. Controls whether to skip network DNS lookups for p2p-forge domains like *.libp2p.direct.
This applies to DNS resolution performed via DNS.Resolvers, including /dns* multiaddrs resolved by go-libp2p (e.g., peer addresses from DHT or delegated routing).
When enabled (default), A/AAAA queries for hostnames matching AutoTLS.DomainSuffix are resolved locally by parsing the IP address directly from the hostname (e.g., 1-2-3-4.peerID.libp2p.direct resolves to 1.2.3.4 without network I/O). This avoids unnecessary DNS queries since the IP is already encoded in the hostname.
If the hostname format is invalid (wrong peerID, malformed IP encoding), the resolver falls back to network DNS, ensuring forward compatibility with potential future DNS record types.
Set to false to always use network DNS for these domains. This is primarily useful for debugging or if you need to override resolution behavior via DNS.Resolvers.
Optional override of the parent domain suffix that will be used in DNS+TLS+WebSockets multiaddrs generated by p2p-forge client.
Do not change this unless you self-host p2p-forge.
Optional override of p2p-forge HTTP registration API.
Do not change this unless you self-host p2p-forge under own domain.
Important
The default endpoint performs libp2p Peer ID Authentication over HTTP
(proving ownership of the PeerID) and probes whether your Kubo node correctly answers a libp2p Identify query.
This ensures only a correctly configured, publicly dialable Kubo can initiate an ACME DNS-01 challenge for peerid.libp2p.direct.
Optional value for Forge-Authorization token sent with request to RegistrationEndpoint
(useful for private/self-hosted/test instances of p2p-forge, unset by default).
An additional delay applied before sending a request to the RegistrationEndpoint.
The default delay is bypassed if the user explicitly sets AutoTLS.Enabled=true in the JSON configuration file.
This ensures that ephemeral nodes using the default configuration do not spam the AutoTLS.CAEndpoint with unnecessary ACME requests.
Default: 1h (or 0 if explicit AutoTLS.Enabled=true)
Determines whether Kubo will use Bitswap over libp2p.
Disabling this will remove /ipfs/bitswap/* protocol support from libp2p identify responses, effectively shutting down both the Bitswap libp2p client and server.
Warning
Bitswap over libp2p is a core component of Kubo and the oldest way of exchanging blocks. Disabling it completely may cause unpredictable outcomes, such as retrieval failures, if the only providers were libp2p ones. Treat this as experimental and use it solely for testing purposes with HTTPRetrieval.Enabled.
Determines whether Kubo functions as a Bitswap server to host and respond to block requests.
Disabling the server retains client and protocol support in libp2p identify responses but causes Kubo to reply with "don't have" to all block requests.
Bootstrap peers help your node discover and connect to the IPFS network when starting up. This array contains multiaddrs of trusted nodes that your node contacts first to find other peers and content.
The special value "auto" automatically uses curated, up-to-date bootstrap peers from AutoConf, ensuring your node can always connect to the healthy network without manual maintenance.
What this gives you:
Reliable startup: Your node can always find the network, even if some bootstrap peers go offline
Automatic updates: New bootstrap peers are added as the network evolves
Custom control: Add your own trusted peers alongside or instead of the defaults
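For example, a sketch that keeps the automatically maintained defaults while adding one custom trusted peer (the hostname and PeerID below are placeholders):

```json
{
  "Bootstrap": [
    "auto",
    "/dns4/bootstrap.example.net/tcp/4001/p2p/12D3KooWExamplePeerIDPlaceholder"
  ]
}
```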
A soft upper limit for the size of the ipfs repository's datastore. Together with StorageGCWatermark,
it is used to calculate whether to trigger a GC run (only if the --enable-gc flag is set).
Note
This only controls when automatic GC of raw blocks is triggered. It is not a
hard limit on total disk usage. The metadata stored alongside blocks (pins,
MFS, provider system state, pubsub message ID tracking, and other internal
data) is not counted against this limit. Always include extra headroom to
account for metadata overhead. See datastores.md for details
on how different datastore backends handle disk space reclamation.
The percentage of the StorageMax value at which a garbage collection will be
triggered automatically if the daemon was run with automatic gc enabled (that
option defaults to false currently).
The size in bytes of the blockstore's bloom filter.
A value of 0 disables the feature.
The bloom filter answers "does the blockstore not have this CID?" from RAM
without touching the datastore. A negative answer is exact (no false
negatives, so blocks are never falsely reported missing); a positive answer
is probabilistic and falls through to the underlying blockstore for
verification. The chance of a false "maybe present" is the filter's
false-positive rate (FPR). A false positive costs one wasted datastore
lookup; it never causes data loss or incorrect retrieval. The lower the FPR,
the more Has() calls the filter answers from RAM alone.
This cache pays off most on nodes that field many requests for content they
don't host: public gateways, mirrors, and peers asked to serve
opportunistically-cached blocks.
The complementary cache for the positive path (block exists, look up its
size) is Datastore.BlockKeyCacheSize.
Kubo wires the underlying ipfs/bbloom
filter with k=7 hash positions. Two kubo-specific behaviors matter for
sizing:
Power-of-two bit-count rounding. bbloom rounds the requested bit
count up to the next power of two, so a BloomFilterSize value that is
not itself a power of two in bits silently allocates more memory than
configured. For example, BloomFilterSize: 1199120 (~1.14 MiB)
actually allocates a 16,777,216-bit (= 2 MiB) filter internally. For
predictable behavior, pick BloomFilterSize values that are
power-of-two byte counts: 1 MiB, 2 MiB, 4 MiB, ..., 256 MiB, 512 MiB,
1 GiB.
Fixed k=7. With seven hash positions, FPR for a filter of m
bits and n inserted entries is (1 - exp(-7n/m))^7. To hit a
target FPR, budget roughly ~1.8 bytes per entry at ~1% FPR, ~2.8
bytes per entry at ~0.1% FPR, and ~4.2 bytes per entry at ~0.01%
FPR. These figures already include the average ~1.5x penalty from
the power-of-two rounding above; the worst case is ~2x.
For a tighter FPR at the design point, step up to the next power of two.
The hur.st/bloomfilter
calculator works as a reference for exploring (n, p, m) combinations
(remember kubo uses k=7); just keep in mind that the m it suggests
is the optimal-fit value, while bbloom rounds up to the next power of
two on top of that.
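As a worked example (numbers are illustrative; redo the arithmetic for your own block count): a node expecting on the order of half a million blocks and targeting roughly 0.1% FPR needs about 2 bytes per entry when the size is itself a power of two, which lands near 1 MiB, so a predictable choice is:

```json
{
  "Datastore": {
    "BloomFilterSize": 1048576
  }
}
```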
A bloom filter is fixed-size after creation. As more CIDs are inserted
past its design n, the false-positive rate climbs steeply. Rough
behavior with a filter sized for ~0.6% FPR at its design point:
At n: ~0.6% FPR. Every "definitely not" reliably saves a datastore
lookup.
At ~2 × n: ~11% FPR. Most negatives still save lookups, but tail
latency rises because each "maybe" still hits the datastore.
At ~4 × n: ~58% FPR. Most "maybe" answers fall through. The filter
is mostly paying CPU and RAM cost without short-circuiting much.
At ~8 × n or more: above ~95% FPR. Effectively saturated. The
filter answers "maybe" for nearly every CID and provides no benefit.
Size for expected steady-state, not today's count, and re-tune after
crossing the design point. Bloom filters cannot grow in place; raising
BloomFilterSize and restarting the daemon rebuilds the filter from
scratch.
A poorly-sized filter is never a correctness issue. Bloom filters
have no false negatives, so blocks are never falsely reported missing.
The risks are operational:
Wasted RAM and CPU. Every Has() still runs all seven hash
positions. Once the filter saturates, those cycles return nothing.
Silent regression as the pinset grows. A filter sized for last
year's data can drift past saturation without warning; the
negative-Has short-circuit benefit just quietly disappears.
Recurring startup tax. The filter rebuilds on every daemon
restart (see below). On slow disks this means minutes of
AllKeysChan walking, paid in full even when the resulting filter
is too small to help.
Quick health check: divide BloomFilterSize by your current block count.
Below ~1 byte/block the filter is past its design point; below
~0.5 bytes/block it is effectively saturated.
The filter is not persisted across restarts. Every daemon start rebuilds it
by walking all datastore keys (AllKeysChan). On very large blockstores or
slow disks this can take many minutes, during which Has() falls through
to the datastore and the filter provides no benefit. Datastores that cannot
enumerate keys without reading values (block content) pay even more here;
flatfs and pebble both support keys-only iteration, so the rebuild cost
scales with the keyset, not data volume.
This option controls whether a block that already exists in the datastore
should be written to it. When set to false, a Has() call is performed
against the datastore prior to writing every block. If the block is already
stored, the write is skipped. This check happens at both the Blockservice and
Blockstore layers, and this setting affects both.
When set to true, no checks are performed and blocks are written to the
datastore, which, depending on the implementation, may perform its own checks.
The maximum number of entries held in the blockstore's Two-Queue cache. The
cache stores per-CID metadata (existence and block size) but never block
content. Use 0 to disable.
A cache hit answers Has and GetSize from RAM and skips the underlying
datastore lookup. This includes the per-block os.Stat flatfs does to learn a
block's size, which is the dominant cost on bitswap servers responding to peer
wantlists.
The cache uses a Two-Queue (2Q) replacement policy:
an entry must be touched twice before it is promoted to the frequently-used
tier. A long one-shot scan (reprovider, GC, ipfs repo verify) therefore
does not evict the hot entries that bitswap repeatedly serves.
Memory usage is roughly the entry count times the per-entry overhead, which
combines 2Q bookkeeping, the multihash key bytes, and the cached value. As a
rough estimate, budget ~200 bytes per entry, so 1048576 (1M entries) is on
the order of ~200 MB resident. The cache only needs to cover the hot
working set of CIDs (the ones repeatedly hit by inbound bitswap, gateway,
or DAG-resolution traffic), not the entire blockstore.
The default of 65536 is sized for small dev/desktop nodes. Operators
running public gateways, pinning clusters, or any node serving non-trivial
bitswap traffic should size this against the active working set. See
Datastore.BloomFilterSize for the
complementary negative-Has() short-circuit that pairs well with this cache.
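As an illustration (the value is an assumption, not a recommendation): a gateway or bitswap-heavy node whose hot working set is around one million CIDs could raise the cache to cover it, which at the ~200 bytes/entry estimate above budgets on the order of 200 MB of RAM:

```json
{
  "Datastore": {
    "BlockKeyCacheSize": 1048576
  }
}
```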
Default: 65536 (entries)
Type: optionalInteger (non-negative, number of entries)
Spec defines the structure of the ipfs datastore. It is a composable structure,
where each datastore is represented by a json object. Datastores can wrap other
datastores to provide extra functionality (eg metrics, logging, or caching).
Note
For more information on possible values for this configuration option, see kubo/docs/datastores.md
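For reference, the stock flatfs + leveldb layout produced by ipfs init looks roughly like the following sketch (the authoritative shapes and options live in kubo/docs/datastores.md):

```json
{
  "mounts": [
    {
      "child": {
        "path": "blocks",
        "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
        "sync": true,
        "type": "flatfs"
      },
      "mountpoint": "/blocks",
      "prefix": "flatfs.datastore",
      "type": "measure"
    },
    {
      "child": {
        "compression": "none",
        "path": "datastore",
        "type": "levelds"
      },
      "mountpoint": "/",
      "prefix": "leveldb.datastore",
      "type": "measure"
    }
  ],
  "type": "mount"
}
```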
By default, Kubo's gateway is configured for local use at 127.0.0.1 and localhost.
To run a public gateway, configure your domain names in Gateway.PublicGateways.
For production deployment considerations (reverse proxy, timeouts, rate limiting, CDN),
see Running in Production.
A boolean to configure whether a DNSLink lookup for the value in the Host HTTP header
should be performed. If a DNSLink is present, the content path stored in the DNS TXT
record becomes the root (/), and the respective payload is returned to the client.
An optional flag to explicitly configure whether this gateway responds to deserialized
requests, or not. By default, it is enabled. When disabling this option, the gateway
operates as a Trustless Gateway only: https://specs.ipfs.tech/http-gateways/trustless-gateway/.
An optional flag to enable automatic conversion between codecs when the
requested format differs from the block's native codec (e.g., converting
dag-pb or dag-cbor to dag-json).
When disabled (the default), the gateway returns 406 Not Acceptable for
codec mismatches, following behavior specified in
IPIP-524.
Most users should keep this disabled unless legacy
IPLD Logical Format
support is needed as a stop-gap while switching clients to ?format=raw
and converting client-side.
Instead of relying on gateway-side conversion, fetch the raw block using
?format=raw (application/vnd.ipld.raw) and convert client-side. This:
Allows clients to use any codec without waiting for gateway support
Enables ecosystem innovation without gateway operator coordination
An optional flag to disable the pretty HTML error pages of the gateway. Instead,
a text/plain page will be returned with the raw error message from Kubo.
It is useful for whitelabel or middleware deployments that wish to avoid
text/html responses with IPFS branding and links on error pages in browsers.
An optional flag to expose Kubo Routing system on the gateway port
as an HTTP /routing/v1 endpoint on 127.0.0.1.
Use reverse proxy to expose it on a different hostname.
This endpoint can be used by other Kubo instances, as illustrated in
delegated_routing_v1_http_proxy_test.go.
Kubo will filter out routing results which are not actionable, for example, all
graphsync providers will be skipped. If you need a generic pass-through, see
standalone router implementation named someguy.
An absolute deadline for the entire gateway request. Unlike RetrievalTimeout (which resets on each data write and catches stalled transfers), this is a hard limit on the total time a request can take.
Returns 504 Gateway Timeout when exceeded. This protects the gateway from edge cases and slow client attacks.
Maximum file size for HTTP range requests on deserialized responses. Range requests for files larger than this limit return 501 Not Implemented.
Why this exists:
Some CDNs like Cloudflare intercept HTTP range requests and convert them to full file downloads when files exceed their cache bucket limits. Cloudflare's default plan only caches range requests for files up to 5GiB. Files larger than this receive HTTP 200 with the entire file instead of HTTP 206 with the requested byte range. A client requesting 1MB from a 40GiB file would unknowingly download all 40GiB, causing bandwidth overcharges for the gateway operator, unexpected data costs for the client, and potential browser crashes.
This only affects deserialized responses. Clients fetching verifiable blocks as application/vnd.ipld.raw are not impacted because they work with small chunks that stay well below CDN cache limits.
How to use:
Set this to your CDN's range request cache limit (e.g., "5GiB" for Cloudflare's default plan). The gateway returns 501 Not Implemented for range requests over files larger than this limit, with an error message suggesting verifiable block requests as an alternative.
Note
Cloudflare users running open gateway hosting deserialized responses should deploy additional protection via Cloudflare Snippets (requires Enterprise plan). The Kubo configuration alone is not sufficient because Cloudflare has already intercepted and cached the response by the time it reaches your origin. See boxo#856 for a snippet that aborts HTTP 200 responses when Content-Length exceeds the limit.
Limits concurrent HTTP requests. Requests beyond limit receive 429 Too Many Requests.
Protects nodes from traffic spikes and resource exhaustion, especially behind reverse proxies without rate-limiting. Default (4096) aligns with common reverse proxy configurations (e.g., nginx: 8 workers × 1024 connections).
Monitoring: ipfs_http_gw_concurrent_requests tracks current requests in flight.
Tuning guidance:
Monitor ipfs_http_gw_concurrent_requests gauge for usage patterns
Track 429s (ipfs_http_gw_responses_total{status="429"}) and success rate ({status="200"})
Near limit with low resource usage → increase value
Memory pressure or OOMs → decrease value and consider scaling
Set slightly below reverse proxy limit for graceful degradation
Start with default, adjust based on observed performance for your hardware
URL for a service to diagnose CID retrievability issues. When the gateway returns a 504 Gateway Timeout error, an "Inspect retrievability of CID" button will be shown that links to this service with the CID appended as ?cid=<CID-to-diagnose>.
This configuration is NOT for the HTTP client; it is for the HTTP server. Use this ONLY if you want to run your own IPFS gateway.
PublicGateways is a configuration map for customizing gateway behavior
on specified hostnames that point at your Kubo instance.
It is useful when you want to run a Path gateway on example.com/ipfs/cid,
a Subdomain gateway on cid.ipfs.example.org,
or limit verifiable.example.net to response types defined in the Trustless Gateway specification.
Caution
Keys (Hostnames) MUST be unique. Do not use the same parent domain for multiple gateway types; doing so will break origin isolation.
Hostnames can optionally be defined with one or more wildcards.
Examples:
*.example.com will match requests to http://foo.example.com/ipfs/* or http://{cid}.ipfs.bar.example.com/*.
foo-*.example.com will match requests to http://foo-bar.example.com/ipfs/* or http://{cid}.ipfs.foo-xyz.example.com/*.
Important
Reverse Proxy: If running behind nginx or another reverse proxy, ensure
Host and X-Forwarded-* headers are forwarded correctly.
See Reverse Proxy Caveats in gateway documentation.
Requires whitelist: make sure respective Paths are set.
For example, Paths: ["/ipfs", "/ipns"] are required for http://{cid}.ipfs.{hostname} and http://{foo}.ipns.{hostname} to work:
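(A sketch, using example.com as a placeholder hostname.)

```json
{
  "Gateway": {
    "PublicGateways": {
      "example.com": {
        "UseSubdomains": true,
        "Paths": ["/ipfs", "/ipns"]
      }
    }
  }
}
```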
A boolean to configure whether DNSLink for hostname present in Host
HTTP header should be resolved. Overrides global setting.
If Paths are defined, they take priority over DNSLink.
Default: false (DNSLink lookup enabled by default for every defined hostname)
An optional flag to explicitly configure whether subdomain gateway's redirects
(enabled by UseSubdomains: true) should always inline a DNSLink name (FQDN)
into a single DNS label (specification):
DNSLink name inlining allows for HTTPS on public subdomain gateways with single-label
wildcard TLS certs (it is also enabled when passing X-Forwarded-Proto: https),
and provides a disjoint Origin per root CID when special rules like
https://publicsuffix.org or custom localhost logic in browsers like Brave
have to be applied. For example, a DNSLink name such as en.wikipedia-on-ipfs.org
is inlined into the single DNS label en-wikipedia--on--ipfs-org before being placed in the subdomain.
Default entries for localhost hostname and loopback IPs are always present.
If additional config is provided for those hostnames, it will be merged on top of implicit values:
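(A sketch of the implicit entry: the built-in localhost behavior is roughly equivalent to a subdomain gateway with both content namespaces enabled.)

```json
{
  "Gateway": {
    "PublicGateways": {
      "localhost": {
        "UseSubdomains": true,
        "Paths": ["/ipfs", "/ipns"]
      }
    }
  }
}
```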
Performance: Consider enabling Routing.AcceleratedDHTClient=true to improve content routing lookups. Separately, gateway operators should decide if the gateway node should also co-host and provide (announce) fetched content to the DHT. If providing content, enable Provide.DHT.SweepEnabled=true for efficient announcements. If announcements are still not fast enough, adjust Provide.DHT.MaxWorkers. For a read-only gateway that doesn't announce content, use Provide.Enabled=false.
Backward-compatible: this feature enables automatic redirects from content paths to subdomains; for example, a request for http://{hostname}/ipfs/{cid} is redirected to http://{cid}.ipfs.{hostname}/.
X-Forwarded-Proto: if you run Kubo behind a reverse proxy that provides TLS, make it add an X-Forwarded-Proto: https HTTP header to ensure users are redirected to https://, not http://. It will also ensure DNSLink names are inlined to fit in a single DNS label, so they work fine with a wildcard TLS cert (details). The NGINX directive is proxy_set_header X-Forwarded-Proto "https";.
Performance: Consider enabling Routing.AcceleratedDHTClient=true to improve content routing lookups. When running an open, recursive gateway, decide if the gateway should also co-host and provide (announce) fetched content to the DHT. If providing content, enable Provide.DHT.SweepEnabled=true for efficient announcements. If announcements are still not fast enough, adjust Provide.DHT.MaxWorkers. For a read-only gateway that doesn't announce content, use Provide.Enabled=false.
Public DNSLink gateway resolving every hostname passed in Host header.
ipfs config --json Gateway.NoDNSLink false
Note that NoDNSLink: false is the default (it works out of the box unless set to true manually)
Disable fetching of remote data (NoFetch: true) and resolving DNSLink at unknown hostnames (NoDNSLink: true).
Then, enable DNSLink gateway only for the specific hostname (for which data
is already present on the node), without exposing any content-addressing Paths:
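(A sketch, using en.wikipedia-on-ipfs.org as an example DNSLink hostname for which data is assumed to be present locally.)

```json
{
  "Gateway": {
    "NoFetch": true,
    "NoDNSLink": true,
    "PublicGateways": {
      "en.wikipedia-on-ipfs.org": {
        "NoDNSLink": false,
        "Paths": []
      }
    }
  }
}
```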
The unique PKI identity label for this config's peer. Set on init and never read;
it's merely here for convenience. Kubo will always generate the PeerID from its
keypair at runtime.
This section includes internal knobs for various subsystems to allow advanced users with big or private infrastructures to fine-tune some behaviors without the need to recompile Kubo.
Be aware that making an informed change here requires in-depth knowledge, and most users should leave these untouched. All knobs listed here are subject to breaking changes between versions.
The knobs (below) document how their values relate to each other.
Whether their values should be raised or lowered should be determined
based on the metrics ipfs_bitswap_active_tasks, ipfs_bitswap_pending_tasks,
ipfs_bitswap_pending_block_tasks and ipfs_bitswap_active_block_tasks
reported by bitswap.
These metrics can be accessed at the Prometheus endpoint {Addresses.API}/debug/metrics/prometheus (default: http://127.0.0.1:5001/debug/metrics/prometheus)
The value of ipfs_bitswap_active_tasks is capped by EngineTaskWorkerCount.
The value of ipfs_bitswap_pending_tasks is generally capped by the knobs below,
however its exact maximum value is hard to predict as it depends on task sizes
as well as number of requesting peers. However, as a rule of thumb,
during healthy operation this value should oscillate around a "typical" low value
(without hitting a plateau continuously).
If ipfs_bitswap_pending_tasks is growing while ipfs_bitswap_active_tasks is at its maximum then
the node has reached its resource limits and new requests are unable to be processed as quickly as they are coming in.
Raising resource limits (using the knobs below) could help, assuming the hardware can support the new limits.
The value of ipfs_bitswap_active_block_tasks is capped by EngineBlockstoreWorkerCount.
The value of ipfs_bitswap_pending_block_tasks is indirectly capped by ipfs_bitswap_active_tasks, but can be hard to
predict as it depends on the number of blocks involved in a peer task which can vary.
If the value of ipfs_bitswap_pending_block_tasks is observed to grow,
while ipfs_bitswap_active_block_tasks is at its maximum, there is indication that the number of
available block tasks is creating a bottleneck (either due to high-latency block operations,
or due to high number of block operations per bitswap peer task).
In such cases, try increasing the EngineBlockstoreWorkerCount.
If this adjustment still does not increase the throughput of the node, there might
be hardware limitations like I/O or CPU.
Number of threads for blockstore operations.
Used to throttle the number of concurrent requests to the block store.
The optimal value can be informed by the metrics ipfs_bitswap_pending_block_tasks and ipfs_bitswap_active_block_tasks.
This would be a number that depends on your hardware (I/O and CPU).
Type: optionalInteger (thread count, null means default which is 128)
Number of worker threads used for preparing and packaging responses before they are sent out.
This number should generally be equal to TaskWorkerCount.
Type: optionalInteger (thread count, null means default which is 8)
Maximum number of bytes (across all tasks) pending to be processed and sent to any individual peer.
This number controls fairness and can vary from 250Kb (very fair) to 10Mb (less fair, with more work
dedicated to peers who ask for more). Values below 250Kb could cause thrashing.
Values above 10Mb open the potential for aggressively-wanting peers to consume all resources and
deteriorate the quality provided to less aggressively-wanting peers.
Type: optionalInteger (byte count, null means default which is 1MB)
This parameter determines how long to wait before looking for providers outside of bitswap.
Other routing systems like the Amino DHT are able to provide results in less than a second, so lowering
this number will allow faster peer lookups in some cases.
Type: optionalDuration (null means default which is 1s)
Internal.Bitswap.BroadcastControl contains settings for the bitswap client's broadcast control functionality.
Broadcast control tries to reduce the number of bitswap broadcast messages sent to peers by choosing a subset of the peers to send to. Peers are chosen based on whether they have previously responded indicating they have wanted blocks, as well as other configurable criteria. The settings here change how peers are selected as broadcast targets. Broadcast control can also be completely disabled to return bitswap to its previous behavior before broadcast control was introduced.
Enabling broadcast control should significantly reduce the number of broadcasts without meaningfully degrading the ability to discover which peers have wanted blocks. However, if block discovery on your network relies heavily on broadcasts to discover peers that have wanted blocks, then adjusting the broadcast control configuration, or disabling it altogether, may be helpful.
Enables or disables broadcast control functionality. Setting this to false disables broadcast reduction logic and restores the previous (Kubo < 0.36) broadcast behavior of sending broadcasts to all peers. When disabled, all other Bitswap.BroadcastControl configuration items are ignored.
Enables or disables broadcast control for peers on the local network. Peers that have private or loopback addresses are considered to be on the local network. If this setting is false, then Kubo always broadcasts to peers on the local network. If true, broadcast control is applied to local peers.
Default: false (Always broadcast to peers on local network)
Enables or disables broadcast reduction for peers configured for peering. If false, then Kubo always broadcasts to peers configured for peering. If true, broadcast reduction is applied to peered peers.
Default: false (Always broadcast to peers configured for peering)
Sets the number of peers to broadcast to anyway, even though broadcast control logic has determined that they are not broadcast targets. Setting this to a non-zero value ensures at least this number of random peers receives a broadcast. This may be helpful in cases where peers that are not receiving broadcasts may have wanted blocks.
Default: 0 (do not send broadcasts to peers not already targeted by broadcast control)
Type: optionalInteger (non-negative, 0 means do not broadcast to any random peers)
Enables or disables sending broadcasts to any peers to which there is a pending message to send. When enabled, this sends broadcasts to many more peers, but does so in a way that does not increase the number of separate broadcast messages. There is still the increased cost of the recipients having to process and respond to the broadcasts.
Default: false (Do not send broadcasts to all peers for which there are pending messages)
Controls the maximum number of consecutive MFS operations allowed with --flush=false
before requiring a manual flush. This prevents unbounded memory growth and ensures
data consistency when using deferred flushing with ipfs files commands.
When the limit is reached, further operations will fail with an error message
instructing the user to run ipfs files flush, use --flush=true, or increase
this limit in the configuration.
Why operations fail instead of auto-flushing: Automatic flushing once the limit
is reached was considered but rejected because it can lead to data corruption issues
that are difficult to debug. When the system decides to flush without user knowledge, it can:
Create partial states that violate user expectations about atomicity
Interfere with concurrent operations in unexpected ways
Make debugging and recovery much harder when issues occur
By failing explicitly, users maintain control over when their data is persisted,
allowing them to:
Batch related operations together before flushing
Handle errors predictably at natural transaction boundaries
Understand exactly when and why their data is written to disk
If you expect automatic flushing behavior, simply use the default --flush=true
(or omit the flag entirely) instead of --flush=false.
⚠️ WARNING: Increasing this limit or disabling it (setting to 0) can lead to:
Out-of-memory errors (OOM) - Each unflushed operation consumes memory
Data loss - If the daemon crashes before flushing, all unflushed changes are lost
Degraded performance - Large unflushed caches slow down MFS operations
Default: 256
Type: optionalInteger (0 disables the limit, strongly discouraged)
Note: This is an EXPERIMENTAL feature and may change or be removed in future releases.
See #10842 for more information.
Maximum duration for which entries are valid in the name system cache. Applied
to everything under /ipns/ namespace, allows you to cap
the Time-To-Live (TTL) of
IPNS Records
AND also DNSLink TXT records (when DoH-specific DNS.MaxCacheTTL
is not set to a lower value).
When Ipns.MaxCacheTTL is set, it defines the upper bound limit of how long a
IPNS Name lookup result
will be cached and read from cache before checking for updates.
Examples:
"1m" IPNS results are cached 1m or less (good compromise for system where
faster updates are desired).
"0s" IPNS caching is effectively turned off (useful for testing, bad for production use)
Note: setting this to 0 will turn off TTL-based caching entirely.
This is discouraged in production environments. It will make IPNS websites
artificially slow because IPNS resolution results will expire as soon as
they are retrieved, forcing expensive IPNS lookup to happen on every
request. If you want near-real-time IPNS, set it to a low, but still
sensible value, such as 1m.
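For example, to cap IPNS and DNSLink caching at one minute, as suggested above:

```json
{
  "Ipns": {
    "MaxCacheTTL": "1m"
  }
}
```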
Default: No upper bound, TTL from IPNS Record (see ipns name publish --help) is always respected.
DEPRECATED: Only applies to legacy migrations (repo versions <16). Modern repos (v16+) use embedded migrations.
This section is optional and will not appear in new configurations.
Mountpoint for Mutable File System (MFS) behind the ipfs files API.
Caution
Write support is highly experimental and not recommended for mission-critical deployments.
Avoid storing lazy-loaded datasets in MFS. Exposing a partially local, lazy-loaded DAG risks operating system search indexers crawling it, which may trigger unintended network prefetching of non-local DAG components.
When true, writable mounts (/ipns and /mfs) store the current time as mtime in UnixFS metadata when creating a file or opening it for writing. Setting mtime explicitly via touch works on both files and directories. This changes the resulting CID even when the file content is identical, because mtime is stored in the root block of the UnixFS DAG.
Most data on IPFS does not include mtime. When mtime is present in the UnixFS metadata, it is always shown in stat responses on all mounts, regardless of this flag. When absent, mtime is reported as zero (epoch).
When true, writable mounts (/ipns and /mfs) accept chmod requests on both files and directories and persist POSIX permission bits in UnixFS metadata. This changes the resulting CID because mode is stored in the root block of the UnixFS DAG.
Most data on IPFS does not include mode. When mode is present in the UnixFS metadata, it is always shown in stat responses on all mounts, regardless of this flag. When absent, a default mode is used (files: 0644 on writable mounts, 0444 on /ipfs; directories: 0755 on writable mounts, 0555 on /ipfs).
When this policy is enabled, it follows changes to MFS
and updates the pin for MFS root on the configured remote service.
A pin request to the remote service is sent only when the MFS root CID has changed
and enough time has passed since the previous request (determined by RepinInterval).
One can observe MFS pinning details by enabling debug via ipfs log level remotepinning/mfs debug and switching back to error when done.
Defines how often (at most) the pin request should be sent to the remote service.
If left empty, the default interval will be used. Values lower than 1m will be ignored.
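A sketch of what such a policy could look like in the config (the service name is a placeholder, and the exact nesting of Policies.MFS with Enable, PinName, and RepinInterval is an assumption to verify against your Kubo version):

```json
{
  "Pinning": {
    "RemoteServices": {
      "my-pinning-service": {
        "Policies": {
          "MFS": {
            "Enable": true,
            "PinName": "",
            "RepinInterval": "5m"
          }
        }
      }
    }
  }
}
```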
Configures how your node advertises content to make it discoverable by other
peers.
What is providing? When your node stores content, it publishes provider
records to the routing system announcing "I have this content". These records
map CIDs to your peer ID, enabling content discovery across the network.
While designed to support multiple routing systems in the future, the current
default configuration only supports providing to the Amino DHT.
Controls whether Kubo provide and reprovide systems are enabled.
Caution
Disabling this will prevent other nodes from discovering your content.
Your node will stop announcing data to the routing system, making it
inaccessible unless peers connect to you directly.
Controls which CIDs are announced to the content routing system. Valid strategies are:
"all" - announce all CIDs of stored blocks
"pinned" - only announce recursively pinned CIDs (ipfs pin add -r, both roots and child blocks)
Order: root blocks of direct and recursive pins are announced first, then the child blocks of recursive pins
"roots" - only announce the top-level root CID of explicitly pinned DAGs (ipfs pin add)
⚠️ BE CAREFUL: a node with roots strategy will not announce child blocks.
It makes sense only for use cases where the entire DAG is fetched in full,
and a graceful resume does not have to be guaranteed: the lack of child
announcements means an interrupted retrieval won't be able to find
providers for the missing block in the middle of a file, unless the peer
happens to already be connected to a provider and asks for child CID over
bitswap. Does not traverse the DAG to discover sub-entity roots
(files within directories, HAMT shards, etc.). If you want that, use
"pinned+entities" instead.
"mfs" - announce only the local CIDs that are part of the MFS (ipfs files)
Note: MFS is lazy-loaded. Only the MFS blocks present in local datastore are announced.
"pinned+mfs" - a combination of the pinned and mfs strategies.
Order: first pinned and then the locally available part of mfs.
Append +unique or +entities to pinned, mfs, or pinned+mfs to optimize the reprovide cycle. Neither works with "all" or "roots".
+unique: uses a bloom filter to deduplicate CIDs across recursive
pins that share sub-DAGs. Without it, a node with 1000 pins sharing 99%
of their content re-traverses the shared blocks for every pin. With +unique,
shared subtrees are skipped, cutting traversal from
O(pins * total_blocks) to O(unique_blocks). This also cuts the number of
CIDs sent to the routing system when similar datasets are pinned multiple
times.
+entities: announces only entity roots (file roots, directory roots,
HAMT shard nodes) instead of every block. Internal file chunks are not
announced. This significantly reduces the number of provider records for
repositories with large files while keeping all files and directories
discoverable. Implies +unique. Non-UnixFS content (e.g. dag-cbor) is
still fully announced.
⚠️ BE CAREFUL: since internal file chunks are not announced, resuming
an interrupted download from a specific byte offset or requesting a byte
range may not work unless the client is smart enough to find providers
for the entity root CID instead of the chunk CID. This is a work in
progress; see kubo#10251.
Suggested configurations:
"pinned+mfs+unique": safe default for nodes with GC enabled, or desktop
users who don't want to announce all blocks cached in the local repository.
Handles pins of similar DAGs efficiently (e.g. versioned datasets where pins
are added and removed over time).
"pinned+mfs+entities": same as above, but also skips internal file chunks
for even fewer provider records. Use when the +entities trade-off (no
chunk-level discoverability) is acceptable.
Reproviding larger pinsets using the mfs, pinned, pinned+mfs or roots strategies requires additional memory, with an estimated ~1 GiB of RAM per 20 million CIDs. This is because the pinner snapshots the pin index into memory at the start of each reprovide cycle so that pin/unpin are not blocked while the DHT reprovider works over the snapshot.
With +unique or +entities, a bloom filter replaces the in-memory CID set, significantly reducing memory usage.
Strategy changes automatically clear the provide queue. When you change Provide.Strategy and restart Kubo, the provide queue is automatically cleared to ensure only content matching your new strategy is announced. You can also manually clear the queue using ipfs provide clear.
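For example, to switch to the suggested safe default strategy from the list above:

```json
{
  "Provide": {
    "Strategy": "pinned+mfs+unique"
  }
}
```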
Configuration for providing data to Amino DHT peers.
Provider record lifecycle: On the Amino DHT, provider records expire after
amino.DefaultProvideValidity.
Your node must re-announce (reprovide) content periodically to keep it
discoverable. The Provide.DHT.Interval setting
controls this timing, with the default ensuring records refresh well before
expiration or negative churn effects kick in.
Two provider systems:
Sweep provider: Divides the DHT keyspace into regions and systematically
sweeps through them over the reprovide interval. This batches CIDs allocated
to the same DHT servers, dramatically reducing the number of DHT lookups and
PUTs needed. Spreads work evenly over time with predictable resource usage.
Legacy provider: Processes each CID individually with separate DHT
lookups. Works well for small content collections but struggles to complete
reprovide cycles when managing thousands of CIDs.
Quick command-line monitoring: Use ipfs provide stat to view the current
state of the provider system. For real-time monitoring, run
watch ipfs provide stat --all --compact to see detailed statistics refreshed
continuously in a 2-column layout.
Long-term monitoring: For in-depth or long-term monitoring, metrics are
exposed at the Prometheus endpoint: {Addresses.API}/debug/metrics/prometheus
(default: http://127.0.0.1:5001/debug/metrics/prometheus). Different metrics
are available depending on whether you use legacy mode (SweepEnabled=false) or
sweep mode (SweepEnabled=true). See Provide metrics documentation
for details.
Debug logging: For troubleshooting, enable detailed logging by setting:
Sets how often to re-announce content to the DHT. Provider records on Amino DHT
expire after amino.DefaultProvideValidity.
Why this matters: The interval must be shorter than the expiration window to
ensure provider records refresh before they expire. The default value is
approximately half of amino.DefaultProvideValidity,
which accounts for network churn and ensures records stay alive without
overwhelming the network with unnecessary announcements.
With sweep mode enabled
(Provide.DHT.SweepEnabled): The system spreads
reprovide operations smoothly across this entire interval. Each keyspace region
is reprovided at scheduled times throughout the period, ensuring each region's
announcements complete before records expire.
With legacy mode: The system attempts to reprovide all CIDs as quickly as
possible at the start of each interval. If reproviding takes longer than this
interval (common with large datasets), the next cycle is skipped and provider
records may expire.
If unset, it uses the implicit safe default.
If set to the value "0", it will disable content reproviding to the DHT.
Caution
Disabling this will prevent other nodes from discovering your content via the DHT.
Your node will stop announcing data to the DHT, making it
inaccessible unless peers connect to you directly. Since provider
records expire after amino.DefaultProvideValidity, your content will become undiscoverable
after this period.
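For example, a sketch that sets the reprovide interval explicitly (the 22h figure matches the interval mentioned elsewhere in this document; use "0" only if you really mean to disable DHT reproviding despite the caution above):

```json
{
  "Provide": {
    "DHT": {
      "Interval": "22h"
    }
  }
}
```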
If the accelerated DHT client is enabled, each
provide operation opens ~20 connections in parallel. With the standard DHT
client (accelerated disabled), each provide opens between 20 and 60
connections, with at most 10 active at once. Provides complete more quickly
when using the accelerated client. Be mindful of how many simultaneous
connections this setting can generate.
Caution
For nodes without strict connection limits that need to provide large volumes
of content, we recommend first trying Provide.DHT.SweepEnabled=true for efficient
announcements. If announcements are still not fast enough, adjust Provide.DHT.MaxWorkers.
As a last resort, consider enabling Routing.AcceleratedDHTClient=true but be aware that it is very resource hungry.
At the same time, mind that raising this value too high may lead to increased load.
Proceed with caution, ensure proper hardware and networking are in place.
Tip
When SweepEnabled is true: Users providing millions of CIDs or more
should increase the worker count accordingly. Underprovisioning can lead to
slow provides (burst workers) and inability to keep up with content
reproviding (periodic workers). For nodes with sufficient resources (CPU,
bandwidth, number of connections), dedicating 1024 for periodic
workers and 512 for burst
workers, and 2048max
workers should be adequate even for the largest
users. The system will only use workers as needed - unused resources won't be
consumed. Ensure you adjust the swarm connection manager and
resource manager configuration accordingly.
See Capacity Planning for more details.
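Expressed as config, the Tip's numbers for heavily provisioned nodes would look like the sketch below (these are the suggested large-node values from the Tip, not defaults):

```json
{
  "Provide": {
    "DHT": {
      "MaxWorkers": 2048,
      "DedicatedPeriodicWorkers": 1024,
      "DedicatedBurstWorkers": 512
    }
  }
}
```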
Default: 16
Type: optionalInteger (non-negative; 0 means unlimited number of workers)
Enables the sweep provider for efficient content announcements. When disabled,
the legacy boxo/provider is
used instead.
The legacy provider problem: The legacy system processes CIDs one at a
time, requiring a separate DHT lookup (10-20 seconds each) to find the 20
closest peers for each CID. This sequential approach typically handles fewer
than 10,000 CIDs over 22h (Provide.DHT.Interval). If
your node has more CIDs than can be reprovided within
Provide.DHT.Interval, provider records start expiring
after
amino.DefaultProvideValidity,
making content undiscoverable.
How sweep mode works: The sweep provider divides the DHT keyspace into
regions based on keyspace prefixes. It estimates the Amino DHT size, calculates
how many regions are needed (sized to contain at least 20 peers each), then
schedules region processing evenly across
Provide.DHT.Interval. When processing a region, it
discovers the peers in that region once, then sends all provider records for
CIDs allocated to those peers in a batch. This batching is the key efficiency:
instead of N lookups for N CIDs, the number of lookups is bounded by a constant
fraction of the Amino DHT size (e.g., ~3,000 lookups when there are ~10,000 DHT
servers), regardless of how many CIDs you're providing.
Efficiency gains: For a node providing 100,000 CIDs, sweep mode reduces
lookups by 97% compared to legacy. The work spreads smoothly over time rather
than completing in bursts, preventing resource spikes and duplicate
announcements. Long-running nodes reprovide systematically just before records
would expire, keeping content continuously discoverable without wasting
bandwidth.
Implementation details: The sweep provider tracks CIDs in a persistent
keystore. New content added via StartProviding() enters the provide queue and
gets batched by keyspace region. The keystore is periodically refreshed at each
Provide.DHT.Interval with CIDs matching
Provide.Strategy to ensure only current content remains
scheduled. This handles cases where content is unpinned or removed.
Persistent reprovide cycle state: When Provide Sweep is enabled, the
reprovide cycle state is persisted to the datastore by default. On restart, Kubo
automatically resumes from where it left off. If the node was offline for an
extended period, all CIDs that haven't been reprovided within the configured
Provide.DHT.Interval are immediately queued for
reproviding. Additionally, the provide queue is persisted on shutdown and
restored on startup, ensuring no pending provide operations are lost. If you
don't want to keep the persisted provider state from a previous run, you can
disable this behavior by setting Provide.DHT.ResumeEnabled
to false.
The diagram compares performance patterns:
Legacy mode: Sequential processing, one lookup per CID, struggles with large datasets
Sweep mode: Smooth distribution over time, batched lookups by keyspace region, predictable resource usage
You can compare the effectiveness of sweep mode vs legacy mode by monitoring the appropriate metrics (see Monitoring Provide Operations above).
Note
This is the default provider system as of Kubo v0.39. To use the legacy provider instead, set Provide.DHT.SweepEnabled=false.
Note
When DHT routing is unavailable (e.g., Routing.Type=custom with only HTTP routers), the provider automatically falls back to the legacy provider regardless of this setting.
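For example, opting back into the legacy provider could look like this (a minimal sketch; the default keeps sweep mode enabled):

```json
{
  "Provide": {
    "DHT": {
      "SweepEnabled": false
    }
  }
}
```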
Controls whether the provider resumes from its previous state on restart. Only
applies when Provide.DHT.SweepEnabled is true.
When enabled (the default), the provider persists its reprovide cycle state and
provide queue to the datastore, and restores them on restart. This ensures:
The reprovide cycle continues from where it left off instead of starting over
Any CIDs in the provide queue during shutdown are restored and provided after
restart
CIDs that missed their reprovide window while the node was offline are queued
for immediate reproviding
When disabled, the provider starts fresh on each restart, discarding any
previous reprovide cycle state and provide queue. On a fresh start, all CIDs
matching the Provide.Strategy will be provided ASAP (as
burst provides), and then keyspace regions are reprovided according to the
regular schedule starting from the beginning of the reprovide cycle.
Note
Disabling this option means the provider will provide all content matching
your strategy on every restart (which can be resource-intensive for large
datasets), then start from the beginning of the reprovide cycle. For nodes
with large datasets or frequent restarts, keeping this enabled (the default)
is recommended for better resource efficiency and more consistent reproviding
behavior.
Number of workers dedicated to periodic keyspace region reprovides. Only
applies when Provide.DHT.SweepEnabled is true.
Among the Provide.DHT.MaxWorkers, this
number of workers will be dedicated to the periodic region reprovide only. The sum of
DedicatedPeriodicWorkers and DedicatedBurstWorkers should not exceed MaxWorkers.
Any remaining workers (MaxWorkers - DedicatedPeriodicWorkers - DedicatedBurstWorkers)
form a shared pool that can be used for either type of work as needed.
Note
If the provider system isn't able to keep up with reproviding all your
content within the Provide.DHT.Interval, consider
increasing this value.
Default: 2
Type: optionalInteger (0 means there are no dedicated workers, but the
operation can be performed by free non-dedicated workers)
Number of workers dedicated to burst provides. Only applies when Provide.DHT.SweepEnabled is true.
Burst provides are triggered by:
Manual provide commands (ipfs routing provide)
New content matching your Provide.Strategy (blocks from ipfs add, bitswap, or trustless gateway requests)
Catch-up reprovides after being disconnected/offline for a while
Having dedicated burst workers ensures that bulk operations (like adding many CIDs
or reconnecting to the network) don't delay regular periodic reprovides, and vice versa.
Among the Provide.DHT.MaxWorkers, this
number of workers will be dedicated to burst provides only. In addition to
these, if there are available workers in the pool, they can also be used for
burst provides.
Note
If CIDs aren't provided quickly enough to your taste, and you can afford more
CPU and bandwidth, consider increasing this value.
Default: 1
Type: optionalInteger (0 means there are no dedicated workers, but the
operation can be performed by free non-dedicated workers)
Maximum number of connections that a single worker can use to send provider
records over the network.
When reproviding CIDs corresponding to a keyspace region, the reprovider must
send a provider record to the 20 closest peers to the CID (in XOR distance) for
each CID belonging to this keyspace region.
The reprovider opens a connection to a peer from that region, sends it all its
allocated provider records. Once done, it opens a connection to the next peer
from that keyspace region until all provider records are assigned.
This option defines how many such connections can be open concurrently by a
single worker.
Note
Increasing this value can speed up the provide operation, at the cost of
opening more simultaneous connections to DHT servers. A keyspace region typically
has fewer than 60 peers, so you may hit a performance ceiling beyond which
increasing this value has no effect.
During garbage collection, all keys stored in the Keystore are removed, and
the keys are streamed from a channel to fill the Keystore again with up-to-date
keys. Since a high number of CIDs to reprovide can easily fill up the memory,
keys are read and written in batches to optimize for memory usage.
This option defines how many multihashes should be contained within a batch. A
multihash is usually represented by 34 bytes.
The SweepingProvider has 3 states: ONLINE, DISCONNECTED and OFFLINE. It
starts OFFLINE, and as the node bootstraps, it changes its state to ONLINE.
When the provider loses connection to all DHT peers, it switches to the
DISCONNECTED state. In this state, new provides will be added to the provide
queue, and provided as soon as the node comes back online.
After a node has been DISCONNECTED for OfflineDelay, it goes to OFFLINE
state. When OFFLINE, the provider drops the provide queue, and returns errors
to new provide requests. However, when OFFLINE the provider still adds the
keys to its state, so keys will eventually be provided in the
Provide.DHT.Interval after the provider comes back
ONLINE.
Target false positive rate for the bloom filter used by the +unique and
+entities strategy modifiers and
the matching --fast-provide-dag walk. Expressed as 1/N (one false positive
per N lookups), so a higher value means a lower FP rate but more memory per
CID. Has no effect when Provide.Strategy does not include +unique or
+entities.
The bloom filter sizes itself from the previous reprovide cycle's CID count
and the configured FP rate. The auto-scaling described in
Memory during reprovide is unaffected; this
setting only changes the bits-per-CID ratio of each bloom in the chain.
Memory tradeoff (approximate, before ipfs/bbloom's power-of-two rounding):

| Provide.BloomFPRate | Approx. FP rate | Bytes per CID |
|---------------------|-----------------|---------------|
| 1000000             | 1 in 1M         | ~3            |
| 4750000 (default)   | ~1 in 4.75M     | ~4            |
| 10000000            | 1 in 10M        | ~5            |
| 100000000           | 1 in 100M       | ~6            |
A false positive causes the walker to skip a CID it has already been told
about; the skipped CID is provided in the next reprovide cycle (see
Provide.DHT.Interval). At the default rate, fewer
than ~21 CIDs per 100M are skipped per cycle.
The minimum accepted value is 1000000 (1 in 1M). Below that the bloom
filter becomes lossy enough to drop a meaningful fraction of CIDs from each
reprovide cycle.
Default: 4750000 (~1 false positive per 4.75M lookups, ~4 bytes per CID)
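As an illustration, targeting roughly 1 false positive in 10M lookups (~5 bytes per CID, per the table above) would look like this (minimal sketch):

```json
{
  "Provide": {
    "BloomFPRate": 10000000
  }
}
```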
Pubsub configures Kubo's opt-in, opinionated libp2p pubsub instance.
To enable, set Pubsub.Enabled to true.
EXPERIMENTAL: This is an opt-in feature. Its primary use case is
IPNS over PubSub, which
enables real-time IPNS record propagation. See Ipns.UsePubsub
for details.
The ipfs pubsub commands can also be used for basic publish/subscribe
operations, but only if Kubo's built-in message validation (described below) is
acceptable for your use case.
Kubo's pubsub is optimized for IPNS. It uses opinionated message validation
that may not fit all applications. If you need custom Message ID computation,
different deduplication logic, or validation rules beyond what Kubo provides,
consider building a dedicated pubsub node using
go-libp2p-pubsub directly.
Kubo uses two layers of message deduplication to handle duplicate messages that
may arrive via different network paths:
Layer 1: In-memory TimeCache (Message ID)
When a message arrives, Kubo computes its Message ID (hash of the message
content) and checks an in-memory cache. If the ID was seen recently, the
message is dropped. This cache is controlled by the Pubsub.SeenMessagesTTL and
Pubsub.SeenMessagesStrategy settings described below.
This cache is fast but limited: it only works within the TTL window and is
cleared on node restart.
Layer 2: Persistent Seqno Validator (per-peer)
For stronger deduplication, Kubo tracks the maximum sequence number seen from
each peer and persists it to the datastore. Messages with sequence numbers
lower than the recorded maximum are rejected. This prevents replay attacks and
handles message cycles in large networks where messages may take longer than
the TimeCache TTL to propagate.
This layer survives node restarts. The state can be inspected or cleared using
ipfs pubsub reset (for testing/recovery only).
Disables message signing and signature verification.
FOR TESTING ONLY - DO NOT USE IN PRODUCTION
It is not safe to disable signing even if you don't care who sent the
message because spoofed messages can be used to silence real messages by
intentionally re-using the real message's message ID.
Controls the time window for the in-memory Message ID cache (Layer 1
deduplication). Messages with the same ID seen within this window are dropped.
A smaller value reduces memory usage but may cause more duplicates in networks
with slow nodes. A larger value uses more memory but provides better duplicate
detection within the time window.
Determines how the TTL countdown for the Message ID cache works.
last-seen - Sliding window: TTL resets each time the message is seen again.
Keeps frequently-seen messages in cache longer, preventing continued propagation.
first-seen - Fixed window: TTL counts from first sighting only. Messages are
purged after the TTL regardless of how many times they're seen.
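A hedged sketch combining the pubsub options discussed above; Pubsub.Enabled is the documented opt-in flag, while the SeenMessagesTTL and SeenMessagesStrategy field names and the 2m value are assumptions shown only for illustration:

```json
{
  "Pubsub": {
    "Enabled": true,
    "SeenMessagesTTL": "2m",
    "SeenMessagesStrategy": "last-seen"
  }
}
```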
Configures the peering subsystem. The peering subsystem configures Kubo to
connect to, remain connected to, and reconnect to a set of nodes. Nodes should
use this subsystem to create "sticky" links between frequently useful peers to
improve reliability.
Use-cases:
An IPFS gateway connected to an IPFS cluster should peer to ensure that the
gateway can always fetch content from the cluster.
A dapp may peer embedded Kubo nodes with a set of pinning services or
textile cafes/hubs.
A set of friends may peer to ensure that they can always fetch each other's
content.
When a node is added to the set of peered nodes, Kubo will:
Protect connections to this node from the connection manager. That is,
Kubo will never automatically close the connection to this node and
connections to this node will not count towards the connection limit.
Connect to this node on startup.
Repeatedly try to reconnect to this node if the last connection dies or the
node goes offline. This repeated re-connect logic is governed by a randomized
exponential backoff delay ranging from ~5 seconds to ~10 minutes to avoid
repeatedly reconnecting to a node that's offline.
Peering can be asymmetric or symmetric:
When symmetric, the connection will be protected by both nodes and will likely
be very stable.
When asymmetric, only one node (the node that configured peering) will protect
the connection and attempt to re-connect to the peered node on disconnect. If
the peered node is under heavy load and/or has a low connection limit, the
connection may flap repeatedly. Be careful when asymmetrically peering to not
overload peers.
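A minimal sketch of a Peering entry (the peer ID and address below are placeholders, not real peers):

```json
{
  "Peering": {
    "Peers": [
      {
        "ID": "12D3KooWExamplePeerID",
        "Addrs": ["/ip4/203.0.113.10/tcp/4001"]
      }
    ]
  }
}
```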
Controls how your node discovers content and peers on the network.
Production options:
auto (default): Uses both the public IPFS DHT (Amino) and HTTP routers
from Routing.DelegatedRouters for faster lookups.
Your node starts as a DHT client and automatically switches to server mode
when reachable from the public internet.
autoclient: Same as auto, but never runs a DHT server.
Use this if your node is behind a firewall or NAT, or if you run a
content denylist
and do not want to store or serve routing records (provider records,
IPNS records) for denied keys on behalf of other peers. See
Scope of denylists
for why this matters.
dht: Uses only the Amino DHT (no HTTP routers). Automatically switches
between client and server mode based on reachability.
dhtclient: DHT-only, always running as a client. Lower resource usage.
dhtserver: DHT-only, always running as a server.
Only use this if your node is reachable from the public internet.
none: Disables all routing. You must manually connect to peers.
About DHT client vs server mode:
When the DHT is enabled, your node can operate as either a client or server.
In server mode, it queries other peers and responds to their queries - this helps
the network but uses more resources. In client mode, it only queries others without
responding, which is less resource-intensive. With auto or dht, your node starts
as a client and switches to server when it detects public reachability.
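For example, a node behind a restrictive NAT that should never run a DHT server could set (minimal sketch):

```json
{
  "Routing": {
    "Type": "autoclient"
  }
}
```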
Caution
Routing.Type Experimental options:
These modes are for research and testing only, not production use.
They may change without notice between releases.
delegated: Uses only HTTP routers from Routing.DelegatedRouters
and IPNS publishers from Ipns.DelegatedPublishers,
without initializing the DHT. Useful when peer-to-peer connectivity is unavailable.
Note: cannot provide content to the network (no DHT means no provider records).
An array of URL hostnames for delegated routers to be queried in addition to the Amino DHT when Routing.Type is set to auto (default) or autoclient.
These endpoints must support the Delegated Routing V1 HTTP API.
The special value "auto" uses delegated routers from AutoConf when enabled.
You can combine "auto" with custom URLs (e.g., ["auto", "https://custom.example.com"]) to query both the default delegated routers and your own endpoints. The first "auto" entry gets substituted with autoconf values, and other URLs are preserved.
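In config form, combining the AutoConf defaults with a custom endpoint as described above could look like this (the custom URL is a placeholder):

```json
{
  "Routing": {
    "DelegatedRouters": ["auto", "https://custom.example.com"]
  }
}
```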
Tip
Delegated routing allows IPFS implementations to offload tasks like content routing, peer routing, and naming to a separate process or server while also benefiting from HTTP caching.
One can run their own delegated router either by implementing the Delegated Routing V1 HTTP API themselves, or by using Someguy, a turn-key implementation that proxies requests to other routing systems. A public utility instance of Someguy is hosted at https://delegated-ipfs.dev.
This alternative Amino DHT client with a Full-Routing-Table strategy will
do a complete scan of the DHT every hour and record all nodes found.
Then when a lookup is tried instead of having to go through multiple Kad hops it
is able to find the 20 final nodes by looking up the in-memory recorded network table.
This means sustained higher memory to store the routing table
and extra CPU and network bandwidth for each network scan.
However the latency of individual read/write operations should be ~10x faster
and provide throughput up to 6 million times faster on larger datasets!
This is not compatible with Routing.Type set to custom. If you are using composable routers,
you can configure this individually on each router.
When it is enabled:
Client DHT operations (reads and writes) should complete much faster
The provider will now use a keyspace sweeping mode, allowing it to keep alive
CID sets that are multiple orders of magnitude larger.
Note: For improved provide/reprovide operations specifically, consider using
Provide.DHT.SweepEnabled instead, which offers similar
benefits without the hourly traffic spikes.
The standard Bucket-Routing-Table DHT will still run for the DHT server (if
the DHT server is enabled). This means the classical routing table will
still be used to answer other nodes.
Maintaining this is critical to avoid harming the network.
The ipfs stats dht command will default to showing information about the accelerated DHT client
Caution
Routing.AcceleratedDHTClient Caveats:
Running the accelerated client likely will result in more resource consumption (connections, RAM, CPU, bandwidth)
Users that are limited in the number of parallel connections their machines/networks can perform will be most affected
The resource usage is not smooth as the client crawls the network in rounds and reproviding is similarly done in rounds
Users who previously had a lot of content but were unable to advertise it on the network will see an increase in
egress bandwidth as their nodes start to advertise all of their CIDs into the network. If you have lots of data
entering your node that you don't want to advertise, consider using Provide.* configuration
to control which CIDs are reprovided.
Currently, the DHT is not usable for queries for the first 5-10 minutes of operation as the routing table is being
prepared. This means operations like searching the DHT for particular peers or content will not work initially.
You can see if the DHT has been initially populated by running ipfs stats dht
Currently, the accelerated DHT client is not compatible with LAN-based DHTs and will not perform operations against
them.
EXPERIMENTAL: Routing.LoopbackAddressesOnLanDHT configuration may change in future release
Whether loopback addresses (e.g. 127.0.0.1) should not be ignored on the local LAN DHT.
Most users do not need this setting. It can be useful during testing, when multiple Kubo nodes run on the same machine but some of them do not have Discovery.MDNS.Enabled.
An array of string-encoded PeerIDs. Any provider record associated to one of these peer IDs is ignored.
Apart from ignoring specific providers for reasons such as misbehaviour, this
setting is also useful for indicating a preference when the same provider
is found under different peer IDs (e.g. one for HTTP and one for Bitswap retrieval).
Tip
This denylist operates on PeerIDs.
To deny specific HTTP Provider URL, use HTTPRetrieval.Denylist instead.
⚠️ EXPERIMENTAL: For research and testing only. May change without notice.
Parameters needed to create the specified router. Supported params per router type:
HTTP:
Endpoint (mandatory): URL that will be used to connect to a specified router.
MaxProvideBatchSize: The maximum number of CIDs sent per batch. Servers might not accept more than 100 elements per batch. Defaults to 100 elements.
MaxProvideConcurrency: The number of threads used when providing content. Defaults to GOMAXPROCS.
DHT:
"Mode": Mode used by the Amino DHT. Possible values: "server", "client", "auto"
"AcceleratedDHTClient": Set to true if you want to use the acceleratedDHT.
"PublicIPNetwork": Set to true to create a WAN DHT. Set to false to create a LAN DHT.
Parallel:
Routers: A list of routers that will be executed in parallel:
Name:string: Name of the router. It should be one of the previously added to Routers list.
Timeout:duration: Local timeout. It accepts strings compatible with Go time.ParseDuration(string) (10s, 1m, 2h). Time will start counting when this specific router is called, and it will stop when the router returns, or we reach the specified timeout.
ExecuteAfter:duration: Providing this param will delay the execution of that router at the specified time. It accepts strings compatible with Go time.ParseDuration(string) (10s, 1m, 2h).
IgnoreErrors:bool: It will specify if that router should be ignored if an error occurred.
Timeout:duration: Global timeout. It accepts strings compatible with Go time.ParseDuration(string) (10s, 1m, 2h).
Sequential:
Routers: A list of routers that will be executed in order:
Name:string: Name of the router. It should be one of the previously added to Routers list.
Timeout:duration: Local timeout. It accepts strings compatible with Go time.ParseDuration(string). Time will start counting when this specific router is called, and it will stop when the router returns, or we reach the specified timeout.
IgnoreErrors:bool: It will specify if that router should be ignored if an error occurred.
Timeout:duration: Global timeout. It accepts strings compatible with Go time.ParseDuration(string).
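A rough, hedged sketch of a custom HTTP router using the parameters above; the exact nesting (router names as keys, a Parameters object) and any required method mappings are assumptions to verify against the full custom-routing examples, and the endpoint URL is a placeholder:

```json
{
  "Routing": {
    "Type": "custom",
    "Routers": {
      "MyDelegatedRouter": {
        "Type": "http",
        "Parameters": {
          "Endpoint": "https://delegated-router.example.com",
          "MaxProvideBatchSize": 100
        }
      }
    }
  }
}
```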
An array of multiaddr netmasks. The libp2p connection gater refuses any
connection (inbound or outbound) whose remote address matches an entry,
before any handshake.
By default Kubo advertises every interface address, so without this list a
node may dial private or non-routable addresses learned from other peers.
Some hosting providers treat such dials as netscan abuse.
This is the dial-side filter: it controls which peers this node connects
to or accepts connections from. It does not affect what this node advertises
about itself. For the publish-side filter see
Addresses.NoAnnounce. The
server profile typically populates both fields together
so that a range is neither advertised nor dialed.
Tip
The server profile populates this field with a set of
private, local-only, and non-globally-reachable prefixes (RFC 1918 private,
RFC 6598 CGNAT, ULA, link-local, and others). See the
server profile section for the full list and for
optional entries operators may add manually.
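A minimal sketch of typical dial-side filter entries (two RFC 1918 ranges shown for illustration; the server profile applies a fuller list):

```json
{
  "Swarm": {
    "AddrFilters": [
      "/ip4/10.0.0.0/ipcidr/8",
      "/ip4/192.168.0.0/ipcidr/16"
    ]
  }
}
```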
Caution
If an Addresses.Swarm listener (for example a manually configured /ip4/127.0.0.1/tcp/.../ws fronted by a local nginx or Caddy reverse proxy) is covered by an entry in this list, Kubo rejects every incoming connection to it, so the proxy cannot reach Kubo. Kubo logs an ERROR at startup naming the offending rule. Remove the rule from Swarm.AddrFilters to allow the listener; keep it in Addresses.NoAnnounce if you still want to suppress its announcement.
A boolean value that when set to true, will cause ipfs to not keep track of
bandwidth metrics. Disabling bandwidth metrics can lead to a slight performance
improvement, as well as a reduction in memory usage.
Disable automatic NAT port forwarding (turn off UPnP).
When not disabled (default), Kubo asks NAT devices (e.g., routers) to open
up an external port and forward it to the port Kubo is running on. When this
works (i.e., when your router supports NAT port forwarding), it makes the local
Kubo node accessible from the public internet.
Enable hole punching for NAT traversal
when port forwarding is not possible.
When enabled, Kubo will coordinate with the counterparty using
a relayed connection,
to upgrade to a direct connection
through a NAT/firewall whenever possible.
This feature requires Swarm.RelayClient.Enabled to be set to true.
Enables "automatic relay user" mode for this node.
Your node will automatically use public relays from the network if it detects
that it cannot be reached from the public internet (e.g., it's behind a
firewall) and get a /p2p-circuit address from a public relay.
Enables providing /p2p-circuit v2 relay service to other peers on the network.
NOTE: This is the service/server part of the relay system.
Disabling this will prevent this node from running as a relay server.
Use Swarm.RelayClient.Enabled for turning your node into a relay user.
The connection manager determines which and how many connections to keep.
Kubo currently supports two connection managers:
none: never close idle connections.
basic: the default connection manager.
By default, this section is empty and the implicit defaults defined below
are used.
The basic connection manager uses a "high water", a "low water", and internal
scoring to periodically close connections to free up resources. When a node
using the basic connection manager reaches HighWater idle connections, it
will close the least useful ones until it reaches LowWater idle
connections. The process of closing connections happens every SilencePeriod.
The connection manager considers a connection idle if:
It has not been explicitly protected by some subsystem. For example, Bitswap
will protect connections to peers from which it is actively downloading data,
the DHT will protect some peers for routing, and the peering subsystem will
protect all "peered" nodes.
HighWater is the number of connections that, when exceeded, will trigger a
connection GC operation. Note: protected/recently formed connections don't count
towards this limit.
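A minimal sketch of an explicit basic connection manager configuration (the water marks are illustrative values, not the implicit defaults):

```json
{
  "Swarm": {
    "ConnMgr": {
      "Type": "basic",
      "LowWater": 100,
      "HighWater": 200
    }
  }
}
```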
This is the max amount of memory to allow go-libp2p to use.
libp2p's resource manager will prevent additional resource creation while this limit is reached.
This value is also used to scale the limit on various resources at various scopes
when the default limits (discussed in libp2p resource management) are used.
For example, increasing this value will increase the default limit for incoming connections.
It is possible to inspect the runtime limits via ipfs swarm resources --help.
Important
Swarm.ResourceMgr.MaxMemory is the memory limit for go-libp2p networking stack alone, and not for entire Kubo or Bitswap.
To set memory limit for the entire Kubo process, use GOMEMLIMIT environment variable which all Go programs recognize, and then set Swarm.ResourceMgr.MaxMemory to less than your custom GOMEMLIMIT.
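For example, capping the libp2p networking stack at 4 GB while the whole Kubo process gets a larger limit from its environment (sizes are illustrative):

```json
{
  "Swarm": {
    "ResourceMgr": {
      "MaxMemory": "4GB"
    }
  }
}
```

paired with something like GOMEMLIMIT=6GiB exported before starting the daemon.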
This is the maximum number of file descriptors to allow libp2p to use.
libp2p's resource manager will prevent additional file descriptor consumption while this limit is reached.
A list of multiaddrs that can bypass normal system limits (but are still limited by the allowlist scope).
Convenience config around go-libp2p-resource-manager#Allowlist.Add.
Configuration section for libp2p network transports. Transports enabled in
this section will be used for dialing. However, to receive connections on these
transports, multiaddrs for these transports must be added to Addresses.Swarm.
Supported transports are: QUIC, TCP, WS, Relay, WebTransport and WebRTCDirect.
Caution
SECURITY CONSIDERATIONS FOR NETWORK TRANSPORTS
Enabling network transports allows your node to accept connections from the internet.
Ensure your firewall rules and Addresses.Swarm configuration
align with your security requirements.
See Security section for network exposure considerations.
TCP is a simple
and widely deployed transport, it should be compatible with most implementations
and network configurations. TCP doesn't directly support encryption and/or
multiplexing, so libp2p will layer a security & multiplexing transport over it.
QUIC is the most widely used transport by
Kubo nodes. It is a UDP-based transport with built-in encryption and
multiplexing. The primary benefits over TCP are:
It takes 1 round trip to establish a connection (our TCP transport
currently takes 4).
Libp2p Relay proxy
transport that forms connections by hopping between multiple libp2p nodes.
Allows IPFS node to connect to other peers using their /p2p-circuit
multiaddrs. This transport is primarily useful for bypassing firewalls and
NATs.
This transport is special. Any node that enables this transport can receive
inbound connections on this transport, without specifying a listen address.
This is a spiritual descendant of WebSocket but over HTTP/3.
Since this runs on top of HTTP/3 it uses QUIC under the hood.
We expect it to perform worse than QUIC because of the extra overhead;
this transport is really aimed at agents that cannot do TCP or QUIC (like browsers).
WebTransport is a new transport protocol currently under development by the IETF and the W3C, and already implemented by Chrome.
Conceptually, it’s like WebSocket run over QUIC instead of TCP. Most importantly, it allows browsers to establish (secure!) connections to WebTransport servers without the need for CA-signed certificates,
thereby enabling any js-libp2p node running in a browser to connect to any kubo node, with zero manual configuration involved.
The previous alternative is Secure WebSocket, which requires manually installing a reverse proxy and TLS certificates.
WebRTC Direct
is a transport protocol that provides another way for browsers to
connect to the rest of the libp2p network. WebRTC Direct allows for browser
nodes to connect to other nodes without special configuration, such as TLS
certificates. This can be useful for browser nodes that do not yet support
WebTransport,
which is still relatively new and has known issues.
Enabling this transport allows the Kubo node to act on /udp/4001/webrtc-direct
listeners defined in Addresses.Swarm, Addresses.Announce or
Addresses.AppendAnnounce.
Note
WebRTC Direct is browser-to-node. It cannot be used to connect a browser
node to a node that is behind a NAT or firewall (without UPnP port mapping).
The browser-to-private-node case requires using normal
WebRTC,
which is currently being worked on in
go-libp2p#2009.
Configuration section for libp2p security transports. Transports enabled in
this section will be used to secure unencrypted connections.
This does not apply to the QUIC transports, which use QUIC's built-in encryption.
Security transports are configured with the priority type.
When establishing an outbound connection, Kubo will try each security
transport in priority order (lower first), until it finds a protocol that the
receiver supports. When establishing an inbound connection, Kubo will let
the initiator choose the protocol, but will refuse to use any of the disabled
transports.
Supported transports are: TLS (priority 100) and Noise (priority 200).
No default priority will ever be less than 100. Lower values have precedence.
Noise is slated to replace
TLS as the cross-platform, default libp2p protocol due to ease of
implementation. It is currently enabled by default but with low priority as it's
not yet widely supported.
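A minimal sketch making the default priorities explicit (the lower value wins, so TLS is preferred for outbound connections):

```json
{
  "Swarm": {
    "Transports": {
      "Security": {
        "TLS": 100,
        "Noise": 200
      }
    }
  }
}
```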
Configuration section for libp2p multiplexer transports. Transports enabled in
this section will be used to multiplex duplex connections.
This does not apply to the QUIC transports, which use QUIC's built-in multiplexing.
Multiplexer transports are configured the same way security transports are, with
the priority type. Like with security transports, the initiator gets their
first choice.
Options for configuring DNS resolution for DNSLink and /dns* multiaddrs (including peer addresses discovered via DHT or delegated routing).
This allows for overriding the default DNS resolver provided by the operating system,
and using different resolvers per domain or TLD (including ones from alternative, non-ICANN naming systems).
Currently only https:// URLs for DNS over HTTPS (DoH) endpoints are supported as values.
The default catch-all resolver is the cleartext one provided by your operating system. It can be overridden by adding a DoH entry for the DNS root indicated by . as illustrated above.
Out-of-the-box support for selected non-ICANN TLDs relies on third-party centralized services provided by respective communities on best-effort basis.
The special value "auto" uses DNS resolvers from AutoConf when enabled. For example: {".": "auto"} uses any custom DoH resolver (global or per TLD) provided by AutoConf system.
When AutoTLS.SkipDNSLookup is enabled (default), domains matching AutoTLS.DomainSuffix (default: libp2p.direct) are resolved locally by parsing the IP directly from the hostname. Set AutoTLS.SkipDNSLookup=false to force network DNS lookups for these domains.
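A hedged sketch of DNS.Resolvers overrides; the per-TLD key and DoH URL are placeholders, and the "." entry mirrors the "auto" example above:

```json
{
  "DNS": {
    "Resolvers": {
      ".": "auto",
      "mytld.": "https://doh.example.com/dns-query"
    }
  }
}
```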
Maximum duration for which entries are valid in the DoH cache.
This allows you to cap the Time-To-Live suggested by the DNS response (RFC2181).
If present, the upper bound is applied to DoH resolvers in DNS.Resolvers.
Note: this does NOT work with Go's default DNS resolver. To make this a global setting, add a . entry to DNS.Resolvers first.
Examples:
"1m" DNS entries are kept for 1 minute or less.
"0s" DNS entries expire as soon as they are retrieved.
HTTP requests for application/vnd.ipld.raw will be made instead of Bitswap when a peer has a /tls/http multiaddr
and the HTTPS server returns HTTP 200 for the probe path.
Important
This feature is relatively new. Please report any issues via GitHub.
Important notes:
TLS and HTTP/2 are required. For privacy reasons, and to maintain feature-parity with browsers, unencrypted http:// providers are ignored and not used.
This feature works in the same way as Bitswap: connected HTTP-peers receive optimistic block requests even for content that they are not announcing.
For performance reasons, and to avoid loops, the HTTP client does not follow redirects. Providers should keep announcements up to date.
Optional list of hostnames for which HTTP retrieval is allowed.
If this list is not empty, only hosts matching these entries will be allowed for HTTP retrieval.
Tip
To limit HTTP retrieval to a provider at /dns4/example.com/tcp/443/tls/http (which would serve HEAD|GET https://example.com/ipfs/cid?format=raw), set this to ["example.com"]
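Expressed in the config file, that tip corresponds to something like (minimal sketch):

```json
{
  "HTTPRetrieval": {
    "Allowlist": ["example.com"]
  }
}
```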
The number of worker goroutines to use for concurrent HTTP retrieval operations.
This setting controls the level of parallelism for HTTP-based block retrieval operations.
Higher values can improve performance when retrieving many blocks but may increase resource usage.
Sets the maximum size of a block that the HTTP retrieval client will accept.
Note
This setting is a security feature designed to protect Kubo from malicious providers who might send excessively large or invalid data.
Increasing this value allows Kubo to retrieve larger blocks from compatible HTTP providers, but doing so reduces interoperability with Bitswap, and increases potential security risks.
Disables TLS certificate validation.
Allows making HTTPS connections to HTTP/2 test servers with self-signed TLS certificates.
Only for testing, do not use in production.
Options to configure the default parameters used for ingesting data, in commands such as ipfs add or ipfs block put. All affected commands are detailed per option.
These options implement IPIP-499: UnixFS CID Profiles for reproducible CID generation across IPFS implementations. Instead of configuring individual options, you can apply a predefined profile with ipfs config profile apply <profile-name>. See Profiles for available options like unixfs-v1-2025.
Note that using CLI flags will override the options defined here.
The maximum accepted value for size-<bytes> and rabin max parameter is
2MiB - 256 bytes (2096896 bytes). The 256-byte overhead budget is reserved
for protobuf/UnixFS framing so that serialized blocks stay within the 2MiB
block size limit defined by the
bitswap spec.
The buzhash chunker uses a fixed internal maximum of 512KiB and is not
affected by this limit.
Only the fixed-size chunker (size-<bytes>) guarantees that the same data
will always produce the same CID. The rabin and buzhash chunkers may
change their internal parameters in a future release.
Immediately provide root CIDs to the routing system in addition to the regular provide queue.
This complements the reprovide system: fast-provide handles the urgent case (root CIDs that users share and reference), while the reprovide cycle provides all blocks according to the Provide.Strategy over time.
When disabled, only the reprovide cycle handles content announcement.
Applies to ipfs add, ipfs dag import, ipfs pin add, and ipfs pin update. Can be overridden per-command with the --fast-provide-root flag.
Walk and provide the full DAG immediately after content is added or pinned, using the active Provide.Strategy to determine scope.
When enabled with +unique, the DAG walk deduplicates via a bloom filter. When enabled with +entities, only entity roots (files, directories, HAMT shards) are provided.
When disabled (default), only the root CID is provided immediately (via Import.FastProvideRoot) and child blocks are deferred to the reprovide cycle.
Applies to ipfs add, ipfs dag import, ipfs pin add, and ipfs pin update. Can be overridden per-command with the --fast-provide-dag flag. Has no effect when Provide.Strategy=all (the blockstore already provides every block on write).
Wait for the immediate provide to complete before returning.
When enabled, the command blocks until the provide completes, ensuring guaranteed discoverability before returning. When disabled (default), the provide happens asynchronously in the background without blocking the command. Applies to both Import.FastProvideRoot and Import.FastProvideDAG.
Use this when you need certainty that content is discoverable before the command returns (e.g., sharing a link immediately after adding).
Applies to ipfs add, ipfs dag import, ipfs pin add, and ipfs pin update. Can be overridden per-command with the --fast-provide-wait flag.
Ignored when DHT is not available for routing (e.g., Routing.Type=none or delegated-only configurations).
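A minimal sketch combining the three fast-provide options (values are illustrative; FastProvideWait is assumed to live under Import alongside the other two documented fields):

```json
{
  "Import": {
    "FastProvideRoot": true,
    "FastProvideDAG": false,
    "FastProvideWait": false
  }
}
```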
The maximum size of a single write-batch (computed as the sum of the sizes of the blocks). The total size of the batch is limited by BatchMaxNodes and BatchMaxSize.
Increasing this will batch more items together when importing data with ipfs dag import, which can speed things up.
Must be positive (> 0). Setting to 0 would cause immediate batching after any data, which is inefficient.
The maximum number of links that a node part of a UnixFS basic directory can
have when building the DAG while importing.
This setting controls the fanout for basic, non-HAMT folder nodes and sets a
limit after which directories are converted to a HAMT-based structure.
When unset (0), no limit exists for children. Directories will be converted to
HAMTs based on their estimated size only.
This setting will cause basic directories to be converted to HAMTs when they
exceed the maximum number of children. This happens transparently during the
add process. The fanout of HAMT nodes is controlled by MaxHAMTFanout.
Must be non-negative (>= 0). Zero means no limit, negative values are invalid.
The maximum number of children that a node part of a UnixFS HAMT directory
(aka sharded directory) can have.
HAMT directories have unlimited children and are used when basic directories
become too big or reach MaxLinks. A HAMT is a structure made of UnixFS
nodes that store the list of elements in the folder. This option controls the
maximum number of children that the HAMT nodes can have.
According to the UnixFS specification, this value must be a power of 2, between 8 (for byte-aligned bitfields) and 1024 (to prevent denial-of-service attacks).
The sharding threshold to decide whether a basic UnixFS directory
should be sharded (converted into HAMT Directory) or not.
This value is not strictly related to the size of the UnixFS directory block
and any increases in the threshold should come with being careful that block
sizes stay under 2MiB in order for them to be reliably transferable through the
networking stack. At the time of writing this, IPFS peers on the public swarm
tend to ignore requests for blocks bigger than 2MiB.
Uses the implementation from boxo/ipld/unixfs/io/directory, where the size is not
the exact block size of the encoded directory but an estimate based on the byte
length of DAG-PB link names and CIDs.
Setting to 1B is functionally equivalent to always using HAMT (useful in testing).
The block estimation is recommended for new profiles as it provides more
accurate threshold decisions and better cross-implementation consistency.
See IPIP-499 for more details.
Configuration profiles allow you to tweak configuration quickly. Profiles can be
applied with the --profile flag to ipfs init or with the ipfs config profile apply command. When a profile is applied, a backup of the configuration file
will be created in $IPFS_PATH.
Configuration profiles can be applied additively. For example, both the unixfs-v1-2025 and lowpower profiles can be applied one after the other.
The available configuration profiles are listed below. You can also find them
documented in ipfs config profile --help.
The server profile hardens a node for public-internet operation. Recommended
on machines with public IPv4 addresses (no NAT, no uPnP) at providers that
interpret local IPFS discovery and traffic as netscan abuse
(example).
If you need peering over one of the prefixes above, remove that entry from
Swarm.AddrFilters and
Addresses.NoAnnounce after applying the profile.
Or skip the profile and populate those fields manually.
For a local reverse proxy fronting a /ws (or other libp2p) listener on 127.0.0.1, remove
/ip4/127.0.0.0/ipcidr/8 from Swarm.AddrFilters only (keep it in Addresses.NoAnnounce); also drop /ip6/::1/ipcidr/128 and /ip6/::/ipcidr/3 from Swarm.AddrFilters if the proxy uses IPv6 loopback.
Added after bogus IPv6 prefixes such as 1e::/16 (unallocated space
inside 0000::/3) started leaking into DHT self-records from public
Kubo nodes with go-libp2p v0.47. See
go-libp2p#3460.
Most overlay networks (WireGuard, Tailscale, Nebula, ZeroTier,
cjdns) use ULA fc00::/7 and are blocked by the separate
/ip6/fc00::/ipcidr/7 entry, not by this one. The notable exception is
Yggdrasil, which uses 0200::/7 inside 0000::/3.
NAT64 translators rarely emit 64:ff9b:: (RFC 6052) or
64:ff9b:1::/48 (RFC 8215) as a source address, so the rule's
announce-side impact on NAT64 deployments is typically none. Removal is
warranted only if a 64:ff9b:: address is bound directly to a node
interface.
Disables AutoConf and clears all networking fields for manual configuration.
Use this for private networks or when you want explicit control over all endpoints.
Configures the node to use the flatfs datastore.
Flatfs is the default, most battle-tested and reliable datastore.
You should use this datastore if:
You need a very simple and very reliable datastore, and you trust your
filesystem. This datastore stores each block as a separate file in the
underlying filesystem so it's unlikely to lose data unless there's an issue
with the underlying file system.
You need to run garbage collection in a way that reclaims free space as soon as possible.
You want to minimize memory usage.
You are ok with the default speed of data import, or prefer to use --nocopy.
Warning
This profile may only be applied when first initializing the node via ipfs init --profile flatfs
Configures the node to use the legacy badgerv1 datastore.
Caution
Badger v1 datastore is deprecated and will be removed in a future Kubo release.
This is based on very old badger 1.x, which has not been maintained by its
upstream maintainers for years and has known bugs (startup timeouts, shutdown
hangs, file descriptor
exhaustion, and more). Do not use it for new deployments.
To migrate: create a new IPFS_PATH with flatfs
(ipfs init --profile=flatfs), move pinned data via
ipfs dag export/import or ipfs pin ls -t recursive|add, and decommission the
old badger-based node. When it comes to block storage, use experimental
pebbleds only if you are sure modern flatfs does not serve your use case
(most users will be perfectly fine with flatfs, it is also possible to keep
flatfs for blocks and replace leveldb with pebble if preferred over
leveldb).
Also, be aware that:
This datastore will not properly reclaim space when your datastore is
smaller than several gigabytes. If you run IPFS with --enable-gc, you plan on storing very little data in
your IPFS node, and disk usage is more critical than performance, consider using
flatfs.
This datastore uses up to several gigabytes of memory.
Good for medium-size datastores, but may run into performance issues if your dataset is bigger than a terabyte.
Warning
This profile may only be applied when first initializing the node via ipfs init --profile badgerds
Configures the node to use the legacy badgerv1 datastore with metrics. This is the same as badgerds profile with the addition of the measure datastore wrapper. This profile will be removed in a future Kubo release.
Disables Provide system (and announcing to Amino DHT).
Caution
The main use case for this is setups with manual Peering.Peers config.
Data from this node will not be announced on the DHT. This will make
DHT-based routing and data retrieval impossible if this node is the only
one hosting it and other peers are not already connected to it.
Legacy UnixFS import profile for backward-compatible CID generation.
Produces CIDv0 with no raw leaves, sha2-256, 256 KiB chunks, and
link-based HAMT size estimation.
Use Gateway.NoFetch to prevent arbitrary CID retrieval if Kubo is acting as a public gateway available to anyone
Configure firewall rules to restrict access to exposed ports. Note that Addresses.Swarm is special - all incoming traffic to swarm ports should be allowed to ensure proper P2P connectivity
Flags allow enabling and disabling features. However, unlike simple booleans,
they can also be null (or omitted) to indicate that the default value should
be chosen. This makes it easier for Kubo to change the defaults in the
future unless the user explicitly sets the flag to either true (enabled) or
false (disabled). Flags have three possible states: