fly-log-shipper

April 17, 2026

Ship logs from Fly.io to other providers using NATS and Vector

In this repo you will find various Vector sinks along with the required Fly config. The end result is a Fly.io application that automatically reads your organisation's logs and sends them to external providers.

Quick start

  1. Create a new Fly logger app based on our Docker image:

     fly launch --image flyio/log-shipper:latest --no-public-ips

  2. Set the NATS source secrets for your new app
  3. Set your desired provider from the list below
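Concretely, the quick start might look like the following (the Datadog key is only an example provider secret, and the values are placeholders):

```shell
# 1. Create the logger app from the published image (no public IPs needed)
fly launch --image flyio/log-shipper:latest --no-public-ips

# 2. Set the NATS source secrets (ORG defaults to "personal" if unset)
fly secrets set ORG=personal
fly secrets set ACCESS_TOKEN=$(fly tokens create readonly personal)

# 3. Set the secrets for your chosen provider, e.g. Datadog
fly secrets set DATADOG_API_KEY=<your-datadog-api-key>
```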

That's it. There is no need to set up NATS clients within your apps: Fly apps already send monitoring information back to Fly, which this app can read.

However, for advanced use cases you can still configure a NATS client in your apps to talk to this NATS server. See the NATS section below.

NATS source configuration

| Secret | Description |
| --- | --- |
| ORG | Organisation slug (defaults to personal) |
| ACCESS_TOKEN | Fly personal access token (required; set with fly secrets set ACCESS_TOKEN=$(fly tokens create readonly personal)) |
| SUBJECT | Subject to subscribe to. See NATS below (defaults to logs.>) |
| QUEUE | Arbitrary queue name if you want to run multiple log processes for HA and avoid duplicate messages being shipped |
| NETWORK | 6PN network, if you want to run log-shipper through a WireGuard connection (defaults to fdaa:0:0) |

After generating your fly.toml, remember to update the internal port to match the Vector internal port defined in vector-configs/vector.toml. If they do not match, health checks will fail on deployment.

[[services]]
  http_checks = []
  internal_port = 8686

Set the secrets below that are associated with your desired log destination.

Provider configuration

AppSignal

| Secret | Description |
| --- | --- |
| APPSIGNAL_PUSH_API_KEY | AppSignal push API key |

AWS S3

| Secret | Description |
| --- | --- |
| AWS_ACCESS_KEY_ID | AWS access key with access to the log bucket |
| AWS_SECRET_ACCESS_KEY | AWS secret access key |
| AWS_BUCKET | AWS S3 bucket to store logs in |
| AWS_REGION | Region for the bucket |
| S3_ENDPOINT | (optional) Endpoint URL for S3-compatible object stores such as Cloudflare R2 or Wasabi |
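For example, the S3 sink could be enabled with a single command like this (bucket name, region, and credentials are placeholders):

```shell
fly secrets set \
  AWS_ACCESS_KEY_ID=<access-key-id> \
  AWS_SECRET_ACCESS_KEY=<secret-access-key> \
  AWS_BUCKET=my-fly-logs \
  AWS_REGION=us-east-1
```

Setting all the secrets in one command avoids triggering a separate deployment for each one.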

AWS CloudWatch

| Secret | Description |
| --- | --- |
| AWS_ACCESS_KEY_ID | AWS access key with access to CloudWatch |
| AWS_SECRET_ACCESS_KEY | AWS secret access key |
| AWS_REGION | Region for CloudWatch |
| CLOUDWATCH_LOG_GROUP_NAME | Log group to send logs to in CloudWatch |
| CLOUDWATCH_ENCODING_CODEC | CloudWatch codec (default is "json") |

Axiom

| Secret | Description |
| --- | --- |
| AXIOM_TOKEN | Axiom token |
| AXIOM_DATASET | Axiom dataset |

Baselime

| Secret | Description |
| --- | --- |
| BASELIME_API_KEY | Baselime API key |
| BASELIME_DATASET | (optional) Baselime dataset (default "flyio") |

Better Stack Logs (formerly Logtail)

| Secret | Description |
| --- | --- |
| BETTER_STACK_SOURCE_TOKEN | Better Stack Telemetry source token |
| BETTER_STACK_INGESTING_HOST | Better Stack source ingesting host (default is in.logs.betterstack.com) |

Datadog

| Secret | Description |
| --- | --- |
| DATADOG_API_KEY | API key for your Datadog account |
| DATADOG_SITE | (optional) The Datadog site, e.g. datadoghq.eu |

Highlight

| Secret | Description |
| --- | --- |
| HIGHLIGHT_PROJECT_ID | Highlight project ID |

Honeybadger

| Secret | Description |
| --- | --- |
| HONEYBADGER_API_KEY | Honeybadger API key |

Honeycomb

| Secret | Description |
| --- | --- |
| HONEYCOMB_API_KEY | Honeycomb API key |
| HONEYCOMB_DATASET | Honeycomb dataset |

Humio

| Secret | Description |
| --- | --- |
| HUMIO_TOKEN | Humio token |
| HUMIO_ENDPOINT | (optional) Endpoint URL to send logs to |

HyperDX

| Secret | Description |
| --- | --- |
| HYPERDX_API_KEY | HyperDX API key |

LogDNA

| Secret | Description |
| --- | --- |
| LOGDNA_API_KEY | LogDNA API key |

Logflare

| Secret | Description |
| --- | --- |
| LOGFLARE_API_KEY | Logflare ingest API key |
| LOGFLARE_SOURCE_TOKEN | Logflare source token (UUID on your Logflare dashboard) |

Loki

| Secret | Description |
| --- | --- |
| LOKI_URL | Loki endpoint |
| LOKI_USERNAME | Loki username |
| LOKI_PASSWORD | Loki password |

New Relic

One of the insert key or the license key is required for New Relic logs. New Relic recommends using the license key (ref: https://docs.newrelic.com/docs/logs/enable-log-management-new-relic/enable-log-monitoring-new-relic/vector-output-sink-log-forwarding/).

| Secret | Description |
| --- | --- |
| NEW_RELIC_INSERT_KEY | (optional) New Relic insert key |
| NEW_RELIC_LICENSE_KEY | (optional) New Relic license key |
| NEW_RELIC_REGION | (optional) eu or us (default us) |
| NEW_RELIC_ACCOUNT_ID | New Relic account ID |

OpenObserve

| Secret | Description |
| --- | --- |
| OPENOBSERVE_URI | OpenObserve URI |
| OPENOBSERVE_USER | OpenObserve user |
| OPENOBSERVE_PASSWORD | OpenObserve password |

OpsVerse

| Secret | Description |
| --- | --- |
| OPSVERSE_LOGS_ENDPOINT | OpsVerse logs endpoint |
| OPSVERSE_USERNAME | OpsVerse username |
| OPSVERSE_PASSWORD | OpsVerse password |

Papertrail

| Secret | Description |
| --- | --- |
| PAPERTRAIL_ENDPOINT | Papertrail endpoint |
| PAPERTRAIL_ENCODING_CODEC | Papertrail codec (default is "json") |

Sematext

| Secret | Description |
| --- | --- |
| SEMATEXT_REGION | Sematext region |
| SEMATEXT_TOKEN | Sematext token |

SigNoz

| Secret | Description |
| --- | --- |
| SIGNOZ_INGESTION_KEY | SigNoz ingestion key |
| SIGNOZ_INGESTION_URL | SigNoz ingestion URL (default is https://ingest.us.signoz.cloud/logs/vector) |

See SigNoz Docs for region-specific Ingestion URLs and Keys.

For self-hosted SigNoz, set SIGNOZ_INGESTION_URL to your own ingestion endpoint (see Self-Hosted Ingestion). SIGNOZ_INGESTION_KEY is only required for SigNoz Cloud and can be left unset for self-hosted deployments.

Uptrace

| Secret | Description |
| --- | --- |
| UPTRACE_API_KEY | Uptrace API key |
| UPTRACE_PROJECT | Uptrace project ID |
| UPTRACE_SINK_INPUT | "log_json", etc. |
| UPTRACE_SINK_ENCODING | "json", etc. |

For UPTRACE_SINK_ENCODING, Vector accepts one of avro, gelf, json, logfmt, native, native_json, raw_message, or text as the encoding codec for the sinks.uptrace sink.

EraSearch

| Secret | Description |
| --- | --- |
| ERASEARCH_URL | EraSearch endpoint |
| ERASEARCH_AUTH | EraSearch user |
| ERASEARCH_INDEX | EraSearch index you want to use |

HTTP

| Secret | Description |
| --- | --- |
| HTTP_URL | HTTP/HTTPS endpoint |
| HTTP_TOKEN | HTTP bearer auth token |

Slack (experimental)

HTTP sink that can be used for sending log alerts to Slack.

| Secret | Description |
| --- | --- |
| SLACK_WEBHOOK_URL | Slack webhook URL |
| SLACK_ALERT_KEYWORDS | Keywords to alert on |

Example of setting keywords: fly secrets set SLACK_ALERT_KEYWORDS="[r'SIGTERM', r'reboot']"
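The keyword list is a list of regular expressions. As a rough sketch of how such filtering could behave (the parse_keywords and should_alert helpers here are illustrative assumptions, not the shipper's actual implementation):

```python
import re

def parse_keywords(value: str) -> list[str]:
    # Hypothetical: extract the patterns from a value like "[r'SIGTERM', r'reboot']"
    return re.findall(r"r'([^']*)'", value)

def should_alert(line: str, keywords: list[str]) -> bool:
    # Alert if any keyword pattern matches anywhere in the log line
    return any(re.search(k, line) for k in keywords)

keywords = parse_keywords("[r'SIGTERM', r'reboot']")
assert keywords == ["SIGTERM", "reboot"]
assert should_alert("process received SIGTERM", keywords)
assert not should_alert("all instances healthy", keywords)
```

Since the entries are regular expressions, patterns like r'OOM|out of memory' would also work.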


NATS

The log stream is provided over the NATS protocol and is limited to subscriptions to logs in your organisation.

Connecting

Note: you do not have to manually connect a NATS client; see Quick start above.

If you want to add custom behaviour or modify the subject sent from your app, you can connect your app to the NATS server manually.

Any Fly app can connect to the NATS server at nats://[fdaa::3]:4223 (IPv6).

Note: you will need to supply a user / password.

User: your Fly organisation slug, which you can obtain from fly orgs list
Password: your Fly access token, which you can obtain from fly tokens create readonly personal

Example using the NATS client

Launch a NATS client app based on the nats-server image:

fly launch --image="synadia/nats-server:nightly" --name="nats-client"

SSH into the new app

fly -a nats-client ssh console
nats context add nats --server [fdaa::3]:4223 --description "NATS Demo" --select \
  --user <YOUR FLY ORG SLUG> \
  --password <YOUR PAT>
nats pub "logs.test" "hello world"
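From the same console you could also subscribe, to verify that logs are flowing (the app name and region in the second pattern are placeholders):

```shell
# Receive every log line for the organisation
nats sub "logs.>"

# Or narrow to a single app in one region
nats sub "logs.my-app.fra.*"
```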

Subject

The subject schema is logs.<app_name>.<region>.<instance_id> and the standard NATS wildcards can be used. In this app, the SUBJECT secret can be used to set the subject and limit the scope of the logs streamed.
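To make the wildcard semantics concrete, here is a small illustrative matcher (an assumption for explanation only, not code from this repo): * matches exactly one token, and > matches everything that remains.

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Check a concrete subject against a NATS-style subject pattern.

    '*' matches exactly one token; '>' matches one or more trailing tokens.
    """
    p_tokens = pattern.split(".")
    s_tokens = subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            return len(s_tokens) > i  # '>' must match at least one token
        if i >= len(s_tokens):
            return False
        if p != "*" and p != s_tokens[i]:
            return False
    return len(p_tokens) == len(s_tokens)

# Subjects follow logs.<app_name>.<region>.<instance_id>
assert subject_matches("logs.>", "logs.my-app.fra.abcd1234")
assert subject_matches("logs.my-app.*.*", "logs.my-app.fra.abcd1234")
assert not subject_matches("logs.other-app.>", "logs.my-app.fra.abcd1234")
```

So a SUBJECT secret of logs.my-app.> would restrict shipping to one app across all regions and instances.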

Queue

If you would like to run multiple VMs for high availability, the NATS endpoint supports subscription queues to ensure each message is only sent to one subscriber of the named queue. Set the QUEUE secret to configure a queue name for the client.
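For example, assuming the logger app is named log-shipper, two VMs sharing one queue might be set up like this (the queue name is arbitrary, but both VMs must use the same value):

```shell
# Pick any shared queue name
fly secrets set QUEUE=log-shipper -a log-shipper

# Run two instances for HA; the queue ensures each message is shipped once
fly scale count 2 -a log-shipper
```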


Vector

The nats source component receives the log stream and feeds it to the downstream transforms and sinks in the Vector config, which process the log lines and send them to the various providers. The config is generated by a shell wrapper script that uses conditionals on environment variables to decide which Vector sinks to include in the final config.
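As a rough sketch of the shape of the generated config (the component names and options below are illustrative assumptions; the real configuration lives in vector-configs/vector.toml):

```toml
# NATS source: subscribe to the organisation's log stream
[sources.fly_logs]
type    = "nats"
url     = "nats://[fdaa::3]:4223"
subject = "logs.>"

# One sink per provider, emitted only when its secrets are set,
# e.g. a Datadog sink when DATADOG_API_KEY is present
[sinks.datadog]
type            = "datadog_logs"
inputs          = ["fly_logs"]
default_api_key = "${DATADOG_API_KEY}"
```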