Stream logs to onlylogs.io

If self-hosting your logs is not what you want, onlylogs.io has you covered. There are many reasons not to self-host: your platform may have ephemeral disks (Heroku, deploio), or you may simply want your logs to live in a space separate from your application.

This page explains all the options you have to stream your logs to onlylogs.io.

Create a new project on this website to receive an ONLYLOGS_DRAIN_URL, then configure your application using one of the methods below.

PaaS log drains

Most platform-as-a-service providers offer a built-in way to forward stdout to an HTTPS endpoint. Point it at your ONLYLOGS_DRAIN_URL and you're done.

Heroku

Heroku has first-class log drain support:

heroku drains:add <ONLYLOGS_DRAIN_URL>

Other Heroku-style PaaS

Providers that advertise Heroku-compatible log drains (Scalingo's scalingo log-drains-add, Clever Cloud's log drain configuration, etc.) expect the same HTTPS POST shape that onlylogs.io accepts. Consult your provider's drain documentation and point it at your ONLYLOGS_DRAIN_URL.

Vector

For platforms without native drains (Fly.io, Kubernetes, bare metal, Docker Compose), deploy a log shipper such as Vector or Fluent Bit and ship to your ONLYLOGS_DRAIN_URL. The endpoint accepts newline-delimited plain text over HTTPS POST.

Example Vector configuration:

[sinks.onlylogs]
type = "http"
inputs = ["your_source"]
uri = "${ONLYLOGS_DRAIN_URL}"
method = "post"
encoding.codec = "text"
framing.method = "newline_delimited"

Fly.io users can deploy the fly-log-shipper app and set its Vector sink to the snippet above.
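Since the drain endpoint accepts newline-delimited plain text over HTTPS POST, you can also sanity-check it without any shipper. Here is a minimal sketch using only Ruby's standard library; the sample log lines and the text/plain content type are assumptions for illustration, not requirements documented by onlylogs.io:

```ruby
require "net/http"
require "uri"

# Join log lines into the newline-delimited body the drain accepts.
def drain_body(lines)
  lines.join("\n") + "\n"
end

# Only attempt the POST when a drain URL is configured.
if (drain_url = ENV["ONLYLOGS_DRAIN_URL"])
  uri = URI(drain_url)
  body = drain_body(["[2026-04-17T12:34:56+00:00] [I] hello from a test script"])
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == "https") do |http|
    # Content-Type is an assumption; the endpoint accepts plain text.
    http.post(uri.request_uri, body, "Content-Type" => "text/plain")
  end
end
```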

Dokku

Dokku forwards logs via its built-in Vector plugin. Use the text codec so Rails lines arrive verbatim (rather than wrapped in Docker metadata). <URL_ENCODED_DRAIN_URL> is your drain URL passed through CGI.escape:

dokku logs:set <app-name> vector-sink "http://?uri=<URL_ENCODED_DRAIN_URL>&encoding[codec]=text&method=post"
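The URL encoding can be produced with Ruby's standard library. For example, with a hypothetical drain URL (the token below is a placeholder):

```ruby
require "cgi"

# Escape the drain URL so it can be embedded as a query
# parameter inside the vector-sink value.
CGI.escape("https://onlylogs.io/drain/your-token")
# => "https%3A%2F%2Fonlylogs.io%2Fdrain%2Fyour-token"
```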

Ruby on Rails

If your platform doesn't expose a log drain — or you want log shipping from every process type (Puma, GoodJob, Sidekiq, rake, migrations) — use the onlylogs gem. It includes two loggers: HttpLogger (recommended) and SocketLogger (for Puma-only deployments).

HttpLogger (recommended)

The HttpLogger ships logs directly via HTTP from any Ruby process: Puma, GoodJob, Sidekiq, rake tasks, migrations. No sidecar or Puma plugin required.

# config/environments/production.rb
config.logger = Onlylogs::HttpLogger.new

Set the ONLYLOGS_DRAIN_URL environment variable and you're done.

Optional environment variables:

Variable                  Default     Description
ONLYLOGS_DRAIN_URL        (required)  The drain URL
ONLYLOGS_BATCH_SIZE       100         Number of lines to batch before sending
ONLYLOGS_FLUSH_INTERVAL   0.5         Flush interval in seconds

Or pass them as keyword arguments:

config.logger = Onlylogs::HttpLogger.new(
  drain_url: "https://onlylogs.io/drain/your-token",
  batch_size: 50,
  flush_interval: 1.0
)
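To build intuition for how batch_size and flush_interval trade request count against delivery latency, here is an illustrative batcher sketch. This is not the gem's actual implementation, just the typical shape of such a mechanism:

```ruby
# Illustrative only: buffer lines, then flush when either the batch
# fills up or the flush interval has elapsed since the last flush.
class LineBatcher
  def initialize(batch_size: 100, flush_interval: 0.5, &flush)
    @batch_size = batch_size
    @flush_interval = flush_interval
    @flush = flush
    @buffer = []
    @last_flush = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end

  def <<(line)
    @buffer << line
    flush! if @buffer.size >= @batch_size || elapsed >= @flush_interval
  end

  def flush!
    return if @buffer.empty?
    @flush.call(@buffer)
    @buffer = []
    @last_flush = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end

  private

  def elapsed
    Process.clock_gettime(Process::CLOCK_MONOTONIC) - @last_flush
  end
end
```

A large batch_size with a short flush_interval keeps HTTP requests infrequent under load while still bounding how long any line waits before being sent.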

SocketLogger + Puma sidecar

If you prefer the sidecar approach (lower latency, useful when Puma is the only process that needs log shipping):

# config/environments/production.rb
config.logger = Onlylogs::SocketLogger.new

This writes logs to ONLYLOGS_SIDECAR_SOCKET (default: tmp/sockets/onlylogs-sidecar.sock). From there, stream them to onlylogs.io using a Puma sidecar process:

# config/puma.rb
plugin :onlylogs_sidecar

Then configure the ONLYLOGS_DRAIN_URL environment variable.

You can also run the sidecar process separately:

bin/onlylogs_sidecar

Note: the SocketLogger only works when the sidecar process is running. Background jobs and migrations that run outside of Puma will not have access to the socket. If you need log shipping from all process types, use the HttpLogger instead.

Keeping local files while streaming

Use HttpLogger's built-in local_fallback parameter to write to a local file and stream to onlylogs.io with a single logger:

# config/environments/production.rb
log_file = Logger::LogDevice.new(
  Rails.root.join("log", "production.log"), shift_age: 5, shift_size: 100.megabytes
)
config.logger = Onlylogs::HttpLogger.new(local_fallback: log_file)

Every log line is written to the local file and forwarded to onlylogs.io. The LogDevice handles rotation (keeping 5 files of 100 MB each in the example above). If the remote connection fails, logs continue to be written locally without interruption.

Warning — do not use ActiveSupport::BroadcastLogger with two tagged loggers. This is a known Rails bug: rails/rails#56669.

# DO NOT do this:
local = Onlylogs::Logger.new("production.log", 5, 100.megabytes)
remote = Onlylogs::HttpLogger.new
config.logger = ActiveSupport::BroadcastLogger.new(local, remote)

Recommended log format

You don't need our gem to stream logs to onlylogs.io: any log drain (Heroku, Dokku, Vector, Fluent Bit, ...) can forward a plain-text stream. The backend accepts arbitrary lines, but here is a good default:

[2026-04-17T12:34:56+00:00] [I] Your log message here

That is: an ISO 8601 timestamp in square brackets, the severity letter (D, I, W, E, F) in square brackets, then the message.
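If you later need to parse lines in this format (for filtering or testing), a regular expression mirroring the shape above works. The pattern is an illustration derived from the format description, not something onlylogs.io requires:

```ruby
# Parse "[timestamp] [severity] message" lines in the recommended format.
LOG_LINE = /\A\[([^\]]+)\] \[([DIWEF])\] (.*)\z/

line = "[2026-04-17T12:34:56+00:00] [I] Your log message here"
timestamp, severity, message = LOG_LINE.match(line).captures
# severity => "I", message => "Your log message here"
```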

In a Rails app you can produce this format with a one-line formatter — no gem required:

# config/environments/production.rb
config.log_formatter = proc do |severity, time, _progname, msg|
  "[#{time.iso8601}] [#{severity[0]}] #{msg}\n"
end

This pairs well with config.log_tags = [:request_id] for per-request correlation, and works the same whether you're logging to $stdout (Heroku, Dokku) or to a file that Vector tails.
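As a quick check of what the formatter emits (note that a UTC Time renders its offset as a trailing Z rather than +00:00; both are valid ISO 8601):

```ruby
require "time" # provides Time#iso8601

formatter = proc do |severity, time, _progname, msg|
  "[#{time.iso8601}] [#{severity[0]}] #{msg}\n"
end

formatter.call("INFO", Time.utc(2026, 4, 17, 12, 34, 56), nil, "Your log message here")
# => "[2026-04-17T12:34:56Z] [I] Your log message here\n"
```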