you@laptop ~ $ man tellr

tellr, the whole thing

Everything you need to run tellr. It's a single binary, a YAML file, and one webhook. This page has no search box. Use cmd-F.

version 0.9.2 · updated 2 days ago · read time ~8 min · 214 lines
$ tellr help
  1. 01 · install · one binary
  2. 02 · hello, world · 60s
  3. 03 · config: tellr.yaml · the whole file
  4. 04 · checks · http, tcp, disk…
  5. 05 · alert channels · slack, tg, discord
  6. 06 · bring-your-own LLM · openai, ollama…
  7. 07 · silencing & routes · quiet hours
  8. 08 · deploy · systemd, docker, fly
  9. 09 · cli reference · every flag
  10. 10 · faq · no fluff
$ tellr help install

01 · install

One binary. No runtime. No sidecar. No operator. It runs on anything that can run a statically-linked 18 MB Go binary.

$ curl -fsSL tellr.dev/install.sh | sh

Prefer to read before you curl-pipe? Good instinct. Here's the same thing in pieces:

# 1. download the binary for your OS/arch
curl -L https://github.com/tellr/tellr/releases/latest/download/tellr-$(uname -s)-$(uname -m) \
     -o /usr/local/bin/tellr

# 2. make it executable
chmod +x /usr/local/bin/tellr

# 3. verify the checksum against the one printed on the release page
sha256sum /usr/local/bin/tellr
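Step 3 prints a hash but doesn't compare it to anything; `sha256sum -c` does the comparison for you. A self-contained sketch with a throwaway file standing in for the binary:

```shell
# demo: write a file, record its checksum, then verify it the same
# way you'd verify the tellr binary against the published hash
printf 'demo binary contents\n' > /tmp/tellr-demo
sha256sum /tmp/tellr-demo > /tmp/tellr-demo.sha256
sha256sum -c /tmp/tellr-demo.sha256   # prints "/tmp/tellr-demo: OK" on match
```

For the real binary, save the release page's published hash into the `.sha256` file instead of generating it locally; a locally generated hash only proves the file didn't change since you hashed it.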
Platforms. linux/amd64, linux/arm64, darwin/arm64, darwin/amd64. Windows is not on the list and is not on the roadmap.
$ tellr help hello

02 · hello, world

The fastest path from nothing to a real alert in Slack. This takes about a minute.

  1. Make a file called tellr.yaml. Put one check in it (copy the snippet below).
  2. Paste your Slack webhook URL where it says url:.
  3. Run tellr watch. You'll see checks tick by.
  4. Break something on purpose: curl example.com/does-not-exist until errors climb, or just set the threshold low.
  5. Watch an alert land in Slack, with a written explanation. Done.
# tellr.yaml · minimum viable config
checks:
  - name: api.prod
    type: http
    url: https://api.example.com/healthz
    every: 30s
    expect: 2xx

alerts:
  - to: slack
    url: https://hooks.slack.com/services/T0.../B0.../xxx
    channel: "#ops-alerts"   # quote it: an unquoted # starts a yaml comment
That's the whole thing. You can stop reading here and tellr will work. The rest of this page is details you'll want when the config grows.
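For step 4, the cheapest way to break something on purpose is a check that can never pass. This reuses only the keys from the snippet above; the name and url are placeholders:

```yaml
# a check that fails every tick: the endpoint 404s, we expect 2xx
checks:
  - name: break.me
    type: http
    url: https://api.example.com/does-not-exist
    every: 10s
    expect: 2xx
```

Delete it once you've seen the alert land.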
$ tellr help config

03 · config: tellr.yaml

Tellr has two required config sections: checks (things to watch) and alerts (where to send them when something breaks). Three optional ones: llm, silence, and routes.

The file is watched for changes. Save it, and the new config takes effect on the next check tick. No restart.

llm:
  provider: openai       # openai | anthropic | ollama | off
  model: gpt-4o-mini
  key: ${OPENAI_API_KEY}
  budget_per_day: $5        # hard cap; over budget = plain alerts

checks:
  - name: api.prod
    type: http
    url: https://api.example.com/healthz
    every: 30s
    expect: 2xx
    timeout: 5s

  - name: db.prod
    type: postgres
    dsn: ${DATABASE_URL}
    disk_warn: 80%
    replica_lag_warn: 10s

  - name: signup_funnel
    type: llm                  # ask the model to read the data
    query: select date, count(*) from signups group by 1
    every: 1h
    ask: "Is today's signup count materially below the 7-day median?"

alerts:
  - to: slack
    url: ${SLACK_WEBHOOK}
    channel: "#ops-alerts"   # quoted, since an unquoted # starts a yaml comment
    only: [error, warn]
$ tellr help checks

04 · checks

Every check has a name, a type, and an every. The rest depends on type.

http · request a url, check status + body + timing
tcp · open a socket, time the handshake
ping · icmp reachability
postgres · connect, query, check disk/replica/locks
mysql · same, for mysql
redis · ping, memory, keys
disk · filesystem usage on a path
process · is it running, how much memory
queue · sidekiq, sqs, bullmq depth
llm · run a query, ask the model to judge the result
shell · run a command, check its exit + stdout

When a check fails, tellr does not alert immediately. For most check types, it waits until the same check fails twice in a row. This eliminates roughly 90% of false positives without meaningfully delaying real alerts.
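The two-in-a-row rule is just a streak counter per check. A shell sketch of the idea, with nothing tellr-specific in it and the tick results hard-coded:

```shell
# streak counter: "alert" only on the second consecutive failure
streak=0
for result in ok fail ok fail fail ok; do
  if [ "$result" = "fail" ]; then
    streak=$((streak + 1))
  else
    streak=0
  fi
  if [ "$streak" -eq 2 ]; then
    echo "alert: same check failed twice in a row"
  fi
done
```

The echo fires exactly once, on the fifth tick; the lone failure at tick two never alerts.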

$ tellr help alerts

05 · alert channels

Tellr speaks Slack, Telegram, Discord, email, and plain webhook. That covers every team we've met. If you need PagerDuty or Opsgenie, send the webhook there.

alerts:
  - to: slack
    url: ${SLACK_WEBHOOK}
    channel: #ops-alerts

  - to: telegram
    bot_token: ${TG_BOT_TOKEN}
    chat_id: -100123456789

  - to: discord
    url: ${DISCORD_WEBHOOK}

  - to: email
    from: alerts@yourco.com
    to_addr: oncall@yourco.com

  - to: webhook               # pagerduty, opsgenie, your own thing
    url: https://events.pagerduty.com/v2/enqueue
    headers:
      Authorization: Token ${PD_TOKEN}

Every alert carries three things: a one-line title, a written paragraph, and a machine-readable JSON blob. The paragraph is what the LLM writes. If you turn the LLM off, you get a cleaner plain-text version.
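Routing alerts into your own system means parsing that JSON blob. The shape below is purely illustrative; every field name in it is hypothetical, invented here to show the three-part structure, so inspect a real payload before writing a parser:

```json
{
  "title": "api.prod: expected 2xx, got 503",
  "summary": "the written paragraph (LLM or plain-text) goes here",
  "check": "api.prod",
  "severity": "error"
}
```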

$ tellr help byok

06 · bring-your-own LLM

Alerts are written by a model. You plug in the key, and tellr calls your provider directly; nothing routes through us.

  1. Set llm.provider to openai, anthropic, or ollama.
  2. Put the key in the env. Reference it as ${OPENAI_API_KEY} so it never hits disk in the config file.
  3. Set a budget_per_day. Once hit, tellr falls back to plain alerts until tomorrow.
  4. If you run Ollama locally, set provider: ollama and url: http://localhost:11434. No key needed.
What the model actually sees. Check name, type, metric values, deploy SHA, relevant log lines (last 50), and your prompt template. It does not see your source code, your DB contents beyond the query you wrote, or any secret. See data.md for the exact payload.
Off is a valid choice. Set llm.provider: off and tellr writes alerts the old way. Less personality, zero token cost, the same thresholds.
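Putting steps 1 and 4 together, a local-only setup looks like this. The model name is a placeholder for whatever you've pulled into Ollama:

```yaml
llm:
  provider: ollama
  url: http://localhost:11434   # ollama's default port
  model: llama3                 # placeholder: any model you've pulled
  # no key and no budget_per_day needed: inference is local and free
```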
$ tellr help silence

07 · silencing & routes

The most requested feature in any monitoring tool, usually "solved" by a pile of muted channels. Tellr puts it in the config.

silence:
  - match: check=api.prod
    between: "2026-04-18T02:00Z"
    and: "2026-04-18T04:00Z"
    reason: planned migration

  - match: severity=info
    weekly: "mon-fri 22:00-07:00 Europe/Berlin"
    reason: quiet hours

routes:
  - match: severity=error
    to: [slack, telegram]
  - match: severity=warn
    to: [slack]
  - match: severity=info
    to: [email]
$ tellr help deploy

08 · deploy

Tellr is a long-lived process. Run it however you run long-lived processes.

# systemd unit (recommended for a box you own)
sudo tellr install --systemd
systemctl status tellr

# docker
docker run -d --name tellr \
  -v /etc/tellr:/etc/tellr \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  ghcr.io/tellr/tellr:0.9.2

# fly.io (this is how we run our own instance)
fly launch --from https://github.com/tellr/tellr-fly
Resource footprint. Idle: ~40 MB RSS, single-digit CPU usage. At 1,200 checks/min on our own box: 142 MB RSS, 8% CPU. You'll notice your Grafana before you notice tellr.
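If you'd rather pin those docker run flags in a file, a compose sketch that maps flag-for-flag onto the command above (the restart policy is an addition, on the assumption you want the container supervised):

```yaml
# docker-compose.yml · mirrors the docker run command above
services:
  tellr:
    image: ghcr.io/tellr/tellr:0.9.2
    volumes:
      - /etc/tellr:/etc/tellr
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    restart: unless-stopped   # assumption: restart on crash, not on manual stop
```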
$ tellr --help

09 · cli reference

tellr watch · run checks continuously (default)
tellr check <name> · run one check right now, print result
tellr test · validate config, don't run anything
tellr explain <name> · write a sample alert for this check, without sending
tellr silence <name> · silence a check from the command line
tellr install · install a systemd unit or launchd plist
tellr version · version + build commit
--config <path> · path to yaml (default: ./tellr.yaml, then /etc/tellr/tellr.yaml)
--verbose · log every check tick
--dry-run · run checks, print alerts, don't send
$ tellr help faq

10 · faq

Is this going to be yet another dashboard I have to check?
No. That's the whole point. Tellr has a small status page and no graphs tab. If you want graphs, keep your current stack; tellr sits on top and shuts up when things are fine.
What happens when tellr itself goes down?
We run a second process (or a cron) that pings tellr's /healthz and posts to a separate channel if it's silent for more than 10 minutes. Put that in your config; we've pre-written it at examples/watch-the-watcher.yaml.
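A sketch in the spirit of examples/watch-the-watcher.yaml: a second tellr instance whose only job is watching the first. The host, port, and webhook are placeholders; use whatever your instance actually listens on, and point the alert at a channel the primary doesn't use:

```yaml
checks:
  - name: tellr.itself
    type: http
    url: http://tellr.internal:8080/healthz   # placeholder host:port
    every: 1m
    expect: 2xx

alerts:
  - to: slack
    url: ${BACKUP_SLACK_WEBHOOK}
    channel: "#ops-backup"   # separate from the primary's channel
```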
Can I use this without the LLM part?
Yes. Set llm.provider: off. You get plain-text alerts with the same thresholds. Nothing else changes.
Will it page me at 3am for a blip?
Only if you tell it to. Default is two-in-a-row before a page, and a 10-minute quiet window between repeats of the same alert. Silencing rules cover the rest.
How is the $39/mo hosted version different from self-hosted?
Same binary. Same config. We host it, keep the LLM key under our budget (you can BYOK to escape that), and give you a status page. If you're watching fewer than ~30 services, it's cheaper than a VM.
Is my data going anywhere weird?
Check results stay in tellr's local store (SQLite, ~5 MB for 90 days). The LLM sees only what we listed in §06. We don't call home.
you@laptop ~ $ tellr version
tellr 0.9.2 · commit a3f21c · built 2 days ago · apache-2.0
 
you@laptop ~ $