tellr, the whole thing
Everything you need to run tellr. It's a single binary, a YAML file, and one webhook. This page has no search box. Use cmd-F.
01 · install
One binary. No runtime. No sidecar. No operator. It runs on anything that can run a statically-linked 18 MB Go binary.
Prefer to read before you curl-pipe? Good instinct. Here's the same thing in pieces:
```bash
# 1. download the binary for your OS/arch
curl -L https://github.com/tellr/tellr/releases/latest/download/tellr-$(uname -s)-$(uname -m) \
  -o /usr/local/bin/tellr

# 2. make it executable
chmod +x /usr/local/bin/tellr

# 3. verify checksum (printed on every release)
sha256sum /usr/local/bin/tellr
```
02 · hello, world
The fastest path from nothing to a real alert in Slack. This takes about a minute.
- Make a file called `tellr.yaml`. Put one check in it (copy the snippet below).
- Paste your Slack webhook URL where it says `url:`.
- Run `tellr watch`. You'll see checks tick by.
- Break something on purpose: `curl example.com/does-not-exist` until errors climb, or just set the threshold low.
- Watch an alert land in Slack, with a written explanation. Done.
```yaml
# tellr.yaml · minimum viable config
checks:
  - name: api.prod
    type: http
    url: https://api.example.com/healthz
    every: 30s
    expect: 2xx

alerts:
  - to: slack
    url: https://hooks.slack.com/services/T0.../B0.../xxx
    channel: "#ops-alerts"   # quoted: a bare # starts a YAML comment
```
03 · config: tellr.yaml
Tellr's config has two required sections: checks (things to watch) and alerts (where to send them when something breaks), plus three optional ones: llm, silence, and routes.
The file is watched for changes. Save it, and the new config takes effect on the next check tick. No restart.
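The pick-it-up-on-the-next-tick reload model is simple enough to sketch. This is not tellr's implementation, just the general shape: compare the file's mtime at the top of each tick and re-parse only when it changed (`ConfigReloader` and its `parse` hook are names invented for this sketch):

```python
import os

class ConfigReloader:
    """Re-reads a config file whenever its mtime changes.

    Illustrative sketch of reload-on-next-tick; not tellr's actual code.
    """

    def __init__(self, path, parse):
        self.path = path
        self.parse = parse        # e.g. yaml.safe_load on the file's text
        self.mtime = None
        self.config = None

    def current(self):
        """Called at the top of every check tick."""
        mtime = os.path.getmtime(self.path)
        if mtime != self.mtime:   # file was saved since the last tick
            with open(self.path) as f:
                self.config = self.parse(f.read())
            self.mtime = mtime
        return self.config
```

No watcher thread, no inotify: polling once per tick is enough when ticks are seconds apart.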
```yaml
llm:
  provider: openai        # openai | anthropic | ollama | off
  model: gpt-4o-mini
  key: ${OPENAI_API_KEY}
  budget_per_day: $5      # hard cap; over budget = plain alerts

checks:
  - name: api.prod
    type: http
    url: https://api.example.com/healthz
    every: 30s
    expect: 2xx
    timeout: 5s
  - name: db.prod
    type: postgres
    dsn: ${DATABASE_URL}
    disk_warn: 80%
    replica_lag_warn: 10s
  - name: signup_funnel
    type: llm             # ask the model to read the data
    query: select date, count(*) from signups group by 1
    every: 1h
    ask: "Is today's signup count materially below the 7-day median?"

alerts:
  - to: slack
    url: ${SLACK_WEBHOOK}
    channel: "#ops-alerts"
    only: [error, warn]
```
04 · checks
Every check has a `name`, a `type`, and an `every`. The remaining fields depend on the type.
When a check fails, tellr does not alert immediately. For most check types it waits until the same failure occurs twice in a row. This eliminates roughly 90% of false positives without meaningfully slowing you down.
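The same-failure-twice-in-a-row rule fits in a few lines. This is a model of the behavior described above, not tellr's code; `Debouncer` and `should_alert` are names invented here:

```python
class Debouncer:
    """Suppress an alert until the same failure occurs twice in a row."""

    def __init__(self):
        self.last_failure = {}  # check name -> last failure reason

    def should_alert(self, check, failure):
        """failure is None on success, otherwise a reason string."""
        if failure is None:
            self.last_failure.pop(check, None)  # success resets the streak
            return False
        repeated = self.last_failure.get(check) == failure
        self.last_failure[check] = failure
        return repeated  # alert only on the second identical failure
```

Note that the reason has to match: a timeout followed by a 500 is two different first failures, not a streak.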
05 · alert channels
Tellr speaks Slack, Telegram, Discord, email, and plain webhook. That covers every team we've met. If you need PagerDuty or Opsgenie, send the webhook there.
```yaml
alerts:
  - to: slack
    url: ${SLACK_WEBHOOK}
    channel: "#ops-alerts"
  - to: telegram
    bot_token: ${TG_BOT_TOKEN}
    chat_id: -100123456789
  - to: discord
    url: ${DISCORD_WEBHOOK}
  - to: email
    from: alerts@yourco.com
    to_addr: oncall@yourco.com
  - to: webhook             # pagerduty, opsgenie, your own thing
    url: https://events.pagerduty.com/v2/enqueue
    headers:
      Authorization: Token ${PD_TOKEN}
```
Every alert carries three things: a one-line title, a written paragraph, and a machine-readable JSON blob. The paragraph is what the LLM writes. If you turn the LLM off, you get a cleaner plain-text version.
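As a sketch, the three parts might map onto a webhook body like this. The field names (`title`, `summary`, `data`) are assumptions for illustration, not tellr's documented schema:

```python
import json

def build_alert(title, paragraph, data):
    """Assemble the three-part alert body described above.
    Field names are illustrative, not tellr's actual schema."""
    return json.dumps({
        "title": title,        # one-line title
        "summary": paragraph,  # the written paragraph (LLM or plain)
        "data": data,          # machine-readable details
    })

body = build_alert(
    "api.prod is failing",
    "Two consecutive timeouts on /healthz over the last 60s.",
    {"check": "api.prod", "severity": "error"},
)
```

A receiving webhook (PagerDuty, your own endpoint) would key off the `data` blob and ignore the prose.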
06 · bring-your-own LLM
Alerts are written by a model. You plug in your own key, and requests go straight to the provider; they never pass through us.
- Set `llm.provider` to `openai`, `anthropic`, or `ollama`.
- Put the key in the env. Reference it as `${OPENAI_API_KEY}` so it never hits disk in the config file.
- Set a `budget_per_day`. Once hit, tellr falls back to plain alerts until tomorrow.
- If you run Ollama locally, set `provider: ollama` and `url: http://localhost:11434`. No key needed.
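The budget fallback can be modeled in a few lines. A sketch of the described behavior with invented names (`LLMBudget`, `use_llm`); tellr's internals may differ:

```python
from datetime import date

class LLMBudget:
    """Fall back to plain alerts once today's spend hits the cap."""

    def __init__(self, cap_dollars):
        self.cap = cap_dollars
        self.day = date.today()
        self.spent = 0.0

    def use_llm(self, estimated_cost):
        """Return True if this alert may be written by the model."""
        today = date.today()
        if today != self.day:   # "until tomorrow": reset at midnight
            self.day, self.spent = today, 0.0
        if self.spent + estimated_cost > self.cap:
            return False        # over budget -> plain alert
        self.spent += estimated_cost
        return True
```

The cap is checked before spending, so a burst of alerts can never overshoot the budget by more than one request's estimate.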
Set `llm.provider: off` and tellr writes alerts the old way. Less personality, zero token cost, the same thresholds.

07 · silencing & routes
The most requested feature of monitoring tools, usually solved by tabs of muted channels. Tellr puts it in the config.
```yaml
silence:
  - match: check=api.prod
    between: "2026-04-18T02:00Z"
    and: "2026-04-18T04:00Z"
    reason: planned migration
  - match: severity=info
    weekly: "mon-fri 22:00-07:00 Europe/Berlin"
    reason: quiet hours

routes:
  - match: severity=error
    to: [slack, telegram]
  - match: severity=warn
    to: [slack]
  - match: severity=info
    to: [email]
```
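A one-off `between`/`and` window is just a timestamp comparison. A minimal sketch, UTC-only, ignoring the `weekly` form and timezone handling (function names are mine, not tellr's):

```python
from datetime import datetime, timezone

def parse_utc(stamp):
    """Parse the 'YYYY-MM-DDTHH:MMZ' form used in the silence rules."""
    return datetime.strptime(stamp, "%Y-%m-%dT%H:%MZ").replace(tzinfo=timezone.utc)

def silenced(now, between, and_):
    """True if `now` falls inside the one-off silence window."""
    return parse_utc(between) <= now <= parse_utc(and_)
```

The `weekly` form needs real timezone math (DST in Europe/Berlin shifts the window against UTC), which is exactly why it takes a zone name rather than an offset.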
08 · deploy
Tellr is a long-lived process. Run it however you run long-lived processes.
```bash
# systemd unit (recommended for a box you own)
sudo tellr install --systemd
systemctl status tellr

# docker
docker run -d --name tellr \
  -v /etc/tellr:/etc/tellr \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  ghcr.io/tellr/tellr:0.9.2

# fly.io (this is how we run our own instance)
fly launch --from https://github.com/tellr/tellr-fly
```
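`tellr install --systemd` writes the unit for you. For reference, a hand-rolled unit would look roughly like this; the paths, the env file location, and the unit layout here are assumptions, not the generated file:

```ini
# /etc/systemd/system/tellr.service — illustrative, not the generated unit
[Unit]
Description=tellr monitor
After=network-online.target

[Service]
ExecStart=/usr/local/bin/tellr watch
Restart=always
EnvironmentFile=/etc/tellr/env    # OPENAI_API_KEY, SLACK_WEBHOOK, ...

[Install]
WantedBy=multi-user.target
```

`Restart=always` matters: tellr is your alerting path, so the supervisor should bring it back without a human.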
09 · cli reference
10 · faq
Is this going to be yet another dashboard I have to check?
No. There is no dashboard. Alerts land in Slack (or wherever you route them); there is nothing to poll.
What happens when tellr itself goes down?
Point a second watcher at tellr's own /healthz; it posts to a separate channel if tellr is silent for more than 10 minutes. Put that in your config; we've pre-written it at examples/watch-the-watcher.yaml.

Can I use this without the LLM part?
Set `llm.provider: off`. You get plain-text alerts with the same thresholds. Nothing else changes.