
How to Route Prometheus AlertManager Notifications to Microsoft Teams and Slack

Prometheus AlertManager delivers alerts through receivers, named bundles of one or more notification integrations. Here is how SRE teams fan one AlertManager route out to Slack, Teams, and every other platform simultaneously.

5 min read
Alex Morgan

Alex Morgan is a solutions architect at SyncRivo focused on Prometheus observability, AlertManager configuration, and cross-platform notification infrastructure for SRE teams.


AlertManager's Receiver Architecture

Prometheus AlertManager routes alerts through a configuration file (alertmanager.yml) that defines receivers and routing rules. Each receiver specifies a destination: a Slack webhook URL, a PagerDuty routing key, an OpsGenie API key, or a generic webhook URL.

To notify both Slack and Teams from one alert, AlertManager lets a single receiver carry multiple notification configs: you define a slack_configs entry and a Teams entry (via webhook_configs) under the same receiver name. This works, but it creates a maintenance pattern that compounds over time.
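The built-in pattern looks roughly like this. A minimal sketch, not a complete config: both webhook URLs are placeholders, and recent AlertManager releases also ship a native msteams_configs option as an alternative to the generic webhook.

```yaml
receivers:
  - name: slack-and-teams
    slack_configs:
      - api_url: https://hooks.slack.com/services/PLACEHOLDER   # placeholder Slack webhook
        channel: '#oncall'
    webhook_configs:
      # Placeholder Teams incoming-webhook URL, receiving the generic payload
      - url: https://example.webhook.office.com/webhookb2/PLACEHOLDER
        send_resolved: true

route:
  receiver: slack-and-teams
```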

Every new messaging destination requires a new receiver block in alertmanager.yml. Every URL rotation requires an alertmanager.yml change, validation (amtool check-config), and a config reload (a SIGHUP or a POST to AlertManager's /-/reload endpoint, or a pod restart in Kubernetes). For teams running multiple AlertManager instances (production, staging, regional clusters), the config change must be propagated everywhere.

The receiver list also grows with organizational complexity: post-M&A environments add Teams for the acquired entity alongside existing Slack. Expanding to Webex for a NOC team adds another receiver. Each addition is a configuration change in a file that lives in version control and requires deployment.

Consolidating to a Single Webhook Receiver

Replace all per-platform receivers with a single webhook_configs receiver pointing to SyncRivo. SyncRivo handles fan-out to Slack, Teams, Webex, Google Chat, and Zoom. Routing rules in SyncRivo replace the per-receiver routing logic in alertmanager.yml.

AlertManager configuration (alertmanager.yml):

receivers:
  - name: syncrivo-fanout
    webhook_configs:
      - url: https://hooks.syncrivo.ai/webhook/YOUR_ENDPOINT_ID
        send_resolved: true    # also notify when an alert clears

route:
  receiver: syncrivo-fanout
  group_by: [alertname, cluster, service]  # alerts sharing these labels batch into one notification
  group_wait: 30s        # wait before sending the first notification for a new group
  group_interval: 5m     # wait before notifying about new alerts added to an existing group
  repeat_interval: 4h    # resend interval while an alert keeps firing

In SyncRivo, configure routing to replace the routing tree you previously maintained in alertmanager.yml: critical severity → Slack #oncall + Teams #engineering simultaneously; warning → Slack only; cluster-level alerts → Slack #infra; security alerts → Slack #security + Teams #ciso.

Setup (20 minutes):

  1. In SyncRivo, connect Slack and Teams via OAuth. Create a Webhook source, copy the URL.
  2. Update alertmanager.yml to replace existing receiver definitions with a single webhook_configs receiver pointing to SyncRivo.
  3. Reload the AlertManager config: send the process a SIGHUP, POST to its /-/reload endpoint, or restart the AlertManager pod.
  4. In SyncRivo, configure routing rules using AlertManager's label fields: severity, alertname, cluster, env.
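For steps 2 and 3, the change can be validated and applied without a restart. A sketch, assuming amtool is installed and AlertManager listens on localhost:9093; the StatefulSet name in the last command is hypothetical and depends on your deployment:

```shell
# Validate the edited file first; amtool ships alongside AlertManager
amtool check-config alertmanager.yml

# Hot-reload: AlertManager re-reads its config on a POST to /-/reload
curl -X POST http://localhost:9093/-/reload

# Kubernetes alternative: restart the pods so they pick up the updated ConfigMap
kubectl rollout restart statefulset/alertmanager -n monitoring
```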

Routing AlertManager Labels to the Right Audience

AlertManager labels are the routing primitives. SyncRivo routing rules can match on any label:

severity: critical: Route to Slack #oncall for the on-call engineer and Teams #engineering for management visibility. Critical alerts are always dual-platform.

severity: warning: Route to Slack #alerts for the engineering team. Warning-level alerts are pre-critical monitoring signals — engineering needs them, management does not.

env: production: Production alerts always route to both platforms. Staging and development environment alerts route to Slack only.

team: security: Security-labeled alerts route to Slack #security and Teams for the CISO or security leadership channel.

alertname: Watchdog: The AlertManager heartbeat alert. Route it to a dedicated Slack channel for monitoring the monitoring system; it is not a real alert and should not reach Teams.

Resolved alerts: With send_resolved: true, AlertManager sends a resolution notification when an alert clears. Route resolved notifications back to the originating channel threads on both platforms. SREs in Slack and managers in Teams both see the resolution without manual follow-up.
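The routing above hinges on the labels AlertManager includes in each alert of its webhook payload. A minimal Python sketch of that matching logic; the channel names and rule structure are illustrative only, not SyncRivo's actual rule syntax, and the payload is truncated to the fields the rules touch:

```python
# Illustrative label-based routing over an AlertManager webhook payload.
# The payload shape below follows AlertManager's version-4 webhook format.

def route_alert(alert):
    """Return the destination list for one alert, per the rules above."""
    labels = alert["labels"]

    if labels.get("alertname") == "Watchdog":
        return ["slack:#watchdog"]        # heartbeat: never reaches Teams

    destinations = []
    if labels.get("severity") == "critical" or labels.get("env") == "production":
        destinations = ["slack:#oncall", "teams:#engineering"]   # dual-platform
    elif labels.get("severity") == "warning":
        destinations = ["slack:#alerts"]                         # engineering only

    if labels.get("team") == "security":
        destinations += ["slack:#security", "teams:#ciso"]

    # Resolved alerts (status == "resolved") route to the same channels,
    # so both platforms see the clear without manual follow-up.
    return destinations

payload = {                               # truncated v4 webhook payload
    "version": "4",
    "status": "firing",
    "alerts": [
        {"status": "firing",
         "labels": {"alertname": "HighErrorRate", "severity": "critical",
                    "cluster": "eu-west", "env": "production"}},
        {"status": "firing",
         "labels": {"alertname": "DiskFillingUp", "severity": "warning",
                    "cluster": "eu-west", "env": "staging"}},
    ],
}

for a in payload["alerts"]:
    print(a["labels"]["alertname"], "->", route_alert(a))
# HighErrorRate -> ['slack:#oncall', 'teams:#engineering']
# DiskFillingUp -> ['slack:#alerts']
```

In a real deployment this table lives in the SyncRivo UI rather than in code; the point is that every rule keys off a label AlertManager already attaches to the alert.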

Works with Grafana, Thanos, and Mimir

Prometheus AlertManager is commonly used as the alerting layer for Grafana, Thanos (long-term storage), and Grafana Mimir. SyncRivo receives AlertManager webhook payloads regardless of the upstream metric storage layer — the alertmanager.yml configuration is the same whether your metrics come from single-cluster Prometheus or a Thanos query layer spanning multiple clusters.

For the full AlertManager configuration reference, label-based routing examples, and multi-cluster setup patterns, see the Prometheus AlertManager in Slack & Teams integration guide.

Ready to connect your messaging platforms?

Bridge your messaging platforms in 15 minutes

Connect Slack, Teams, Google Chat, Webex, and Zoom with any-to-any routing. No guest accounts. No migration. SOC 2 & HIPAA ready.