New Relic Alert Channels — The Multi-Platform Problem
New Relic supports Slack, Microsoft Teams, PagerDuty, and webhook as alert notification destinations. On paper, this covers the common use cases. In practice, there is a structural limitation: each notification channel is configured independently, and alert policies must be connected to each channel separately.
For an observability team that needs Slack for on-call engineers and Teams for engineering management, the workflow looks like this: create a Slack notification channel and connect alert policies to it, then create a Teams notification channel and connect the same policies to it, and from then on maintain both configurations in parallel.
For teams with New Relic alert policies that have complex conditions — APM error rate thresholds, infrastructure CPU spikes, Synthetic monitor failures, browser error rate anomalies — maintaining duplicate channel connections doubles configuration complexity. A new alert condition must be connected to both channels. A policy change must be verified across both.
One Endpoint, All Platforms
The alternative architecture: connect New Relic's generic webhook destination to SyncRivo, and let SyncRivo handle distribution across all platforms.
Configure a New Relic "Webhook" notification channel pointing to your SyncRivo endpoint. In SyncRivo, define routing:
- Critical alerts to Slack #incidents and Teams #engineering simultaneously.
- Warning-level alerts to Slack only.
- Infrastructure alerts to a dedicated channel on whichever platform the SRE team uses.
- Resolved alerts back to the originating thread on all platforms.
Setup (15 minutes):
- Connect Slack and Teams to SyncRivo via OAuth.
- Create a Webhook source in SyncRivo. Copy the endpoint URL.
- In New Relic, go to Alerts & AI → Notification Channels → New Channel → Webhook. Paste the SyncRivo endpoint. Configure the payload template (New Relic's default JSON payload is supported by SyncRivo without modification).
- Connect your alert policies to this notification channel.
- In SyncRivo, configure routing rules based on New Relic's severity and entity fields.
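The last step above can be sketched in a few lines. The payload fields (`severity`, `current_state`, `condition_name`, `policy_name`) follow the shape of New Relic's classic webhook JSON; the destination strings and the routing logic itself are a hypothetical illustration of SyncRivo-style rules, not its actual configuration syntax.

```python
import json

# Illustrative payload in the shape of New Relic's classic webhook JSON.
# Field names follow the legacy webhook template; values are made up.
payload = json.loads("""
{
  "severity": "CRITICAL",
  "current_state": "open",
  "condition_name": "APM error rate > 5%",
  "policy_name": "Checkout service"
}
""")

def destinations(alert: dict) -> list[str]:
    """Hypothetical severity-based routing: critical alerts fan out to
    both platforms, warnings stay on Slack, resolutions go everywhere."""
    if alert["current_state"] == "closed":
        return ["slack:#incidents", "teams:#engineering"]
    if alert["severity"] == "CRITICAL":
        return ["slack:#incidents", "teams:#engineering"]
    return ["slack:#incidents"]

print(destinations(payload))  # critical + open, so both platforms
```

Because the rules key off fields already present in the default payload, no custom payload template is needed on the New Relic side.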
Alert Types and Routing by Role
APM error rate breach: Route to Slack for the development team that owns the service. If the breach exceeds a critical threshold, also route to Teams for the engineering manager who handles incident escalation.
Infrastructure alert (host CPU, memory, disk): Route to Slack for the SRE or DevOps team. These are operational alerts that require technical response, not executive visibility.
Synthetic monitor failure (external endpoint down): Route to both platforms. External-facing failures have customer impact and need both the technical responder and the account/support owner notified.
NRQL alert condition (custom business metric): Route based on the metric's business significance. A revenue-related metric breach should reach Teams (where business leadership monitors); an application performance metric should reach Slack (where engineers operate).
Alert resolved: Route to the originating channel threads on all platforms. Engineers on Slack and managers on Teams both see the resolution without a manual update.
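The role-based rules above amount to a routing table. A minimal sketch, assuming hypothetical alert-type keys and channel names (none of these identifiers come from New Relic or SyncRivo):

```python
# Hypothetical routing matrix mirroring the role-based rules above.
# Alert-type keys and channel names are assumptions for illustration.
ROUTES = {
    "apm_error_rate":    ["slack:#service-team"],
    "apm_critical":      ["slack:#service-team", "teams:#eng-mgmt"],
    "infrastructure":    ["slack:#sre"],
    "synthetic_failure": ["slack:#incidents", "teams:#support"],
    "nrql_business":     ["teams:#leadership"],
    "nrql_performance":  ["slack:#service-team"],
    "resolved":          ["slack:#incidents", "teams:#eng-mgmt"],
}

def route(alert_type: str) -> list[str]:
    # Unrecognized alert types fall back to the Slack incidents channel.
    return ROUTES.get(alert_type, ["slack:#incidents"])
```

Keeping the matrix in one place is the point of the architecture: adding a platform means appending a destination to a row here, not touching New Relic.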
For teams expanding beyond Slack and Teams — adding Webex for a newly acquired entity, or Google Chat for a remote office — SyncRivo adds the destination without requiring any New Relic reconfiguration.
For the full routing matrix, New Relic webhook payload reference, and comparison with native per-channel configuration, see the New Relic Alerts in Slack & Teams integration guide.
Ready to connect your messaging platforms?