Problem Statement
We operate a real-time AI chat platform (LibreChat-based) that streams LLM responses to clients using Server-Sent Events (SSE).
When the Sentry Node.js SDK (@sentry/node) is initialized, SSE responses stop streaming incrementally and instead appear to be buffered, causing the client UI to hang or fail to display tokens in real time.
If Sentry initialization is removed entirely, SSE streaming works correctly.
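For reference, a minimal version of the affected pattern (a simplified sketch, not our actual code; the `/stream` route and payload are illustrative):

```ts
import express from "express";

const app = express();

// Minimal SSE endpoint of the kind affected. With @sentry/node
// initialized, these incremental writes appear buffered; without
// Sentry, each event reaches the client as soon as it is written.
app.get("/stream", (req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });

  // Emit one token per second; each write should flush immediately.
  const timer = setInterval(() => {
    res.write(`data: ${JSON.stringify({ token: "hello" })}\n\n`);
  }, 1000);

  req.on("close", () => {
    clearInterval(timer);
    res.end();
  });
});

app.listen(3000);
```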
We have confirmed:
- The issue affects Express-based SSE endpoints
- It occurs even when streaming routes are fully excluded from transactions, spans, events, and profiling
- The behavior suggests response buffering occurs before route-level filtering is applied
This makes it currently impossible to use Sentry in production for applications that rely on long-lived SSE connections (e.g., AI token streaming).
Solution Brainstorm
Potential approaches that would enable safe SSE support:
- Opt-out of response buffering: a global or per-route option to disable any response buffering or wrapping behavior in the Node SDK.
- SSE-aware HTTP instrumentation: detect `Content-Type: text/event-stream` and bypass any logic that intercepts or delays writes to `res` (see the sketch after this list).
- Dedicated streaming-safe integration: a lightweight HTTP integration mode for long-lived streaming responses that:
  - Captures errors only
  - Avoids transaction lifecycle tracking tied to response completion
- Official best-practice guidance: document a recommended pattern for monitoring SSE / AI streaming endpoints without breaking real-time delivery.
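To illustrate the SSE-aware idea: nothing like this exists in the SDK today, but instrumentation could skip wrapping a response once it declares itself an event stream. A hypothetical sketch follows (in practice the check would likely need to run lazily, e.g. at first write, since headers may not be set when the response is first wrapped):

```ts
import type { ServerResponse } from "http";

// Hypothetical guard: skip wrapping res.write / res.end for
// responses that declare Content-Type: text/event-stream.
function isEventStream(res: ServerResponse): boolean {
  const contentType = res.getHeader("content-type");
  return (
    typeof contentType === "string" &&
    contentType.toLowerCase().startsWith("text/event-stream")
  );
}

function wrapWriteUnlessStreaming(res: ServerResponse): void {
  if (isEventStream(res)) {
    return; // leave streaming responses untouched
  }
  const originalWrite = res.write.bind(res) as (...args: any[]) => boolean;
  res.write = ((...args: any[]) => {
    // Instrumentation bookkeeping would happen here before
    // delegating to the original write.
    return originalWrite(...args);
  }) as typeof res.write;
}
```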
Additional Context
- Stack: Node.js + Express
- Streaming protocol: Server-Sent Events (SSE)
- Use case: Real-time AI / LLM token streaming
- Related discussion: Performance Monitoring for Web Streams (Response API) #9633
- We attempted extensive route exclusions (ignoreTransactions, beforeSend, beforeSendTransaction, shouldCreateSpanForRequest) with no success, indicating buffering happens prior to those hooks (sketched below).
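For completeness, our exclusion attempts looked roughly like the following (option names and placement are from the v7-era SDK; the `/stream` route pattern is illustrative). None of it restored incremental delivery:

```ts
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 1.0,
  integrations: [
    new Sentry.Integrations.Http({
      tracing: {
        // Skip spans for requests to streaming endpoints.
        shouldCreateSpanForRequest: (url) => !url.includes("/stream"),
      },
    }),
  ],
  // Drop transactions for streaming routes by name.
  ignoreTransactions: [/\/stream/],
  // Drop error events tied to streaming routes.
  beforeSend(event) {
    if (event.request?.url?.includes("/stream")) return null;
    return event;
  },
  // Drop transaction events for streaming routes as a second layer.
  beforeSendTransaction(event) {
    if (event.transaction?.includes("/stream")) return null;
    return event;
  },
});
```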
This issue significantly limits Sentry adoption for modern AI platforms where streaming is a core requirement.
Priority
React with 👍 to help prioritize this issue. Please use comments to provide useful context, avoiding "+1" or "me too" comments, to help us triage it.