feat(nextjs): Send events consistently on platforms that don't support streaming #6578
Conversation
size-limit report 📦
So as discussed in person, the biggest concern here is an increase in response time of the serverless functions. I'd say we accept this for now in favour of consistency, especially given that there's precedent for this (Axiom).
We can always revert if this becomes necessary.
I understand that this has already been implemented, but you should be able to subscribe to the shutdown signal when there is a Lambda extension, so you can both send concurrently during execution (because the Lambda wakes up) and drain the remainder on shutdown. Since Lambdas with external extensions get 2s of shutdown time past serving the request, it might be possible to detect that in the Lambda environment too, even if Sentry's own extension isn't available. I use this method to batch requests to SQS and not pay the overhead on every call.
I re-read it and realised this seems to only affect Next.js, so not my concern now haha. But you can register an event handler on the shutdown signal anywhere. I guess the question now is whether you would be able to tell if there's an extension registered so Lambda gives you shutdown time; I would assume yes, certainly if you involve your own in some capacity.
New to the extension API. Would you care to quickly explain how that would work?
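For reference, a minimal sketch of the drain-on-shutdown idea described above, assuming a Node runtime where an external extension is registered (so the runtime receives a `SIGTERM` and roughly 2s of shutdown time). The buffering helper and `sendBatch` function are hypothetical and not part of this PR:

```ts
// Hypothetical buffer-and-drain sketch: batch work during invocations and
// drain whatever is left when Lambda signals shutdown via SIGTERM.
// (SIGTERM is only delivered to the runtime when an extension is registered.)
const pending: string[] = [];

// Hypothetical sender; in the SQS example this would be a SendMessageBatch call.
async function sendBatch(items: string[]): Promise<void> {
  // ... send `items` somewhere ...
}

export function enqueue(item: string): void {
  pending.push(item);
}

process.on('SIGTERM', () => {
  // With an external extension registered we get ~2s of shutdown time,
  // so drain the remaining items and exit promptly.
  const items = pending.splice(0, pending.length);
  void sendBatch(items).finally(() => process.exit(0));
});
```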
Resolves #6117
This PR ensures that events are sent consistently on platforms that don't support streaming (i.e., platforms where no computation is possible after a response has been sent). For now, we're targeting AWS Lambdas and Vercel with this.
API Routes
For API routes we're patching `res.send`, `res.json`, and `res.end` to block sending the response until events have been flushed. This comes with the big drawback that we're delaying responses, but sending events after `res.send`, `res.json`, or `res.end` has been called is unreliable, so it is a necessary evil.
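A rough sketch of that patching idea, assuming the `flush` helper exported by `@sentry/nextjs`; the wrapper name and the 2s timeout are illustrative, and only `res.end` is shown (the PR also patches `res.send` and `res.json`):

```ts
import { flush } from '@sentry/nextjs';
import type { NextApiResponse } from 'next';

// Illustrative only: delay the real `res.end` until queued Sentry events
// have been flushed, since the platform may freeze right after responding.
export function wrapResponseEnd(res: NextApiResponse): void {
  const originalEnd = res.end.bind(res);
  res.end = ((...args: unknown[]) => {
    void flush(2000)
      .catch(() => {
        // A failed flush must never block the response.
      })
      .then(() => originalEnd(...(args as Parameters<typeof originalEnd>)));
    return res;
  }) as typeof res.end;
}
```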
Data Fetchers
For data fetchers, we face two problems. First, doing anything after `res.end` is again very inconsistent. Secondly, we generally do not know which data fetchers are going to be called for a particular request, so apart from `res.end` we have no way of knowing when a request has been completed, and thus when we should finish a transaction.
The workaround we came up with for now is to create a transaction for the incoming request and to create individual transactions for each data fetcher, which are then attached to the request transaction. That way we don't have to worry about sending the transaction(s) in time; we just send them before the data fetcher returns.
This again comes with a drawback: we're sending a lot of transactions, which will probably affect people's quota by a not insignificant amount.
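A minimal sketch of the per-data-fetcher transaction idea, assuming `startTransaction` and `flush` from `@sentry/nextjs`; the wrapper, its name, and the `op` value are illustrative rather than the PR's actual implementation:

```ts
import { startTransaction, flush } from '@sentry/nextjs';
import type { Transaction } from '@sentry/types';

// Illustrative wrapper: give each data fetcher its own transaction, link it to
// the request transaction via trace/parent IDs, and send it before returning.
export async function withDataFetcherTransaction<T>(
  requestTransaction: Transaction,
  fetcherName: string,
  dataFetcher: () => Promise<T>,
): Promise<T> {
  const transaction = startTransaction({
    name: fetcherName,
    op: 'function.nextjs.datafetcher',
    // Attach to the request transaction so everything shows up as one trace.
    traceId: requestTransaction.traceId,
    parentSpanId: requestTransaction.spanId,
  });

  try {
    return await dataFetcher();
  } finally {
    transaction.finish();
    // Flush before returning, since the platform may freeze right afterwards.
    await flush(2000).catch(() => {
      // Ignore flush failures; they shouldn't break the data fetcher.
    });
  }
}
```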