Multiple metrics endpoints when using expose + multiple workers #253
Comments
We are observing the same issue and would be interested in an answer as well.
Hey, did you see this discussion?
Hey,
My current approach:
Then these metrics are exposed on the same endpoint as the ones from the instrumentator. Hope it helps someone :)
@tahsintahsin, note that the prometheus_client library is used by this library here as well. It is just a wrapper around prometheus_client that provides a middleware for FastAPI and a few metrics, so the same limitations apply.
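Since the instrumentator is a thin wrapper around prometheus_client, the documented multiprocess mode of prometheus_client is the usual way to consolidate metrics across workers. Below is a minimal sketch, not taken from this thread, so the exact wiring is an assumption: the app is instrumented as usual, but instead of calling `.expose()`, a custom `/metrics` route aggregates the per-worker metric files via `MultiProcessCollector`.

```python
import os

from fastapi import FastAPI, Response
from prometheus_client import (
    CONTENT_TYPE_LATEST,
    CollectorRegistry,
    generate_latest,
    multiprocess,
)
from prometheus_fastapi_instrumentator import Instrumentator

# Multiprocess mode requires this env var to point at a shared, writable,
# initially empty directory *before* any metrics are created in any worker.
assert "PROMETHEUS_MULTIPROC_DIR" in os.environ

app = FastAPI()

# Instrument requests as usual, but skip .expose() so we control how the
# metrics endpoint aggregates data from all worker processes.
Instrumentator().instrument(app)


@app.get("/metrics")
def metrics() -> Response:
    # Collect from the metric files written by every worker, not just the
    # worker that happens to serve this particular request.
    registry = CollectorRegistry()
    multiprocess.MultiProcessCollector(registry)
    return Response(content=generate_latest(registry), media_type=CONTENT_TYPE_LATEST)
```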
I have a FastAPI app that I annotate and expose with the Instrumentator, and it works as expected in that it adds a `/metrics` GET endpoint with all metrics. When I serve this app using more than one worker (let's say with uvicorn), I naturally get more than one `/metrics` endpoint. Which of the various workers I connect to when visiting `{url}/metrics` seems to be randomly decided by the load balancing logic of the parent worker process.

This, however, is an issue, as Prometheus would presumably only ever connect to one randomly selected worker process's `/metrics` endpoint and scrape what is essentially the nth part of the metrics it should get.

Is there a way to avoid this (breaking, imo) issue and consolidate the metrics resulting from exposing the app to the Instrumentator into one metrics endpoint? I have seen other libraries' approach to this (e.g. kserve, which doesn't use this library and relies on the lower level `prometheus-client` library directly), but I would be interested in your opinion.
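For completeness, a hedged sketch of the worker lifecycle side of the same multiprocess pattern, assuming the app is run behind gunicorn with uvicorn workers (e.g. `gunicorn -c gunicorn_conf.py -k uvicorn.workers.UvicornWorker main:app`, where both the config file name and the `main:app` path are placeholders): prometheus_client's multiprocess mode also expects exited workers to be marked as dead, which gunicorn's `child_exit` server hook can do.

```python
# gunicorn_conf.py (placeholder name) -- gunicorn server hooks live in the
# config module passed via `gunicorn -c gunicorn_conf.py ...`.
from prometheus_client import multiprocess


def child_exit(server, worker):
    # Called by gunicorn in the master process after a worker exits. Marking
    # the worker as dead lets the multiprocess collector stop aggregating
    # stale gauge values from that worker's metric files.
    multiprocess.mark_process_dead(worker.pid)
```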