Shutdown process is broken in 0.15 #1160
Comments
When I press Ctrl+C … I'm debugging it.
Ok... It was quite fast to find it 😅 The problem is that in reload mode, we terminate the process on the …
Seems more like an issue in the OP's lifespan event; can't reproduce it here.
I see. I am using a script for the same purpose (sending SIGTERM to the process group) in Docker, exactly to avoid a dead reloader while the server keeps working. It runs into the same problem with …
This simple app reloads fine and goes through the lifespan events fine too:
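The snippet referenced above is not preserved in this thread. A minimal sketch of the kind of "simple app" meant here (assumed, not the original code) is a bare ASGI callable that answers the lifespan protocol directly, so startup/shutdown handling is visible:

```python
# Assumed minimal ASGI app (not the original snippet): it handles the
# lifespan protocol explicitly and serves a trivial HTTP response.
async def app(scope, receive, send):
    if scope["type"] == "lifespan":
        while True:
            message = await receive()
            if message["type"] == "lifespan.startup":
                print("startup")
                await send({"type": "lifespan.startup.complete"})
            elif message["type"] == "lifespan.shutdown":
                print("shutdown")
                await send({"type": "lifespan.shutdown.complete"})
                return
    elif scope["type"] == "http":
        await send({"type": "http.response.start", "status": 200,
                    "headers": [(b"content-type", b"text/plain")]})
        await send({"type": "http.response.body", "body": b"ok"})

# Run with something like: uvicorn main:app --reload --lifespan on
```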
I'm using just the FastAPI app itself, and it shuts down properly without …
@euri10 I'm reproducing your code, and it does raise the error. EDIT: On reload it's fine. It's just when you Ctrl+C.
Ok, I can get the error on Ctrl+C. I thought it was the same as kill -2, which works fine, as does kill -15.
That's because …
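Background that may explain the difference (stated here as general Unix behavior, not as the truncated reply's original wording): pressing Ctrl+C in a terminal delivers SIGINT to the entire foreground process group, i.e. both the reloader and the server process, whereas kill -2 <pid> signals only that single PID. A small sketch of the difference:

```python
# Illustration of the process-group difference. 12345 is a hypothetical
# reloader PID, not taken from the logs in this issue. Unix only.
import os
import signal

pid = 12345
os.kill(pid, signal.SIGINT)                  # like `kill -2 12345`: one process
os.killpg(os.getpgid(pid), signal.SIGINT)    # like Ctrl+C: the whole foreground group
```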
I get the same error with Starlette 0.16.0 when I send a …
I never had to create a lifespan function in previous versions, but I found that the error disappears if I bootstrap Starlette like so:

```python
async def _lifespan(app):
    import asyncio
    try:
        yield
    except asyncio.exceptions.CancelledError:
        pass

server_app = Starlette(routes=routes, lifespan=_lifespan)
```
…changed the startup to that mode. Uvicorn's --reload option is a great feature that automatically reloads when a file changes, but in a Linux environment Ctrl+C does not work because of a bug in Uvicorn 0.15. Reload mode isn't needed for normal use anyway, so I'm taking this opportunity to split the commands. ref: encode/uvicorn#1160
I'm having exactly the same issue!! Here's the code: https://github.com/igormcsouza/kitty-api/blob/master/api/__init__.py I downgraded to 0.14 just to avoid the error.
Downgrading is not recommended. Reload works normally; it's just that when you Ctrl+C, it will force exit, but network resources are closed anyway.
Even so, the problem persists! I tried once more yesterday. That error at the end causes issues in my routine, so I will stay on 0.14 until it is fixed.
For passers-by: I encountered a similar issue when setting spaCy's …
Hello, we're having this issue when we compile our app with Nuitka. Here is the output: …
Hello all, I get this only when running inside a container; outside a container it works fine. Hope this helps a little.
I am running uvicorn programmatically with workers and getting this issue when I Ctrl-C the main process. It only happens with workers; I think one of the worker processes is generating the exception, since catching the asyncio exception around my uvicorn.run call does not stop the stack trace. Also, it does not happen every time I shut down, only randomly/sporadically. Since I am starting uvicorn from within Python, is there some way to prevent the default Ctrl-C behavior and shut down uvicorn programmatically?
Referring to https://twitter.com/graingert/status/1539697480631197703 — is this applicable here?
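For the programmatic-shutdown question, one pattern is to drive a uvicorn.Server yourself, skip its default signal handlers, and set should_exit when you want it to stop. This is a minimal sketch under assumptions: a single process (the workers option is only honored by uvicorn.run, not by a bare Server) and a hypothetical module path "myapp:app":

```python
import asyncio
import uvicorn

class QuietServer(uvicorn.Server):
    def install_signal_handlers(self) -> None:
        # Skip uvicorn's default SIGINT/SIGTERM handlers so Ctrl-C
        # handling stays under the embedding application's control.
        pass

async def main() -> None:
    config = uvicorn.Config("myapp:app", host="127.0.0.1", port=8000)
    server = QuietServer(config)
    serve_task = asyncio.create_task(server.serve())
    try:
        await asyncio.sleep(30)       # stand-in for the application's own work
    finally:
        server.should_exit = True     # request a graceful shutdown
        await serve_task

asyncio.run(main())
```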
I still have this issue on the latest version.
Please open a discussion in Potential Issues with as many details as possible.
I have made a new repo with this exact problem, but the problem occurs (for me) only when using Docker. If it makes any difference: I use an M1 Mac, so my Docker image is linux/arm64. You can play with the repo: …
I have the same issue in a FastAPI app, and it caused pod restarts. However, decreasing the number of …
I get this issue with uvicorn==0.19.0, fastapi==0.87.0 and Python 3.9 when using uvicorn together with krenew (I need Kerberos). The code I run is contained in main.py with this content: …
When I run … and quit with Ctrl+C, everything is okay. Running … and stopping with Ctrl+C yields the following error: …
I was looking at this issue again to see if I could track it down, and it comes down to handle_exit on Server: Lines 305 to 310 in dd9d5d7
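The snippet embedded at that permalink does not survive in this thread. Roughly paraphrased (not a verbatim copy of the file at dd9d5d7), the handler behaves like this: a second SIGINT while a shutdown is already pending escalates to force_exit.

```python
import signal

# Paraphrase of the referenced Server.handle_exit logic (approximation,
# not the exact source at dd9d5d7).
def handle_exit(self, sig, frame):
    if self.should_exit and sig == signal.SIGINT:
        self.force_exit = True
    else:
        self.should_exit = True
```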
Whenever handle_exit is called twice on one worker, force_exit is set, which creates this stack trace. You can make it happen for all your workers by rapidly pressing Ctrl-C twice. I see the purpose of this behavior when the regular shutdown is blocked and you need to force your app to close. The bug described in this issue is that handle_exit is being called twice on one worker randomly/sporadically, causing it to force_exit even though Ctrl-C is only pressed once. I added some debug messages locally to show this, printing the process ID and when it runs handle_exit/force_exit:
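The captured debug output is not preserved above. Instrumentation along the lines the comment describes (hypothetical names, shown only to make the double call visible) could look like:

```python
import os
import uvicorn

# Hypothetical instrumentation: log the PID and state every time handle_exit
# fires, to see which worker receives the signal twice.
class DebugServer(uvicorn.Server):
    def handle_exit(self, sig, frame):
        print(f"[pid {os.getpid()}] handle_exit(sig={sig}) "
              f"should_exit={self.should_exit} force_exit={self.force_exit}")
        super().handle_exit(sig, frame)
```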
That run was with 4 workers, and again this only happens sometimes (roughly 1 in 5 in my case). Since this issue is considered closed, I will figure out some hack for my app to remove this stack trace, but I wanted to share this info hoping it may lead to an eventual proper fix.
I'm locking this issue since the issue mentioned in the OP was solved, and to avoid confusion. If you think you've found a similar issue, please create a discussion.
Checklist
The bug is reproducible against the latest release and/or master.
Describe the bug
My FastAPI ASGI server cannot shut down properly with uvicorn==0.15, while it can with 0.14.
To reproduce
Set up a minimal FastAPI app and add some functions with logs (prints) to the shutdown event, as in the sketch below.
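A minimal reproduction sketch along those lines (assumed file name main.py; the prints stand in for the OP's real shutdown work, which is not shown in this issue):

```python
# Minimal sketch of the described reproduction (assumed, not the OP's app):
# a FastAPI app whose startup/shutdown hooks just print.
from fastapi import FastAPI

app = FastAPI()

@app.on_event("startup")
async def on_startup():
    print("startup hook ran")

@app.on_event("shutdown")
async def on_shutdown():
    print("shutdown hook ran")   # expected to appear when you Ctrl+C

# uvicorn main:app --reload --lifespan on
```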
Expected behavior
You see all logs (prints) from the functions on shutdown.
Actual behavior
Without --lifespan on, you get
ASGI 'lifespan' protocol appears unsupported.
With --lifespan on, you get the error trace below.
Debugging material
uvicorn scheduler.main:app --host=0.0.0.0 --port ${WEB_PORT:-8000} --reload --lifespan on
INFO: Will watch for changes in these directories: ['/home/dmytro/storage/chimplie/projects/raok-main/raok-scheduler']
INFO: Uvicorn running on http://0.0.0.0:8004 (Press CTRL+C to quit)
INFO: Started reloader process [177653] using statreload
INFO: Started server process [177655]
INFO: Waiting for application startup.
INFO: Tortoise-ORM started, {'default': <tortoise.backends.asyncpg.client.AsyncpgDBClient object at 0x7f63d4a10e50>}, {'models': {'Task': <class 'scheduler.models.task.Task'>, 'Aerich': <class 'aerich.models.Aerich'>}}
INFO: Application startup complete.
^CINFO: Shutting down
INFO: Finished server process [177655]
ERROR: Exception in 'lifespan' protocol
Traceback (most recent call last):
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/uvicorn/lifespan/on.py", line 84, in main
await app(scope, self.receive, self.send)
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 75, in call
return await self.app(scope, receive, send)
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/fastapi/applications.py", line 199, in call
await super().call(scope, receive, send)
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/starlette/applications.py", line 112, in call
await self.middleware_stack(scope, receive, send)
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/starlette/middleware/errors.py", line 146, in call
await self.app(scope, receive, send)
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/starlette/middleware/cors.py", line 70, in call
await self.app(scope, receive, send)
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/starlette/exceptions.py", line 58, in call
await self.app(scope, receive, send)
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/starlette/routing.py", line 569, in call
await self.lifespan(scope, receive, send)
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/starlette/routing.py", line 544, in lifespan
await receive()
File "/home/dmytro/.local/share/virtualenvs/raok-scheduler-hpGGYNLi/lib/python3.8/site-packages/uvicorn/lifespan/on.py", line 135, in receive
return await self.receive_queue.get()
File "/usr/lib64/python3.8/asyncio/queues.py", line 163, in get
await getter
asyncio.exceptions.CancelledError
INFO: Stopping reloader process [177653]
Environment
uvicorn main:app --host=0.0.0.0 --port 8000 --reload