async endpoints holding back responses #1690
-
I have noticed when using Starlette (first discovered in FastAPI) that Starlette occasionally, but often, will hold back responses when using async endpoints. This is devastating for applications that take a long time to process requests.

**Example of behaviour when using `def`**

Let's say I have a simple application that runs a time-consuming loop before returning a response:

simple_app.py

```python
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from time import perf_counter

def homepage(request):
    start = perf_counter()
    print('entered /')
    val = 0
    for i in range(250_000_000):
        val = val + i
    stop = perf_counter()
    print(f'exiting after {stop-start:.2f}')
    return JSONResponse({'hello': 'world'})

routes = [
    Route("/", endpoint=homepage)
]

app = Starlette(debug=True, routes=routes)
```

Since this uses a `def` endpoint, I assume concurrent requests will each be handled in a separate thread from the thread pool. Since the loop takes about 12 seconds to complete on my computer, I expect 4 concurrent requests to all respond at around the same time, after roughly 48 seconds.

multirequest.py

```python
from multiprocessing import Process
from time import perf_counter
import requests

def send_req(id: int):
    print(f'client {id}: sending request')
    start = perf_counter()
    requests.get('http://localhost:8000/')
    stop = perf_counter()
    print(f'client {id}: received response after {stop-start:.2f}s')

if __name__ == '__main__':
    for i in range(4):
        Process(target=send_req, args=(i,)).start()
```

I run the starlette app with uvicorn:

print from application
print from multirequest.py
Just as expected! 💯 That is fantastic. But...

**Example of behaviour when using `async def`**

Now if I change the aforementioned simple_app.py endpoint to async: `async def homepage(request):`. I would expect the code to run completely synchronously, with no concurrency or parallelism: uvicorn is started with only 1 worker, and I only use blocking code with zero use of `await`. I start the application again:

print from application

As expected, this runs synchronously! Wonderful! Each request is handled in around 12 seconds. But the logs from the clients tell another story:

The first response is returned as expected, but the last 3 are held back and all returned at the same time. On the application side they are processed and ready to respond, but Starlette won't return a response until it has finished all queued requests. This is devastating for performance with endpoints that can take up to a minute to respond, as earlier requests have to wait for several others to finish before getting a response.
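For what it's worth, the usual way to keep an `async def` endpoint from pinning the event loop is to push the CPU-bound work onto a thread. A minimal, self-contained sketch using the stdlib's `asyncio.to_thread` (Python 3.9+), with a short busy-loop standing in for the 12-second one above and a hypothetical `heartbeat` task as a proxy for "other work on the loop" such as sending finished responses:

```python
import asyncio
import time

def blocking_work():
    # Stand-in for the ~12s CPU-bound loop in simple_app.py.
    end = time.perf_counter() + 0.5
    while time.perf_counter() < end:
        pass
    return {'hello': 'world'}

async def heartbeat(ticks):
    # Proxy for other tasks on the loop, e.g. sending finished responses.
    while True:
        ticks.append(time.perf_counter())
        await asyncio.sleep(0.05)

async def main():
    ticks = []
    hb = asyncio.create_task(heartbeat(ticks))
    # The blocking call runs in a worker thread, so the event loop
    # keeps servicing other tasks while we wait for the result.
    result = await asyncio.to_thread(blocking_work)
    hb.cancel()
    try:
        await hb
    except asyncio.CancelledError:
        pass
    return result, len(ticks)

result, tick_count = asyncio.run(main())
print(result, f'heartbeat ticked {tick_count} times during the blocking call')
```

If the heartbeat count is well above zero, the loop stayed responsive during the blocking call; run the same code with a plain `await`-less busy-loop in the coroutine itself and the heartbeat stops entirely.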
-
Your client is incorrect.

simple_app.py

```python
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from time import perf_counter

async def homepage(request):
    start = perf_counter()
    print('entered /')
    val = 0
    for i in range(250_000_000):
        val = val + i
    stop = perf_counter()
    print(f'exiting after {stop-start:.2f}')
    return JSONResponse({'hello': 'world'})

routes = [
    Route("/", endpoint=homepage)
]

app = Starlette(debug=True, routes=routes)
```

client.py

```python
from multiprocessing import Process
from time import perf_counter
import requests

def send_req(id: int):
    print(f'client {id}: sending request')
    start = perf_counter()
    requests.get('http://localhost:8000/')
    stop = perf_counter()
    print(f'client {id}: received response after {stop-start:.2f}s')

if __name__ == '__main__':
    for i in range(4):
        p = Process(target=send_req, args=(i,))
        p.start()
        p.join()
```

```
/home/lotso/PycharmProjects/uvicorn/venv/bin/python /home/lotso/PycharmProjects/uvicorn/dev/client.py
client 0: sending request
client 0: received response after 25.23s
client 1: sending request
client 1: received response after 25.19s
client 2: sending request
client 2: received response after 25.11s
client 3: sending request
client 3: received response after 25.26s

Process finished with exit code 0
```

```
❯ uvicorn dev.simple_app:app
INFO:     Started server process [648626]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
entered /
exiting after 25.16
INFO:     127.0.0.1:50398 - "GET / HTTP/1.1" 200 OK
entered /
exiting after 25.22
INFO:     127.0.0.1:55642 - "GET / HTTP/1.1" 200 OK
entered /
exiting after 25.18
INFO:     127.0.0.1:49378 - "GET / HTTP/1.1" 200 OK
entered /
exiting after 25.10
INFO:     127.0.0.1:42684 - "GET / HTTP/1.1" 200 OK
entered /
exiting after 25.24
INFO:     127.0.0.1:35202 - "GET / HTTP/1.1" 200 OK
```
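For completeness: if the goal is four genuinely overlapping requests with per-request timings (rather than the serialized `p.join()` loop above), a thread-pool client also works. A self-contained sketch under two stated assumptions: a hypothetical throwaway `ThreadingHTTPServer` stands in for the Starlette app, and stdlib `urllib` replaces `requests` so the snippet runs on its own:

```python
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.3)  # stand-in for the slow endpoint
        body = b'{"hello": "world"}'
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet

# Throwaway local server on a random free port.
server = ThreadingHTTPServer(('127.0.0.1', 0), SlowHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def send_req(id: int) -> float:
    start = time.perf_counter()
    with urllib.request.urlopen(f'http://127.0.0.1:{port}/') as resp:
        resp.read()
    return time.perf_counter() - start

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    durations = list(pool.map(send_req, range(4)))
wall = time.perf_counter() - wall_start
server.shutdown()

print('per-request:', [f'{d:.2f}s' for d in durations])
print(f'wall clock:  {wall:.2f}s')  # well under 4 x 0.3s when requests overlap
```

Because the four requests overlap, the wall-clock time stays close to a single request's latency instead of four times it, so a held-back response would show up as one request taking much longer than the others.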
-
Tried to run the same scripts in a poetry environment, and the bug did not persist. Closing this discussion. Feel free to @ me if you also run into a similar problem in the future. |
-
I got here looking for a completely different issue. I don't think this is a problem with Starlette; it's just the way asyncio works. The thing with asyncio is that tasks can't be too greedy, otherwise nothing works properly: if a task blocks the thread for too long, nothing else runs. My guess for what's happening here is:
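That "greedy task" behaviour is easy to demonstrate in isolation: a coroutine that never awaits keeps the event loop from running anything else until it returns. A minimal, self-contained sketch (the `heartbeat`/`greedy` names are made up for the demo):

```python
import asyncio
import time

async def heartbeat(ticks):
    # A well-behaved task: does a tiny bit of work, then yields.
    while True:
        ticks.append(time.perf_counter())
        await asyncio.sleep(0.05)

async def greedy():
    # A greedy task: burns CPU for ~0.5s without a single await,
    # so the event loop cannot schedule anything else meanwhile.
    end = time.perf_counter() + 0.5
    while time.perf_counter() < end:
        pass

async def main():
    ticks = []
    hb = asyncio.create_task(heartbeat(ticks))
    await asyncio.sleep(0.2)              # heartbeat runs freely here
    before = len(ticks)
    await greedy()                        # heartbeat is starved here
    starved = len(ticks) - before
    hb.cancel()
    try:
        await hb
    except asyncio.CancelledError:
        pass
    return before, starved

before, starved = asyncio.run(main())
print(f'ticks in 0.2s of cooperative code: {before}')
print(f'ticks in 0.5s of greedy code:      {starved}')
```

The heartbeat ticks several times during the cooperative sleep but not at all while `greedy` runs, which matches the report: responses that are ready cannot be sent until the loop gets control back.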
> Tried to run the same scripts in a poetry environment, and the bug did not persist.

Considering this, I suspect the bug occurred due to an issue with transitive dependencies on my system. I have not been able to reproduce it in controlled poetry environments.

Closing this discussion. Feel free to @ me if you also run into a similar problem in the future.