Unlimited async operation with Python, e.g., performing an unlimited number of HTTP requests.
This method is inspired by the article "Making an Unlimited Number of Requests with Python aiohttp + pypeln".
Original method by Cristian Garcia:
```python
from aiohttp import ClientSession, TCPConnector
import asyncio
import sys

import pypeln as pl

limit = 1000
urls = ("http://007-server-1:8080/{}".format(i) for i in range(int(sys.argv[1])))


async def main():
    async with ClientSession(connector=TCPConnector(limit=0)) as session:

        async def fetch(url):
            async with session.get(url) as response:
                print(url)
                return await response.read()

        await pl.task.each(
            fetch,
            urls,
            workers=limit,
        )


asyncio.run(main())
```
- Method in `client/bench.py` file.
- It uses `pypeln` to parallelize and pipeline the data-processing tasks. `limit = 1000` is the maximum number of concurrent HTTP requests allowed, so if you increase the limit, memory and CPU usage will increase too.
```python
from aiohttp import ClientSession, TCPConnector
import asyncio
import sys

limit = 1000
sem = asyncio.Semaphore(limit)


async def fetch(url, session):
    async with sem:
        async with session.get(url) as response:
            print(url)
            return await response.read()


async def main():
    urls = [f"http://007-server-1:8080/{i}" for i in range(int(sys.argv[1]))]
    async with ClientSession(connector=TCPConnector(limit=0)) as session:
        tasks = [fetch(url, session) for url in urls]
        await asyncio.gather(*tasks)


asyncio.run(main())
```
- Method in `client/bench_v2.py` file.
- It uses `asyncio.Semaphore` to limit the number of concurrent tasks, thereby reducing memory usage by approximately 50%.
- OS: Ubuntu 22.04.4 LTS (jammy)
- 8 GB memory, Intel(R) i3-10100F CPU @ 3.60GHz, 8 cores, 4 cores per socket
- Docker version: 26.1.0, build 9714adc
The version-2 method that uses `asyncio.Semaphore` in `client/bench_v2.py` is better than the original Cristian Garcia method in `client/bench.py` only in memory usage, where it uses almost 50% less; other metrics, like elapsed time and CPU usage, are almost the same.
This chart shows some results for `bench.py` vs `bench_v2.py`, comparing memory usage across different request counts:
As we can see, V2 in `bench_v2.py` wins 💪
```sh
# For 100K requests
# bench.py
./timed.sh python bench.py 100_000
# Memory usage: 402924KB Time: 157.43s CPU usage: 44%

# bench_v2.py
./timed.sh python bench_v2.py 100_000
# Memory usage: 206204KB Time: 154.36s CPU usage: 31%

# For 1M requests
# bench.py
./timed.sh python bench.py 1_000_000
# Memory usage: 3639804KB Time: 1551.67s CPU usage: 38%

# bench_v2.py
./timed.sh python bench_v2.py 1_000_000
# Memory usage: 1607608KB Time: 1522.06s CPU usage: 30%
```
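A quick check of the reported numbers confirms the roughly 50% memory saving:

```python
# Reported peak memory (KB) from the benchmark runs above
mem = {
    "100K": {"bench": 402924, "bench_v2": 206204},
    "1M": {"bench": 3639804, "bench_v2": 1607608},
}

for n, m in mem.items():
    saving = 1 - m["bench_v2"] / m["bench"]
    print(f"{n} requests: bench_v2 uses {saving:.0%} less memory")
# → 100K requests: bench_v2 uses 49% less memory
# → 1M requests: bench_v2 uses 56% less memory
```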
Each service is separated into its own container, `client` and `server`. You can use the server container if you want to test locally, or you can just run the client.
- Server service: in `./server`, it contains `./server/server.py`, which functions as a simple HTTP server using aiohttp. We create a `http://localhost:8080/{name}` route that simply delays for a random time between 1-3 seconds and returns a response with the name you pass.
- Client service: in `./client`, it includes the original method (`./client/bench.py`) by Cristian Garcia and the enhanced version (`./client/bench_v2.py`) with `asyncio.Semaphore`. We use the `timed.sh` script to measure the Python app's elapsed time as well as its CPU and memory usage.
We use `docker compose` to handle it:

```sh
# first go into the project directory and simply run:
docker compose up
# to run in background or detached mode, add the '-d' flag:
docker compose up -d
```
This command starts both the server and client services. The server runs on http://localhost:8080 on your system.
The client service does not execute anything by itself; it keeps the container alive by running `tail -f /dev/null`. Alternatively, you can modify the `./client/Dockerfile` to execute the Python script or benchmark test.
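The compose file itself is not reproduced above; based on the description, a minimal `docker-compose.yml` would look roughly like this (service names and build paths are assumptions inferred from the text, not the actual file):

```yaml
services:
  server:
    build: ./server          # simple aiohttp server from ./server/server.py
    ports:
      - "8080:8080"          # exposes http://localhost:8080/{name}
  client:
    build: ./client
    depends_on:
      - server
    command: tail -f /dev/null  # keep the container alive for exec/attach
```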
- Build the image:

```sh
docker build -t py-bench ./client  # ./client is the directory containing the Dockerfile
```

- Run the container:

```sh
docker run -d py-bench
```
For development, utilize the VSCode Dev Container. Run the container, then use "Attach to Running Container" in VSCode for a seamless development environment. Any changes made will be applied automatically.
To execute commands inside the container, open a shell in VSCode or use the following in your terminal:
```sh
docker ps
# List all running containers and note the client container's name.
docker exec -it <container name> sh
# Replace <container name> with the actual container name.
# Use a specific command instead of 'sh' to just run a command in the container and close the shell.
```
- Use `./client/timed.sh` to measure the Python app's runtime, CPU, and memory usage. For example: `./timed.sh python bench.py 10000`
- `bench.py` and `bench_v2.py` take the request count as a parameter.
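The contents of `timed.sh` are not reproduced here. As a rough illustration of the kind of measurement it performs, similar numbers can be obtained from Python's standard library (a sketch, not the actual script; `resource` is Unix-only, and `ru_maxrss` is reported in KB on Linux):

```python
import resource
import subprocess
import sys
import time

# Run a child command and read its wall time and peak memory from OS
# accounting, similar in spirit to what a timed.sh wrapper reports.
start = time.monotonic()
subprocess.run([sys.executable, "-c", "print('hello')"], check=True)
elapsed = time.monotonic() - start

# Peak resident set size of finished child processes (KB on Linux).
peak_kb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print(f"Time: {elapsed:.2f}s  Memory usage: {peak_kb}KB")
```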