Question
We are trying to choose between technologies at my work, so I thought I'd run a benchmark using both libraries (aiohttp and requests).
I want it to be as fair/unbiased as possible, and I would love the community to take a look at it.
This is my current code:
import asyncio as aio
import aiohttp
import requests
import time

TEST_URL = "https://a-domain-i-can-use.tld"


def requests_fetch_url(url):
    with requests.Session() as session:
        with session.get(url) as resp:
            html = resp.text


async def aio_fetch_url(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as resp:
            html = await resp.text()


# Synchronous benchmark: 10 rounds of 16 sequential requests
t_start_1 = time.time()
for i in range(10):
    [requests_fetch_url(TEST_URL) for i in range(16)]
t_end_1 = time.time()
print("using requests : %.2fs" % (t_end_1 - t_start_1))

# Asynchronous benchmark: 10 rounds of 16 concurrent requests
t_start_2 = time.time()
for i in range(10):
    aio.get_event_loop().run_until_complete(aio.gather(
        *[aio_fetch_url(TEST_URL) for i in range(16)]
    ))
t_end_2 = time.time()
print("using aiohttp : %.2fs" % (t_end_2 - t_start_2))

ratio = (t_end_1 - t_start_1) / (t_end_2 - t_start_2)
print("ratio : %.2f" % ratio)
So, is this benchmark biased? Are there ways to make it more reliable? Should I also monitor CPU and/or RAM usage? Is there anything else I'm missing?
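For the "more reliable" part, here is a minimal sketch of how the timing side alone could be tightened, assuming the requests_fetch_url / aio_fetch_url helpers and TEST_URL defined above are in scope; ROUNDS and BATCH are illustrative names, not part of the original script. It swaps time.time for the monotonic time.perf_counter, adds an untimed warm-up pass, and drives the coroutines with asyncio.run (Python 3.7+). It does not try to settle whether the requests-vs-aiohttp comparison itself is fair.

import asyncio as aio
import time

ROUNDS = 10   # illustrative: number of timed rounds
BATCH = 16    # illustrative: fetches per round


async def aio_fetch_batch(n):
    # Fire n concurrent fetches and wait for all of them.
    await aio.gather(*[aio_fetch_url(TEST_URL) for _ in range(n)])


# Untimed warm-up pass so one-off costs (DNS lookups, module and
# connection setup) don't skew whichever benchmark runs first.
requests_fetch_url(TEST_URL)
aio.run(aio_fetch_url(TEST_URL))

# Timed synchronous rounds: sequential fetches with requests.
t0 = time.perf_counter()
for _ in range(ROUNDS):
    for _ in range(BATCH):
        requests_fetch_url(TEST_URL)
sync_elapsed = time.perf_counter() - t0

# Timed asynchronous rounds: BATCH concurrent fetches per round with aiohttp.
t0 = time.perf_counter()
for _ in range(ROUNDS):
    aio.run(aio_fetch_batch(BATCH))
async_elapsed = time.perf_counter() - t0

print("using requests : %.2fs" % sync_elapsed)
print("using aiohttp : %.2fs" % async_elapsed)
print("ratio : %.2f" % (sync_elapsed / async_elapsed))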
Source: https://stackoverflow.com/questions/50030865/is-that-benchmark-reliable-aiohttp-vs-requests