# thanks Eli! https://github.com/seemethere
from asyncpg import create_pool
from sanic import Sanic
from sanic.response import json

DB_CONFIG = {}  # FIXME: your DB config here


def jsonify(records):
    """
    Convert asyncpg Record objects into a JSON-serializable list of dicts.
    """
    return [dict(r.items()) for r in records]


app = Sanic(__name__)


@app.listener('before_server_start')
async def register_db(app, loop):
    # Create one global connection pool shared by all request handlers.
    app.pool = await create_pool(**DB_CONFIG, loop=loop, max_size=100)
    async with app.pool.acquire() as connection:
        await connection.execute('DROP TABLE IF EXISTS sanic_post')
        await connection.execute("""CREATE TABLE sanic_post (
            id serial primary key,
            content varchar(50),
            post_date timestamp
        );""")
        # Seed the table. Use a parameterized query so the varchar column
        # receives a string and values are never interpolated into the SQL.
        for i in range(1000):
            await connection.execute(
                'INSERT INTO sanic_post (id, content, post_date) '
                'VALUES ($1, $2, now())', i, str(i))


@app.get('/')
async def root_get(request):
    # Borrow a connection from the pool for the duration of the query.
    async with app.pool.acquire() as connection:
        results = await connection.fetch('SELECT * FROM sanic_post')
    return json({'posts': jsonify(results)})


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080)
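One thing the demo skips is pool teardown. A minimal sketch, assuming the same app object (after_server_stop is a standard Sanic listener, and asyncpg's Pool.close() waits for checked-out connections to be released before closing them):

@app.listener('after_server_stop')
async def close_db(app, loop):
    # Gracefully close every pooled connection once the server stops.
    await app.pool.close()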
Thanks for this demo. I'm a beginner with Sanic and asyncpg, and it helped me a lot. But when I ran this code, my server could only handle about 220 requests per second, and I'm quite confused.
My environment:
- sanic 0.5.2
- asyncpg 0.10.1
- i5 CPU + 16G RAM + MacOS 10.12
- PostgreSQL 9.6.1 (local)
- ApacheBench, Version 2.3
Concurrency Level: 100
Time taken for tests: 46.375 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 498840000 bytes
HTML transferred: 497910000 bytes
Requests per second: 215.64 [#/sec] (mean)
Time per request: 463.746 [ms] (mean)
Time per request: 4.637 [ms] (mean, across all concurrent requests)
Transfer rate: 10504.63 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 1.0 1 12
Processing: 79 462 269.8 426 3157
Waiting: 79 440 233.8 412 3157
Total: 83 463 269.8 427 3158
Percentage of the requests served within a certain time (ms)
50% 427
66% 442
75% 457
80% 465
90% 496
95% 533
98% 617
99% 2990
100% 3158 (longest request)
Is there any possible reason that could lead to this result? Thanks!
In this benchmark I describe how to obtain a 17% performance improvement over the current asyncpg demo code by using a global connection pool. All benchmarks were performed on my plugged-in 8-core i7 laptop.
Use Revision 3 if possible. If you are trying to integrate other database drivers you may find inspiration in the other revisions, but the third one offers the cleanest integration and best performance.
# Apache Benchmark tool (https://www.cambus.net/benchmarking-http-servers/): ab -c100 -n10000 http://127.0.0.1:8080/
Revision 1
Revision 2
Revision 2.1 (unpublished, same as 2 but without the asyncio.Lock; the lock-based pattern is sketched after this list)
Revision 2.2 (same as 2.1, but with transaction management as in Revision 1)
Revision 3 (use this one) (thanks to Eli, https://github.com/seemethere)
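For context, here is a rough reconstruction of the Revision 2 pattern mentioned above. This is my guess from the revision notes, not the published code: one shared connection serialized behind an asyncio.Lock, so concurrent requests queue up instead of running in parallel.

import asyncio
from asyncpg import connect

db_lock = asyncio.Lock()  # hypothetical name; guards the single connection

@app.listener('before_server_start')
async def register_db_rev2(app, loop):
    app.conn = await connect(**DB_CONFIG, loop=loop)

@app.get('/')
async def root_get_rev2(request):
    async with db_lock:  # every request waits its turn on one connection
        results = await app.conn.fetch('SELECT * FROM sanic_post')
    return json({'posts': jsonify(results)})

The global pool in Revision 3 removes this bottleneck: up to max_size queries can be in flight at once.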
The official upstream Sanic asyncpg example opens a new connection for each request:
https://github.com/channelcat/sanic/blob/88bf78213ffdc168330cfc135b8a25706ef0b1ef/examples/sanic_asyncpg_example.py
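For comparison, a minimal sketch of that per-request-connection pattern (a paraphrase of the idea, not the upstream file verbatim); every request pays the full TCP and authentication handshake before it can run a query:

from asyncpg import connect

@app.get('/')
async def root_get_no_pool(request):
    # Anti-pattern: open (and tear down) a brand-new connection per request.
    conn = await connect(**DB_CONFIG)
    try:
        results = await conn.fetch('SELECT * FROM sanic_post')
    finally:
        await conn.close()
    return json({'posts': jsonify(results)})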