All consumers consume across 1K queues, with the number of messages growing from 1 to 1K.
import pika
import threading
import sys
import time
N = 100000

# First approximation, brute force
def is_prime(number):
    if number < 2:
        return False
    # Trial division up to number/2 is enough for this first cut
    for x in range(2, number // 2 + 1):
        if number % x == 0:
            return False
    return True
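The Pika_Threads rows in the table further down presumably pair each thread with its own BlockingConnection, since a Pika connection cannot be shared across threads. A rough sketch under that assumption, reusing the imports above; the queue names, thread count and the is_prime() payload wiring are my own:

def worker(queue_name):
    # One connection per thread: Pika connections are not thread safe
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    def on_message(ch, method, properties, body):
        is_prime(int(body))  # CPU-bound work per message
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=queue_name, on_message_callback=on_message)
    channel.start_consuming()

threads = [threading.Thread(target=worker, args=('queue-%d' % i,))
           for i in range(4)]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()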
Prepare the DB with a set of 10K documents, each around 1K in size.
from pymongo import MongoClient
from bson.objectid import ObjectId
client = MongoClient('localhost', 27017)
db = client['test']
collection = db['test']
collection.delete_many({})  # start from an empty collection (remove() is deprecated)
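A minimal seeding sketch for the 10K documents; the payload field name and the ~1K padding are my own choices:

PADDING = 'x' * 1024  # ~1K per document

documents = [{'_id': ObjectId(), 'payload': PADDING} for _ in range(10000)]
collection.insert_many(documents)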
Each update operation is around 50 bytes; the write concern is acknowledged (w=1).
Bulk throughput vs number of update operations per bulk:
| Ops per bulk       | 2    | 4  | 8   | 16  | 32  | 64  | 128 | 256 | 512 |
|--------------------|------|----|-----|-----|-----|-----|-----|-----|-----|
| Throughput (ops/s) | 4.5K | 9K | 12K | 16K | 19K | 22K | 23K | 24K | 27K |
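The bulks themselves can be issued with PyMongo's bulk_write; a rough sketch, where the filter and the $set payload (~50 bytes per operation) are placeholders:

from pymongo import UpdateOne

def run_bulk(collection, object_ids, batch_size):
    # Group the small update operations into bulks of the given size
    ops = [UpdateOne({'_id': oid}, {'$set': {'n': 1}}) for oid in object_ids]
    for i in range(0, len(ops), batch_size):
        collection.bulk_write(ops[i:i + batch_size], ordered=False)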
1 publisher > 1 queue
queues = 10K
messages per queue = 100
concurrent consumers = 20
queues bound per consumer = 50
QoS (prefetch) per consumer = 40
asynchronous pattern
Throughput topped out at 16K messages/s, why? (see the consumer sketch below)
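A minimal sketch of the asynchronous consumer pattern with Pika's SelectConnection, assuming a single consumer binding 50 queues on one channel with a prefetch of 40; the queue names are placeholders:

import pika

def on_message(channel, method, properties, body):
    channel.basic_ack(delivery_tag=method.delivery_tag)

def on_channel_open(channel):
    # Cap unacknowledged messages per consumer, as in the benchmark
    channel.basic_qos(prefetch_count=40)
    for i in range(50):
        channel.basic_consume(queue='queue-%d' % i,
                              on_message_callback=on_message)

def on_connection_open(connection):
    connection.channel(on_open_callback=on_channel_open)

connection = pika.SelectConnection(pika.ConnectionParameters('localhost'),
                                   on_open_callback=on_connection_open)
connection.ioloop.start()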
+-------------------+-------------------------+---------+---------+---------+-------+
|Name               |Parameters               |     Real|     User|      Sys|  Msg/s|
+-------------------+-------------------------+---------+---------+---------+-------+
|Pika_Threads       |{'threads': 2}           |     3.03|     1.24|     0.15|   1650|
|Pika_Threads       |{'threads': 4}           |     1.78|     1.26|     0.19|   2808|
|Pika_Threads       |{'threads': 8}           |     1.48|     1.12|     0.16|   3378|
|Pika_Threads       |{'threads': 16}          |     1.43|     1.10|     0.27|   3496|
|Pika_Threads       |{'threads': 32}          |     1.31|     1.14|     0.30|   3816|
|Pika_Async         |{'connections': 2}       |     2.75|     0.96|     0.07|   1818|
|Pika_Async         |{'connections': 4}       |     1.98|     0.88|     0.09|   2525|
# More info about ANSI escape sequences
# http://ascii-table.com/ansi-escape-sequences.php
import sys
import random
from time import sleep

ROWS = 10

while True:
    for _ in range(ROWS):
        print('%3d' % random.randint(0, 100))
    sleep(0.1)
    # ESC[nA moves the cursor up n lines to redraw the block in place
    sys.stdout.write('\x1b[%dA' % ROWS)
Output of this command [1]:
$ python set_memmory_usage.py
1. Difference between list, dict and set containers with 1M numbers, regarding total size and per-item container overhead
Sizeof with dict type: 48M, overhead per item 50b
Sizeof with set type: 32M, overhead per item 33b
Sizeof with list type: 8M, overhead per item 8b
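The script itself isn't listed here; a minimal sketch of what it plausibly does, assuming it builds each container from 1M integers and reports sys.getsizeof of the container alone (names and formatting are my own):

import sys

N = 1000000

containers = {
    'dict': {n: None for n in range(N)},
    'set': set(range(N)),
    'list': list(range(N)),
}

for name, container in containers.items():
    size = sys.getsizeof(container)  # container overhead only, not the items
    print("Sizeof with %s type: %dM, overhead per item %db" % (
        name, size // 2**20, size // N))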
----------
class ImmutableRecord(object):
    def __init__(self, name):
        self.__name = name

    @property
    def name(self):
        return self.__name

    def __hash__(self):
        # Safe to hash on the name: it cannot change after construction
        return hash(self.__name)
The numbers claimed by this benchmark about Gevent [1], compared with the numbers obtained by asyncio with uvloop,
and even with the default loop, left me a bit frozen. I've repeated a few of them: gevent, asyncio, asyncio-uvloop and Go for
the echo server, and these are roughly the numbers:
For gevent
$ ./echo_client
685393 0.98KiB messages in 30 seconds
Latency: min 0.04ms; max 4.48ms; mean 0.126ms; std: 0.048ms (37.68%)
Latency distribution: 25% under 0.088ms; 50% under 0.122ms; 75% under 0.158ms; 90% under 0.182ms; 99% under 0.242ms; 99.99% under 0.91ms
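What is being measured on the server side is just an echo loop; a minimal gevent version as a reference sketch, where the port and buffer size are my own choices:

from gevent.server import StreamServer

def handle(sock, address):
    # Echo every chunk back until the client closes the connection
    while True:
        data = sock.recv(4096)
        if not data:
            break
        sock.sendall(data)

server = StreamServer(('0.0.0.0', 25000), handle)
server.serve_forever()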