
hyc / mdb run1
Created May 17, 2014 08:03
shared hash key perf test
Just using MDB_WRITEMAP|MDB_NOSYNC, the put speed went from 0.1M/s to 0.3M/s.
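For context, a minimal sketch of how those two flags are applied when opening an LMDB environment; the path and map size here are placeholders, not the test's actual settings:

#include <stdio.h>
#include <lmdb.h>

int main(void) {
    MDB_env *env;
    if (mdb_env_create(&env)) return 1;
    mdb_env_set_mapsize(env, 1UL << 30);  /* 1GB map; placeholder value */
    /* MDB_WRITEMAP uses a writable memory map instead of write() calls;
     * MDB_NOSYNC skips the fsync at commit, trading durability for
     * write throughput. The target directory must already exist. */
    int rc = mdb_env_open(env, "/tmp/testdb", MDB_WRITEMAP | MDB_NOSYNC, 0664);
    if (rc) { fprintf(stderr, "open: %s\n", mdb_strerror(rc)); return 1; }
    mdb_env_close(env);
    return 0;
}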
perf testing: LMDB aka Lightning MDB
running tests on (CPU identified via `cat /proc/cpuinfo | egrep 'model name' | head -n 1`):
running tests on: `model name : Intel(R) Core(TM)2 Extreme CPU Q9300 @ 2.53GHz`
-OP MMAP REMAP SHRK PART TOTAL ------PERCENT OPERATIONS PER PROCESS PER SECOND -OPS
--- -k/s --k/s --/s --/s M-OPS 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 -M/s
PUT 0.0 0.0 0 0 0.0 100 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.0
PUT 0.0 0.0 0 0 1.1 99 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1.1 -
PUT 0.0 0.0 0 0 2.0 1 2 96 0 0 0 0 0 0 0 0 0 0 0 0 0 0.8 -
Was trying to duplicate this test: https://github.com/facebook/rocksdb/wiki/RocksDB-In-Memory-Workload-Performance-Benchmarks
using git rev 0365eaf12e9e896ea5902fb3bf3db5e6da275d2
but all I get is this:
./run ./db_bench.rocks --db=/tmp/test1 --num_levels=6 --key_size=20 --prefix_size=20 --keys_per_prefix=0 --value_size=100 --block_size=4096 --cache_size=17179869184 --cache_numshardbits=6 --compression_type=none --compression_ratio=1 --min_level_to_compress=-1 --disable_seek_compaction=1 --hard_rate_limit=2 --write_buffer_size=134217728 --max_write_buffer_number=2 --level0_file_num_compaction_trigger=8 --target_file_size_base=134217728 --max_bytes_for_level_base=1073741824 --disable_wal=0 --wal_dir=/mnt/hyc/WAL --sync=0 --disable_data_sync=1 --verify_checksum=1 --delete_obsolete_files_period_micros=314572800 --max_grandparent_overlap_factor=10 --max_background_compactions=4 --max_background_flushes=0 --level0_slowdown_writes_trigger=16 --level0_stop_writes_trigger=24 --statistics=0 --stats_per_interval=0 --s
hyc / In-Memory DB Tests
Last active February 2, 2017 03:12
In-Memory DB tests, part 1
Inspired by this RocksDB post, I started playing with this "readwhilewriting" test as well.
https://github.com/facebook/rocksdb/wiki/RocksDB-In-Memory-Workload-Performance-Benchmarks
Of course the dataset here is only a tiny fraction of the size used in the RocksDB report. (I have
a larger test running that's of a more comparable size, but it will probably be a couple
more days before all of those tests finish.)
So - tested LevelDB 1.17, Basho LevelDB 1.9, BerkeleyDB 5.3.21, HyperLevelDB 1.14,
LMDB 0.9.12rc, RocksDB 3.2, TokuDB 4.6.119, and WiredTiger 2.2.0 (both LSM and Btree).
That's a lot of tests to run, trust me.
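For reference, the "readwhilewriting" runs are driven through the stock db_bench harness; a representative invocation looks like the line below, where the flag values are illustrative placeholders, not the settings from the full report:

./db_bench --benchmarks=readwhilewriting --num=10000000 --value_size=100 --threads=4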
hyc / AA
Last active August 29, 2015 14:02
InfluxDB testing
SSD
These tests are incomplete because I still can't get the HyperLevelDB driver to build, and because of various other oddities in the build tree.
violino:/home/software/influxdb> ./benchmark-storage -path=/home/test/db -points=3000000
################ Benchmarking: lmdb
Writing 3000000 points in batches of 1000 points took 8.579291321s (2.859764 microsecond per point)
Querying 3000000 points took 1.743559302s (0.581186 microseconds per point)
Size: 11M
Took 1.925530865s to delete 1500000 points
hyc / 00before
Last active July 20, 2017 11:19
More fun with InfluxDB
I believe this is directly comparable to the results published here
http://influxdb.com/blog/2014/06/20/leveldb_vs_rocksdb_vs_hyperleveldb_vs_lmdb_performance.html
My laptop has 8GB of RAM, but I pared it down to 4GB by turning off swap and creating a file
in tmpfs large enough to drop free RAM to 4GB.
This is the code prior to using Sorted Duplicates.
The RocksDB performance is amazingly poor.
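(For context, "Sorted Duplicates" refers to LMDB's MDB_DUPSORT database flag, which stores multiple values per key in sorted order. A minimal sketch of opening such a database; the env path and DB name are placeholders:)

#include <lmdb.h>

int main(void) {
    MDB_env *env; MDB_txn *txn; MDB_dbi dbi;
    mdb_env_create(&env);
    mdb_env_set_maxdbs(env, 1);                /* allow one named sub-database */
    mdb_env_open(env, "/tmp/dupdb", 0, 0664);  /* directory must exist */
    mdb_txn_begin(env, NULL, 0, &txn);
    /* MDB_DUPSORT keeps multiple sorted values under a single key, so
     * e.g. many time-series points can hang off one series key. */
    mdb_dbi_open(txn, "points", MDB_CREATE | MDB_DUPSORT, &dbi);
    mdb_txn_commit(txn);
    mdb_env_close(env);
    return 0;
}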
hyc / 00SmallValues
Last active August 29, 2015 14:03
On-Disk Microbench
16GB data, 8GB RAM, Samsung 830 512GB SSD, 160000000 records, 16 byte keys, 100 byte values
Sophia: version 1.1
fillrandsync : 8.519 micros/op 117387 ops/sec; 13.0 MB/s (160000 ops)
20784 /home/test/dbbench_sph-1
20788 /home/test
fillrandom : 13.487 micros/op 74143 ops/sec; 8.2 MB/s
26220260 /home/test/dbbench_sph-2
26220264 /home/test
fillrandbatch : 7.334 micros/op 136352 ops/sec; 15.1 MB/s
hyc / 00Before
Created July 5, 2014 17:57
PGO results for LMDB in-memory
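(For reference, the PGO flow here is GCC's standard two-pass build: compile instrumented, run the workload to collect profiles, then rebuild using the profile data. The invocations below are my assumption of the general shape, not the exact commands used:)

make CFLAGS="-O2 -fprofile-generate" LDFLAGS="-fprofile-generate"
./db_bench_mdb --num=10000000
make clean && make CFLAGS="-O2 -fprofile-use" LDFLAGS="-fprofile-use"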
violino:~/OD/mdb/libraries/liblmdb> rm -rf /tmp/leveldbtest-1000/*
violino:~/OD/mdb/libraries/liblmdb> ./db_bench_mdb.no_profile --num=10000000
LMDB: version LMDB 0.9.14: (June 20, 2014)
Date: Sat Jul 5 10:47:17 2014
CPU: 4 * Intel(R) Core(TM)2 Extreme CPU Q9300 @ 2.53GHz
CPUCache: 6144 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 10000000
RawSize: 1106.3 MB (estimated)
hyc / gist:7593d1bf21a804c1c5be
Last active August 29, 2015 14:06
Samba NTDB microbench
https://github.com/hyc/leveldb/commit/d05251bb51138f8f77a08e0e01f22c5048d3ccb5
violino:/home/software/leveldb> ./db_bench_ntdb
NTDB: version 1.0
Date: Mon Sep 15 16:51:55 2014
CPU: 4 * Intel(R) Core(TM)2 Extreme CPU Q9300 @ 2.53GHz
CPUCache: 6144 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
hyc / 01tdb
Last active August 29, 2015 14:06
Samba TDB/NTDB concurrency test
Using https://github.com/hyc/leveldb/commit/0cbfeaa4caa6f6615c0a0caf611e4cdff909465d
Had to revert the TDB_MUTEX_LOCKING code, since it doesn't support transactions and all of the other tests are transactional.
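(For context, the transactional pattern these tests depend on is TDB's explicit transaction API; a minimal sketch, with the path and record contents as placeholders:)

#include <fcntl.h>
#include <tdb.h>

int main(void) {
    struct tdb_context *tdb =
        tdb_open("/tmp/test.tdb", 0, TDB_DEFAULT, O_RDWR | O_CREAT, 0600);
    if (!tdb) return 1;
    TDB_DATA key = { .dptr = (unsigned char *)"key1", .dsize = 4 };
    TDB_DATA val = { .dptr = (unsigned char *)"val1", .dsize = 4 };
    /* group the store into an explicit transaction, matching the
     * transactional mode the benchmarks use */
    if (tdb_transaction_start(tdb) == 0) {
        tdb_store(tdb, key, val, TDB_REPLACE);
        tdb_transaction_commit(tdb);
    }
    tdb_close(tdb);
    return 0;
}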
./db_bench_tdb --stats_interval=100000 --benchmarks=fillseqbatch --new_hash=1
TDB: version 1.3.0
Date: Tue Sep 16 02:56:01 2014
CPU: 4 * Intel(R) Core(TM)2 Extreme CPU Q9300 @ 2.53GHz
CPUCache: 6144 KB
Keys: 16 bytes each
hyc / 01vl32
Last active August 29, 2015 14:06
VL32 mode (LMDB built with the compile-time MDB_VL32 option, for handling large databases in 32-bit builds)
violino:/home/software/leveldb> ./db_bench_mdb.vl32
LMDB: version LMDB 0.9.14: (September 15, 2014)
Date: Thu Sep 18 20:52:49 2014
CPU: 4 * Intel(R) Core(TM)2 Extreme CPU Q9300 @ 2.53GHz
CPUCache: 6144 KB
Keys: 16 bytes each
Values: 100 bytes each (50 bytes after compression)
Entries: 1000000
RawSize: 110.6 MB (estimated)
FileSize: 62.9 MB (estimated)