@mmmries
mmmries / benchmarks.ex
Last active September 16, 2021 07:14
Benchmarking lbm_kv
defmodule Benchmarks do
  def init do
    :ok = :lbm_kv.create(Web.Job)
  end

  def count_entries do
    IO.puts "Web.Job => #{:lbm_kv.match_key(Web.Job, :_) |> elem(1) |> Enum.count}"
  end

  def measure_throughput(fun, num_items) do
@mmmries
mmmries / keybase.md
Created November 28, 2016 20:52
Keybase proof

Keybase proof

I hereby claim:

  • I am mmmries on github.
  • I am mmmries (https://keybase.io/mmmries) on keybase.
  • I have a public key whose fingerprint is 99B1 CC8A FE86 779F 3F7C 3D80 DA6C FBE7 2629 D9CF

To claim this, I am signing this object:

@mmmries
mmmries / quick_benchmark.exs
Created February 28, 2017 21:29
Quick benchmark tool for Elixir
quick_benchmark = fn(tcfn, tcn) ->
  tc_l = :lists.seq(1, tcn) |> Enum.map(fn(_) -> tcfn |> :timer.tc |> elem(0) end)
  tc_min = :lists.min(tc_l)
  tc_max = :lists.max(tc_l)
  tc_med = :lists.nth(round((tcn - 1) / 2), :lists.sort(tc_l))
  tc_avg = round(Enum.sum(tc_l) / tcn)
  %{min: tc_min, max: tc_max, median: tc_med, average: tc_avg}
end
# quick_benchmark.(fn() -> do_some_work() end, 1000) will do 1000 iterations of work and report median, average, min and max back to you in microseconds
@mmmries
mmmries / 0-README.md
Last active August 10, 2017 20:35
protobuf benchmarking

Protobuf Benchmarking

A really basic benchmark comparing protobuf encoding/decoding performance. For details, see the benchmark.rb file in this gist and the gpb benchmark files.
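
As a rough illustration only (not the gist's actual Ruby/gpb harness), the same encode/decode timing could be sketched in Elixir along these lines; the encode and decode funs are whatever your generated protobuf module provides, and the module/message names in the usage comment are hypothetical:

# Rough sketch (not from the gist): time an encode fun and a decode fun
# over many iterations and report ops/sec for each.
proto_bench = fn(encode_fun, decode_fun, iterations) ->
  encoded = encode_fun.()

  {encode_us, :ok} = :timer.tc(fn ->
    Enum.each(1..iterations, fn _ -> encode_fun.() end)
  end)

  {decode_us, :ok} = :timer.tc(fn ->
    Enum.each(1..iterations, fn _ -> decode_fun.(encoded) end)
  end)

  %{
    encode_ops_per_sec: round(iterations / (encode_us / 1_000_000)),
    decode_ops_per_sec: round(iterations / (decode_us / 1_000_000))
  }
end
# Usage (hypothetical module/message names):
#   proto_bench.(fn -> MyProto.encode_msg(msg, :Job) end,
#                fn bin -> MyProto.decode_msg(bin, :Job) end, 100_000)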

Latest Results

  • protobuf-3.6.12
  • gpb-3.26.6
@mmmries
mmmries / 01.README.md
Last active November 11, 2022 16:59
Load Test Phoenix Presence

Phoenix Nodes

First I created 3 droplets on DigitalOcean with 4 cores and 8GB of RAM each. Log in to each as root and run:

sysctl -w fs.file-max=12000500
sysctl -w fs.nr_open=20000500
ulimit -n 4000000
sysctl -w net.ipv4.tcp_mem='10000000 10000000 10000000'
@mmmries
mmmries / benchmark_roundtrips.exs
Created October 10, 2017 17:19
Gnat Benchmark Script
defmodule LatencyBenchmark do
  @default_settings %{
    num_actors: 1,
    actions_per_actor: 1
  }

  def benchmark(action_fn, settings) do
    # settings may override num_actors, actions_per_actor, and setup_fn
    settings = Map.merge(@default_settings, %{setup_fn: fn -> %{} end}) |> Map.merge(settings)
    settings = Map.put(settings, :action_fn, action_fn)
    {:ok, collector_pid} = Agent.start_link(fn -> [] end)
@mmmries
mmmries / language_job_postings.csv
Last active March 19, 2018 06:26
Language Job Postings
type,language,indeed.com postings,stackoverflow postings
functional,erlang,199,12
functional,elixir,293,33
functional,clojure,429,56
functional,haskell,356,17
functional,f#,126,10
functional,akka,532,31
functional,functional reactive programming,810,899
both,scala,5260,189
both,javascript,33347,1201
@mmmries
mmmries / A.md
Created June 7, 2018 12:36
A "Plug-ish" approach to flexible shared behavior

The main idea here is to compose shared functionality into a pipeline of functions that all implement some shared behaviour (a rough sketch of running such a pipeline follows the snippet below).

defmodule Notification.Event do
  # The event (probably a bad name) is where you would put the structified JSON event you got from RabbitMQ
  defstruct [:sent_at, :user, :event]
end

defmodule Notification.Plug do
  @callback init(opts) :: opts
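
A minimal sketch of where this could go, assuming the behaviour also declares a call(event, opts) callback; the Notification.Pipeline and Notification.AddTimestamp modules below are illustrations of mine, not part of the gist:

defmodule Notification.Pipeline do
  # plugs is a list of {module, opts}; each module is assumed to implement
  # init/1 and call/2, and the event is threaded through them in order
  def run(%Notification.Event{} = event, plugs) do
    Enum.reduce(plugs, event, fn {module, opts}, acc ->
      module.call(acc, module.init(opts))
    end)
  end
end

defmodule Notification.AddTimestamp do
  @behaviour Notification.Plug

  def init(opts), do: opts

  # stamps the event before it moves on to later plugs in the pipeline
  def call(%Notification.Event{} = event, _opts) do
    %{event | sent_at: DateTime.utc_now()}
  end
end

# Usage (hypothetical):
#   Notification.Pipeline.run(%Notification.Event{event: payload},
#     [{Notification.AddTimestamp, []}])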
@mmmries
mmmries / suggested_topics.md
Created October 4, 2018 03:39
Nerves Remote Meetup
  • Testing
    • mocking hardware
    • what is worth testing?
  • Dev environment vs firmware
  • Sensors:
    • Moisture Sensors
    • Temperature Sensors
    • Magnetic Sensors
  • Touch Screen
  • Carry the touch screen from device-to-device
@mmmries
mmmries / README.md
Last active January 1, 2019 18:34
Gnat Request Benchmark

I wanted to run another round of performance benchmarks for gnat to see how its request throughput has changed with the introduction of the ConsumerSupervisor, which handles things like processing each request in its own supervised process.

I used a CPU-optimized DigitalOcean droplet with 16 cores, gnatsd 1.3.0, Erlang 21.2.2, and Elixir 1.8.0-rc.0. You can read the setup instructions below for more details, and results_by_concurrency.md contains details about a lot of different runs.

I'm trying to measure the overhead in the system, so the requests are random byte strings that just get echoed back with no processing. The measurements use byte strings from 4 bytes up to 1024 bytes.

TL;DR: You can do 170k+ synchronous requests/sec with small messages, or 192MB+/sec with 1KB messages.
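
For a sense of the shape of these runs, here is a minimal sketch of a synchronous request loop. This is my illustration, not the benchmark script from the repo; the "echo" subject, payload size, and request count are placeholders, and it assumes gnatsd is running locally with a responder subscribed to that subject.

defmodule RequestSketch do
  # Times num_requests synchronous round-trips and prints requests/sec.
  # The subject name and defaults below are placeholders, not from the benchmark.
  def run(num_requests \\ 10_000, payload_bytes \\ 16) do
    {:ok, gnat} = Gnat.start_link(%{host: "localhost", port: 4222})
    payload = :crypto.strong_rand_bytes(payload_bytes)

    {elapsed_us, :ok} = :timer.tc(fn ->
      Enum.each(1..num_requests, fn _ ->
        {:ok, %{body: _echoed}} = Gnat.request(gnat, "echo", payload, receive_timeout: 1_000)
      end)
    end)

    IO.puts "#{Float.round(num_requests / (elapsed_us / 1_000_000), 1)} requests/sec"
  end
end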