I hereby claim:
- I am mmmries on github.
- I am mmmries (https://keybase.io/mmmries) on keybase.
- I have a public key whose fingerprint is 99B1 CC8A FE86 779F 3F7C 3D80 DA6C FBE7 2629 D9CF
To claim this, I am signing this object:
defmodule Benchmarks do
  def init do
    :ok = :lbm_kv.create(Web.Job)
  end

  def count_entries do
    IO.puts "Web.Job => #{:lbm_kv.match_key(Web.Job, :_) |> elem(1) |> Enum.count}"
  end

  # the gist preview is truncated here; a plausible completion that times num_items
  # calls to the zero-arity fun and prints the resulting throughput
  def measure_throughput(fun, num_items) do
    {microseconds, :ok} = :timer.tc(fn -> Enum.each(1..num_items, fn _ -> fun.() end) end)
    IO.puts "#{Float.round(num_items / (microseconds / 1_000_000), 1)} items/sec"
  end
end
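A quick, hypothetical usage sketch (it only reuses the lbm_kv calls already shown above, assumes the lbm_kv application is started, and the iteration count is made up):

# hypothetical usage: create the table, time 10_000 reads against it, then report the row count
Benchmarks.init()
Benchmarks.measure_throughput(fn -> :lbm_kv.match_key(Web.Job, :_) end, 10_000)
Benchmarks.count_entries()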
quick_benchmark = fn(tcfn, tcn) ->
  tc_l = :lists.seq(1, tcn) |> Enum.map(fn(_) -> tcfn |> :timer.tc |> elem(0) end)
  tc_min = :lists.min(tc_l)
  tc_max = :lists.max(tc_l)
  tc_med = :lists.nth(round((tcn - 1) / 2), :lists.sort(tc_l))
  tc_avg = round(Enum.sum(tc_l) / tcn)
  %{min: tc_min, max: tc_max, median: tc_med, average: tc_avg}
end
# quick_benchmark.(fn() -> do_some_work() end, 1000) will do 1000 iterations of work
# and report median, average, min and max back to you in microseconds
A really basic benchmark comparing protobuf encoding/decoding performance.
For details, please see the benchmark.rb file in this gist and the gpb benchmark files.
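For a rough idea of what such a micro-benchmark looks like on the Elixir side, here is a minimal sketch; the `encode`/`decode` arguments and the iteration count are placeholders for whatever your protobuf library (for example a gpb-generated module) actually provides, so this is an illustration rather than the benchmark used in this gist.

# Hypothetical sketch: time N encode/decode round-trips with :timer.tc.
# `encode` and `decode` are placeholder one-argument funs wrapping whatever
# your protobuf library generates; swap in the real calls.
defmodule ProtobufBench do
  def run(encode, decode, sample, iterations \\ 10_000) do
    encoded = encode.(sample)

    {encode_us, :ok} = :timer.tc(fn ->
      Enum.each(1..iterations, fn _ -> encode.(sample) end)
    end)

    {decode_us, :ok} = :timer.tc(fn ->
      Enum.each(1..iterations, fn _ -> decode.(encoded) end)
    end)

    IO.puts "encode: #{encode_us / iterations} µs/op, decode: #{decode_us / iterations} µs/op"
  end
end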
defmodule LatencyBenchmark do
  @default_settings %{num_actors: 1, actions_per_actor: 1}

  def benchmark(action_fn, settings) do # settings: num_actors, actions_per_actor, setup_fn
    settings = Map.merge(@default_settings, %{setup_fn: fn -> %{} end}) |> Map.merge(settings)
    settings = Map.put(settings, :action_fn, action_fn)
    {:ok, collector_pid} = Agent.start_link(fn -> [] end)
    # gist preview truncated here; plausible continuation: run num_actors concurrent tasks, each timing action_fn actions_per_actor times (µs)
    1..settings.num_actors
    |> Task.async_stream(fn _ ->
      for _ <- 1..settings.actions_per_actor, do: settings.action_fn |> :timer.tc() |> elem(0)
    end, max_concurrency: settings.num_actors, timeout: :infinity)
    |> Enum.each(fn {:ok, latencies} ->
      Agent.update(collector_pid, fn acc -> latencies ++ acc end)
    end)
    Agent.get(collector_pid, & &1)
  end
end
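A hypothetical usage sketch, assuming the reconstructed return value above (the list of collected latencies in microseconds); the sleep-based workload and the numbers are invented for illustration:

# hypothetical usage: 10 concurrent actors, 100 timed actions each, against a fake ~5ms workload
latencies = LatencyBenchmark.benchmark(fn -> :timer.sleep(5) end, %{num_actors: 10, actions_per_actor: 100})
IO.puts "max: #{Enum.max(latencies)}µs over #{length(latencies)} samples"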
type | language | indeed.com job postings | stackoverflow job postings
---|---|---|---
functional | erlang | 199 | 12
functional | elixir | 293 | 33
functional | clojure | 429 | 56
functional | haskell | 356 | 17
functional | f# | 126 | 10
functional | akka | 532 | 31
functional | functional reactive programming | 810 | 899
both | scala | 5260 | 189
both | javascript | 33347 | 1201
The main idea here is to compose shared functionality into a pipeline of functions that all implement a shared behaviour.
defmodule Notification.Event do
  # the event (probably a bad name) is where you would put the structified JSON event you got from RabbitMQ
  defstruct [:sent_at, :user, :event]
end

defmodule Notification.Plug do
  @type opts :: keyword()

  @callback init(opts) :: opts
  # gist preview truncated here; a Plug-style behaviour would typically also declare:
  @callback call(event :: %Notification.Event{}, opts) :: %Notification.Event{}
end
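To make the pipeline idea concrete, here is a hypothetical sketch; the Deduplicate plug and the Notification.Pipeline.run/2 helper are invented for illustration and are not part of the original gist:

# Hypothetical example plug: implements the behaviour above and passes the event along.
defmodule Notification.Plug.Deduplicate do
  @behaviour Notification.Plug

  def init(opts), do: opts

  def call(%Notification.Event{} = event, _opts) do
    # real logic would drop or annotate duplicate events here
    event
  end
end

# Hypothetical pipeline runner: threads the event through each plug in order.
defmodule Notification.Pipeline do
  def run(%Notification.Event{} = event, plugs) do
    Enum.reduce(plugs, event, fn {plug, opts}, acc ->
      plug.call(acc, plug.init(opts))
    end)
  end
end

# usage:
#   Notification.Pipeline.run(event, [{Notification.Plug.Deduplicate, []}])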
I wanted to run another round of performance benchmarks for gnat
to see how its request throughput has changed with the introduction of the ConsumerSupervisor,
which handles processing each request in its own supervised process.

I used a CPU-optimized DigitalOcean droplet with 16 cores, gnatsd 1.3.0, Erlang 21.2.2, and Elixir 1.8.0-rc.0.
You can read the setup instructions below for more details, and results_by_concurrency.md
contains details about a lot of different runs.

I'm trying to measure the overhead in the system, so the requests are random byte strings that just get echoed back without any processing. The measurements use byte strings from 4 bytes up to 1024 bytes.
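As a rough illustration of the kind of measurement described here, the sketch below sets up an echo responder and times a batch of requests against it. It is a minimal sketch, assuming the Gnat.Server behaviour, Gnat.ConsumerSupervisor, and Gnat.request/3 from the gnat library, plus a locally running gnatsd; the topic name, payload size, and iteration count are made up, and the real benchmark scripts live in the setup instructions mentioned above.

# Hypothetical echo server: every request body is replied back unchanged.
defmodule EchoServer do
  use Gnat.Server

  def request(%{body: body}), do: {:reply, body}
end

# Hypothetical measurement script (defaults to a gnatsd on localhost:4222).
{:ok, _conn} = Gnat.start_link(%{}, name: :echo_conn)

{:ok, _sup} =
  Gnat.ConsumerSupervisor.start_link(%{
    connection_name: :echo_conn,
    module: EchoServer,
    subscription_topics: [%{topic: "echo"}]
  })

payload = :crypto.strong_rand_bytes(16)

{elapsed_us, :ok} = :timer.tc(fn ->
  Enum.each(1..10_000, fn _ ->
    {:ok, _reply} = Gnat.request(:echo_conn, "echo", payload)
  end)
end)

IO.puts "#{Float.round(10_000 / (elapsed_us / 1_000_000), 1)} requests/sec"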