@monadplus
Last active June 12, 2019 15:03
JMH: how to

About JMH (Java Microbenchmark Harness)

Samples and more documentation at: https://github.com/ktoso/sbt-jmh/tree/master/plugin/src/sbt-test/sbt-jmh/run/src/main/scala/org/openjdk/jmh/samples

To benchmark a method, annotate it with @Benchmark. You can have multiple benchmark methods within the same class.

class Foo {
  @Benchmark
  def foo(): Unit = {
    // code to measure
  }
}

Modes:

  • Mode.Throughput: measures how many operations complete in a given unit of time.
  • Mode.AverageTime: measures the average execution time of a single call.
  • Mode.SampleTime: samples the execution time of a fraction of the calls.
  • Mode.SingleShotTime: measures a single method invocation (useful for cold-start costs).
  • Mode.All: runs all of the above.

Multiple modes: @BenchmarkMode(Array(Mode.Throughput, Mode.AverageTime, Mode.SampleTime, Mode.SingleShotTime))

Example:

@Benchmark
@BenchmarkMode(Array(Mode.Throughput))
@OutputTimeUnit(TimeUnit.SECONDS)
def measureThroughput: Unit = TimeUnit.MILLISECONDS.sleep(100)

You can set the mode on the whole class or on a single method:

@Benchmark
@BenchmarkMode(Array(Mode.AverageTime))
def initializationCost(): Unit = {
  // code whose cost we want to measure
}
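
A class-level mode applies to every benchmark method inside. A minimal sketch (the class Bar and its methods are illustrative, not from the JMH samples):

```scala
import java.util.concurrent.TimeUnit
import org.openjdk.jmh.annotations._

// Class-level mode: every @Benchmark in Bar is measured as average time.
@BenchmarkMode(Array(Mode.AverageTime))
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Thread)
class Bar {
  var x: Double = Math.PI // read from a field to avoid constant folding

  @Benchmark
  def log: Double = Math.log(x)

  @Benchmark
  def sqrt: Double = Math.sqrt(x)
}
```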

TimeUnit & Params:

@Param: JMH computes the outer product of all param values.

Each combination of params is run as a separate benchmark.

@Param(Array("1", "100"))
var arg: Int = _

@Param(Array("0", "1"))
var certainty: Int = _

The benchmark will run four times: {(1,0),(1,1),(100,0),(100,1)}
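
The outer product can be sketched in plain Scala (illustration only, not JMH code):

```scala
// Enumerate the param combinations JMH generates for the example above.
val args        = List(1, 100)
val certainties = List(0, 1)

val combinations = for { a <- args; c <- certainties } yield (a, c)
println(combinations) // prints List((1,0), (1,1), (100,0), (100,1))
```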

Example:

@BenchmarkMode(Array(Mode.AverageTime))
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Fork(1)
@State(Scope.Benchmark)
class JMHSample_27_Params {

  @Param(Array("1", "31", "65", "101", "103"))
  var arg: Int = _

  @Param(Array("0", "1", "2", "4", "8", "16", "32"))
  var certainty: Int = _

  @Benchmark
  def bench: Boolean = BigInteger.valueOf(arg).isProbablePrime(certainty)

}

Warmup & Measurement

Both annotations take the same params: iterations iterations, each running for time timeUnit.

@Warmup(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
@Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)

Examples:

  • @Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS): runs the benchmark method as many times as possible within 1 second, and repeats that for 5 iterations. If the method takes longer than 1 second to finish, it runs only once per iteration. Tune the params to achieve the desired result.

To disable warmup:

@Warmup(iterations = 0)

Batch Sampling:

For SingleShotTime, batchSize groups several invocations into a single measured batch (exact usage is unclear). Here list is a java.util.List[String] field of the benchmark class, as in the JMH batch-size sample:

import java.util.{LinkedList, List => JList}

var list: JList[String] = new LinkedList[String]() // benchmark state field

@Benchmark
@Warmup(iterations = 5, batchSize = 5000)
@Measurement(iterations = 5, batchSize = 5000)
@BenchmarkMode(Array(Mode.SingleShotTime))
def measureRight: JList[String] = {
  list.add(list.size / 2, "something")
  list
}

State:

State objects are initialized before the benchmark runs, so their initialization cost is not measured.

A state object is loaded once per benchmark (unless you use @Param, in which case one is loaded per combination of params).

  • @State(Scope.Thread): each thread has its own instance.
  • @State(Scope.Benchmark): shared among all threads.
@State(Scope.Benchmark)
class BenchmarkState {
  var x = Math.PI
}

@Benchmark
def measureShared(state: BenchmarkState): Unit = {
  state.x += 1
}

In most cases you just need a single state object. In that case, we can mark the benchmark instance itself to be the @State.

@State(Scope.Thread)
class JMHSample_04_DefaultState {

  var x = Math.PI

  @Benchmark
  def measure: Unit = x += 1
}

Setup:

If you want to modify the state object per trial, iteration, or invocation (e.g. resetting a counter, introducing variance, etc.), use @Setup(Level.Trial|Iteration|Invocation).

Fixtures:

  • Level.Trial: before or after the entire benchmark run (the sequence of iterations)
  • Level.Iteration: before or after the benchmark iteration (the sequence of invocations)
  • Level.Invocation: before or after the benchmark method invocation (WARNING: read the Javadoc before using)
@Setup(Level.Trial)
def prepare: Unit = x = Math.PI

@TearDown(Level.Iteration)
def check: Unit = assert(x > Math.PI, "Nothing changed?")

Be careful with:

  • dead code: the JIT may eliminate computations whose results are unused. Return the result or sink it into a Blackhole (x1 and x2 are fields of the benchmark class):

@Benchmark
def measureRight_2(bh: Blackhole): Unit = {
  bh.consume(Math.log(x1))
  bh.consume(Math.log(x2))
}
  • cpu consumption: Blackhole.consumeCPU burns a fixed amount of CPU work (in tokens):

@Benchmark
def consume_0064: Unit = Blackhole.consumeCPU(64)
  • constant folding

Example: Math.log(Math.PI) will be folded into a constant by the JIT, so log is never executed at runtime.
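
A sketch of the folded case (mirroring the JMH constant-folding sample; the method name is illustrative):

```scala
// Both operands are compile-time constants, so the JIT replaces the whole
// call with its precomputed result: the benchmark measures nothing.
@Benchmark
def measureWrong: Double = Math.log(Math.PI)
```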

Force the computation by reading the operand from a mutable field (a val compiles to a final field and may still be folded):

private var x = Math.PI

@Benchmark
def measureRight: Double = Math.log(x)

Forking:

Running each benchmark in its own forked JVM avoids mixing JIT profiles between benchmarks.

For example:

trait Counter {
  def inc: Int
}

class Counter1 extends Counter {
  private var x: Int = _

  override def inc: Int = {
    val a = x
    x += 1
    a
  }
}

class Counter2 extends Counter {
  private var x: Int = _

  override def inc: Int = {
    val a = x
    x += 1
    a
  }
}

Even though Counter1 and Counter2 are different classes, the JIT profile gathered while benchmarking one pollutes the benchmark of the other. To avoid this, fork the JVM per benchmark.

Forking is enabled by default: @Fork(1).

On the command line, use the -f 1 option.
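
@Fork can also set per-fork JVM options (a sketch; the heap sizes are illustrative):

```scala
import org.openjdk.jmh.annotations._

// Two forks, each started with a fixed 1 GiB heap.
@Fork(value = 2, jvmArgs = Array("-Xms1G", "-Xmx1G"))
class MyBenchmark
```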

Observing variance across runs using multiple forks:

@State(Scope.Thread)
class SleepyState {
  var sleepTime: Long = _

  @Setup
  def setup: Unit = sleepTime = (Math.random() * 1000).toLong
}

@Benchmark
@Fork(20)
def fork_2(s: SleepyState): Unit = TimeUnit.MILLISECONDS.sleep(s.sleepTime)

Example of a benchmark class annotations:

@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@Fork(1)
@Threads(1)
@Warmup(iterations = 10, time = 5, timeUnit = TimeUnit.SECONDS, batchSize = 1)
@Measurement(iterations = 10, time = 15, timeUnit = TimeUnit.SECONDS, batchSize = 1)
class ActorBenchmark {
  // ... benchmark methods ...
}


Integration & running the benchmark

For running the benchmarks we are going to use the sbt-jmh plugin: https://github.com/ktoso/sbt-jmh

Help:

sbt> benchmark/jmh:run -h

Example:

sbt> jmh:run TestHexString.* -i 20 -wi 10 -f1 -t1

By default, jmh:run runs every benchmark found in src. You can pass a regex to run a subset.

Useful input commands:

  -i <int>           Iterations (the more, the better)
  -wi <int>          Warmup iterations
  -f <int>           Forks
  -t <int>           Number of threads
  -e <regexp+>       Benchmarks to exclude
  -jvmArgs <string>  Custom JVM arguments
  -l                 List the benchmarks that match a filter, and exit
  -o <filename>      Redirect output to a file
  ...
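
Combining a few of the options above (MyBench and Slow are hypothetical benchmark class names):

```
sbt> jmh:run -i 20 -wi 10 -f 2 -t 4 -e .*Slow.* -o results.txt .*MyBench.*
```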

Profiler - Java Flight Recorder (JFR)

JFR is a JVM profiler built into the Oracle JDK. Add the following option:

sbt> jmh:run -prof jmh.extras.JFR ...

This produces a flight-recording file which you can open and analyse offline using JMC (Java Mission Control).

The output directory can be specified by:

jmh:run -prof jmh.extras.JFR:dir={absolute}/{path}/{of}/{folder} -t1 -f 1 -wi 10 -i 20 .*TestBenchmark.*