Thread pools on the JVM should usually be divided into the following three categories:
- CPU-bound
- Blocking IO
- Non-blocking IO polling
Each of these categories has a different optimal configuration and usage pattern.
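For concreteness, here is a minimal sketch of what those three pools might look like with plain `java.util.concurrent` executors. The names and exact sizes are illustrative assumptions, not a prescription:

```scala
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

object Pools {
  // CPU-bound work: a fixed pool bounded by the number of available
  // processors, so compute tasks never oversubscribe the CPUs.
  val cpuBound: ExecutionContext =
    ExecutionContext.fromExecutor(
      Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors()))

  // Blocking IO: an unbounded, cached pool, since these threads spend most
  // of their time parked waiting on sockets, files, or JDBC calls.
  val blockingIo: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newCachedThreadPool())

  // Non-blocking IO polling: one (or two) threads that only dispatch
  // selector/event-loop callbacks and must never block or run user work.
  val nonBlockingPoll: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())
}
```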
I was talking to a coworker recently about general techniques that almost always form the core of any effort to write very fast, down-to-the-metal hot path code on the JVM, and they pointed out that there really isn't a particularly good place to go for this information. It occurred to me that, really, I had more or less picked up all of it by word of mouth and experience, and there just aren't any good reference sources on the topic. So… here's my word of mouth.
This is by no means a comprehensive gist. It's also important to understand that the techniques I outline here are not 100% absolute either. Performance on the JVM is an incredibly complicated subject, and while there are rules that almost always hold true, the "almost" remains very salient. Also, for many or even most applications, there will be other techniques that I'm not mentioning which will have a greater impact. JMH, Java Flight Recorder, and a good profiler are your very best friends!
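To make that last recommendation concrete, a JMH micro-benchmark (here written for sbt-jmh; the class and workloads below are made-up placeholders) is only a handful of annotations:

```scala
import java.util.concurrent.TimeUnit
import org.openjdk.jmh.annotations._

// Placeholder benchmark: compares two ways of concatenating strings.
@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@OutputTimeUnit(TimeUnit.SECONDS)
class StringConcatBench {
  val xs: List[String] = List.fill(100)("x")

  @Benchmark
  def viaStringBuilder: String = {
    val sb = new java.lang.StringBuilder
    xs.foreach(s => sb.append(s))
    sb.toString
  }

  @Benchmark
  def viaFoldLeft: String =
    xs.foldLeft("")(_ + _)
}
```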
Provisional benchmarks of AST-free serialization put my WIP branch of uPickle roughly 40% faster than circe on my current set of ad-hoc benchmarks, provided the encoders/decoders are cached (bigger numbers are better):
| Library | Read | Write |
| --- | --- | --- |
| playJson | 2761067 | 3412630 |
| circe | 6005895 | 5205007 |
| upickleDefault | 4543628 | 3814459 |
| upickleLegacy | 8393416 | |
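"Cached" here just means the derived encoders/decoders are materialized once (e.g. as an `implicit val`) rather than re-derived at every call site. A sketch with a hypothetical `Person` type:

```scala
import upickle.default.{ReadWriter, macroRW, read, write}

case class Person(name: String, age: Int)

object Person {
  // Derive the codec once and cache it; deriving inline at each call site
  // is what the caveat above rules out.
  implicit val rw: ReadWriter[Person] = macroRW
}

object Demo extends App {
  val json = write(Person("Alice", 30)) // reuses the cached ReadWriter
  println(read[Person](json))           // Person(Alice,30)
}
```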
```scala
// Fixed-point combinator for functions; the eta-expansion `(_)` defers
// the recursive call so it only unfolds when the result is applied.
def fix[A, B](f: (A => B) => (A => B)): A => B = f(fix(f))(_)

val fib: Int => Int = fix[Int, Int](recur => n =>
  if (n <= 1) 1
  else recur(n - 1) + recur(n - 2)) // fib(10) == 89
```
```scala
import cats.Traverse
import cats.effect._
import cats.effect.concurrent.Semaphore
import cats.temp.par._
import cats.syntax.all._

import scala.concurrent.duration._

object Main extends IOApp {
  import ParTask._
```
```scala
import scalaz.nio._
import scalaz.nio.channels.{AsynchronousServerSocketChannel, AsynchronousSocketChannel}
import scalaz.zio.console._
import scalaz.zio._

object TestSocket extends App {
  override def run(args: List[String]): ZIO[Environment, Nothing, Int] = {
    theSocket.foldM(
```
With the rise of so-called bifunctor IO types, such as ZIO, the question has naturally arisen of how to leverage the cats-effect type classes to make use of this new power. So far, suggestions have mostly focused on duplicating the existing hierarchy into two distinct branches: one parameterized over F[_] and another parameterized over F[_, _]. To me this is not a great situation, because library maintainers would then have to write code against both hierarchies, or choose one and leave the other in the dust. Instead, we should find a way to unite the two shapes under a single hierarchy. This is a draft of how to enable that unification using polymorphic function types in Dotty.
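This isn't the draft's actual encoding, but as a rough sketch of the two Dotty features in play (the `BIO`, `RaiseAll`, and `Functor` names below are made up for illustration): a polymorphic function type lets a capability be stated once while abstracting over the error channel, and a type lambda recovers the familiar unary `F[_]` shape from the same binary type.

```scala
// Toy bifunctor effect standing in for something like ZIO's IO[E, A].
final case class BIO[+E, +A](run: Either[E, A])

// A capability written once against the binary shape; the polymorphic
// function type abstracts over the error parameter E.
trait RaiseAll[F[_, _]]:
  val raise: [E, A] => E => F[E, A]

object BIORaise extends RaiseAll[BIO]:
  val raise: [E, A] => E => BIO[E, A] =
    [E, A] => (e: E) => BIO(Left(e))

// Fixing E with a type lambda recovers the unary shape, so the same data
// type can also satisfy F[_]-indexed type classes like Functor.
trait Functor[F[_]]:
  def map[A, B](fa: F[A])(f: A => B): F[B]

given [E]: Functor[[A] =>> BIO[E, A]] with
  def map[A, B](fa: BIO[E, A])(f: A => B): BIO[E, B] =
    BIO(fa.run.map(f))

@main def demo(): Unit =
  println(BIORaise.raise[String, Int]("boom")) // BIO(Left(boom))
```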