Thread pools on the JVM should usually be divided into the following three categories:
- CPU-bound
- Blocking IO
- Non-blocking IO polling
Each of these categories has a different optimal configuration and usage pattern.
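To make the distinction concrete, here is a minimal Scala sketch of how the three pools might be configured (the names and sizes are illustrative assumptions, not prescriptions):

import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// CPU-bound work: a fixed pool bounded by the number of available cores,
// so compute-heavy tasks never oversubscribe the CPUs
val cpuBound: ExecutionContext =
  ExecutionContext.fromExecutor(
    Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors()))

// Blocking IO: an unbounded, cached pool, so a thread stuck in a blocking
// call never starves the CPU-bound pool of workers
val blockingIO: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newCachedThreadPool())

// Non-blocking IO polling: a very small pool whose only job is to dispatch
// completion events onto one of the other two pools
val polling: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(2))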
import java.io.File
import java.io.FileInputStream

case class Chunk(length: Int, bytes: Array[Byte])

def fileContentStream(fileIn: FileInputStream): Stream[Chunk] = {
  val bytes = Array.fill[Byte](1024)(0)
  val length = fileIn.read(bytes)
  // read returns -1 at end of file; terminate the stream instead of looping forever
  if (length < 0) Stream.empty
  else Chunk(length, bytes) #:: fileContentStream(fileIn)
}
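A quick usage sketch, assuming a file at a hypothetical path: it consumes the stream lazily and sums the bytes actually read.

val in = new FileInputStream(new File("/tmp/example.bin")) // hypothetical path
try println(s"read ${fileContentStream(in).map(_.length).sum} bytes")
finally in.close()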
Latency Comparison Numbers (~2012)
----------------------------------
L1 cache reference                         0.5 ns
Branch mispredict                            5 ns
L2 cache reference                           7 ns                  14x L1 cache
Mutex lock/unlock                           25 ns
Main memory reference                      100 ns                  20x L2 cache, 200x L1 cache
Compress 1K bytes with Zippy             3,000 ns        3 us
Send 1K bytes over 1 Gbps network       10,000 ns       10 us
Read 4K randomly from SSD*             150,000 ns      150 us      ~1GB/sec SSD
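As a quick sanity check, some of the ratios quoted alongside the table can be recomputed from the raw nanosecond values (a throwaway Scala sketch; the constants simply restate the entries above):

// all values in nanoseconds, taken from the table above
val l1CacheRef    = 0.5
val l2CacheRef    = 7.0
val mainMemoryRef = 100.0
val ssdRandom4K   = 150000.0

println(f"L2 cache vs L1 cache:      ${l2CacheRef / l1CacheRef}%.0fx")    // ~14x
println(f"Main memory vs L1 cache:   ${mainMemoryRef / l1CacheRef}%.0fx") // ~200x
println(f"4K SSD read vs memory ref: ${ssdRandom4K / mainMemoryRef}%.0fx")// ~1500x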
-- Theme generated by vim2theme
Description = "vim darcula"
Default = { Colour="#a9b7c6" }
Canvas = { Colour="#2b2b2b" }
Number = { Colour="#6897bb" }
Escape = { Colour="#cc7832", Italic=true }
String = { Colour="#a5c25c" }
BlockComment = { Colour="#808080" }
import {Injectable, NgModuleFactory, NgModuleFactoryLoader, Compiler, Type} from '@angular/core';

// Simple wrapper that tags a user-supplied callback so it can be recognized later.
class LoaderCallback {
  constructor(public callback: Function) {}
}

export let load = (callback: Function): LoaderCallback => {
  return new LoaderCallback(callback);
};
CREATE TRIGGER person_notify AFTER INSERT OR UPDATE OR DELETE ON person
  FOR EACH ROW EXECUTE PROCEDURE notify_trigger(
    'id',
    'email',
    'username'
  );
CREATE TRIGGER income_notify AFTER INSERT OR UPDATE OR DELETE ON income
  FOR EACH ROW EXECUTE PROCEDURE notify_trigger(
    'id',
#!/bin/bash -uxe
VERSION=2.7.13.2713
PACKAGE=ActivePython-${VERSION}-linux-x86_64-glibc-2.3.6-401785

# make directory
mkdir -p /opt/bin
cd /opt
wget http://downloads.activestate.com/ActivePython/releases/${VERSION}/${PACKAGE}.tar.gz
I was talking to a coworker recently about general techniques that almost always form the core of any effort to write very fast, down-to-the-metal hot path code on the JVM, and they pointed out that there really isn't a particularly good place to go for this information. It occurred to me that, really, I had more or less picked up all of it by word of mouth and experience, and there just aren't any good reference sources on the topic. So… here's my word of mouth.
This is by no means a comprehensive gist. It's also important to understand that the techniques I outline here are not 100% absolute either. Performance on the JVM is an incredibly complicated subject, and while there are rules that almost always hold true, the "almost" remains very salient. Also, for many or even most applications, there will be other techniques that I'm not mentioning which will have a greater impact. JMH, Java Flight Recorder, and a good profiler are your very best friends! Measure, measure, measure.
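Since everything here comes back to measurement, this is a minimal JMH harness sketch in Scala (assuming the sbt-jmh plugin is on the build; the class name and workload are hypothetical):

import java.util.concurrent.TimeUnit
import org.openjdk.jmh.annotations._

@State(Scope.Thread)
@BenchmarkMode(Array(Mode.AverageTime))
@OutputTimeUnit(TimeUnit.NANOSECONDS)
class HotPathBench {
  // hypothetical workload: summing a small array with a bare while loop
  var xs: Array[Int] = _

  @Setup
  def setup(): Unit = {
    xs = Array.tabulate(1024)(identity)
  }

  @Benchmark
  def sumWhileLoop(): Long = {
    var i = 0
    var acc = 0L
    while (i < xs.length) {
      acc += xs(i)
      i += 1
    }
    acc // returning the result keeps the JIT from eliminating the loop as dead code
  }
}

Running it through sbt-jmh (e.g. jmh:run) takes care of warmup, forking, and dead-code elimination, which hand-rolled timing loops almost always get wrong.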
// Save as ~/.ammonite/predef.sc
// To use fs2 from the ammonite repl, type `load.fs2` at the repl prompt.
// You'll get all fs2 & cats imports, ContextShift and Timer instances
// for IO, and a globalBlocker.
import $plugin.$ivy.`org.typelevel:::kind-projector:0.11.0`

if (!repl.compiler.settings.isScala213)
  repl.load.apply("interp.configureCompiler(_.settings.YpartialUnification.value = true)")