Get the JDK source (per the OpenJDK instructions):
hg clone http://hg.openjdk.java.net/portola/portola
cd portola
bash ./get_source.sh
You need an existing Alpine system with an already-built JDK; I have a Docker image of Alpine with a glibc-based Zulu JDK.
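A rough sketch of how such an image can bootstrap the build (the image name, boot-JDK path, and package list are assumptions, not a verified recipe):
# assumed image name; any Alpine image with a working glibc-based JDK will do
docker run -it -v "$PWD:/portola" -w /portola alpine-zulu-jdk sh
# inside the container: Alpine build prerequisites, then the usual OpenJDK build
apk add --no-cache build-base autoconf bash grep zip
bash configure --with-boot-jdk=/usr/lib/jvm/default-jvm   # boot-JDK path assumed
make images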
# stop the active RAID arrays
mdadm --stop /dev/md[01]
# destroy the partition tables on the HDDs
dd if=/dev/zero of=/dev/sda bs=1M count=512
dd if=/dev/zero of=/dev/sdb bs=1M count=512
# create a new (GPT) partition table
sgdisk -og /dev/sda
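From here, the usual next step is to recreate the partitions and rebuild the arrays; a sketch (partition sizes, type codes, and RAID level are examples, not a prescription):
# BIOS boot partition plus a Linux RAID member on each disk
sgdisk -n 1:0:+1M -t 1:ef02 /dev/sda
sgdisk -n 2:0:0 -t 2:fd00 /dev/sda
sgdisk -og /dev/sdb
sgdisk -n 1:0:+1M -t 1:ef02 /dev/sdb
sgdisk -n 2:0:0 -t 2:fd00 /dev/sdb
# recreate the mirror from the new partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2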
#cloud-config
coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    discovery: https://discovery.etcd.io/<token>
    # multi-region deployments, multi-cloud deployments, and droplets without
    # private networking need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
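To boot a droplet with this file, pass it as user data at creation time; for example with doctl (droplet name, region, and size are placeholders):
doctl compute droplet create core-1 \
  --region nyc3 --size 1gb --image coreos-stable \
  --enable-private-networking \
  --user-data-file cloud-config.yml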
This document describes the guidelines for managing two important namespaces: the package scala._ and the Maven group org.scala-lang.
These questions become more important as we modularize the distribution and publish new modules (such as Scala.js, scala-async, and scala-pickling).
This script can be used to feed collectd with CPU and memory usage statistics for running Docker containers, using the collectd exec plugin.
It reports used and cached memory as well as user and system CPU usage by inspecting the appropriate cgroup stat file for each running container.
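For reference, these are the kinds of files it reads (cgroup v1 layout under Docker assumed; the mount point varies by distribution):
# memory: the "cache" and "rss" lines
cat /sys/fs/cgroup/memory/docker/<container-id>/memory.stat
# cpu: "user" and "system" counts in USER_HZ ticks
cat /sys/fs/cgroup/cpuacct/docker/<container-id>/cpuacct.stat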
This script is intended to be executed by collectd on a host with running Docker containers. To use it, simply configure the exec plugin in collectd to execute the collectd-docker.sh script. You may need to adjust the script to match your particulars, such as the cgroup mount location.
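A minimal sketch of that collectd configuration (the script path and the unprivileged user are assumptions):
LoadPlugin exec
<Plugin exec>
  # exec scripts must run as a non-root user
  Exec "nobody" "/usr/local/bin/collectd-docker.sh"
</Plugin>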
import com.yourkit.probes.*;
import com.yourkit.api.*;

// YourKit probe: onEnter fires whenever a compiler Run advances to its next
// phase; the Phase handles below identify the phases of interest.
@MethodPattern("scala.tools.nsc.Global$Run:advancePhase()")
public class MemoryProbe {
    public static void onEnter(@This scala.tools.nsc.Global.Run run) {
        scala.reflect.internal.Phase patmatPhase = run.phaseNamed("patmat");
        scala.reflect.internal.Phase postErasurePhase = run.phaseNamed("posterasure");
        scala.reflect.internal.Phase icodePhase = run.phaseNamed("icode");
import java.util.concurrent.atomic.AtomicReference
import java.util.concurrent.CountDownLatch

import scala.concurrent.Future
import scala.concurrent.ExecutionContext
import ExecutionContext.Implicits.global

object TxMapTest {
  /*
   * Example Usage
   * We want to show two threads working with the same data source having both of their effects succeed
package s

object Test {
  // Observe that x.companion is statically typed such that foo is callable
  def f1() = {
    val x = new Foo
    println(x)               // Foo instance
    println(x.companion)     // Foo companion
    println(x.companion.foo) // I'm foo!
  }
}
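For context, a minimal set of definitions (assumed here, not taken from the original) under which f1 compiles and prints those three lines:
package s

// Sketch: Foo exposes its companion with the companion's singleton type,
// so companion members such as foo stay statically visible on x.companion.
class Foo {
  def companion: Foo.type = Foo
  override def toString = "Foo instance"
}

object Foo {
  def foo: String = "I'm foo!"
  override def toString = "Foo companion"
}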
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs