It's now here, in The Programmer's Compendium. The content is the same as before, but being part of the compendium means that it's actively maintained.
import java.lang.instrument.Instrumentation;
import java.lang.reflect.Layer;
import java.lang.reflect.Module;
import java.util.*;

public class WeakeningAgent {
  // Agent entry point: the JVM invokes premain before the application's main.
  public static void premain(String argument, Instrumentation instrumentation) {
    // Passing "full" as the agent argument enables full weakening.
    boolean full = argument != null && argument.equals("full");
    Set<Module> importing = new HashSet<>(), exporting = new HashSet<>();
% sbtx dependencyGraph
... blah blah ...
[info] *** Welcome to the sbt build definition for Scala! ***
[info] Check README.md for more information.
[error] Not a valid command: dependencyGraph
[error] Not a valid project ID: dependencyGraph
% sbtx -Dplugins=graph dependencyGraph
... blah blah ...
miles@frege:~$ ./shapeless.sh
Loading...
Welcome to the Ammonite Repl 0.5.2
(Scala 2.11.7 Java 1.8.0_51)
@ val l = 23 :: "foo" :: true :: HNil
l: Int :: String :: Boolean :: HNil = ::(23, ::("foo", ::(true, HNil)))
@
Recently, I found myself needing to understand Scala's core typechecking rules precisely. I was particularly interested in the rules responsible for typechecking the signatures of members defined in classes (and all types derived from them). The Scala Language Specification (SLS) contains the definition of these rules but lacks any examples, and the definition relies on mutual recursion and nested switch-like constructs that make it hard to follow. I've written down examples, together with explanations of how each specific set of rules (grouped thematically) is applied. These notes helped me gain confidence that I fully understand Scala's core typechecking algorithm.
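As a concrete warm-up (a minimal example of my own, not taken from the spec), here is the kind of question the As Seen From rules answer: what is the signature of an inherited member once an abstract type member has been bound?

trait Box { type T; def value: T }

object IntBox extends Box {
  type T = Int
  def value: T = 42
}

// The declared signature of `value` in Box is `T`.
// "T as seen from IntBox" is Int, so the selection typechecks as Int.
val v: Int = IntBox.value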
Let's quote the Scala spec's As Seen From (ASF) rules, numbered for easier reference:
I'm going to start off by motivating what I'm doing here. And I want to be clear that I'm not "dissing" the existing collections implementation or anything as unproductively negative as that. It was a really good experiment and a huge step forward given what we knew back in 2.8, but now it's time to learn from that experiment and do better. This proposal applies what I believe are the lessons of that experiment: what worked, what didn't, and what is and isn't important about collections in Scala.
This is going to start out sounding really negative and pervasively dismissive, but bear with me! There's a point to all my ranting. I want to be really clear about my motivations for the proposal being the way that it is.
#!/usr/bin/env scalas
/***
scalaVersion := "2.11.7"
libraryDependencies += "com.typesafe.play" %% "play-netty-server" % "2.4.6"
*/

import play.core.server._
import play.api.routing.sird._
import play.api.mvc._
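A minimal continuation, based on Play 2.4's documented embedded-server API (the route itself is illustrative):

// Illustrative route; NettyServer.fromRouter starts an embedded server
// on the default port (9000) and keeps it running until the process stops.
val server = NettyServer.fromRouter() {
  case GET(p"/hello/$to") => Action {
    Results.Ok(s"Hello $to")
  }
}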
Java 8 introduced lambdas to the Java language. While the design choices differ in many regards from Scala's functions, the underlying mechanics used to represent Java lambdas are flexible enough to be used as a target for the Scala compiler.
Java does not have a canonical hierarchy of generic function types (à la scala.FunctionN), but instead allows a lambda to be used as shorthand for an anonymous implementation of a functional interface (an interface with a single abstract method).
Here's an example of creating a predicate that closes over one value:
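A minimal sketch in Scala, explicitly implementing java.util.function.Predicate (Scala 2.11 has no automatic SAM conversion, so the interface is implemented by hand; startsWith is a hypothetical helper):

import java.util.function.Predicate

// Hypothetical helper: returns a Predicate that closes over `prefix`.
def startsWith(prefix: String): Predicate[String] =
  new Predicate[String] {
    def test(s: String): Boolean = s.startsWith(prefix)
  }

val p = startsWith("foo")
p.test("foobar") // true
p.test("bar")    // false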
The first thing to understand is that the head field inside of scalaz.concurrent.Actor is not the "head" of the message queue in any traditional sense of the word. A better description would be "last". There are no pointers to the head of the queue, which is one of the very clever things about this implementation.
Consider the case where the actor has no outstanding messages. This new message will go into the following code:
def !(a: A): Unit = {
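For reference, the full body of ! looks roughly like this (a sketch reconstructed from memory of the scalaz 7 source, so treat the details as approximate; Node extends AtomicReference[Node[A]] and head is an AtomicReference[Node[A]]):

def !(a: A): Unit = {
  val n = new Node(a)
  val h = head.getAndSet(n)   // atomically make the new node the "last" one
  if (h ne null) h.lazySet(n) // link the previous last node to the new one
  else schedule(n)            // queue was empty: kick off processing
}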
Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources, such as a network or a file or user input or the Large Hadron Collider. It can come from many sources all at once to be merged and aggregated in interesting ways, and it can be produced into many different output sinks, such as a network or files or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits into this model.
The scalaz-stream project is an attempt to make it easy to construct, test and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just the notion of some number of data elements being sequentially pulled out of some unspecified data source. On top of this abstraction, scalaz-stream builds a set of combinators for transforming, merging and running these streams.
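To make this concrete, here is the canonical temperature-conversion example adapted from the scalaz-stream README (file paths are illustrative):

import scalaz.stream._
import scalaz.concurrent.Task

def fahrenheitToCelsius(f: Double): Double = (f - 32.0) * (5.0 / 9.0)

// Pull lines from one file, transform them, and write them to another.
val converter: Task[Unit] =
  io.linesR("testdata/fahrenheit.txt")          // source: lines from a file
    .filter(s => !s.trim.isEmpty && !s.startsWith("//"))
    .map(line => fahrenheitToCelsius(line.toDouble).toString)
    .intersperse("\n")
    .pipe(text.utf8Encode)                      // encode strings to bytes
    .to(io.fileChunkW("testdata/celsius.txt"))  // sink: write bytes to a file
    .run

// Nothing happens until the Task is actually run:
converter.run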