The Category Theoretic Understanding of Universal Algebra: Lawvere Theories and Monads
- http://www.pps.univ-paris-diderot.fr/~mellies/mpri/mpri-ens/articles/hyland-power-lawvere-theories-and-monads.pdf
- interesting for the historical part on how both concepts were developed
Just do it: simple monadic equational reasoning
- http://www.cs.ox.ac.uk/jeremy.gibbons/publications/mr.pdf
- laws for effects
Lawvere theories made a bit easier
- http://blog.sigfpe.com/2012/02/lawvere-theories-made-bit-easier.html
// Assumes a Hittable typeclass in scope providing a `hit` operation
// (the `Hittable.ops` import suggests it is generated, e.g. by simulacrum).
sealed trait Character
case class Player(hp: Int) extends Character
case class Civilian(name: String, hp: Int) extends Character
case class Monster(hp: Int, weakness: String) extends Character

object HittableDemo extends App {
  import Hittable.ops._

  val c1: Character = Player(1)
  assert(c1.hit == Player(0))
}
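The demo presupposes a Hittable typeclass that the snippet does not show. A minimal hand-rolled version might look like the following (the names and the `ops` object are assumptions, standing in for what simulacrum's `@typeclass` annotation would generate; the `Character` hierarchy is repeated so the sketch compiles on its own):

```scala
sealed trait Character
case class Player(hp: Int) extends Character
case class Civilian(name: String, hp: Int) extends Character
case class Monster(hp: Int, weakness: String) extends Character

// Hypothetical Hittable typeclass: anything that can take a point of damage.
trait Hittable[A] {
  def hit(a: A): A
}

object Hittable {
  // One instance covering the whole Character hierarchy.
  implicit val characterHittable: Hittable[Character] = new Hittable[Character] {
    def hit(a: Character): Character = a match {
      case Player(hp)            => Player(hp - 1)
      case Civilian(name, hp)    => Civilian(name, hp - 1)
      case Monster(hp, weakness) => Monster(hp - 1, weakness)
    }
  }

  // Stand-in for the simulacrum-generated syntax: enables `c.hit`.
  object ops {
    implicit class HittableOps[A](val a: A) {
      def hit(implicit ev: Hittable[A]): A = ev.hit(a)
    }
  }
}
```

With this in scope, `c1.hit` in the demo resolves through the implicit `HittableOps` wrapper.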
The first thing to understand is that the head
field inside of scalaz.concurrent.Actor
is not the "head" of the message queue in any traditional sense of the word. A better description would be "last": there are no pointers to the head of the queue at all, which is one of the very clever things about this implementation.
Consider the case where the actor has no outstanding messages and a new message arrives. The new message will go through the following code:
def !(a: A): Unit = {
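To see why head is really "last", here is a stripped-down, synchronous sketch of the swap-based mailbox. The names are hypothetical and the real Actor schedules the drain on an executor rather than running it on the caller's thread, but the enqueue logic follows the same shape:

```scala
import java.util.concurrent.atomic.AtomicReference

// Each node carries a message and an AtomicReference to the node
// enqueued AFTER it (a forward link).
final class Node[A](val a: A) extends AtomicReference[Node[A]]

// Simplified sketch of the swap-based MPSC mailbox. `head` holds the most
// recently enqueued node, i.e. the "last" one; nothing points at the front.
final class Mailbox[A](handler: A => Unit) {
  private val head = new AtomicReference[Node[A]]

  def !(a: A): Unit = {
    val n = new Node(a)
    val h = head.getAndSet(n)   // swap ourselves in as the new "last" node
    if (h ne null) h.lazySet(n) // queue was non-empty: link old last -> us
    else act(n)                 // queue was empty: we must start the drain
  }

  // Drain messages by following the forward links from `n`.
  private def act(n: Node[A]): Unit = {
    handler(n.a)
    val next = n.get
    if (next ne null) act(next)
    else if (!head.compareAndSet(n, null)) {
      // A producer swapped in a newer node but hasn't linked it yet;
      // wait for the link to become visible, then keep draining.
      var n2 = n.get
      while (n2 eq null) n2 = n.get
      act(n2)
    }
  }
}
```

The key trick is visible in `!`: enqueueing is a single getAndSet on the tail, and the sender that finds a null previous tail knows it is responsible for kicking off processing.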
Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources, such as a network or a file or user input or the Large Hadron Collider. It can come from many sources all at once to be merged and aggregated in interesting ways, and it can be produced into many different output sinks, such as a network or files or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits into this model.
The scalaz-stream project is an attempt to make it easy to construct, test and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just the notion of a sequence of values being pulled one at a time out of some unspecified data source. On top of this abstraction, scalaz-stream builds a library of combinators for transforming, merging and consuming streams.
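The "sequentially pulled" notion can be illustrated with a toy pull-based stream type. This is an illustration only, not scalaz-stream's actual Process type, and all names here are made up:

```scala
// A toy pull-based stream: the consumer repeatedly asks for the next step.
sealed trait Pull[+A]
case object Halt extends Pull[Nothing]
final case class Emit[A](head: A, tail: () => Pull[A]) extends Pull[A]

object Pull {
  // An "unspecified data source": here, an infinite stream of integers.
  def from(n: Int): Pull[Int] = Emit(n, () => from(n + 1))

  // Pulling is demand-driven, so even an infinite source can be truncated.
  def take[A](p: Pull[A], n: Int): Pull[A] =
    if (n <= 0) Halt
    else p match {
      case Halt       => Halt
      case Emit(h, t) => Emit(h, () => take(t(), n - 1))
    }

  // A "sink": pull everything out of a finite stream into a List.
  def runLog[A](p: Pull[A]): List[A] = p match {
    case Halt       => Nil
    case Emit(h, t) => h :: runLog(t())
  }
}
```

Nothing is computed until `runLog` pulls: `Pull.runLog(Pull.take(Pull.from(0), 3))` yields `List(0, 1, 2)` even though the source is infinite.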
/**
 * To get started:
 *   git clone https://github.com/twitter/algebird
 *   cd algebird
 *   ./sbt algebird-core/console
 */

/**
 * Let's get some data. Here is Alice in Wonderland, line by line
 */
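The session that follows is elided here, but the kind of aggregation algebird enables can be sketched with a hand-rolled Map monoid in plain Scala. This stands in for algebird's built-in `Monoid[Map[String, Int]]`; the function names are assumptions:

```scala
// Pointwise addition of word counts: the Map monoid's `plus`.
// algebird provides this instance out of the box; this is just a sketch.
def plus(a: Map[String, Int], b: Map[String, Int]): Map[String, Int] =
  b.foldLeft(a) { case (acc, (k, v)) => acc.updated(k, acc.getOrElse(k, 0) + v) }

// Count words per line, then combine the per-line counts monoidally.
// Because `plus` is associative, the fold could equally run in parallel.
def wordCount(lines: Seq[String]): Map[String, Int] =
  lines
    .map { line =>
      line.split("\\s+").filter(_.nonEmpty)
        .groupBy(identity)
        .map { case (w, ws) => (w, ws.length) }
    }
    .foldLeft(Map.empty[String, Int])(plus)
```

Associativity is the whole point: per-line (or per-shard) counts can be combined in any grouping, which is what makes monoids useful for aggregating a large text like Alice in Wonderland.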
Benchmarking does not seem to be the main focus of any specific academic field, although the problem has been addressed by many different groups within computer science.
Some papers I found interesting:
import java.util.concurrent.atomic._
import collection.mutable.ArrayBuffer

/**
 * Buffer type with purely functional API, using mutable
 * `ArrayBuffer` and cheap copy-on-write scheme.
 * Idea described by Bryan O'Sullivan in http://www.serpentine.com/blog/2014/05/31/attoparsec/
 */
class Buffer[A](id: AtomicLong, stamp: Long, values: ArrayBuffer[A], size: Int) {
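The snippet stops at the class header. A sketch of how the copy-on-write scheme might be completed (method names are assumptions, not the original code): a handle may mutate the shared `ArrayBuffer` in place only if it is the most recent one, which is checked by comparing its `stamp` against the shared `AtomicLong`; stale handles copy their logical prefix first.

```scala
import java.util.concurrent.atomic._
import collection.mutable.ArrayBuffer

class Buffer[A](id: AtomicLong, stamp: Long, values: ArrayBuffer[A], size: Int) {

  // Append: if our stamp is current, atomically claim the next stamp and
  // mutate in place; otherwise a newer handle owns `values`, so copy first.
  def :+(a: A): Buffer[A] =
    if (id.compareAndSet(stamp, stamp + 1)) {
      values += a
      new Buffer(id, stamp + 1, values, size + 1)
    } else {
      val vs = new ArrayBuffer[A]
      vs ++= values.take(size) // only our logical prefix, not later appends
      vs += a
      new Buffer(new AtomicLong(stamp + 1), stamp + 1, vs, size + 1)
    }

  // Reads only consider the first `size` elements this handle knows about.
  def toList: List[A] = values.take(size).toList
}

object Buffer {
  def empty[A]: Buffer[A] = new Buffer(new AtomicLong(0), 0, new ArrayBuffer[A], 0)
}
```

The happy path (appending to the newest handle) is a single CAS plus an in-place append; the copy only happens when an old handle is extended a second way, which preserves the purely functional semantics.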
trait ConsoleAlg[F[_]] { | |
def readLine: F[Option[String]] | |
def printLine(line: String): F[Unit] | |
} | |
trait Console[+A] { | |
def run[F[+_]](F: ConsoleAlg[F]): F[A] | |
} | |
object Console { |
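The Console companion object is cut off above. To show how the ConsoleAlg/Console pair is meant to be used, here is a hypothetical interpreter (using `() => A` as the carrier `F`) and a small program written against the algebra; none of this is from the original snippet, and the traits are repeated so the sketch compiles on its own:

```scala
// Repeated from the snippet above so this sketch is self-contained.
trait ConsoleAlg[F[_]] {
  def readLine: F[Option[String]]
  def printLine(line: String): F[Unit]
}

trait Console[+A] {
  def run[F[+_]](F: ConsoleAlg[F]): F[A]
}

// A trivially effectful carrier: a suspended computation.
type Thunk[+A] = () => A

// Hypothetical test interpreter: writes to an in-memory log and
// serves canned input, instead of touching the real console.
class LogConsole(input: Option[String], log: collection.mutable.ListBuffer[String])
    extends ConsoleAlg[Thunk] {
  def readLine: Thunk[Option[String]] = () => input
  def printLine(line: String): Thunk[Unit] = () => { log += line; () }
}

// A program is abstract over the interpreter: it only knows the algebra.
val echo: Console[Unit] = new Console[Unit] {
  def run[F[+_]](F: ConsoleAlg[F]): F[Unit] = F.printLine("echo!")
}
```

Running it is just supplying an interpreter and forcing the thunk: `echo.run(new LogConsole(None, log))()`. Swapping in an interpreter backed by real stdin/stdout would leave `echo` untouched, which is the point of the encoding.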