Peter Naur's classic 1985 essay "Programming as Theory Building" argues that a program is not its source code. A program is a shared mental construct (he uses the word theory) that lives in the minds of the people who work on it. If you lose the people, you lose the program. The code is merely a written representation of the program, and it's lossy, so you can't reconstruct
package spartan.training

object TheMonadProblem {
  sealed trait Parser[+A] { self =>
    def map[B](f: A => B): Parser[B] = self.flatMap(a => Parser.succeed(f(a)))
    def flatMap[B](f: A => Parser[B]): Parser[B] = Parser.FlatMap(self, f)
  }
  object Parser {
    def succeed[A](a: A): Parser[A] = Succeed(a)
    // Constructors referenced above: a pure result and a deferred bind,
    // reified as data to be interpreted by whatever runs the parser.
    final case class Succeed[A](a: A) extends Parser[A]
    final case class FlatMap[A, B](parser: Parser[A], f: A => Parser[B]) extends Parser[B]
  }
}
package fpmax

import scala.util.Try
import scala.io.StdIn.readLine

object App0 {
  def main: Unit = {
    println("What is your name?")
    val name = readLine()
Tagless Final interpreters are an alternative to the traditional Algebraic Data Type (and generalized ADT) based implementation of the interpreter pattern. This document presents the Tagless Final approach with Scala, and shows how Dotty, with its recently added implicit function types, makes the approach even more appealing. All examples are direct translations of their Haskell versions presented in the Typed Tagless Final Interpreters: Lecture Notes (section 2).
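For a rough flavor of the encoding before diving in, here is a minimal sketch in plain Scala, loosely modeled on the lecture notes' `ExpSYM` example; the `ExpSym`, `Eval`, `Show`, and `tf1` names below are illustrative rather than taken from this document's own translations:

```scala
// The "syntax" of the language is an abstract interface parameterized by the
// representation type; each interpreter is simply an instance of it.
trait ExpSym[Repr] {
  def lit(n: Int): Repr
  def neg(e: Repr): Repr
  def add(a: Repr, b: Repr): Repr
}

// Interpreter 1: evaluate to an Int.
object Eval extends ExpSym[Int] {
  def lit(n: Int): Int = n
  def neg(e: Int): Int = -e
  def add(a: Int, b: Int): Int = a + b
}

// Interpreter 2: pretty-print to a String.
object Show extends ExpSym[String] {
  def lit(n: Int): String = n.toString
  def neg(e: String): String = s"(-$e)"
  def add(a: String, b: String): String = s"($a + $b)"
}

object Examples {
  // A term stays polymorphic in its interpreter: new interpreters need no
  // changes to existing terms.
  def tf1[Repr](s: ExpSym[Repr]): Repr =
    s.add(s.lit(8), s.neg(s.add(s.lit(1), s.lit(2))))
}

// Examples.tf1(Eval) == 5      Examples.tf1(Show) == "(8 + (-(1 + 2)))"
```

Adding a new operation is then a matter of introducing another trait alongside `ExpSym`, rather than modifying a closed ADT, which is exactly the extensibility question discussed next.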
The interpreter pattern has recently received a lot of attention in the Scala community. A lot of effort has been invested in trying to address the biggest shortcoming of ADT/GADT based solutions: extensibility. One can first look at cats' Inject typeclass for an implementation of [Data Type à la Carte](http://www.cs.ru.nl/~W.Swierstra/Publications/DataTypesA
Copyright © 2016-2018 Fantasyland Institute of Learning. All rights reserved.
A function is a mapping from one set, called the domain, to another set, called the codomain. A function associates every element in the domain with exactly one element in the codomain. In Scala, both the domain and the codomain are types.
val square: Int => Int = x => x * x
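As a further illustration of the "exactly one element" requirement, here is a small, hypothetical example (not from the original material) whose domain has only two values:

```scala
// Domain: Boolean (two elements). Codomain: Int.
// Every element of the domain is associated with exactly one element of the codomain.
val toBit: Boolean => Int = b => if (b) 1 else 0

toBit(true)  // 1
toBit(false) // 0
```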
scala> import cats.implicits._
import cats.implicits._

scala> (1 -> 2) === (1 -> 3)
res0: Boolean = true

scala> import scala.reflect.runtime.universe._
import scala.reflect.runtime.universe._

scala> showCode(reify { (1 -> 2) === (1 -> 3) }.tree)
# Install QEMU OSX port with ARM support
sudo port install qemu +target_arm
export QEMU=$(which qemu-system-arm)

# Download kernel and export location
curl -OL \
  https://github.com/dhruvvyas90/qemu-rpi-kernel/blob/master/kernel-qemu-4.1.7-jessie
export RPI_KERNEL=./kernel-qemu-4.1.7-jessie

# Download filesystem and export location
A primer/refresher on the category theory concepts that most commonly crop up in conversations about Scala or FP. (Because it's embarrassing when I forget this stuff!)
I'll be assuming Scalaz imports in code samples, and some of the code may be pseudo-Scala.
A functor is something that supports `map`.
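As a rough sketch of that idea in plain Scala (an illustration rather than Scalaz's exact definition, though Scalaz's `Functor` exposes essentially the same `map` signature):

```scala
// A Functor for a type constructor F[_] is anything that can apply a plain
// function to the value(s) inside F without changing F's structure.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

// Example instance: Option is a functor.
val optionFunctor: Functor[Option] = new Functor[Option] {
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
}

// optionFunctor.map(Some(2))(_ + 1) == Some(3)
```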
Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources, such as a network or a file or user input or the Large Hadron Collider. It can come from many sources all at once to be merged and aggregated in interesting ways, and it can be produced into many different output sinks, such as a network or files or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits into this model.
The scalaz-stream project is an attempt to make it easy to construct, test and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just this notion of some number of data being sequentially pulled out of some unspecified data source. On top of this abstraction, sca
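For a concrete flavor of that abstraction, here is a minimal sketch assuming the scalaz-stream API of that era (roughly the 0.7/0.8 releases); the file path is a placeholder:

```scala
import scalaz.concurrent.Task
import scalaz.stream._

// A pure, in-memory source of three values.
val pure = Process(1, 2, 3)

// Streams transform like collections; nothing is evaluated yet.
val doubled = pure.map(_ * 2)
doubled.toList // List(2, 4, 6)

// An effectful source: lines are pulled from the file on demand.
val lines: Process[Task, String] = io.linesR("testdata/fahrenheit.txt")

// Only interpreting the stream (runLog) and running the resulting Task performs any IO.
val firstFive = lines.take(5).runLog.run
```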