This gist has been upgraded to a blog post here.
scala> val buf = ListBuffer(1)
buf: scala.collection.mutable.ListBuffer[Int] = ListBuffer(1)

scala> val xs = buf.toIterable match { case xs: List[Int] => xs }
xs: List[Int] = List(1)

scala> buf ++= 1 to 100
res11: buf.type = ListBuffer(1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100)

scala> xs
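The session above shows an aliasing leak: pattern-matching the result of `toIterable` as a `List` can expose the buffer's internal cons cells, so later mutation of the buffer becomes visible through the supposedly immutable `xs`. A minimal sketch of the safe alternative, using `toList` to take an immutable snapshot:

```scala
import scala.collection.mutable.ListBuffer

val buf = ListBuffer(1)
// toList guarantees an immutable snapshot: the buffer will copy its
// internal nodes on the next mutation instead of sharing them unsafely
val xs: List[Int] = buf.toList
buf ++= (1 to 100)
assert(xs == List(1)) // the snapshot is unaffected by the mutation
```

The key difference is that `toList` promises value semantics, whereas matching a structural type out of `toIterable` circumvents that promise.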
import Web.Scotty.Trans
import Network.Wai.Middleware.RequestLogger
import Network.Wai.Middleware.Static
import Text.Blaze.Html.Renderer.Text (renderHtml)
import System.Environment
import Database.Persist.Sql (SqlPersistT(..), Connection, runMigration, runSqlPersistM, runSqlConn, unSqlPersistT)
import Database.Persist.Postgresql (withPostgresqlConn)

setup :: ScottyT Text (SqlPersistT (NoLoggingT (ResourceT IO))) () -> IO ()
setup m = do
import scala.util.control.Exception
import scalaz._
import scalaz.Free.FreeC
import scalaz.Scalaz._

object Kasten {
  /////////////////////////
  // The State stuff
  /////////////////////////
import scalaz._
import \/._
import Free._
import scalaz.syntax.monad._
import java.util.concurrent.atomic.AtomicReference
import java.util.concurrent.CountDownLatch

object Experiment {
  sealed trait OI[A] {
    def map[B](f: A => B): OI[B]
-- http://hackage.haskell.org/package/base-4.6.0.1/docs/src/GHC-Base.html#map
map :: (a -> b) -> [a] -> [b]
map _ [] = []
map f (x:xs) = f x : map f xs |
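For comparison, the same clause-by-clause structural recursion can be written in the document's other language, Scala (a sketch, not library code):

```scala
// Recursive map over a list, mirroring the two GHC clauses above:
// the empty list maps to the empty list; otherwise apply f to the
// head and recurse on the tail.
def map[A, B](f: A => B)(xs: List[A]): List[B] = xs match {
  case Nil    => Nil
  case x :: t => f(x) :: map(f)(t)
}
```

Like the Haskell original, this version is not tail-recursive; it recurses once per element.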
trait ConsoleAlg[F[_]] {
  def readLine: F[Option[String]]
  def printLine(line: String): F[Unit]
}

trait Console[+A] {
  def run[F[+_]](F: ConsoleAlg[F]): F[A]
}
object Console { |
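The `ConsoleAlg` trait is a final-tagless algebra: a `Console[A]` program can be run against any interpreter of that algebra. One possible interpreter, sketched here with `Function0` as the effect type so the operations stay pure and testable (the `TestConsole` name and its `output` accessor are illustrative assumptions, not part of the original gist):

```scala
// The algebra, restated so this sketch is self-contained.
trait ConsoleAlg[F[_]] {
  def readLine: F[Option[String]]
  def printLine(line: String): F[Unit]
}

// A test interpreter: Function0 ("thunk") as F, reading from a fixed
// list of canned inputs and capturing printed output in a buffer.
final class TestConsole(inputs: List[String]) extends ConsoleAlg[Function0] {
  private var remaining = inputs
  private val out = new StringBuilder
  def readLine: () => Option[String] = () => remaining match {
    case h :: t => remaining = t; Some(h)
    case Nil    => None
  }
  def printLine(line: String): () => Unit =
    () => { out ++= line; out += '\n'; () }
  def output: String = out.toString
}
```

A `Console[A]` value would then be interpreted by passing this instance to its `run` method; because `Function0` defers evaluation, nothing happens until the resulting thunk is forced.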
# thanks to this gist for the iTerm2 tab naming stuff: https://gist.github.com/phette23/5270658
# can't remember where I cribbed the rest of this from!
# hacked it a bit to work on OS X

_bold=$(tput bold)
_normal=$(tput sgr0)

__vcs_dir() {
  local vcs base_dir sub_dir ref
  sub_dir() {
Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources: a network, a file, user input, the Large Hadron Collider. It can come from many sources at once, to be merged and aggregated in interesting ways, and it can be emitted to many different output sinks, such as a network, files, or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits this model.
The scalaz-stream project is an attempt to make it easy to construct, test and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just this notion of some number of data being sequentially pulled out of some unspecified data source. On top of this abstraction, sca
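The "stream" notion described above — some number of elements pulled sequentially from an unspecified source — can be sketched as a toy pull-based structure. This is only an illustration of the idea under a minimal cons-style encoding; it is not the actual scalaz-stream `Process` API:

```scala
// A toy model of a stream: either halted, or one element plus a
// suspended tail that is only computed when pulled.
sealed trait ToyStream[+A]
case object Halt extends ToyStream[Nothing]
final case class Emit[A](head: A, tail: () => ToyStream[A]) extends ToyStream[A]

object ToyStream {
  // Build a stream from a strict list.
  def fromList[A](as: List[A]): ToyStream[A] = as match {
    case Nil    => Halt
    case h :: t => Emit(h, () => fromList(t))
  }
  // Drain the stream by repeatedly pulling the suspended tail.
  def toList[A](s: ToyStream[A]): List[A] = s match {
    case Halt       => Nil
    case Emit(h, t) => h :: toList(t())
  }
}
```

The point of the suspension in `tail` is that a consumer controls when (and whether) the next element is produced, which is the essence of the pull-based model the library abstracts over.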
This document is licensed CC0.
These are some questions to give a sense of what you know about FP. They are more of a gauge of what you know; it isn't necessarily expected that a single person will breeze through all of them. For each question, give your answer if you know it, say how long it took you, and rate it 'trivial', 'easy', 'medium', 'hard', or 'I don't know'. Give your answers in Haskell for the questions that involve code.
Please be honest, as the interviewer may do some spot checking with similar questions. It's not going to look good if you report a question as being 'trivial' but a similar question completely stumps you.
Here's a bit more guidance on how to use these labels: