Thread pools on the JVM should usually be divided into the following three categories:
- CPU-bound
- Blocking IO
- Non-blocking IO polling
Each of these categories has a different optimal configuration and usage pattern.
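A minimal sketch of what that split can look like with plain java.util.concurrent executors (the pool choices and names below are illustrative, not prescriptive):

import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// CPU-bound work: a fixed pool bounded by the number of cores.
val cpuBound: ExecutionContext =
  ExecutionContext.fromExecutor(
    Executors.newFixedThreadPool(Runtime.getRuntime.availableProcessors()))

// Blocking IO: a cached, effectively unbounded pool, so blocked threads
// do not starve the CPU-bound pool.
val blockingIO: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newCachedThreadPool())

// Non-blocking IO polling: a very small pool (often a single thread) that only
// dispatches readiness events and must never run blocking or heavy work.
val polling: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())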
A function is a mapping from one set, called the domain, to another set, called the codomain. A function associates every element in the domain with exactly one element in the codomain. In Scala, both the domain and the codomain are types.
val square : Int => Int = x => x * x
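Applying the function maps an element of the domain to exactly one element of the codomain:

square(3) // 9: the domain element 3 is associated with the codomain element 9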
package se.kth.edx.id2203.core

import java.net.{ InetAddress, InetSocketAddress }
import se.kth.edx.id2203.core.ExercisePrimitives.PerfectP2PLink._
import se.kth.edx.id2203.core.Ports._
import se.sics.kompics.{ Init, KompicsEvent }
import se.sics.kompics.network.{ Address, Network, Transport }
import se.sics.kompics.sl.{ ComponentDefinition, _ }
This should work with Elm 0.18.
Destructuring (or pattern matching) is a way to extract data from a data structure (a tuple, list, or record) in a form that mirrors its construction. Compared to other languages, Elm supports much less destructuring, but let's see what it's got!
myTuple = ("A", "B", "C")
myNestedTuple = ("A", "B", "C", ("X", "Y", "Z"))
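For example, a tuple can be taken apart inside a let expression (a minimal sketch using the tuples above; the names firstLetter and innerX are just for illustration):

firstLetter =
    let
        ( a, _, _ ) = myTuple
    in
        a

-- nested tuples destructure the same way: innerX is bound to "X"
innerX =
    let
        ( _, _, _, ( x, _, _ ) ) = myNestedTuple
    in
        x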
API | Status Codes |
---|---|
[Twitter][tw] | 200, 304, 400, 401, 403, 404, 406, 410, 420, 422, 429, 500, 502, 503, 504 |
[Stripe][stripe] | 200, 400, 401, 402, 404, 429, 500, 502, 503, 504 |
[Github][gh] | 200, 400, 422, 301, 302, 304, 307, 401, 403 |
[Pagerduty][pd] | 200, 201, 204, 400, 401, 403, 404, 408, 500 |
[NewRelic Plugins][nr] | 200, 400, 403, 404, 405, 413, 500, 502, 503 |
[Etsy][etsy] | 200, 201, 400, 403, 404, 500, 503 |
[Dropbox][db] | 200, 400, 401, 403, 404, 405, 429, 503, 507 |
package utils

import java.nio.ByteBuffer
import cats.data.Xor
import io.circe.Json
import play.api.http.LazyHttpErrorHandler
import play.api.http.Status._
import play.api.libs.iteratee.{Iteratee, Traversable}
import play.api.mvc.BodyParsers.parse._
Every application ever written can be viewed as some sort of transformation on data. Data can come from different sources, such as a network or a file or user input or the Large Hadron Collider. It can come from many sources all at once to be merged and aggregated in interesting ways, and it can be produced into many different output sinks, such as a network or files or graphical user interfaces. You might produce your output all at once, as a big data dump at the end of the world (right before your program shuts down), or you might produce it more incrementally. Every application fits into this model.
The scalaz-stream project is an attempt to make it easy to construct, test and scale programs that fit within this model (which is to say, everything). It does this by providing an abstraction around a "stream" of data, which is really just this notion of some number of data elements being sequentially pulled out of some unspecified data source. On top of this abstraction, scalaz-stream provides a set of combinators for transforming, composing and running streams.
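As a rough illustration of that abstraction (a minimal sketch assuming scalaz-stream is on the classpath; nothing here touches a real data source), a pure stream can be described and then materialised on demand:

import scalaz.stream.Process

// Describe a stream of values and a transformation over it; nothing runs yet.
val squares = Process(1, 2, 3, 4).map(n => n * n)

// Materialise the pure stream by pulling its elements out.
val result = squares.toList // List(1, 4, 9, 16)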
import scalaz.\/
import scalaz.syntax.either._

object Example2 {
  // This example simulates error handling for a simple three tier web application
  //
  // The tiers are:
  //  - the HTTP service
  //  - a user authentication layer
  //  - a database layer
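  // The definitions below are a minimal, hypothetical sketch (all names and
  // types are invented for illustration) of how the three tiers could report
  // failures with \/ and be composed so the first failure short-circuits.

  sealed trait Error
  case object NotAuthenticated extends Error
  case object UserNotFound extends Error

  final case class User(name: String)

  // Database layer: look a user up by id, or fail with UserNotFound
  def lookupUser(id: Int): Error \/ User =
    if (id == 1) User("admin").right else UserNotFound.left

  // Authentication layer: turn a session token into a user id, or fail
  def authenticate(token: Option[String]): Error \/ Int =
    token match {
      case Some(_) => 1.right
      case None    => NotAuthenticated.left
    }

  // HTTP service: compose the two layers with a for-comprehension over \/
  def handleRequest(token: Option[String]): Error \/ User =
    for {
      id   <- authenticate(token)
      user <- lookupUser(id)
    } yield user
}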
## TCP FLAGS

Unskilled Attackers Pester Real Security Folks
==============================================
TCPDUMP FLAGS
Unskilled = URG = (Not Displayed in Flag Field, Displayed elsewhere)
Attackers = ACK = (Not Displayed in Flag Field, Displayed elsewhere)
Pester = PSH = [P] (Push Data)
Real = RST = [R] (Reset Connection)
Security = SYN = [S] (Start Connection)
Folks = FIN = [F] (Finish Connection)