Simple example using the tagless final functional pattern, leveraging cats-effect. Taken from here and updated a bit (basically using explicit types).
It is also worth having a look at https://www.baeldung.com/scala/tagless-final-pattern.
First, we define the type classes, which are just traits in Scala:
// Type classes for Semigroup and Monoid (we will be using monoid, but this is more formal)
trait Semigroup[A] {
  def op(x: A, y: A): A // should be associative
}

trait Monoid[A] extends Semigroup[A] {
  def zero: A
}
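The full example lives at the links above; as a rough stand-in (the names MonoidInstances, combineAll, Console and IOConsole below are mine, not the original article's), here is a minimal sketch of an Int instance for the Monoid above, plus a tiny tagless-final algebra whose interpreter happens to be cats-effect IO:

import cats.effect.{IO, IOApp}

object MonoidInstances {
  // An Int instance of the Monoid type class defined above.
  implicit val intAdditionMonoid: Monoid[Int] = new Monoid[Int] {
    def zero: Int = 0
    def op(x: Int, y: Int): Int = x + y // associative
  }

  // A generic fold that relies only on the Monoid type class.
  def combineAll[A](as: List[A])(implicit m: Monoid[A]): A =
    as.foldLeft(m.zero)(m.op)
}

// Tagless-final algebra: the operations stay abstract in the effect type F[_] ...
trait Console[F[_]] {
  def printLine(s: String): F[Unit]
}

// ... and cats-effect IO is just one possible interpreter.
object IOConsole extends Console[IO] {
  def printLine(s: String): IO[Unit] = IO.println(s)
}

object Demo extends IOApp.Simple {
  import MonoidInstances._
  def run: IO[Unit] =
    IOConsole.printLine(s"sum = ${combineAll(List(1, 2, 3, 4))}") // prints "sum = 10"
}

The point of the encoding is that code written against Console[F[_]] never mentions IO directly, so another interpreter (for tests, for example) can be swapped in without touching that code.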
val l1 = List(1, 3, 5, 7)
val l2 = List(2, 4, 6, 8)

// The for comprehension desugars to the flatMap/map chain below.
val result = for {
  x <- l1
  y <- l2
} yield x * y

val flatMapResult = l1.flatMap(i => l2.map(_ * i))
// result == flatMapResult == List(2, 4, 6, 8, 6, 12, 18, 24, 10, 20, 30, 40, 14, 28, 42, 56)
(Mostly taken from here)
If I want to make a benchmark module, I'll usually resort to the clock facilities provided by the standard library or, at worst, I'll declare a clock type and require that the module be initialized with a class implementing it before its functionality is ready to be used.
When module support is available in the language, however, I'll not only declare the benchmark interfaces I provide, but also declare that I need a "clock module" -- a module exporting certain interfaces that I require.
A client of my module would not be required to do anything to use my interfaces -- it could just go ahead and use them. Or it might not even declare that my benchmark module will be used and, instead, declare that it too has a requirement for such a module.
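A rough Scala sketch of the idea (all names here are hypothetical, not from the original post), using a self-type to express "this module requires a clock module":

trait ClockModule {
  def nanoTime(): Long
}

// The benchmark module exports `time` and, via the self-type, *requires* a
// ClockModule: it cannot be instantiated without one.
trait BenchmarkModule { self: ClockModule =>
  def time[A](body: => A): (A, Long) = {
    val start = nanoTime()
    val result = body
    (result, nanoTime() - start)
  }
}

// The client satisfies the requirement once, when wiring the modules together,
// and from then on just uses the benchmark interface.
object Benchmarks extends BenchmarkModule with ClockModule {
  def nanoTime(): Long = System.nanoTime()
}

// Benchmarks.time { (1 to 1000).sum } // => (500500, <elapsed nanoseconds>)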
L1 cache reference ......................... 0.5 ns
Branch mispredict ............................ 5 ns
L2 cache reference ........................... 7 ns
Mutex lock/unlock ........................... 25 ns
Main memory reference ...................... 100 ns
Compress 1K bytes with Zippy ............. 3,000 ns = 3 µs
Send 2K bytes over 1 Gbps network ....... 20,000 ns = 20 µs
SSD random read ........................ 150,000 ns = 150 µs
Read 1 MB sequentially from memory ..... 250,000 ns = 250 µs
class Action { }

class VisitPage extends Action {
  constructor(pageUrl) {
    super();
    this.pageUrl = pageUrl;
  }
}

class ViewUser extends Action {
  constructor(userName) {
    super();
    this.userName = userName;
  }
}
def sumx(a: Int)(b: Int)(c: Int) = a + b + c
def sumy(a: Int) = (b: Int) => a + b

val sumxx = sumx(1)(1) _ // mind the _
val sumyy = sumy(2)

sumxx(3) // 5
sumyy(3) // 5
A system is said to be concurrent if it can support two or more actions in progress at the same time. A system is said to be parallel if it can support two or more actions executing simultaneously.
-- The Art of Concurrency
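As a rough illustration (my own sketch, not from the book): the two cats-effect tasks below are concurrent -- both are in progress at the same time, so the whole program takes about one second rather than two -- and, on a runtime with more than one core, they may also execute in parallel.

import cats.effect.{IO, IOApp}
import scala.concurrent.duration._

object ConcurrentTasks extends IOApp.Simple {
  def task(name: String): IO[Unit] =
    IO.println(s"$name started") >> IO.sleep(1.second) >> IO.println(s"$name finished")

  // IO.both starts both tasks and waits for both results.
  def run: IO[Unit] =
    IO.both(task("A"), task("B")).void
}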
We build a (curried) function that evaluates a variable number of possibly asynchronous predicates over a value concurrently, and resolves to true only if all of them hold.
const checkConditions = (...predicates) => (value) =>
  Promise.all(predicates.map((p) => p(value)))
    .then((results) => results.reduce((acc, result) => acc && result, true));
const initArray = (n, v) => Array(n).fill(v);
const initList = (size, fromZero = true) => Array.from({ length: size }, (_, i) => fromZero ? i : i + 1);

const l = initArray(5, 0);
// l = [0, 0, 0, 0, 0]
const l2 = initList(5);
// l2 = [0, 1, 2, 3, 4]
const l3 = initList(5, false);
// l3 = [1, 2, 3, 4, 5]