% It's traverse!
% Clément Delafargue
% DDDDD 2020-05-15
I'm not a regular DDD guy, I come from FP.
Even though most of the DDD literature is illustrated with OO languages, FP maps _really well_ to DDD (some would argue _way better_; I would argue that). An obvious difference would be defaulting to entities vs value objects (mutability), but it goes farther than that.

- DDD Made Functional (Scott Wlaschin)
- Functional And Reactive Domain Modelling (Debasish Ghosh)
The first is illustrated with F#, the second with Scala, but that's not the biggest difference. I'll be closer to the second one.
Describe the domain model as an "algebra". _Very_ roughly speaking, it's a collection of function types representing the possible operations of your model. That sounds a lot like "regular" DDD (and it's a way of working you're naturally nudged into when doing FP). That's part of why I'm saying FP is adapted to DDD: it's a natural way of working in FP languages. No need to write books or organize conferences to advertise it.
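To make "algebra" a bit more concrete, here is a minimal, hypothetical sketch: the operations of a toy account model expressed as a record of function types (all the names and types here are made up for illustration).

```haskell
-- hypothetical domain types, only here so the sketch stands alone
newtype Owner  = Owner String
newtype Amount = Amount Integer
data Account   = Account Owner Amount

-- the "algebra": the possible operations of the model, as function types
data AccountOps m = AccountOps
  { openAccount  :: Owner -> m Account
  , deposit      :: Amount -> Account -> m Account
  , closeAccount :: Account -> m ()
  }
```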
One thing that makes FP really shine is how it allows abstractions. You may have heard that it makes composition and code reuse easy thanks to immutability, all that. That's true, but here I'm talking about something more specific: common abstractions (shared across projects, and even across languages).
These abstractions are so robust and so useful (i.e. not tied to the language semantics) that it's feasible to make them part of a shared vocabulary, and ultimately part of the model.
That's a key difference with "regular" DDD. Normally, technical concerns should not be exposed in the model. They can be used to ease implementation, but not directly exposed.
`Money` has a monoid instance
```haskell
-- addMoney a (addMoney b c) == (addMoney (addMoney a b) c)
addMoney :: Money -> Money -> Money
zeroMoney :: Money
```
The two are equivalent. One requires knowing what a monoid is; the other has to spell it out. Both are valid choices, with different strengths.
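As a point of comparison, a minimal sketch of what the monoid instance could look like, assuming `Money` is a simple wrapper around an integer amount:

```haskell
newtype Money = Money Integer
  deriving (Eq, Show)

-- associative combination: this is addMoney
instance Semigroup Money where
  Money a <> Money b = Money (a + b)

-- neutral element: this is zeroMoney
instance Monoid Money where
  mempty = Money 0
```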
Maintaining a set of common abstractions as a shared vocabulary can be _extremely_ effective. It works because properly defined abstractions have a precise meaning and general applicability.
I will talk about two such abstractions today. Maybe they can be used in a shared vocabulary, maybe they can just help you with implementation.
Properly defined abstractions usually provide strong intuition. Even if you don't make them part of the model, you can rely on them when implementing, and that allows you to offload a lot of work to intuition (system 1 / system 2, as "Thinking Fast And Slow" calls them).
As said in the abstract, it's pervasive. It showcases really well how much mileage you can get from good abstractions. It also shows how FP lets you turn abstract concepts into concrete benefits.
It's so pervasive it's a joke, but its pervasiveness is not immediately obvious. "It's a for loop" would be a bit less funny, I think.
That's it for the longest intro ever (and I'm saying this as a prog rock fan).

```javascript
Promise.all([p1, p2, p3])
  .then(([v1, v2, v3]) => {
    console.log("Got values", v1, v2, v3);
  });
```
Promise.all collects all the promises into a single one.
```javascript
const myMap = new Map([
  ["p1", p1], ["p2", p2], ["p3", p3]
]);

// any iterable!
Promise.all(myMap)
  .then((vs) => {
    console.log("Got values", vs);
  });
```
It works with any iterable (but does not retain the original shape).

- collects results
- works on any iterable
- does not retain the original iterable
`Promise.all` is nice because it's not hardcoded to lists. Sadly, it "forgets" the original shape.

```haskell
sequenceA [p1, p2, p3]
  >>= (\[v1, v2, v3] ->
    print ("Got values", v1, v2, v3)
  )
```
Same as `Promise.all` (except async values in Haskell are lazy, not eager like JS promises).
```javascript
Promise.all([p1, p2, p3])
  .then(([v1, v2, v3]) => {
    console.log("Got values", v1, v2, v3);
  });
```
See how close it is to the JS version?
We usually don't have a list of async values lying around. Most often we have a list of values, and a function turning them into async values.

```javascript
Promise.all(userIds.map(getUser))
  .then(users => …);
```
<details role="note">
It's really common to first apply a function with map, and then collect the
results. That's exactly what traverse does
</details>
---
```haskell
userIds :: [UserId]
userIds = ["1", "2", "3"]

getUser :: UserId -> IO User
getUser uid = …

allUsers :: IO [User]
allUsers = traverse getUser userIds
```
That's what usually happens in real life.
OK, so it works on lists, but is it as generic as `Promise.all`?
This may sound like a joke, but it's actually a sign that traverse is an important function. It works on lists, maps, trees…
```haskell
getUsers :: Map Role UserId
         -> IO (Map Role User)
getUsers usersMap =
  traverse getUser usersMap
```

```haskell
getUsers :: Map Role [UserId]
         -> IO (Map Role [User])
getUsers usersMap =
  traverse (traverse getUser) usersMap
```

```haskell
getUsers :: Map Role [UserId]
         -> IO (Map Role [User])
getUsers usersMap =
  getCompose
    (traverse getUser (Compose usersMap))
```
If you nest traversables, you can traverse them all at once (you need to tell the compiler, though).
`Promise.all` is a super useful function. traverse is already way more powerful, because it retains shapes and composes naturally.
```haskell
traverse :: Traversable t
         => (a -> IO b)
         -> t a -> IO (t b)
```
It allows us to "move" the `IO` from "inside" the `t` to "outside" it. It works for any `t` that is traversable:

- Lists
- Maps
- Trees
- Maybe
- Either
- your own types
`Traversable` can be derived automatically for many types.

```haskell
data BinaryTree a
  = Leaf a
  | Node (BinaryTree a) (BinaryTree a)
  deriving (Eq, Show, Ord,
            Functor, Foldable,
            Traversable)
```
The implementation of traverse is mechanically derivable from the type. `Foldable` is interesting because it captures a bit of the idea behind Promise.all: iterating over a value while forgetting its shape.
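A tiny usage sketch of the derived instance, reusing the `getUser`, `UserId` and `User` from the earlier slides:

```haskell
idTree :: BinaryTree UserId
idTree = Node (Leaf "1") (Node (Leaf "2") (Leaf "3"))

-- fetch every user, keeping the tree's shape in the result
userTree :: IO (BinaryTree User)
userTree = traverse getUser idTree
```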
Generalizing over many data types is _super_ useful. Maps, lists, trees: it's nice, but not 100% mindblowing. Let's look at another data type that's traversable.
I want to run this code, only if I have a value.
```haskell
traverse :: (a -> IO b)
         -> Maybe a -> IO (Maybe b)
```
Maybe is like Optional. This is what traverse looks like with Maybe.
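A small sketch of what this gives us in practice, again reusing the hypothetical `getUser`:

```haskell
-- fetch the user only if we actually have an id
lookupOptionalUser :: Maybe UserId -> IO (Maybe User)
lookupOptionalUser = traverse getUser

-- traverse getUser (Just "1") runs the IO action and wraps the result
-- traverse getUser Nothing    returns Nothing without running any IO
```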
```haskell
traverse_ :: (a -> IO b)
          -> Maybe a -> IO ()
```
Here we don't care about the result, we only care about the side effects.
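For instance, a minimal sketch (`traverse_` lives in `Data.Foldable`):

```haskell
import Data.Foldable (traverse_)

-- log the message only when there is one, discarding the result
logMaybe :: Maybe String -> IO ()
logMaybe = traverse_ putStrLn

-- logMaybe (Just "hello") prints "hello"
-- logMaybe Nothing        does nothing
```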
That's a sub-result of what we've seen earlier, but it's slightly unexpected. This showcases a great strength of typed FP: using data structures for control flow.
A few slides ago, I showed you the signature of `traverse`. That was a lie.
```haskell
traverse :: (Traversable t, Applicative f)
         => (a -> f b)
         -> t a -> f (t b)
```
Not just IO: any "context" works.
I won't delve into details, but the idea is that we talk about values that come with a context. For IO, it's side effects and asynchronicity.
```haskell
compose :: Applicative f
        => (f a, f b)
        -> f (a, b)
```
I have two values with two contexts: I can combine the two contexts into a single one.
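This `compose` is not a standard library function; a possible one-line implementation, using the standard `liftA2`, could look like this:

```haskell
import Control.Applicative (liftA2)

-- pair up the two values, combining their contexts along the way
compose :: Applicative f => (f a, f b) -> f (a, b)
compose (fa, fb) = liftA2 (,) fa fb
```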
```haskell
pure :: Applicative f
     => a -> f a

lift :: Applicative f
     => (a -> b)
     -> f a -> f b
```
I also need two things: creating an "empty" context, and making a regular function work within a context. With all this, you may be able to see how it relates to traverse: while iterating, you need to collect the results.
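To make that connection explicit, here is a minimal sketch of traverse for lists. It uses the standard `liftA2`, which plays the role of `compose` plus `lift` from the previous slides:

```haskell
import Control.Applicative (liftA2)

traverseList :: Applicative f => (a -> f b) -> [a] -> f [b]
traverseList _ []       = pure []   -- the "empty" context
traverseList f (x : xs) =
  -- run the effect on the head, then combine its context
  -- with the context of the traversed tail
  liftA2 (:) (f x) (traverseList f xs)
```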
This seems quite abstract, so let's see a couple of examples.
A nice improvement over exceptions for input parsing
```haskell
parseInt :: String -> Validation [Error] Int
parseInt = …

parseValues :: [String] -> Validation [Error] [Int]
parseValues values =
  traverse parseInt values
```
Parse each value independently, collecting errors as needed. A bogus value won't prevent the other values from being checked. We still need every value to parse for the result to be a success.
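A runnable sketch of the same idea, assuming the `Validation` type from the `validation` package (the `parseInt` here is a made-up helper, not the one from the slide):

```haskell
import Data.Validation (Validation (..))
import Text.Read (readMaybe)

parseInt :: String -> Validation [String] Int
parseInt s =
  maybe (Failure ["not an int: " <> s]) Success (readMaybe s)

parseValues :: [String] -> Validation [String] [Int]
parseValues = traverse parseInt

-- parseValues ["1", "2", "3"] == Success [1, 2, 3]
-- parseValues ["1", "x", "y"] == Failure ["not an int: x", "not an int: y"]
```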
```haskell
checks :: [String -> Validation [Error] ()]
checks = …

checkValue :: String -> Validation [Error] ()
checkValue v =
  traverse_ (\check -> check v) checks
```
This runs every check in the list on a single value and collects the errors, if any. Here we are only interested in the context, not in the wrapped value. Each check is run independently: a check failure won't prevent the other checks from running.

"I can parse this from the environment" is a context.

```haskell
parseString :: String -> Parser String
parseString = …

parseVariables :: [String]
               -> Parser [String]
parseVariables names =
  traverse parseString names
```
Here we build a parser of a list from a list of parsers.
Remember how we combined traversables with `Compose`?
```haskell
readFile :: FilePath -> IO String
readFile = …

parseFile :: String -> Validation [Errors] Value
parseFile = …
```
We can do the same with contexts.
```haskell
parseFiles :: [FilePath]
           -> IO (Validation [Errors] [Value])
parseFiles =
  let readAndParse =
        Compose . fmap parseFile . readFile
  in getCompose . traverse readAndParse
```
Here we compose two contexts (`IO` and `Validation`). We still need to tell the compiler we want to fuse the two contexts, hence `Compose` and `getCompose`, but apart from that, it's a regular traverse.
```haskell
combineResults :: [Context -> Value]
               -> Context -> [Value]
combineResults fns = sequenceA fns
```
I'm using sequenceA again since here it makes more sense. The tricky part is that the context itself is `Context ->`. What this actually does is call every function in the list with the provided value and collect the results. A common use case for this is dependency injection (which is glorified function application).
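A small sketch of the dependency injection reading, with a made-up `Config` type standing in for `Context`:

```haskell
-- hypothetical shared environment
data Config = Config { apiUrl :: String, retries :: Int }

-- each function only asks for the piece of config it needs
readers :: [Config -> String]
readers = [apiUrl, show . retries]

combineResults :: Config -> [String]
combineResults = sequenceA readers

-- combineResults (Config "https://example.com" 3)
--   == ["https://example.com", "3"]
```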
Here, we have mostly been using the traversable as a data structure and the applicative as a control-flow context. It turns out data structures can also be viewed as applicatives.
```haskell
(a, Maybe b) -> Maybe (a, b)
Either a [b] -> [Either a b]
[(String, a)] -> (String, [a])
```
Here both the traversable and the context are data structures. Here, sequenceA makes things more obvious. It's super handy to quickly move between representations, and it's not unusual to have multiple calls of sequenceA working on different contexts. The implementation may be hard to read, but what matters is the types: chained `sequenceA` calls just tell you that the inversions are standard and that no tricky business is happening. That's good!
```haskell
(a, Maybe b) -> Maybe (a, b)
```
Here, we invert the tuple and the Maybe. In a way, we lose information about the tuple's left element (it was always defined, now it's inside the Maybe).
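Concretely (the values are made up):

```haskell
example1 :: Maybe (String, Int)
example1 = sequenceA ("key", Just 1)   -- Just ("key", 1)

example2 :: Maybe (String, Int)
example2 = sequenceA ("key", Nothing)  -- Nothing
```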
```haskell
Either a [b] -> [Either a b]
```
Here, we invert the Either and the list. Think of what can happen: if the Either is a Right, we'll get a list of Rights. If the Either is a Left, we'll get a list containing a single Left.
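Concretely (values made up):

```haskell
rights :: [Either String Int]
rights = sequenceA (Right [1, 2])  -- [Right 1, Right 2]

lefts :: [Either String Int]
lefts = sequenceA (Left "oops")    -- [Left "oops"]
```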
```haskell
[Either a b] -> Either a [b]
```
The other direction now: if all the Eithers are Right, we collect them; if at least one of them is a Left, we get it (the first one).
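Concretely (values made up):

```haskell
allRight :: Either String [Int]
allRight = sequenceA [Right 1, Right 2]                -- Right [1, 2]

firstLeft :: Either String [Int]
firstLeft = sequenceA [Right 1, Left "e1", Left "e2"]  -- Left "e1"
```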
```haskell
Monoid a => [Validation a b]
         -> Validation a [b]
```
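With `Validation`, failures are accumulated instead of short-circuiting on the first one. A concrete sketch, again assuming the `Validation` type from the `validation` package:

```haskell
import Data.Validation (Validation (..))

allErrors :: Validation [String] [Int]
allErrors = sequenceA [Success 1, Failure ["e1"], Failure ["e2"]]
-- Failure ["e1", "e2"]: both errors are collected
```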