Generic programming is a discipline that encourages the construction of abstractions and the reuse of software components without loss of efficiency. Subtyping through inheritance can work against this goal because it encourages type erasure, promising that efficiency is maintained thanks to dynamic dispatch. Sadly, this promise gets broken when generic data structures and algorithms depend on more than one generic receiver. In those instances, one must resort to generic parameters to preserve type information, introducing expensive annotation costs. This short article examines this problem and then discusses how type classes, an alternative approach to generic programming, can address many of the issues presented by subtyping through inheritance.
Disclaimer:
This article doesn't pretend to be a scientific paper. It is only the opinion of someone without any significant Scala background who's obviously biased toward idioms she knows from her favorite languages. She's only been using Scala for a little over 4 months and still has a lot to learn; she doesn't even know what she doesn't know. So clearly, everything she says must be taken with the appropriate amount of salt. It also wouldn't be surprising if her analysis overlooked well-established Scala tricks.
When I say "generic promming" I don't simply mean the ability to define methods with generic parameters or use subtyping. I mean applying generic programming as a discipline, borrowing my definition from Musser and Stepanov:
Generic programming is a paradigm in which algorithms are written in terms of the algebraic properties of the data structures on which they operate, without loss of efficiency.
The last bit of the definition is important. In principle, using a generic algorithm should be as efficient as using its concrete counterpart on all inputs for which it's defined. Anything less would introduce abstraction penalties that could disincentivise people from reusing existing code.
Let's look at an example to understand what that means. I'll start by defining a couple of bare-bones collections from the ground up to make my example as self-contained as possible. The first is a resizeable array and the second is a linked-list. These implementations are quite straightforward (and not as optimized as they could be), so feel free to skip the code. I'll highlight the important bits later.
Source code
/** A sequence of integers whose elements can be randomly accessed. */
final class CustomArray:

  /** The contents of `this`. */
  private var storage = scala.Array[Int]()

  /** Adds `newElement` at the end of `this`. */
  def append(newElement: Int) =
    storage = storage.appended(newElement)

  /** Returns the element at `position`. */
  def apply(position: Int): Int =
    storage(position)

  /** The result of applying `combine` on an accumulator, initialized with
   *  `initialValue`, and each element of `this`.
   */
  def fold[T](initialValue: T)(combine: (T, Int) => T): T =
    var r = initialValue
    for (e <- storage) r = combine(r, e)
    r

  /** The number of elements in `this`. */
  def length: Int =
    storage.length

  /** The number of elements in `this` that satisfy `predicate`. */
  def count(predicate: (Int) => Boolean): Int =
    var r = 0
    for (e <- storage if predicate(e)) r += 1
    r
/** A sequence of integers stored in a linked-list. */
final class CustomList:

  /** A node in a singly linked-list. */
  private final class Node(val value: Int, val tail: Option[Node])

  /** The first node of `this`, if any. */
  private var head: Option[Node] = None

  /** Inserts `newElement` at the start of `this`. */
  def prepend(newElement: Int) =
    head = Some(Node(newElement, head))

  /** The result of applying `combine` on an accumulator, initialized with
   *  `initialValue`, and each element of `this`.
   */
  def fold[T](initialValue: T)(combine: (T, Int) => T): T =
    var h = head
    var r = initialValue
    while h.isDefined do
      r = combine(r, h.get.value)
      h = h.get.tail
    r

  /** The number of elements in `this`. */
  def length: Int =
    fold(0)((a, _) => a + 1)

  /** The number of elements in `this` that satisfy `predicate`. */
  def count(predicate: (Int) => Boolean): Int =
    fold(0)((a, e) => if predicate(e) then a + 1 else a)
Looking at the code, it's pretty clear that `CustomArray` and `CustomList` look very similar. So I may want to define a trait, say `Sequence`, with their shared interface, i.e., the methods `fold`, `length`, and `count`. It also seems like the implementations of `length` and `count` defined in `CustomList` could be moved to `Sequence`, since they only rely on `fold`. So I may also want to factor out some code and eventually write the following trait.
/** A sequence of integers. */
trait Sequence:

  /** The result of applying `combine` on an accumulator, initialized with
   *  `initialValue`, and each element of `this`.
   */
  def fold[T](initialValue: T)(combine: (T, Int) => T): T

  /** The number of elements in `this`. */
  def length: Int =
    fold(0)((a, _) => a + 1)

  /** The number of elements in `this` that satisfy `predicate`. */
  def count(predicate: (Int) => Boolean): Int =
    fold(0)((a, e) => if predicate(e) then a + 1 else a)
Now I can have my `CustomArray` and `CustomList` extend this trait. And since `length` and `count` are already defined, I could also delete these definitions from the concrete types. Well, I could do that, but it would be a mistake. On closer inspection, we can see that `CustomArray.length` has a better implementation than the one it would inherit from `Sequence`. Throwing that implementation away would therefore be wrong, since I would lose efficiency. All is fine for `count`, though, because the inherited implementation is as efficient as the custom one, given that a reasonable optimizer should be capable of inlining `fold`.
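Concretely, here's a minimal sketch of how I'd adopt the trait while keeping the efficient implementation; the `override` is the important part, the rest is elided:

final class CustomArray extends Sequence:
  private var storage = scala.Array[Int]()
  // ...

  /** The number of elements in `this`. */
  override def length: Int =
    storage.length // O(1), whereas the inherited fold-based default is O(n)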
This silly example hints at a principled methodology one can apply to do the right thing:
- Start with concrete, efficient implementations of algorithms and data structures
- Identify similarities between different implementations
- Define the most general abstractions capturing the properties of these similarities
- Eliminate redundant abstractions
One should never skip the first step! Coming up with abstractions is easy, but identifying the ones that preserve efficiency is hard. Concrete, efficient reference implementations can be used to make sure we're not overgeneralizing.
The fourth step is another important part of the process, as it ensures we're not creating useless abstractions.
For example, I could have come up with two different traits while creating abstractions capturing the properties of my concrete data structures: one `Foldable` trait for the containers that implement a `fold` method, and one `Countable` trait for those that have a `length`. But then I would have realized that every `Foldable` data structure is also `Countable` (at least in this particular example), and so I would have merged the two concepts into one. Eliminating redundancies is important to avoid ending up with a zoo of tiny abstractions that do not make much sense on their own (I'm looking at you, C++).
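For the record, here's roughly what those two redundant abstractions would have looked like before being merged into `Sequence` (a sketch, following the descriptions above):

/** A container whose elements can be folded into an accumulator. */
trait Foldable:
  def fold[T](initialValue: T)(combine: (T, Int) => T): T

/** A container that knows how many elements it contains. */
trait Countable:
  def length: Int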
While generic programming is a discipline, polymorphism is a collection of mechanisms that can be used to support that discipline. The two notions are tied because one needs something with which to create abstractions, be it subtyping, duck typing, parametric polymorphism, function pointers, or any other technique you like.
In Scala, the two most popular techniques are arguably subtyping and parametric polymorphism (i.e., generic type parameters). We can also combine the two approaches, since generic type parameters can have bounds. For example, I can write a function that accepts any type that happens to extend a given trait.
def lengthOf[S <: Sequence](s: S): Int =
  s.length
Subtyping has its own collection of flavors, in the sense that different mechanisms can result in a particular subtyping relationship. One of the most popular is subclassing, which occurs when one defines a class that extends another class. If I had to bet, I would say that subclassing is the go-to approach in the minds of many people for two reasons:
- Almost anyone who's had a CS education after the rise of Java's popularity has had a course that taught about cats and dogs being four-legged animals.
- Subclassing (e.g., the fact that dogs are animals) is a pretty intuitive concept.
Sadly, subclassing comes with a wealth of issues. Most of them have been thoroughly discussed, so I won't rehash them here; I'll let the reader type "fragile base class" into their favorite search engine if they need a refresher.
Traits are another way to introduce subtyping in Scala. They solve many of the issues of subclassing as they can be composed, but they still suffer frustrating limitations in a generic setting.
Object-orientation and related concepts are heavily overloaded and may mean slightly different things depending on who you ask. To avoid misunderstanding, let me give my definitions.
First, note that I don't consider subtyping, in any form, to be a defining property of an object-oriented system. To me, the most important aspects of this discipline are encapsulation and separation of concerns. I think of objects as abstractions representing some notional values with which one may interact through public APIs. The representation of these values and the implementation of their APIs can (and should) remain completely hidden. Whether or not one can define other objects with privileged access to these internal details (i.e., subclasses) is an orthogonal feature.
Second, note that I don't consider dynamic dispatch to be a defining property of object-oriented systems either. I'm heavily biased here, coming from a language in which (efficient) code is typically written in such a way that dynamic dispatch never comes into play. So I'm used to writing "object-oriented" code that doesn't require dynamic dispatch, even in generic contexts.
As an aside, note that even if you hold dynamic dispatch close to your heart, there are many ways to implement it without virtual calls dispatched through a class hierarchy. The simplest of all is perhaps to use a basic match expression.
def lengthOf(s: Any): Int =
  s match
    case t: CustomArray => t.length
    case t: CustomList => t.length
    case _ => throw MessageNotUnderstood()
That being said, it's obviously nice to have language features to generate such boilerplate for us.
I don't need anyone to agree with my definitions. If you must, rename the concepts I described to Dimi's OO and leave your own interpretation untouched. Just come back to this section when you feel you disagree with what I consider to be upholding object-orientation.
Now that we've got most definitions out of the way, let's look at examples explaining why subtyping is seldom the right tool for generic programming.
So far I've only been talking about algorithmic abstractions; that is, I've shown examples of generic algorithms being lifted from concrete implementations. I haven't shown examples of generic data structures being lifted the same way. So let's imagine I want to use lists of things other than integers.
My first instinct might be to just abstract away the element type of my `CustomList`.
/** A sequence of values stored in a linked-list. */
final class AnyCustomList:

  /** A node in a singly linked-list. */
  private final class Node(val value: Any, val tail: Option[Node])

  // ...
It's probably obvious to any type system enthusiast that this declaration is a poor solution. I threw type safety out the window by defining my list over `Any`, because now the type system is no longer able to guarantee that my instances are homogeneous. I can create a list, prepend a bunch of integers, and then a Boolean!
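A minimal sketch of the problem, assuming `AnyCustomList` defines `prepend` the way `CustomList` did:

val xs = AnyCustomList()
xs.prepend(1)
xs.prepend(2)
xs.prepend(true) // compiles just fine: nothing keeps the list homogeneous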
Note: In addition to the loss of type safety, this solution would also kill performance in a systems programming setting due to the indirection introduced by boxing. That isn't an issue in Scala, at least on the JVM, but I nonetheless think it's worth remembering that there are sometimes costs to type erasure. So if we're to define generic abstractions without loss of efficiency, such costs might be part of our equation.
A much better solution is to add a generic parameter to `CustomList` to represent the type of the list's elements.
/** A sequence of values stored in a linked-list. */
final class CustomList[Element]:

  /** A node in a singly linked-list. */
  private final class Node(val value: Element, val tail: Option[Node])

  // ...
So far so good, but notice that my list no longer extends `Sequence`. That's because the trait is defined as an abstraction over collections of integers rather than arbitrary data types. So again I have two choices: rewrite `Sequence` so that `fold` and `count` accept `Any`, or add a generic parameter. The former option has the same disadvantage as before w.r.t. type safety, so the latter looks like a better bet. The problem is that now I've pushed a generic parameter into all APIs that mention `Sequence`.
/** A sequence of values. */
trait Sequence[Element]:
  // ...

/** A collection of integers whose elements can be randomly accessed. */
final class CustomArray extends Sequence[Int]:
  // ...

/** A sequence of values stored in a linked-list. */
final class CustomList[Element] extends Sequence[Element]:
  // ...
One may feel that the new parameter isn't that bad. After all, it makes sense to say what kind of elements we want in our sequence, and `CustomArray` should probably be defined more generically anyway. But let's see what happens if we keep augmenting the API of `Sequence`. I don't have any convenient way to iterate over my collections. Using `fold` is only a poor woman's solution that happened to work well for writing `length` and `count`, but I can definitely do better: I'll use the iterator pattern. I'll also redefine `fold` to use an iterator.
final class CustomList[Element] extends Sequence[Element]:

  /** An iterator over the elements of a singly-linked list. */
  private final class CustomIterator(private var head: Option[Node]):

    /** Advances to the next element and returns it, or returns `None` if there
     *  are no more elements to produce.
     */
    def next(): Option[Element] =
      head match
        case Some(n) =>
          head = n.tail
          Some(n.value)
        case None => None

  /** Returns an iterator over the elements in `this`. */
  def makeIterator(): CustomIterator =
    CustomIterator(head)

  /** The result of applying `combine` on an accumulator, initialized with
   *  `initialValue`, and each element of `this`.
   */
  def fold[T](initialValue: T)(combine: (T, Element) => T): T =
    def loop(i: CustomIterator, r: T): T =
      i.next() match
        case Some(e) => loop(i, combine(r, e))
        case None => r
    loop(makeIterator(), initialValue)

  // ...
Note: My iterator doesn't have the same API as Scala's, and that's intentional. In my experience, having to define `hasNext` is typically much more inconvenient than allowing `next` to signal the end of the iteration with an optional.
It's tempting to also define an iterator for my `CustomArray`. Besides, the fact that I could rewrite `CustomList.fold` using its iterator suggests that there's a way to lift the concrete implementation I have in `CustomArray`. More code reduction on the horizon! 🥳 Except that there's a problem. The iterator over `CustomList` is clearly not the same as the one over `CustomArray`, so the thorny choice between type erasure and generic parameters comes up again. I'm still the same type safety advocate I was five paragraphs ago, so you've probably guessed where things are headed.
Source code
/** A type capable of producing the elements of a sequence, one at a time. */
trait AbstractIterator[Element]:

  /** Advances to the next element and returns it, or returns `None` if there
   *  are no more elements to produce.
   */
  def next(): Option[Element]

/** A sequence of values. */
trait Sequence[Element, CustomIterator <: AbstractIterator[Element]]:

  /** Returns an iterator over the elements in `this`. */
  def makeIterator(): CustomIterator

  /** The result of applying `combine` on an accumulator, initialized with
   *  `initialValue`, and each element of `this`.
   */
  def fold[T](initialValue: T)(combine: (T, Element) => T): T =
    def loop(i: CustomIterator, r: T): T =
      i.next() match
        case Some(e) => loop(i, combine(r, e))
        case None => r
    loop(makeIterator(), initialValue)

  // ...

/** A collection of integers whose elements can be randomly accessed. */
final class CustomArray extends Sequence[Int, CustomArray.Iterator]:
  // ...

object CustomArray:

  /** An iterator over the elements of an array. */
  final class Iterator(
    private val base: CustomArray
  ) extends AbstractIterator[Int]:
    // ...

/** A sequence of values stored in a linked-list. */
final class CustomList[Element]
    extends Sequence[Element, CustomList.Iterator[Element]]:
  // ...

object CustomList:

  /** An iterator over the elements of a singly-linked list. */
  final class Iterator[Element](
    private val base: CustomList[Element],
    private var head: Option[base.Node]
  ) extends AbstractIterator[Element]:

    /** Returns the next element in the list or `None` if no such element exists. */
    def next(): Option[Element] =
      head match
        case Some(n) =>
          head = n.tail
          Some(n.value)
        case None => None
To be fair, there's already a quite well-established way to write a better version of that setup without type erasure. I could define an abstract type member in `Sequence` and let derived classes implement it the way they like. This approach actually points at a fundamental need when constructing generic abstractions: associated types. The iterator of a sequence relates to the sequence in much the same way as that sequence's abstract operations do. To extend a sequence, a type `T` has to define an iterator (or pick a default implementation). Further, note that the iterator type will, in general, likely depend on `T` one way or the other!
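Here's a minimal sketch of that abstract type member version, reusing the `AbstractIterator` trait from above; derived classes fix `CustomIterator` to their own concrete iterator type:

/** A sequence of values. */
trait Sequence[Element]:

  /** The type of iterators over `this`. */
  type CustomIterator <: AbstractIterator[Element]

  /** Returns an iterator over the elements in `this`. */
  def makeIterator(): CustomIterator

  // ...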
By now I hope I've convinced you that relying on generic parameters to define generic data structures doesn't work very well if we're also using subtyping to define generic algorithms, as base types must accumulate parameters to handle the different ways in which concrete types implement APIs. These generic parameters must then be threaded through all APIs, making documentation harder to write and code harder to read.
I have my own example of this problem, but I'm not the only one to have faced it. The Ox library shows real-life instances of this issue. Here's a pseudo-randomly-selected excerpt.
/**
* Parallelize a foreach operation. Runs the operation on each element of the iterable in parallel.
* Using not more than `parallelism` forks concurrently.
*
* @tparam I the type of the elements in the iterable
* @tparam C the type of the iterable, must be a subtype of Iterable[I]
*
* @param parallelism the number of threads to use
* @param iterable the collection to iterate over
* @param operation the operation to perform on each element
*/
def foreachPar[I, C <: Iterable[I]](parallelism: Int)(iterable: => C)(operation: I => Any): Unit
While `foreachPar` takes two generic parameters, I strongly believe it should only require one. The excellent documentation of the method makes it clear that the intent is to apply an operation to the values produced by an iterator. Therefore, asking for `I` is nothing but a way to extract an associated type.
Note: The reference to Ox is not meant as an attack on the library or its author. I've heard only good things about Ox. I'm merely pointing at the fact that using generic parameters to represent associated types makes APIs overly complex to use and specify, at least for my taste.
An easier-to-explain but equally annoying issue is that traditional subtyping is not amenable to retroactive modeling. That shouldn't be too surprising to many readers. Philip Wadler formulated the issue more than three decades ago when he wrote an email titled "The Expression Problem". In a nutshell, it's typically hard to extend a data type with new operations in an object-oriented setting.
To illustrate, let us imagine that I import a library providing a highly optimized implementation of a deque, aka a double-ended queue. A deque is notionally a sequence of elements, just like a singly-linked list or an array. So it's likely that I'd like to reuse my generic algorithms (e.g., `count`) on deques. But since I'm not the author of that data structure, I can't declare that it extends my `Sequence` trait. Another poor woman's solution is to wrap deques into one of my own types and forward all necessary operations.
import somelib.Deque

class DequeWrapper extends Sequence[Int]:
  // ...
Using such a wrapper is bound to be a constant pain, though. Even if I rewrite the entire API of `Deque` in `DequeWrapper`, it's likely that `somelib` defines other algorithms that operate directly on `Deque`. Those operations won't take a `DequeWrapper`, and so I'll have to frequently wrap and unwrap values at the boundary. That's a lot of noise.
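To picture that noise, imagine `somelib` ships a hypothetical `sorted` function defined directly on `Deque` (both `sorted` and the `base` accessor below are made up for illustration, assuming the wrapper stores its underlying deque in a field):

// The wrapper must be peeled off before the call and put back afterwards.
def sortedWrapper(w: DequeWrapper): DequeWrapper =
  DequeWrapper(somelib.sorted(w.base))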
Note: For a real occurrence of this wrapping pattern, look no further than Scala's `WrappedString`!
Another issue is that inheritance can't be conditional, even if I have authored all type definitions. For example, let's assume I define a trait `Comparable` representing orderable things (see the next section). I can't say that `Sequence` extends `Comparable` if and only if its elements extend `Comparable`. I can get close, though!
extension [E <: Comparable](self: Sequence[E])
  // ...
Extension methods address parts of the expression problem but they don't compose with generic algorithms because they can't modify the subtype relationship.
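For instance, consider a hypothetical generic algorithm constrained on `Comparable` (using the trait sketched in the next section). The extension above doesn't make `Sequence[E]` a subtype of `Comparable`, so such an algorithm still rejects sequences:

/** Returns the greater of `a` and `b` (hypothetical helper). */
def greaterOf[T <: Comparable](a: T, b: T): T =
  if a.isLessThan(b) then b else a

// Does not compile, even with the extension in scope:
// greaterOf(listOfComparables, otherListOfComparables)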
A third issue I'd like to discuss is the binary method problem, which I believe was coined by Kim Bruce et al. in 1995.
To illustrate, let's imagine we want to define a lexicographic order on our collections.
As always, I'll start with concrete algorithms before trying to lift them.
I'll also assume that my collections implement an iterator defined as an abstract type in `Sequence`, just like I pitched in the section on the proliferation of generic parameters.
/** Returns `true` if `a` is lexicographically ordered before `b`. */
def lexicographicallyPrecedes(a: Sequence[Int], b: Sequence[Int]): Boolean =
  def loop(i: a.CustomIterator, j: b.CustomIterator): Boolean =
    (i.next(), j.next()) match
      case (Some(x), Some(y)) =>
        if !(x <= y) then false       // `x` comes after `y`
        else if !(y <= x) then true   // `x` comes before `y`
        else loop(i, j)               // `x` equals `y`: keep comparing
      case (x, y) =>
        x.isEmpty && y.isDefined      // a strict prefix precedes
  loop(a.makeIterator(), b.makeIterator())
This algorithm is generic over the kind of sequence that it accepts, which is pretty cool. I can create an instance of a `CustomArray` and then check whether it lexicographically precedes a `CustomList`. But I can only do that if these sequences contain integers. Obviously, one would like the ability to use `lexicographicallyPrecedes` on sequences of other data types! If we look at the algorithm that I wrote, we can see that it makes only a single assumption on the API of the sequences' elements: they must have a `<=` operator. That's a simple requirement to lift into an abstraction. Let's define a trait!
/** A type whose instances can be ordered. */
trait Comparable:

  /** Returns `true` if `this` is equal to `other`. */
  def isEqualTo(other: Comparable): Boolean

  /** Returns `true` if `this` is less than `other`. */
  def isLessThan(other: Comparable): Boolean

  /** Returns `true` if `this` is equal to or less than `other`. */
  def isEqualToOrLessThan(other: Comparable): Boolean =
    isEqualTo(other) || isLessThan(other)
Now I can rewrite my generic algorithm like this:
/** Returns `true` if `a` is lexicographically ordered before `b`. */
def lexicographicallyPrecedes[E <: Comparable](a: Sequence[E], b: Sequence[E]): Boolean =
  // ...
As we saw in the previous section, the first catch is that I can't retroactively make `Int` a subtype of `Comparable`. But I can still define a wrapper and use `lexicographicallyPrecedes` on a `Sequence[IntWrapper]`. So where is the issue? Well, `Int` isn't the only orderable data type in the world. In fact, the whole point of having defined `Comparable` was precisely to use my generic algorithm on other data types. So I might go ahead and write a `StringWrapper`, but then I'll run into a pretty nasty issue. What does it mean to call `IntWrapper(1).isLessThan(StringWrapper("a"))`? Probably not much, yet the interface of our trait suggests otherwise. The problem is that `isLessThan` isn't an operation that can be applied to arbitrary pairs of comparable things. It can only be applied to pairs of the same type, which must be comparable.
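To make the trap concrete, here's a sketch of the two hypothetical wrappers; note how each implementation is forced to handle (or reject) arguments of the wrong type at run time:

final class IntWrapper(val value: Int) extends Comparable:
  def isEqualTo(other: Comparable): Boolean = other match
    case o: IntWrapper => value == o.value
    case _ => false
  def isLessThan(other: Comparable): Boolean = other match
    case o: IntWrapper => value < o.value
    case _ => throw IllegalArgumentException("not an IntWrapper")

final class StringWrapper(val value: String) extends Comparable:
  def isEqualTo(other: Comparable): Boolean = other match
    case o: StringWrapper => value == o.value
    case _ => false
  def isLessThan(other: Comparable): Boolean = other match
    case o: StringWrapper => value < o.value
    case _ => throw IllegalArgumentException("not a StringWrapper")

// Type-checks, but can only fail at run time:
IntWrapper(1).isLessThan(StringWrapper("a"))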
There are multiple ways to deal with this problem. One is to make the trait generic and define `isLessThan` in terms of a generic parameter. Sadly, this approach introduces yet another parameter that will have to be threaded through all APIs, as I already discussed.
/** A type whose instances can be ordered relative to instances of `T`. */
trait Comparable[T]:

  /** Returns `true` if `this` is less than `other`. */
  def isLessThan(other: T): Boolean

  // ...
Note: If you think `Comparable[T]` looks like a type class, you're right! I believe that is what people call foreshadowing.
Another is to define the operation as a free function or a method of a different object. In this specific case, that would be equivalent to creating a Java-like comparator.
/** A helper for ordering instances of `T`. */
trait Comparator[T]:

  /** Returns `true` if `a` is less than `b`. */
  def isLessThan(a: T, b: T): Boolean

  // ...
Sadly, this second approach also eventually introduces another generic parameter, because a `Comparator[T]` is notionally an associated type of `T`.
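Here's a sketch of that threading cost (a hypothetical signature, not part of the original API): every algorithm that compares elements now has to accept a comparator alongside its operands.

/** Returns `true` if `a` is lexicographically ordered before `b`. */
def lexicographicallyPrecedes[E](
  a: Sequence[E],
  b: Sequence[E],
  comparator: Comparator[E]
): Boolean =
  // ... same loop as before, with comparisons going through `comparator`
  ???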
There are many binary methods out there, like addition or subtraction, so I don't think the `Comparator` trick would scale very well. We'd need to define `Adder`s and `Subtractor`s, instantiate them, and pass them to generic algorithms.
Before I discuss how type classes can address the issues I've presented above, let me first state a couple of high-level personal observations.
Most of the code that people write isn't about cats and dogs being four-legged animals sharing a `walk` method. It's more about (hopefully reusable) algorithms on concrete, efficient data structures. At the very least, I believe that observation holds for a standard library, and the algorithms and data structures of a standard library typically form the backbone of most applications.
The idea that one can write a generic algorithm in terms of a type sitting somewhere in the middle of an arbitrary class hierarchy is an illusion. Daniel Spiewak said it better than me back in 2015.
Choosing a collection is a very deliberate and intentional thing. We want to select our collection to match the complexity bounds of our problem. Encouraging (and even enabling!) users to work in terms of generic types that throw away this information is wrong, and very very much an anti-pattern.
[...]
Half of the performance issues in the current collections library stem from specific collection `A` inheriting an implementation that is shared with `B` and `C`, but which was written against `Iterable` and thus is not optimized for any of the above!
Yet subtyping through inheritance tends to erase this important information with... well, erasure. That's because the alternative is to add generic parameters everywhere. And when type erasure is the cure for annotation overhead, it becomes very tempting to erase as much as we possibly can.
What I think is more important is to have a way to classify concepts (yes, just like in C++). A sequence is a concept, not a type, and a singly-linked list of integers is one implementation of that concept, not a subtype thereof.
Also note that it's quite difficult to write a class that can actually be meaningfully extended by user code. Again, Daniel Spiewak said it better.
Have you ever tried to implement a new collection that sits in the collections hierarchy? I have. It's awful. It is unbelievably difficult to get right. And as a consequence, NO ONE DOES IT!
Daniel's criticism was aimed at Scala's standard library, but I actually think that it generalizes. The idea that we can easily write extensible frameworks based on inheritance is, in my opinion, another illusion that we keep selling to students using cats and dogs. As anecdotal evidence, I can present my experience using frameworks like UIKit or SpriteKit. So in a sense, subtyping through inheritance isn't even good at solving its side of the expression problem!
But what about dynamic dispatch then!?
Well, let me confess that I probably care far less than you about dynamic dispatch. I'm a Swift developer, and in Swift we typically strive for static dispatch since our programs are compiled ahead of time.
But I shouldn't fall into a false dichotomy. There are other forms of subtyping that enable dynamic dispatch, ranging from low-tech solutions like dumb match expressions to sophisticated ones like existential types. So the question isn't whether we should discourage dynamic dispatch, but whether we really need subtyping through inheritance to support it.
Most of the issues I presented revolve around a key missing piece of the puzzle: associated types. The clearest example was the sequences' iterators, but note that the binary method problem is also related. If all traits had a `Self` associated type, then I could write `Comparable` like this.
/** A type whose instances can be ordered. */
trait Comparable:

  /** Returns `true` if `this` is less than `other`. */
  def isLessThan(other: Self): Boolean

  // ...
This issue comes back to my point about subtyping through inheritance being overrated. When we factor associated types into the equation, the issue becomes clearer. Traditional inheritance wants to believe that every generic algorithm only ever depends on one receiver. This view is tenable when we're talking about cats and dogs, but it immediately falls apart when we examine algorithms with more than one generic input.
Associated types can be expressed as type members in Scala, but it is difficult to talk about them conveniently in generic signatures because they represent path-dependent types. As a result, it gets a little harder to write something like a function that takes two sequences. The same thing is natural in Swift or Rust.
func zip<S: Sequence, T: Sequence>(s: S, t: T) -> ZipSequence<(S.Element, T.Element)>
The proliferation of generic type parameters is a direct consequence of the inability to conveniently talk about associated types. Because writing a function like the one above using path-dependent types is unnatural in Scala, people tend to go for this.
def zip[E1, E2, I1 <: AbstractIterator[E1], I2 <: AbstractIterator[E2]](
  s: Sequence[E1, I1],
  t: Sequence[E2, I2]
): ZipSequence[(E1, E2)]
In Swift, I can write generic parameters that talk about the root type of the arguments I'll pass to my function. I want a function that operates on sequences, and so that is what I ask for. In Scala, I must take generic parameters that talk about the things that form a sequence, and then I have to reconstruct the abstraction in the parameter list. The more sophisticated the abstraction is, the more complex the signature gets.
One advantage of generic parameters is that they make it easy to express equality constraints between associated types. But the same can be done in Swift with where clauses, or in Scala with refinements, as I'll show later.
Widening is a very dangerous tool. If you left the section on the proliferation of generic parameters with the idea that we could get away with simply declaring `Sequence.makeIterator` to return an `AbstractIterator[Element]`, then consider the following example:
/** The position of an object in a container. */
trait AbstractPosition

/** An object containing one or more other objects. */
trait AbstractContainer[Element]:

  /** Returns the position of the first element in `this`. */
  def firstPosition: AbstractPosition

  /** Returns the element at the given `position`. */
  def apply(position: AbstractPosition): Element
I just defined a truly terrible API, and I apologize to anyone who cares about type safety. Imagine there are two types of containers, like arrays and trees. We should never be able to use a tree position to access an element of an array, yet there is nothing in the trait that says we can't. That's why I believe we should strive for APIs that cause as little widening as possible.
Now let me finally stop enumerating complaints. What can we do about all the issues I discussed?
Again, I think I ought to explain what I mean by "type class", in case you need a refresher or want to be sure we agree on definitions. Feel free to skip this section if you're already an expert.
To me, a type class is a collection of requirements, expressed in the form of methods and associated types, representing a particular concept. All type classes have a "receiver" parameter, which I call `Self` by convention (and because I'm a Swift developer). For example, here's a type class.
/** A type whose instances can be ordered. */
trait Comparable[Self]:

  /** Returns `true` if `this` is equal to `other`. */
  extension (self: Self) def isEqualTo(other: Self): Boolean

  /** Returns `true` if `this` is less than `other`. */
  extension (self: Self) def isLessThan(other: Self): Boolean
This type class describes what it means for a type to be comparable (or orderable). In this particular case, we only require two methods: one to tell whether or not two instances are equal, and another to tell whether or not one instance is ordered before another. All other operations on "comparable" things can be derived from these two requirements.
We state that a particular type implements (or conforms to) a concept by creating an instance of the corresponding type class. Defining such an instance as a given lets us use "method syntax" via clever compiler desugaring, since the requirements have been defined as extension methods. For example, here's how we state that `Int` is `Comparable`.
given Comparable[Int] with
  extension (self: Int) def isEqualTo(other: Int): Boolean =
    self == other
  extension (self: Int) def isLessThan(other: Int): Boolean =
    self < other

println(1.isLessThan(2)) // true
An associated type requirement is declared as an abstract type member in the type class. For example, here's a type class describing iterators.
/** A type capable of generating a sequence of elements. */
trait IteratorTrait[Self]:

  /** The type of the elements produced by the iterator. */
  type Element

  /** Advances to the next element and returns it, or returns `None` if there
   *  are no more elements to produce.
   */
  extension (self: Self) def next(): Option[Element]

/** A half-open interval from a lower bound up to, but not including, an upper bound. */
final class HalfOpenRange(var lower: Int, val upper: Int)

given IteratorTrait[HalfOpenRange] with
  type Element = Int
  extension (self: HalfOpenRange) def next(): Option[Element] =
    if self.lower >= self.upper then
      None
    else
      val r = self.lower; self.lower += 1; Some(r)
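With that instance in scope, the extension method becomes available on ranges; a quick sanity check:

val r = HalfOpenRange(0, 2)
println(r.next()) // Some(0)
println(r.next()) // Some(1)
println(r.next()) // None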
There are many other quite powerful things we can do with type classes and I'll discuss some of them in the remainder of this document. For now, I think we've covered the basics and are ready to see how type classes address the issues I presented.
I've spent quite some words explaining that one big problem arising with subtyping was the proliferation of generic parameters. Let's see how that isn't the case with type classes. To make my case as convincing as possible, I'll use the syntax proposed by Martin Odersky to shore up support for type classes in Scala, which is described here.
/** A sequence of values. */
trait Sequence:
  type Self

  /** The type of the elements in the sequence. */
  type Element

  /** The type of iterators over the sequence. */
  type CustomIterator
  given IteratorTrait[CustomIterator] { type Element = Element } = deferred

  /** Returns an iterator over the elements in `self`. */
  extension (self: Self)
    def makeIterator(): CustomIterator

  /** The result of applying `combine` on an accumulator, initialized with
   *  `initialValue`, and each element of `this`.
   */
  extension (self: Self)
    def fold[T](initialValue: T)(combine: (T, Element) => T): T =
      def loop(i: CustomIterator, r: T): T =
        i.next() match
          case Some(e) => loop(i, combine(r, e))
          case None => r
      loop(self.makeIterator(), initialValue)

  /** The number of elements in `this`. */
  extension (self: Self) def length: Int =
    self.fold(0)((a, _) => a + 1)

  /** The number of elements in `this` that satisfy `predicate`. */
  extension (self: Self) def count(predicate: (Element) => Boolean): Int =
    self.fold(0)((a, e) => if predicate(e) then a + 1 else a)
Notice that `Sequence` no longer needs any type parameter, despite having two associated types. As a result, it gets simpler to refer to these associated types in generic signatures. For example, here's a generic algorithm on `Sequence`.
def allSatisfy[S: Sequence](items: S, predicate: (S.Element) => Boolean): Boolean =
  def loop(i: S.CustomIterator): Boolean =
    i.next() match
      case Some(e) => if predicate(e) then loop(i) else false
      case None => true
  loop(items.makeIterator())
The signature is, in my opinion, a significant improvement over what I would have had to write with generic parameters.
- It is clear from the start that I'm defining an algorithm operating on sequences.
- The correspondence between the associated types and their "root" parameters is obvious when I look at `S.Element`.
- My signature focuses on the relevant part of the API without leaking unrelated details about the way `Sequence` is defined; in particular, I don't have to talk about iterators.
For fun, here's what I would have had to write with generic parameters.
def allSatisfyAlt[E, I <: AbstractIterator[E]](
  items: Sequence[E, I],
  predicate: (E) => Boolean
): Boolean
Making a concrete type conform to `Sequence` is also simple.
given [E] => CustomList[E] is Sequence:
  type Element = E

  // Note: `CustomIterator` can now be defined in `CustomList` again.
  type CustomIterator = CustomListIterator[E]

  extension (self: CustomList[E])
    def makeIterator(): CustomIterator =
      CustomListIterator(self.head)
Because I no longer have to make my concrete, efficient data structures inherit a trait or class from the get-go, it's easy to retroactively describe how one specific type implements a given concept. In fact, that is one of the primary purposes of type classes! It means I no longer need to define inconvenient (and inefficient) wrappers around data types I didn't author in order to reuse generic algorithms.
import somelib.Deque

given Deque is Sequence:
  // ...
Conditional conformance is equally easy to define!
given [E: Comparable] => CustomList[E] is Comparable:
  // ...
Because a type class has a "Self" type, binary methods are no longer an issue. In fact, `Comparable` already demonstrates that point.
trait Comparable:
  // ...

given Int is Comparable:
  // ...

given String is Comparable:
  // ...
The conformances of `Int` and `String` to `Comparable` are expressed by distinct type class instances. There is no longer any chance of falling into a safety trap because of a call to `isLessThan` with mismatching arguments. I can also properly describe `Comparable` as a concept for types to implement, rather than having to extract that feature into an ad-hoc helper like a comparator.
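As a quick illustration (assuming the instances above satisfy an `isLessThan` requirement like my earlier `Comparable[Self]` did), the mismatched call from the binary method section is now rejected at compile time:

println(1.isLessThan(2))  // true
// 1.isLessThan("a")      // error: `isLessThan` expects another `Int`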
I have examined, using concrete examples, reasons why subtyping through inheritance often comes up short when it comes to generic programming. The main issue I identified is the proliferation of generic parameters. This problem is caused by the inability to ergonomically express generic data structures and operations over more than one generic receiver. I have then shown how type classes address the shortcomings of subtyping through inheritance, using the set of syntax changes recently proposed by Martin Odersky.
It should be noted that this style of programming, as foreign as it may look, is used extensively in at least three contemporary languages: C++ (concepts), Swift (protocols), and Rust (traits). I believe that the Carbon project is also heading in this direction. Swift and Rust in particular propose an experience comparable to the one Scala could offer by pushing in the direction set by PR #19395. These languages provide efficient and highly reusable collection libraries with countless users, empirically demonstrating that this model is at the very least reasonable for writing everyday code.
One point I didn't explore is performance. It is obvious that any reasonable framework for generic programming must offer a way to exploit the properties of specific data structures to write more efficient implementations of certain algorithms (e.g., `length` is O(n) on an arbitrary singly-linked list but O(1) on an array). I know that type classes are very good at upholding this principle when one is allowed to leverage a refinement relationship. However, I know very little about the performance implications of using type class instances in Scala, and I have almost no intuition on how to write efficient code for a JIT compiler such as the JVM's. So I'm ready to believe advocates of dynamic dispatch on class inheritance hierarchies if they tell me that type classes will incur a performance hit. But I also know that type classes enable static dispatch in Swift, and I can imagine a world where Interflow would be capable of doing the same for Scala Native.