| layout | post-type | by | title |
|---|---|---|---|
| blog | blog | Martin Odersky | Higher Kinds Revisited |
The combination of generics and subtyping poses some deep and difficult questions for language design. I have been struggling with these questions for a good part of the last two years. This is an attempt to summarize what I have learned. I first describe how generics are expressed in the current Scala compiler, nsc, then contrast this with how they are expressed in Scala's foundation DOT and the new Scala compiler dotty. This brings us to the question of how higher-kinded types should be encoded in DOT and Scala. I'll explore the design space and motivate a simple solution.
If we combine generics and subtyping, we inevitably face the problem that we want to express a generic type where the type argument is an unknown type that can range over a set of possible types. The prototypical case is where the argument ranges over all subtypes or supertypes of some type bound, as in `List[_ <: Fruit]`.
Such partially undetermined types come up when we want to express variance. We would like to express, say, that `List[Apple]` is a subtype of `List[Fruit]` since `Apple` is a subtype of `Fruit`. An equivalent way to express this is to say that the type `List[Fruit]` includes lists where the elements are of an arbitrary subtype of `Fruit`. By that reasoning, `List[Apple]` is a special case of `List[Fruit]`. We can also express this notion directly using the wildcard type `List[_ <: Fruit]`. Definition-site variance can be regarded as a user-friendly notation that expands into use-site variance expressions using such wildcards.
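For illustration, here is a minimal sketch in current Scala (the `Fruit` and `Apple` classes are made up for this example) showing both styles:

```scala
class Fruit
class Apple extends Fruit

// Definition-site variance: List is declared covariant in its element type,
// so List[Apple] <: List[Fruit] holds at every use site.
val fruits: List[Fruit] = List(new Apple)

// Use-site variance: the wildcard expresses the bound at the point of use.
def firstFruit(xs: List[_ <: Fruit]): Fruit = xs.head
```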
The problem is how to model a wildcard type such as `List[_ <: Fruit]`. Igarashi and Viroli's original interpretation was as an existential type `exists(T <: Fruit).List[T]`, which is written `List[T] forSome { type T <: Fruit }` in current Scala. However, existential types usually come with explicit open and pack constructs, which are absent in this setting. Moreover, the actual subtyping rules as implemented in Java, for example, are more general than what can be expressed with existential types alone. The actual modelization is not fully known and the issues look complicated. Tate, Leung and Lerner have explored some possible explanations in their PLDI paper, but their treatment raises about as many questions as it answers.
DOT and dotty solve the problem by a radical reduction. Type parameters and type arguments are not primitive, but are seen as syntactic sugar for type members and type refinements. For instance, if `List` is declared like this:
trait List[type Elem] { ... }
then this would be expanded to a parameterless trait with a type member `Elem`:
trait List { type Elem; ... }
An application such as `List[String]` is expanded to `List { type Elem = String }`. If `List` were declared as covariant (using `[type +Elem]`), the type application is instead expanded to a refinement with an upper bound, `List { type Elem <: String }`. Analogously, applications of contravariant types lead to refinements with lower bounds. The example used a named parameter `[type Elem]`, which is a new construct in dotty. If we had written instead
class List[Elem] { ... }
this would expand to
class List { type List$$Elem; private type Elem = List$$Elem; ... }
The type parameter of `List` would then get the compiler-generated name `List$$Elem` in order not to clash with other member definitions of the trait. The parameter is accessible under its original name `Elem` via the private type alias in the body of `List`.
The dotty scheme has two immediate benefits. First, we only need to explain one concept instead of two. Second, the interaction between the two concepts, which was so difficult before, now becomes trivial. Indeed, a type like `List[_ <: Fruit]` is simply `List { type Elem <: Fruit }`; that is, wildcard parameters translate directly to refinements with the same type bounds.
Beyond that, there is a more important, albeit less tangible benefit: DOT and dotty have a simple view of what types are. In that view, a type name introduces a new type, like an uninterpreted symbol of a theory. Types are related by constraints, which introduce subtyping relations between types. Type equality can be expressed by mutual subtyping. By contrast, the traditional view of types sees type names as abbreviations for the types they stand for. There is usually a set of primitive types over which parameterized types are built. So generics in this view are generative: they generate instances of non-generic type structures. This makes sense intuitively, but is not enough. We also need a way to represent abstract types, and the best known way of doing this is the one of SML or DOT or dotty: use opaque type names with constraints. Mixing the generative paradigm with the type abstraction paradigm inevitably leads to more complexity than sticking with one paradigm, as dotty does.
So far we have seen that the DOT/dotty approach is better founded and simpler than the previous scheme used by nsc. But things get murkier once we try to include higher-kinded types. If we start with type applications, then a good way to model higher-kinded types is in terms of type lambdas. This is a somewhat more principled version of what is done in nsc. A type constructor such as `List` is expressed as a type lambda `[X] -> List[X]`. There can be several parameters like `X`, and they can have variances attached. So the covariant list constructor would be expressed as `[+X] -> List[X]`. A higher-kinded type parameter such as `[CC[+X] <: Iterable[X]]` is represented as a type parameter with a type lambda as upper bound. So the previous expression is equivalent to `[CC <: ([+X] -> Iterable[X])]`. There is a standard beta reduction rule for applications of type lambdas: the type `([X] -> S)[T]` simplifies to `[X := T]S`. The subtyping rules for higher-kinded types are fairly standard; I'll omit them here in the interest of brevity.
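As a worked instance of the reduction rule, the application `([+X] -> Iterable[X])[String]` simplifies to `[X := String]Iterable[X]`, that is, to `Iterable[String]`.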
A tricky aspect concerns type inference. One important part of type inference in Scala infers method type parameters from term arguments. Say you have the function
def f[CC[+X], T](x: CC[T]) = ...
and the application `f(xs)` where `xs: List[Int]`. Type inference will determine that `CC` should be `[+X] -> List[X]` and `T` should be `Int`. However, we could also have `xs` of a type that has several unrelated parameterized supertypes. In this case, there is no best instantiation of `CC`; the compiler will then pick one of the possible supertypes in some fairly arbitrary way.
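To illustrate the ambiguity, here is a hypothetical sketch (the `Channel` trait and the `Event` and `EventStream` classes are made up for this example):

```scala
trait Channel[+X]   // a covariant constructor unrelated to the collections hierarchy

class Event
class EventStream extends Iterable[String] with Channel[Event] {
  def iterator: Iterator[String] = Iterator.empty
}

def f[CC[+X], T](x: CC[T]) = x

// f(new EventStream) admits both CC = [+X] -> Iterable[X] with T = String
// and CC = [+X] -> Channel[X] with T = Event; neither instantiation is best,
// so the compiler picks one in a fairly arbitrary way.
```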
This modeling assumes type lambdas and type applications as primitive constructs. If these constructs are absent, as is the case in dotty, higher-kinded types need to be re-evaluated. Do we have to give up on them, or are there ways to emulate them? I have explored two schemes that do this in detail. One is based on type projections, the other on named parameters. In the following, I describe the two schemes and compare them.
The type projection approach was originally suggested by Adriaan Moors and has been worked out in detail in the current dotty compiler. It uses the following basic encodings.
- A type lambda `[X >: S <: U] -> T` is encoded as the refined type

      Lambda$I { type $hk0 >: S <: U; type $Apply = [X := this.$hk0]T }

  This makes use of a family of synthetic base traits `Lambda$...`, one for each vector of variances of possible higher-kinded parameters. A suffix of `I` indicates a non-variant parameter, `P` (positive) a covariant parameter, and `N` (negative) a contravariant parameter. Each base trait defines parameters `$hk_i` with the given variances and an abstract type member `$Apply`. For instance, the base trait

      trait Lambda$NP[type -$hk0, type +$hk1] { type $Apply }

  is used for binary type lambdas where the first type parameter is contravariant and the second is covariant.
- An application of a non-variant higher-kinded type `C` to an argument `T` is encoded as

      C { type $hk0 = T } # $Apply

  Covariant and contravariant hk-type applications lead to projections with upper and lower bounds instead.
- Beta reduction is supported by dereferencing the type projection. Indeed,

      Lambda$I { type $hk0; type $Apply = T } { type $hk0 = A } # $Apply

  reduces to

      [this.$hk0 := A]T
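As a worked instance of these encodings, the type constructor `[X] -> List[X]` is represented as `Lambda$I { type $hk0; type $Apply = List[this.$hk0] }`; its application to `String` then becomes the projection `Lambda$I { type $hk0; type $Apply = List[this.$hk0] } { type $hk0 = String } # $Apply`, which the reduction rule above simplifies to `List[String]`.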
Ideally, an encoding of higher-kinded types into type members and refinements would be sufficiently expressive; an encoded term should be type-checkable in the base calculus without special provisions that account for the fact that types were originally higher-kinded. Unfortunately, there are a number of areas where higher-kinded types do shine through. To make the standard Scala collections compile, for example, all of the following tweaks are necessary:
- `$Apply` refinements are covariant. If `T <: U` then

      S { type $Apply = T } <: S { type $Apply = U }

  This subtyping relationship does not hold for ordinary type refinements. It would hold for upper bound refinements, of course. But we cannot model `$Apply` refinements as upper bound refinements because that would lose beta reduction.
- Encodings of type lambdas distribute over intersections and unions. For instance,

      Lambda$I { ... type $Apply = T } & Lambda$P { ... type $Apply = U }

  needs to be normalized to

      Lambda$I { ... type $Apply = T & U }
- A subtyping test of the encoded versions of `A[T1, ..., Tn] <: C[U1, ..., Un]`, where `A` is an inferable higher-kinded type parameter and `C` is a class or a trait, leads to the instantiation `A = C`.
- A subtyping test of the encoded versions of `T <: A[U1, ..., Un]`, where `A` is an inferable higher-kinded type parameter, is handled by looking in linearization order for some base type of `T` that is of the form `C[T1, ..., Tn]`, where `C` is a class or trait whose parameter variances match those of `A`. Then infer `A = C` for the first such class or trait `C` that is found.
- A subtype test `[X1, ..., Xn] -> T <: C` of an encoded type lambda and a class constructor `C` is rewritten to `T <: C[X1, ..., Xn]`. Analogously, the subtype test `C <: [X1, ..., Xn] -> T` is rewritten to `C[X1, ..., Xn] <: T`.
- In certain situations, a subtype test of the encoded versions of `[X] -> T <: [+X] -> U` might still succeed, even though normally non-variant lambdas are not subtypes of covariant lambdas. This is specifically the case if the two types appear as arguments of another type application that only expects a non-variant higher-kinded type.
Another shortcoming, depending on one's point of view, is that the encoding uses type projections, which are an extremely powerful but also controversial feature of Scala. Type projections can encode SK combinator calculus and therefore make Scala's type system Turing complete. This means there are reasonable programs for which the Scala compiler will not terminate or will crash with a stack overflow. Also, type projections have a somewhat uncertain status in Scala's type theory. The most reasonable explanation I know regards the type projection `T#A` as equivalent to the existential type

    x.A forSome { val x: T }

But this raises the question of what the correct modeling of such existential types in DOT is. For instance, what are the correct subtyping rules? Questions like these have not yet been settled conclusively; several obvious attempts have turned out to be unsound.
To summarize, a representation of higher-kinded types in terms of refinements and type projections is feasible, but has shortcomings. First, the abstraction leaks internally, that is, encodings of higher-kinded types still have to be treated specially, notably in the subtype checker. Second, the abstraction relies on a very big hammer -- type projections -- which render the type system Turing complete. In the next section, I discuss a more modest and simpler encoding.
One alternative to the projection encoding is not to have higher-kinded types at all, but to rely only on named arguments.
There is one concession for brevity. We allow named type arguments, analogous to named value arguments. A named type argument `C[A = T]` instantiates the type field `A` of `C` with `T`. It is required that `A` is an abstract type member of `C`. The named application is expanded as usual: to `C { type A = T }` if `A` is invariant, to `C { type A <: T }` if `A` is covariant, and to `C { type A >: T }` if `A` is contravariant.
There can be several named type arguments. Positional and named arguments cannot be mixed in a type application.
Besides conciseness, the main benefit of the named argument syntax over type refinements is that the mechanics of translating declaration-site to use-site variance need not be repeated by the programmer.
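For illustration, here is a minimal sketch of these expansions, using the proposed `[type ...]` parameter syntax (the `Box` and `Source` traits are made up for this example):

```scala
trait Box[type Content]        // invariant parameter
trait Source[type +Out]        // covariant parameter

// Box[Content = String]   expands to  Box { type Content = String }
// Source[Out = String]    expands to  Source { type Out <: String }
```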
To get an idea of the changes needed in expressing higher-kinded functionality, here is a snippet from the collection strawman.
Current version:
trait Iterable[+A] extends CanIterate[A] with FromIterator[Iterable]

trait FromIterator[+C[X] <: Iterable[X]] {
  def fromIterator[B](it: Iterator[B]): C[B]
}
New proposed version:
trait Iterable[type +Elem] extends CanIterate[Elem] with FromIterator[Iterable]

trait FromIterator[+C <: Iterable] {
  def fromIterator[B](it: Iterator[B]): C[Elem = B]
}
There are three differences between these versions.
- In the new version, `Iterable` has a named parameter, `Elem`. This is arguably good. Type parameters, like everything else, should be given meaningful names, in particular if they are pervasive.
- In the new version, `C` is a simple parameter (`C <: Iterable`), where before it was itself parameterized and passed the parameter on to the bound (`C[X] <: Iterable[X]`). This is also good because it makes parameter declarations much more readable than before.
- In the new version, the higher-kinded application in the result type of `fromIterator` has to be explicitly named: `C[Elem = B]` instead of just `C[B]`. This is more verbose, and might become mildly annoying when there are many such applications to write. On the other hand, it is at least as clear as the original version.
One important difference to the previous higher-kinded type scheme is that now the names of higher-kinded type parameters matter. This means it is no longer possible to have a type parameter that can range over all constructors of given arity and variances, no matter what the parameter names are. Higher-kinded abstractions need cooperation from the libraries they range over. For instance, to make higher-kinded abstractions over collections pervasive, there needs to be a named parameter like `Elem` in the root class, which represents the element type of each collection. Enforcing such naming discipline can be useful in its own right because it helps understandability. Also, see the section on type adapters for how this restriction can be mitigated.
Partial type applications fall out naturally from named arguments.
For instance, consider the trait
trait Map[Key, +Value] extends Iterable[(Key, Value)] { ... }
The types

    Map[Key = Int]
    Map[Value = String]

can both be seen as unary type constructors, obtained by fixing the `Key`, respectively the `Value`, parameter. Generally, a refined type keeps all type parameters of its parent that are not mentioned in a refinement.
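As a small sketch of this rule (the alias names here are made up for illustration), a partially applied `Map` can be instantiated further by another named application:

```scala
type StringKeyed = Map[Key = String]          // Value is still an open parameter
type Registry    = StringKeyed[Value = Int]   // the same type as Map[Key = String, Value = Int]
```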
It looks like type inference for the parameter name encoding can be simpler and more robust than type inference for traditional higher-kinded types. Here's the previous example again, encoded in the new scheme:
def f[CC <: Iterable, T](x: CC[Elem = T]) = ...
Given an `xs: List[Int]` and the application `f(xs)`, the inferencer would have to solve the constraint:
List[Elem = Int] <: CC[Elem = T]
for variables `CC` and `T`. By the existing rules we decompose this into the constraints
List <: CC
Int <: T
Unless there are more constraints, this then leads to the desired instantiations `CC = List` and `T = Int`.
What's more, since all constraints are over named types, there is never a question
which of several possible parameterized base types to pick. So the ambiguities
of old-style higher-kinded type inference never arise. One also does not need
to synthesize type lambdas to get to a particular type arity, so higher-order unification is not required.
Aliased type constructors are a special case of higher-kinded types. A typical example is code like this in the `scala` package object:
type List[+X] = scala.collection.immutable.List[X]
This syntax would no longer be allowed. However, I think it is reasonable to let an alias (but not an abstract type!) inherit the type parameters from its right-hand side. So instead of the above, one should be able to write
type List = scala.collection.immutable.List
and still use application syntax like `List[String]` as a shorthand for `List[Elem = String]`. One can also obtain more interesting parameterized aliases by combining with partial type applications or more general refinements:
type IntMap = Map[Key = Int]
type Transform = Map { type Value = this.Key }
There are aliases which are not encodable. Some representative examples are
type Rep[T] = T
type RMap[X, Y] = Map[Y, X]
type LL[X] = List[List[X]]
A possible extension of the system is to allow fully parameterized aliases (but not parameterized abstract types) and to expand them when a type application is encountered. So fully parameterized aliases always have to be applied immediately. This is a minor extension. It does allow us to express some natural abbreviations, but at the price of a minor inconsistency since parameterized aliases are not full-blown types but rather templates that generate types.
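For instance, under this extension the `RMap` alias above would become legal, but only in applied form (a sketch, not settled syntax):

```scala
type RMap[X, Y] = Map[Y, X]
// RMap[Int, String] would be expanded on the spot to Map[String, Int],
// i.e. Map[Key = String, Value = Int]; RMap itself could not be passed around unapplied.
```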
In summary, the named parameter encoding is very simple and well founded in theory. It adds no new concepts to the conceptual treatment of generics in DOT and dotty. It can also be implemented straightforwardly, and tends to lead to types that are simpler and less cluttered than traditional higher-kinded types. However, it is less expressive than the type projection encoding, and in particular requires all higher-kinded type applications to refer to a particular parameter name.
The restriction that all higher-kinded type parameters must be named fields makes some universal abstractions such as `Functor` or `Monad` less useful. Let's say we have classes
class Option[type +Value] { ... }
class List[type +Elem] { ... }
We would like to write a trait `Monad` that applies to both `Option` and `List`, even though their parameters have different names. To start, here's a `Monad` that works well for `List`s.
type Constructor = { type Elem }

trait Monad[M <: Constructor] {
  def unit[A]: M[Elem = A]
  def bind[A, B](x: M[Elem = A], f: A => M[Elem = B]): M[Elem = B]
}

object ListMonad extends Monad[List] {
  def unit[A] = Nil
  def bind[A, B](x: List[A], f: A => List[B]) = x.flatMap(f)
}
We now describe a possible extension of the named parameter encoding that allows the `Monad` trait to be also instantiated with `Option`. The idea is to use a type adapter which in effect renames the parameter of `Option` from `Value` to `Elem`:
type OptionM = Option & { type Elem; type Value = Elem }
`OptionM` now has the abstract type member `Elem`, so we can write an implementation of `Monad` for it:
object OptionMonad extends Monad[OptionM] {
  def unit[A]: OptionM[Elem = A] = None
  def bind[A, B](x: OptionM[Elem = A], f: A => OptionM[Elem = B]): OptionM[Elem = B] = x.flatMap(f)
}
However, with current Scala typing rules, this implementation would not typecheck. Indeed, the type of `unit`'s right-hand side `None` is `Option[A]`, which expands to:
Option & { type Value = A }
On the other hand, the declared type of `unit` expands to:
Option & { type Elem = A; type Value = Elem }
So the declared type has a type member, `Elem`, which the actual type lacks. To make this work, we'd need to change Scala's typing rules so that they allow the following conversion:
If an expression `E` has a fully defined type `T` that does not define a member `A`, and the expected type `U` has a type alias `type A = S` as a member, then `E` also has the type `T & { type A = S }`.
That is, type aliases can be automatically added to types of values if the expected type requires them. This looks quite compatible with the way things are currently modeled in DOT. Nevertheless, it would be good to verify the soundness of this rule formally.
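Applied to the example above, the rule would give `None`, whose type `Option & { type Value = A }` lacks `Elem`, the additional alias from the expected type, yielding `Option & { type Value = A } & { type Elem = A }`, which then conforms to the declared result type `OptionM[Elem = A]`.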
Other open questions are whether we want to make type adapters also work for standard parameters (i.e. those not introduced with `type`) and whether we want to have special syntax for type adapters. One possible candidate is to re-use the syntax `[type <parameter>]` in a type. E.g. `Option[type Elem]` would be equivalent to the above-mentioned adapter:
Option & { type Elem; type Value = Elem }
This could also be used to make classes without named parameters (e.g. those coming from Java) available as instances of higher-kinded types. Finally, it could be used to change the original variance of a type. E.g.
Option[type Value]
would be a non-variant `Option` type. It remains to be worked out whether it is worth adding this or another syntax to achieve these objectives.
There are three candidates to choose from.
1. The status quo, i.e. the previous nsc scheme, which has type application and type membership as separate concepts and uses type lambdas for higher-kinded types.
2. The type projection encoding.
3. The named parameter encoding.
The first two alternatives are quite similar. They cover about the same user-level language, and differ mostly in the areas where precise explanations become difficult. Alternative (1) introduces complexity explaining wildcard arguments, whereas alternative (2) introduces complexity because the original abstractions shine through in the encoded programs. So the choice between (1) and (2) essentially boils down to a matter of implementation. By contrast, the third alternative is very different from the first two. It implies a different, somewhat more restricted user-level language but has much simpler foundations.
Which of the three alternatives is preferable depends on what we want Scala to be. If we want maximal expressiveness and power even at the price of complicated or unclear foundations, we will choose one of the first two. Another reason to pick the first alternative is if we see Scala as a language which is primarily about categorical abstractions and type-level programming. In that case, encodings would only be a distraction, and the status quo works well enough.
I don't think either of these goals should be our main priority. Scala is not primarily about maximal expressive power, and neither is it primarily about type-level programming and categorical abstractions. The ability to do these things is a testament to the flexibility of the language that allows one to venture into these areas, among others.
For me the primary aim of Scala is still the same as when I created it: To explore a particular fusion of ideas coming from functional programming on the one hand, and object and module systems on the other hand, and to make this fusion simple, powerful and practical. DOT is a very important step forward in this development because it shows how compact a formal foundation for this fusion can be. I believe it is important we capitalize on these foundations, keeping the overall goal of simplicity at the top of our agenda. By this reasoning, alternative (3) is the one that makes most sense to me. A welcome side benefit is that the named parameter encoding also seems to lead to code that is often simpler to understand than traditional higher-kinded types.
The discussion in the previous section did not really pay attention to the fact that Scala is already a well-established language. True, higher-kinded types need to be enabled explicitly with a `scala.language` import, and the user is warned in the doc comments that they might change in the future. But there is a lot of code out there that relies on them. So any changes to the way higher-kinded types are expressed need to come with a migration strategy.
Fortunately, we have techniques in place to help migration. We can support both parameter encoding schemes initially. The named parameter syntax is very convenient whether we have traditional higher-kinded types or not, so adding it looks like a net win (and it also gives us partial type applications, a much demanded feature).
At least initially, old-style higher-kinded types could also be supported through the type projection encoding in dotty. This encoding is already fully implemented, so having it help migration is a good use of the considerable investment that went into it. It's an open and secondary question whether such support would be enabled by an import from `language.higherKinds`, as before, or would be lumped into the `language.Scala2` import that turns on Scala 2.x language features in dotty.
Thanks to Nada Amin, Guillaume Martres, Dmitry Petrashko, Sandro Stucki, and Vlad Ureche for useful discussions around this topic. Guillaume Martres provided very valuable help in the implementation of the type projection encoding.
We may be able to get rid of this special case if we implemented type lambdas using `type $Apply <: Foo` (as my scheme for projection should allow) instead of `type +$Apply = Foo`, though we would probably still need to check that we don't intersect type lambdas with different arity.