This document extends the ideas developed in expressive_metaprogramming.md.

Extending `import` and `export` is only part of ensuring a compelling user story for expressive metaprogramming in Scala.
First, just as we can compose functions into re-usable pieces, we need some way to compose, and make re-usable, import and export macros. Second, a class will often be both the input to a macro and the owner of the code the macro produces. A convenient mechanism for expressing this pattern will be invaluable for creating easily digested code.
The semantics of the `inline` keyword could be extended to cover declaring an `inline trait`: a trait whose definition is not fixed until it is instantiated at compile-time by some inheriting object. Our `export` example could be rewritten to look something like:
```scala
inline trait SwiFizzler(inline b: Boolean) {
  def someMacro(b: Expr[Boolean])(using q: Quotes, cx: ExportDecl & EnclosingTemplate): Expr[cx.Decls] = {
    b.value match {
      case Some(true) =>
        cx.decls('{
          object freshTermName {
            def fizzle: Boolean = true
          }
        })
      case Some(false) =>
        cx.decls('{
          object freshTermName {
            def swizzle: Double = -1.0d
          }
        })
      case None =>
        error("A literal boolean value must be supplied.")
    }
  }

  export $someMacro(b).*
}

class Foo extends inline SwiFizzler(true)
class Bar extends inline SwiFizzler(false)
```
The `inline trait` `SwiFizzler` wraps up and encapsulates the `export` macro, allowing it to be re-used more easily. It also gives library authors a place to ensure that necessary base classes or traits are included, any self-types are declared, and any universal methods or fields can be easily found and documented.
The `inline trait` itself need not have a runtime representation beyond what a trait normally has. Instead, the synthesized declarations would be dropped into the inheriting object.
In order to accept arguments at compile-time, the `inline trait` would need to be able to accept `inline` parameters. These parameters would need to evaluate to literals compatible with the set supported by static annotations.
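As a sketch of how that literal requirement might surface at use-sites, using the hypothetical `inline trait` syntax proposed above (`Tagged` is an illustrative name; none of this compiles in Scala 3 today):

```scala
// Hypothetical syntax from this proposal; not valid Scala 3 today.
inline trait Tagged(inline label: String) {
  // the body may inspect `label` as a compile-time constant
}

class Ok extends inline Tagged("v1")           // OK: literal argument
class Bad(s: String) extends inline Tagged(s)  // error: `s` is not a literal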
Classes extending an `inline trait` must explicitly mark the relationship.
This serves two purposes:
- Code readers are immediately informed that the content of the base class will include synthesized members.
- The compiler can eagerly follow paths better optimized for metaprogramming.
The `export` feature is unduly restrictive, and some metaprogramming use-cases conflict strongly with its restrictions. One such restriction is that exported declarations are not allowed to override an existing member.

One use-case where overriding an existing member matters is when users would like a custom version of `toString` that performs some additional function. This could be security related, like redacting particular fields, or as trivial as adding ANSI color-codes for pretty printing.
The syntax of `export` could be extended so that `export override ...` is valid, allowing exported definitions to override existing definitions where they would otherwise conflict. This feature should work with either normal or macro exports.
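As a sketch of the proposed syntax (hypothetical; `Account` and `redacting` are illustrative names, and the `override` modifier on `export` does not exist today):

```scala
class Account(id: Long, secret: String) {
  private object redacting {
    // a redacting replacement for the default toString
    override def toString: String = s"Account($id, <redacted>)"
  }
  // A plain `export redacting.toString` is rejected today because the
  // forwarder conflicts with the `toString` inherited from Any; with
  // the proposed modifier the forwarder overrides it instead.
  export override redacting.toString
}
```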
By default, `export` adds forwarders in the given scope. For certain use-cases that may add needless runtime overhead. An option to directly inline declarations from synthesized objects should be allowed where the synthesized object does not itself inherit from any other traits or classes.
The syntax of `export` could be extended so that `export inline ${..}` is valid and will completely inline all selected declarations from the synthesized object. The `inline` and `override` modifiers could both be present on a given `export` statement, with `override` working as described previously. The `inline` modifier would only be valid for export macros.
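Putting both modifiers together with an export macro might look like this (hypothetical syntax, reusing the `$someMacro` sketch from the `SwiFizzler` example above):

```scala
inline trait SwiFizzler(inline b: Boolean) {
  // ... someMacro as defined earlier ...

  // Inline the selected declarations directly into the inheriting
  // class rather than emitting forwarders, and allow them to
  // override members they would otherwise conflict with.
  export inline override $someMacro(b).*
}
```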
Export will need to be able to export declarations into companion objects. One way to accomplish this would be to allow two sets of declarations to be validated through the `ExportDecl`, one intended for the base object and the other intended for the companion object. For example:
```scala
inline trait Deriving[T[_]]() {
  def derivingMacro[A: Type](using q: Quotes, cx: ExportDecl & EnclosingTemplate): Expr[cx.Decls] = {
    import q.reflect.*
    val sym  = TypeRepr.of[A].typeSymbol
    val base = cx.baseTypeSymbol
    cx.decls(
      baseDecls = '{
        object empty
      },
      companionDecls = '{
        object freshTermName {
          given [T: $sym]: $sym[$base] = $sym.derived
        }
      }
    )
  }

  export $derivingMacro[T]().*
}

case class Biff(a: Int, b: String) extends inline Deriving[Show]
```
With the above features in place, there would be multiple paths to migrating existing Scala 2 code that relies on macro annotations.
First, Scala 2 code using macro annotations could be automatically upgraded once an appropriate scalafix rule is created by the library author. The scalafix rule would insert the appropriate `import` or `export` statement in the appropriate location, filling in arguments as necessary.
Second, Scala 3 could offer limited support for macro annotations. This would be a temporary measure so that code could cross-compile without immediately requiring changes.
A macro annotations feature would have to be limited: Scala 3 macro annotations would be implemented as purely mechanical syntactic transformations. The transformation would occur just after parsing and result in either an `import` or an `export` statement, calling a statically named macro function, with arguments copied in place and a static selector filled in.
For example, defining a Scala 3 macro annotation could look something like:
```scala
case class CompatibleMacroAnnotation(name: String)
    extends scala.annotation.MacroAnnotation(EXPORT, "path.to.function", "*")
```
This would define a macro annotation called `CompatibleMacroAnnotation` that would be transformed into an `export` of the macro function `path.to.function`, and would select all symbols with the `*` selector.
Using the macro annotation would look identical to current uses:
```scala
@CompatibleMacroAnnotation("test")
class SomeClass(..) {
  ..
}
```
The resulting code would be identical to:
```scala
class SomeClass(..) {
  export $path.to.function("test").*
  ..
}
```
Where possible, library authors could then re-implement certain Scala 2 macro annotations as Scala 3 macro annotations to make migration to Scala 3 easier for their users.
The combination of extending `import` and `export` and introducing `inline trait` fits with the stated design principles. The design goals appear to be met, and all desired capabilities achieved.
The most significant unsupported use-cases are purely mechanical transformations of code that, for various reasons:
- alter the inheritance hierarchy of a class/trait/object,
- change order, types, or names of class parameters,
- modify generic type parameters, or
- otherwise transform user-written code into something else.
To safely support mechanical transformations of code, I believe that users must always have a clear sense of how their code is changing. It is not clear how this would be accomplished.
Given that, I believe the proposed set of metaprogramming features solve enough issues that an implementation should be pursued and tested.
@nicolasstucki Wow! Thank you for reading this far. This document outlines some of the places my mind went after outlining the core import/export macro feature. I don't expect the features in this document to be considered as part of the core feature, but I would love feedback nonetheless!
Yep. I discovered as much when I did my implementation. I'm quite certain Scala 3 needs a syntax for quoting statements, though I've not found a syntax myself that I like.
I wasn't thinking that `SwiFizzler` would actually have a type. It doesn't define a type so much as a template for constructing types. Other languages (I'm thinking of dlang) might call this a `mixin`. The things that would have types are the classes, objects, and regular traits that extend `SwiFizzler`. This also resolves your second concern, because the expansion of any exports would occur not at the definition of `SwiFizzler`, but in the context of, and at the definition of, the class/object/etc. that extends `SwiFizzler`.

This is an alternative, slightly more terse syntax for invoking the macro, and not a typo. The benefit is that it matches some of the magic that Ammonite already uses for its scripting of imports. I'm pretty sure parsing this syntax would be trivial.
I saw that, but haven't read through that proposal too closely.
Of course. I didn't want to do anything too extensive until I had working code I could test against.
One of my inspiring use cases is a library for managing multiple versions of large data models, where the versions differ in predictable and mechanical ways. I've found that many times I end up either with duplicated case classes that differ in small ways, or with case classes that encompass all use-cases but then require runtime checks to ensure they aren't broken or insecure in some way. For example, a POST endpoint that creates an entity will accept a version of the entity minus fields like `id`, which should only be generated server-side. But duplicating a case class only to have it differ by whether it has an `id` is annoying. Doing it dozens of times is much worse. I've encountered the same, or similar, problems with data models in compiler ASTs, REST APIs, databases, and wire formats (scodec). The chimney library (https://github.com/scalalandio/chimney) solves a variant of this problem.
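As a concrete illustration of the duplication described above (plain Scala, with illustrative names):

```scala
// What a POST endpoint accepts: no server-generated id yet.
case class NewUser(name: String, email: String)

// What the rest of the system passes around.
case class User(id: Long, name: String, email: String)

// The two shapes differ only by `id`, yet the conversion must be
// written and maintained by hand, once per duplicated model.
def created(id: Long, draft: NewUser): User =
  User(id, draft.name, draft.email)
```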
Would you consider an example which takes a base case class and generates a new case class with an `id` field (perhaps even a small patch algebra?) a sufficiently motivating example?