scala-workflow
Migrating from untyped macros
Hi,
Do you have any plans for migrating from the now-deprecated untyped macros? I am still not sure if this is going to work, but maybe one could provide implicit views from Monad[A]/Functor[A] to A (e.g. with a @compileTimeOnly annotation), so that the code typechecks, and the macro would just disregard them, replacing them with maps or binds. What do you think?
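A rough sketch of what this could look like (purely hypothetical: a per-instance view rather than a generic one, assuming Scala 2.11's scala.annotation.compileTimeOnly):

import scala.annotation.compileTimeOnly

object views {
  // never meant to be called: it only makes workflow bodies typecheck, and the
  // macro is expected to eliminate every occurrence before it could survive
  @compileTimeOnly("`unwrap` is only allowed inside a workflow block")
  implicit def unwrap[A](fa: Option[A]): A = ???
}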
Nick
Hi,
It would certainly be great to get rid of untyped macros, but I can't think of how to do it right now.
To make your solution work, we would need to have an implicit view for every instance of monad/functor separately, e.g. Option[A] => A (with the way things currently are), and somehow make it available only within a context/workflow block so that it won't mess with other code. Composition of functors/idioms won't work anymore (List[Option[A]] => A can't be resolved from List[A] => A and Option[A] => A). Also, powerful implicits like those might interfere with other implicits. For instance, Some("foo") + "bar" is valid Scala because of the Predef.StringAdd implicit class, so Option[String] => String simply wouldn't be called.
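To make the interference concrete: this already compiles in plain Scala, so the hypothetical Option[String] => String view would simply never be consulted here:

val s: String = Some("foo") + "bar" // "Some(foo)bar" via Predef, not a lifted "foobar"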
I'll keep trying to come up with something and, of course, all the ideas are highly appreciated.
To make your solution work, we would need to have an implicit view for every instance of monad/functor separately, e.g. Option[A] => A (with the way things currently are), and somehow make it available only within a context/workflow block so that it won't mess with other code.
Well, providing the views for every functor is not that high a price to pay :) As for the messing, I believe @compileTimeOnly should keep them rather sane.
Composition of functors/idioms won't work anymore (List[Option[A]] => A can't be resolved from List[A] => A and Option[A] => A).
That’s true. It can still be done with an “explicit” implicit though, which is of course ugly.
For instance, Some("foo") + "bar" is valid Scala
This is an interesting matter. How do you know anyway, with the current implementation, that it’s not point(Some("foo").toString + "bar")? However, if part of the code typechecks like this and then fails typechecking somewhere midway because of wrongly inferred types, things will break.
By the way, I just realized that the new macro annotations expand before type-checking! So conceptually a ported version could look like so:
@workflow[Option[_]] val x = Some(3) + 5
However, I just tried to pass the type argument through and it didn’t quite work. It’s also unclear how to pass normal arguments, if we want @workflow(zipList). @xeno-by, any thoughts on this?
How do you know anyway, with the current implementation, that it’s not point(Some("foo").toString + "bar")?
That's how rewriting works: we start with the most nested subexpression. Basically, the whole expression never gets typechecked as is (unless we walked through all of its subexpressions and left them untouched). In this case, Some("foo") is typechecked before Some("foo") + "bar" and rewritten, with a synthetic String argument passed to map.
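A conceptual trace of that innermost-first strategy on the running example (not actual compiler output):

// workflow[Option] { Some("foo") + "bar" }
//   1. "foo"        typechecks as String         => left untouched
//   2. Some("foo")  typechecks as Option[String] => lifted into the workflow
//   3. the enclosing application is rewritten with a synthetic argument:
//      option.map((x$1: String) => x$1 + "bar")(Some("foo"))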
What exactly didn't work?
@xeno-by
import scala.annotation.StaticAnnotation
import scala.language.experimental.macros
import scala.reflect.macros.{Context => MacroContext}

class Workflow[A] extends StaticAnnotation {
def macroTransform(annottees: Any*) = macro WorkflowMacros.macroTransform[A]
}
object WorkflowMacros {
def macroTransform[A: c.WeakTypeTag](c: MacroContext)(annottees: c.Expr[Any]*): c.Expr[Any] = {
import c.universe._
println(weakTypeOf[A])
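// hard-coded expansion: replaces the annottee with { def x = 3 + "asd"; () }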
c.Expr[Any](Block(DefDef(Modifiers(), newTermName("x"), List(), List(), TypeTree(), Apply(Select(Literal(Constant(3)), newTermName("$plus")), List(Literal(Constant("asd"))))), Literal(Constant(()))))
}
}
scala> import foo.Workflow
import foo.Workflow
scala> @Workflow[Int] val x = 4
x: Int = 4
So the annotation is not doing anything at all. Without the type arg it works fine:
class Workflow extends StaticAnnotation {
def macroTransform(annottees: Any*) = macro WorkflowMacros.macroTransform
}
object WorkflowMacros {
def macroTransform(c: MacroContext)(annottees: c.Expr[Any]*): c.Expr[Any] = {
import c.universe._
//println(weakTypeOf[A])
c.Expr[Any](Block(DefDef(Modifiers(), newTermName("x"), List(), List(), TypeTree(), Apply(Select(Literal(Constant(3)), newTermName("$plus")), List(Literal(Constant("asd"))))), Literal(Constant(()))))
}
}
scala> import foo.Workflow
import foo.Workflow
scala> @Workflow val x = 4
x: String = 3asd
Another question: if we have a macro annotation with arguments (class Workflow(a: Int) extends StaticAnnotation), how do we access them in the implementations?
Another issue is that we need a higher-kinded type as the argument. I seem to remember that c.WeakTypeTag didn't work for that.
That's how rewriting works
I understand that, but what happens in this example?
workflow[Option] {
Some(Success(42)) getOrElse "foobar"
}
is it option.map(x$1 ⇒ x$1.getOrElse("foobar"))(Some(Success(42))) or option.point(Some(Success(42)).getOrElse("foobar"))? Maybe there’s no issue and I just need to sleep for a while :)
The first one. A method invocation is represented roughly as Apply(Select('Some(Success(42))', 'getOrElse'), List("foobar")), so again, we start with the most nested subexpression: 42, then Success(42), then Some(Success(42)); the latter gets rewritten into a synthetic x$1: Success[Int], and the getOrElse method will be taken from the Try class, not Option.
Whereas
workflow[Option] {
Some(Failure(new Exception)) getOrElse "foobar"
}
will produce Some("foobar").
@stanch
1) That's a bug: https://github.com/scalamacros/paradise/issues/2. It would probably be easy to fix, but I don't have much time for that. If you need this fix badly, I can carry it out tomorrow. Otherwise, let's keep it on-demand.
2) c.prefix
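For the record, here is a minimal sketch of fishing the argument out of c.prefix (names hypothetical: it assumes class workflow(instance: Any) extends StaticAnnotation and quasiquotes from macro paradise):

def macroTransform(c: scala.reflect.macros.Context)(annottees: c.Expr[Any]*): c.Expr[Any] = {
  import c.universe._
  // c.prefix.tree is the annotation construction, e.g. `new workflow(option)`
  val instance = c.prefix.tree match {
    case q"new workflow($arg)" => arg
    case _ => c.abort(c.enclosingPosition, "unexpected annotation shape")
  }
  println(showRaw(instance)) // sketch: just inspect the argument tree
  annottees.head             // and pass the annottee through unchanged
}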
@aztek It makes sense! Thanks for the explanation.
@xeno-by re 2) Lol now I remember that you suggested the same when we submitted https://issues.scala-lang.org/browse/SI-7736 :) So I’ll also try to get the type args with it and see how it goes.
Upd: even if I just put class Workflow[A] extends StaticAnnotation and never mention A again, the macro still does not expand.
Check this out! https://github.com/stanch/scala-workflow/commit/506fcc75196406ce59ea5781630ea673768f8b50#L0R49 https://github.com/stanch/scala-workflow/blob/e0325cc641fd4ed952337a0aec797d2777a0247a/project/Build.scala
scala> @workflow(option) val x = Some("f") + Some("o")*2 + Some("bar")
x: Option[String] = Some(foobar)
However:
scala> @workflow(option) val x = Some(2) + 3
error: ambiguous reference to overloaded definition,
both method + in class Int of type (x: Char)Int
and method + in class Int of type (x: Short)Int
match expected type ?
<console>:10: error: type mismatch;
found : Int(3)
required: String
@workflow(option) val x = Some(2) + 3
^
I am curious if this is an issue specific to how the tree is parsed/checked, e.g. https://github.com/stanch/scala-workflow/blob/506fcc75196406ce59ea5781630ea673768f8b50/core/src/main/scala/scala/workflow/package.scala#L195
Oh, cool!
I'm not sure about the error. It looks like the whole expression is typechecked at some point (you'll get the same error message without the annotation). The generated code is correct, so the rewriter/typechecker worked well.
I think the first error happens somewhere in rewrite. The second one is just due to the failed annotation expansion. I will look into this today. We can also try to revive the context and $ duo in this fashion:
@context(option)
def foo(bar: Int) = {
@$ val a = 4 // a = Some(4)
a
}
Of course the annotations are not as good-looking as functions, and you can put them in fewer places. But on the other hand, I believe we can do
class async extends workflow(future)
@async val x = 5 // x = Future(5)
which is not bad!
Upd: The shown implementation of @async does not suffice because of how c.prefix works. I’m looking into it.
@xeno-by, after some investigation this looks very suspicious.
println(util.Try(c.typeCheck(q"4+")))
...
error: ambiguous reference to overloaded definition,
both method + in class Int of type (x: Char)Int
and method + in class Int of type (x: Short)Int
match expected type ?
Success(4.<$plus: error>)
The Success part tells us that no exception was thrown. Could you check this? c.typeCheck(q"val x: String = 5") throws just fine.
Most likely a bug. Would be great if you could submit it. As a workaround, after getting a success you could additionally check whether the resulting tree is erroneous.
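A sketch of that workaround (a heuristic, in the spirit of the error-message matching scala-workflow already does; it assumes error-typed nodes print their type as <error>):

import scala.util.Try

def typeChecksCleanly(c: scala.reflect.macros.Context)(tree: c.Tree): Boolean =
  Try(c.typeCheck(tree.duplicate)).toOption.exists { typed =>
    // a "successful" result may still contain error-typed subtrees, cf. the transcript above
    !typed.exists(t => t.tpe != null && t.tpe.toString.contains("<error>"))
  }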
Is that it https://issues.scala-lang.org/browse/SI-7461 ?
Yes, with the difference that compilation does not fail.
@xeno-by should I submit a new one, or should we reopen this one?
Dunno yet. Let me take a look first
@stanch The example from SI-7461 works for me, so maybe there's a problem with the particular tree that you're emitting. Could you provide a minimized reproduction and reopen the bug with it?
I know you are both watching that bug, but just in case: the reproduction is here https://github.com/stanch/paradise-bug.
And now to something completely different. I realized this morning that by macro-annotating the enclosing method and defining workflow, context and $ as stubs, we would probably be able to keep the original API untouched. The macro would collect all stubs and replace them with rewritten trees. @aztek what do you think? In general, do you plan to merge my port here or should we keep it as a separate repo with new docs?
@stanch Could you please put up some examples of macro-annotated workflows? Or, even better, revise some of the examples in the current specs.
I don't think anybody actually uses scala-workflow, so preserving the API (even one as simple as this) shouldn't be a concern.
I don't want to abandon the current implementation completely, so I'd rather have two branches here: one with the release version and another with the old untyped macros. If you get anything useful with annotations, it's fine with me to merge it here. There are parts of the code independent of the macro interface (like the instances and composition classes that I'll push soon) that should be easy to sync.
@xeno-by Anyway, do you think macro annotations will make their way into a release? It just looks like we would be replacing the untyped macros implementation with another untyped-at-heart implementation. Is it possible that macro annotations will eventually be discarded for the same reasons untyped macros were?
I will put the examples in readme.md in my fork as soon as [1] (and hopefully [2]) is fixed. As for the usage, well, besides the 90 stars that you already have, I am actually planning to use it in https://github.com/stanch/macroid and dependent projects. Sometimes I even think it should be a SIP! workflow is way ahead of for, effectfully and F# computation expressions :)
I’ll be happy to submit a pull request with something useful; having a separate branch is probably indeed the best solution.
[1] https://issues.scala-lang.org/browse/SI-7461 [2] https://github.com/scalamacros/paradise/issues/2
[1] can be worked around, can't it?
@xeno-by Maybe it can. To maintain the code here [1], I need a way to get the error description from the erroneous tree. A quick search through auto-completion turned up nothing...
[1] https://github.com/stanch/scala-workflow/blob/master/core/src/main/scala/scala/workflow/package.scala#L204
Well, that's impossible from what I know. I'll try to see what can be done this weekend.
@stanch To clear things up about the typeCheck(tree, scope) function:
- The purpose of the function is to decide whether or not tree is lifted to the current workflow (with bindings of local variables forming a scope; the wrapping is roughly as sketched after this list).
- It can basically result in three values: Success(<type>) (so later we decide whether <type> corresponds to the type of the workflow), Success(EmptyTree) (when we know for a fact that tree is not lifted) and Failure(e) (when there's a type error inside the lifted code, e.g. $(Some("foo") * Some("bar")); no error will be produced if the lifted code is correct).
- Peeking at fragments of error messages is a hack originating from all typechecking exceptions being of the same class.
- There're a few legitimate errors that we handle: typechecking uncurried functions ("follow this method with _'"), overloaded functions ("ambiguous reference"), package names ("package ... is not a value") and object constructors ("missing arguments for constructor", "too many arguments for constructor"). In all those cases the perfect strategy would be to perform a deeper analysis of the code to get the actual type, but so far it is assumed that it won't correspond to the workflow type, so EmptyTree is returned.
@aztek Thanks! I kinda understood this, but your explanation makes it more clear. I am working on my idea with stubs and would like to start a discussion on the future API.
The current API is workflow and context + $.
It seems that for the future API we have the following options:

- Workflow annotations on ValDefs and possibly DefDefs:

@workflow(option) val x = Some(1) + Some(2) // val x = Some(3)

This one seems quite useful, especially if we provide shortcuts for common things such as Option, Try, etc. The meaning is rather explicit, which is good.

- Method transformations that allow the old API to be used:

@enableWorkflow def x(a: Int, b: Int) = {
  val y = context[Option] { $(Some(2) + Some(3)) }
  val z = workflow[Option] { Some(4) + Some(5) }
}

As I mentioned before, this is based on @compileTimeOnly stubs.

- Same as above, but also providing a default context:

@workflowContext[Option] def x(a: Int, b: Int) = {
  val y = $(Some(2) + Some(8)) // uses Option
  val z = workflow[Try] { 3 / 0 }
}
Personally I think keeping everything would be a mess, so I suggest selecting the most reasonable subset of the following (usage-wise):

- @workflow
- @enableWorkflow + workflow
- @enableWorkflow + context + $
- @workflowContext + $
- @workflowContext + workflow (i.e. @workflowContext also acts as @enableWorkflow)
- @workflowContext + context + $ (ditto)
I am leaning towards @workflow (for both ValDefs and DefDefs) plus @workflowContext + $. This is more of a gut feeling, but the reasons are probably: 1) @workflow and workflow together is too much; 2) I doubt there will be two different contexts per average function, and if there are, one can always use @workflow; 3) this keeps the focus on annotations, since we have to deal with them anyway.
Please let me know what you guys think.
@xeno-by I just found a rather upsetting issue:
val x = Some(3)
@workflow(option) val y = "asd" + x
fails during annotation expansion with not found: value x. Do you have any ideas on how to overcome this?
It was a deliberate decision to restrict c.typeCheck in macro annotation context to outer scope. I was afraid that seeing yet unexpanded members during typechecking would create confusion. It looks like it's still going to bite us. Needs some serious pondering...
Damn... It started so nicely... But what about
@workflowContext(option) def test = {
val x = Some(3)
$( "asd" + x )
}
?
This one doesn’t work either. Should I change the Transformer owner? https://github.com/stanch/scala-workflow/blob/master/core/src/main/scala/scala/workflow/package.scala#L92
shortcuts for common things such as Option, Try, etc.
This is very neat. I wish I had those in the first place, so this would work:
val frp = context(frpWorkflow)
frp {
...
}
There're a few minor difficulties in the implementation of @enableWorkflow and @workflowContext, afaiu. We gotta traverse the expression and find context/$ applications; how do we reliably recognize that they're not locally-defined functions? Also, what if there are nested $/context applications or nested annotations? How should those be expanded? Should we keep a stack of workflow context definitions to resolve an appropriate workflow for each level of nesting? This is how it works right now:
scala> context(list) { context(option) { $(Some(10) + 2) }}
res0: Option[Int] = Some(12)
Not a major concern, certainly, but a thing to remember.
As I mentioned before, this is based on @compileTimeOnly stubs.
I'm not sure what you mean by that. We don't need implicit conversions for macro-annotated implementation, do we?
A few general thoughts on the topic. Originally, the separation into workflow and context/$ was introduced because of partial support of Scala features (right now it's just function applications and vals, basically). Since it is unlikely that the whole of Scala will be supported any time soon, I think a separation of some sort must be kept.
Speaking of the supported subset of Scala, I would really like to see if-expressions and pattern matching working, so the workflow macro would be more usable. With the @workflow annotation present, that would be even more relevant. Consider this example:
def eval: Expr ⇒ Env ⇒ Option[Int] =
context(function[Env] $ option) {
case Var(x) ⇒ fetch(x)
case Val(value) ⇒ $(value)
case Add(x, y) ⇒ $(eval(x) + eval(y))
}
With supported pattern-matching that could be
def eval: Expr ⇒ Env ⇒ Option[Int] =
workflow(function[Env] $ option) {
case Var(x) ⇒ fetch(x)
case Val(value) ⇒ value
case Add(x, y) ⇒ eval(x) + eval(y)
}
And even better with the annotation
@workflow(function[Env] $ option)
def eval: Expr ⇒ Env ⇒ Option[Int] = {
case Var(x) ⇒ fetch(x)
case Val(value) ⇒ value
case Add(x, y) ⇒ eval(x) + eval(y)
}
I'm not sure how long it will take me to support it, but if we agree on this interface, that would certainly become a highly important ticket.
I support your choice of @workflow / @workflowContext + $ (perhaps call it just @context?), but we should check it against actual use cases, starting, I guess, with the examples from the Readme.
We gotta traverse the expression and find context/$ applications; how do we reliably recognize that they're not locally-defined functions? Also, what if there are nested $/context applications or nested annotations? How should those be expanded?
Take a look at my hack here. Basically, we can check that the immediate parent of $ is the current annotation. As for differentiating $ from local functions, I believe there’s an API to typecheck a reference and get a fully qualified term name; however, you are right in that import workflow.{$ ⇒ dollar} will not work. But I don’t think this problem is solved in either scala-async or effectful. So right now this boils down to the user being nice.
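A sketch of that recognition step (assumptions: the stub’s full name, and the silent = true flag of c.typeCheck, which returns EmptyTree on failure):

def isWorkflowDollar(c: scala.reflect.macros.Context)(fun: c.Tree): Boolean = {
  // typecheck a copy silently and compare the resolved symbol's full name
  val typed = c.typeCheck(fun.duplicate, silent = true)
  !typed.isEmpty && typed.symbol != null &&
    typed.symbol.fullName == "scala.workflow.$" // assumed stub location
}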
I'm not sure what you mean by that. We don't need implicit conversions for macro-annotated implementation, do we?
See here. What @compileTimeOnly does is fail compilation if any $s remain after macro expansion.
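A minimal sketch of such a stub (assuming scala.annotation.compileTimeOnly from 2.11; 2.10 has an internal equivalent):

import scala.annotation.compileTimeOnly

object stubs {
  // typechecks inside annotated methods, but any occurrence that survives
  // macro expansion fails compilation with this message
  @compileTimeOnly("$ can only be used inside a workflow context")
  def $[A](expr: A): A = ???
}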
The example you showed could just as well be written as
@workflowContext(function[Env] $ option)
def eval: Expr ⇒ Env ⇒ Option[Int] = {
case Var(x) ⇒ fetch(x)
case Val(value) ⇒ $(value)
case Add(x, y) ⇒ $(eval(x) + eval(y))
}
It actually illustrates my point that context is likely going to span the entire function, unless the function is really big and messy, but then again it’s the user who is to blame ;)
I agree about studying the use cases. In fact I have another one up my sleeve, something like https://github.com/raimohanska/ui-thread-as-monad, which will soon be incorporated into https://github.com/stanch/macroid. So a @ui annotation will be quite handy! I will have some frp there too.
I suggest that we make a public gist and start adding the examples there (will update this message when I create one).
Sorry, accidentally clicked "Close"...
So here is the gist: https://gist.github.com/stanch/6421641 I think everything looks reasonable :) However, right now the implementation is blocked by the following:

- No type args in macro annotations (https://github.com/scalamacros/paradise/issues/2)
- Inconsistent typechecker behavior (https://issues.scala-lang.org/browse/SI-7461)
- Limited typechecking scope in macro annotations ("It was a deliberate decision to restrict c.typeCheck in macro annotation context to outer scope")
@xeno-by, is there hope? :)
Thanks for sharing! Looks nice. Perhaps we can also support annotations on object declarations? val x = might look confusing in some cases, imo. I changed a few things: https://gist.github.com/aztek/6421797, what do you think?
Oh, sure! I was thinking of allowing annotation of classes (which would influence the code in their bodies), but somehow forgot the objects. Not sure why I had

@context(writer[Log])
val Writer(result, logEntries) = ${
...
}

either.
@aztek These tests pass: https://github.com/stanch/scala-workflow/blob/master/core/src/test/scala/scala/workflow/WorkflowContextSpec.scala#L6 Unfortunately I couldn’t enable more elaborate ones due to the points I had mentioned above.
@xeno-by My apologies, https://issues.scala-lang.org/browse/SI-7461 is of course solved in 2.10.3-RC1!
@stanch Would you like to take on https://github.com/scalamacros/paradise/issues/2? :)
I am sort of trying already, but can’t figure out where the annotation stuff is there :) Upd: using the power of search... it should be something starting from https://github.com/scalamacros/paradise/commit/a57ed89194331e1212645c28630ab37e8626e3fc, right?
@stanch Annotation expansion logic lives mostly in Namers.scala: https://github.com/scalamacros/paradise/blob/3410f7d0312dcfec84cc53405c26e33fc8815637/plugin/src/main/scala/org/scalalang/macroparadise/typechecker/Namers.scala.
I thought I would report my progress on this:
- https://github.com/scalamacros/paradise/issues/2 is fixed in the newest snapshot
- Many original tests pass (with minimal modifications), e.g. https://github.com/stanch/scala-workflow/blob/master/src/test/scala/scala/workflow/ReadmeSpec.scala
- I have updated the readme: https://github.com/stanch/scala-workflow#scala-workflow-
Current issues:

- [ ] Neither @workflow nor $ can depend on adjacent declarations. That is,

val x = Some(3)
@workflow(option) val y = "asd" + x

fails with not found: value x.

- [ ] Destructuring binds, e.g. @opt Some((x, y)) = (4 + Some(8), 5), are not supported. I need to study how they are desugared more closely.
@stanch Actually, if you're enthusiastic about the first issue, you could also take a look. The logic that causes problems resides over here: https://github.com/scalamacros/paradise/blob/3410f7d0312dcfec84cc53405c26e33fc8815637/plugin/src/main/scala/org/scalalang/macroparadise/typechecker/Namers.scala#L506.
@xeno-by thanks for the heads-up! I’ll take a look. Now I also understand why
@context(option) def test = {
val x = Some(3)
$( "asd" + x )
}
doesn’t work. I’m calling c.typeCheck on $(...) alone, where x is of course not known. The current idea is to replace all $s with @workflow-annotated synthetic vals and then rely on the above issue being fixed.
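Conceptually, the planned transformation would go something like this (a sketch; the synthetic name is hypothetical):

// before:
@context(option) def test = {
  val x = Some(3)
  $( "asd" + x )
}
// after @context expands: each $-block becomes an annotated synthetic val,
// so x is already in scope when @workflow's typecheck runs
def test = {
  val x = Some(3)
  @workflow(option) val synth$1 = "asd" + x
  synth$1
}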
What's the current status of this?
@milessabin It seems that the remaining issue is whether we should always return newTyper(context) here, regardless of the scope. With this change, the new scala-workflow interface works as expected; however, 1) there’s some weird compiler crash when annotations are used in a specific way inside FlatSpec; 2) all tests in paradise pass, except obviously the one that checks typer scoping rules. So the task is essentially to verify that the FlatSpec crash is not due to the change I made in paradise, and also to try to fix it anyway. Besides this, we have to persuade @xeno-by to allow that change, but I don’t think he is that much against it :) The problem is that right now I don’t have enough time to pursue this :(
@all Also see http://stackoverflow.com/questions/19379436/cant-access-parents-members-while-dealing-with-macro-annotations
Today I tried to implement a scope-respecting typecheck for macro annotations and ran into a number of troubles which require careful thinking. It would be interesting to hear your opinion.
1a) First of all, why did I originally decide to allow macro annotations to see only the enclosing lexical scope, not the current scope? Well, there's a simple reason. If we allow access to the current scope, we'll potentially be letting programmers see yet unexpanded trees, e.g. consider this snippet:
{
@foo class X
@bar class Y
}
1b) Imagine that both annotations expand a class into a class and a companion, and in the implementation of @foo we decide to take advantage of that and do c.typeCheck(Ident(TermName("Y"))). Unfortunately, by the time @foo expands, @bar hasn't yet been expanded, which means that the typecheck will fail. If we swap the order of declarations, everything will work fine. Awkward.
1c) There's something that's even worse. Imagine that @bar expands the underlying class into just a companion, destroying the original class. Then, if @foo does c.typeCheck(q"???: Y").tpe and uses its results to generate code, that might lead to all sorts of strange compilation errors, compiler crashes or bytecode corruptions (depending on your luck).
2a) Alright, so we have 1b, which is weird, but we can live with it, trading the awkwardness of c.typeCheck not seeing anything for the awkwardness of c.typeCheck not seeing stuff declared below; and we have 1c, which is outright wrong, and we shouldn't ever let it happen. What can we do here?
Well, we can embrace the top-down nature of processing definitions and say that macro annotations can see their lexical scope, but only up to the expansion point. Everything below won't be visible to c.typeCheck. So in our example above, @foo would only see class X, but not class Y (and definitely not object Y). That sounds like something we can work with, and that's what I tried to implement today. Unfortunately, it ended up being very tricky.
2b) Imagine for a second that our original example is not a block, but rather is a template, e.g. something like:
object M {
@foo class X
@bar class Y
}
Then our calls to c.typeCheck from within @foo will immediately hit an implementation restriction of scalac - cyclic reference errors.
The problem is that macro annotations on M's members are expanded when scalac figures out the signature of M, during the so-called lazy completion of M's type. An interesting thing about completion is that it locks the completee while it runs. If someone touches the completee (e.g. requests its signature) during an ongoing completion, a cyclic reference error is thrown. Unfortunately, touching M is exactly what c.typeCheck has to do, because in order to typecheck something in the lexical context of a class, we need to know the signature of that class (which is the list of members plus the list of base classes).
2c) If we apply enough care, we can work around this restriction (I will omit the details in order not to bore an already tired reader), allowing macro annotations expanding within templates to see partially constructed templates without tripping cyclic errors, but then we have another problem.
When allowing blocks to be observed in partially expanded state, we were more or less fine, because the only ones who could make such observations were macro annotations expanding within that block. However, with templates we open up a much bigger loophole.
If @foo expanding in the second example goes out to explore the outer world, it might look into other classes and modules and request a type signature of a not yet completed method. Doing that might trigger typechecking of that method (in case when the return type of the method is not specified explicitly), which might launch other macros, which might want to ask M's type signature.
Previously, such macros would trip a cyclic reference error, and rightfully so, because they would be trying to access partially constructed type of M. What will they see now? Right, they will see partially constructed M. Not good at all.
2d) We've been hit hard by cruel fate, but that's not all. After some time, I accidentally realized that the tradeoff proposed in 2a is uglier than initially advertised. Consider the following snippet:
{
@foo class X
...
<a lot of definitions>
...
object X
}
When one of the companions is affected by a macro annotation, both are expanded simultaneously. But then where does our expansion point lie, delimiting what @foo can and cannot see?
Is it immediately after class X? No, that doesn't make much sense, because then we might not even be able to typecheck object X, as it might refer to the definitions in-between.
Is it immediately after object X? Okay, that's reasonable, but then what do we do if one of the in-between definitions is itself annotated? If we disallow that expansion, it might jeopardize object X. If we allow it, then it will get to see the unexpanded class X, which, as we figured out in 1c, is highly undesirable. Stalemate.
Is that the end? Not really. Even if the situation is grim, it's all about tradeoffs.
For example, for more than a year I didn't know how to implement macro annotations at all, but a couple months ago I figured out just the right amount of restrictions to put in place in order to make things work.
So now we shouldn't give up and should just try our best to find a better/smarter set of tradeoffs :)
Thanks for sharing! Regarding 2d, how crucial is the logic of simultaneous expansion of companions? If it followed the same rule of the moving expansion point, in this example you'd just be forced to move object X before @foo class X, which doesn't seem so bad.
What about
object M {
class B
}
{
@foo class X
import M._
object X extends B
}
What about it? In that case you'd need to move both the object and the import before the annotated class (provided that annotations can see their lexical scope).
Good point, but there are two problems with that:
- Relocation of imports might change the meaning of names.
- Shuffling definitions might confuse the forward reference checker.
For more details about 2, see The Scala Language Specification page 45:
The scope of a name introduced by a declaration or definition is the whole statement sequence containing the binding. However, there is a restriction on forward references in blocks...
I'm not sure why this is a problem. We merely impose restrictions on how macro annotations can be used, and then it's up to the programmer to arrange imports/definitions properly and resolve possible problems. Neither the macro nor the compiler should do that, of course.
Ah, by "you" you meant the programmer, not the compiler, right? If yes, then what should the compiler say about the aforementioned snippet? Give a compilation error?
Sorry for the confusion, by "you" I meant the programmer. My initial question was whether it is possible not to expand both companions simultaneously when one of them is macro-annotated, but rather stick to the moving expansion point. For this snippet a compilation error might be too harsh, because the @foo macro might not affect object X at all, and the whole code is safe. But then again, we can't determine that in the general case, so compilation error it is. The error message could say that the unannotated companion should precede the annotated one. This also implies that companions can't both be macro-annotated at the same time, but that is probably an extremely rare use case anyway.
Sorry, I’m late to the party, but here’s a random thought: we could make a distinction between “whitebox” and “blackbox” macro annotations. “Blackbox” ones should promise that they do not mess with the declaration names and do not add or remove declarations. Now everything would be visible to the typechecker; however, referencing whitebox-annotated stuff would produce an error. The tricky thing is of course how to restrict blackbox behavior without having to expand annotations. To the problems:
1b seems fine, that’s a price to pay for forward-referencing; 1c is gone; 2d is now not that bad, since no other annotation will be able to reference class X or object X.
2c is tough though :)
Also I think forward-referencing inside blocks/templates is bad and those who do it should feel bad!
Well, open recursion is what you get when you sign up for OOP :)
Can blackbox annotations change signatures of the annottees without changing their names? Could you also give an example of a code snippet that "references whitebox-annotated stuff" and an error that it produces?
1) Yes, they can. In the case of ClassDefs and ModuleDefs, this would mean changing the type signatures of some of the members. The original type of the annottee, however, should not be available without performing the expansion. Thus, typechecking a blackbox-annotated definition could trigger a chain of expansions of fellow annottees, but they would all be of the blackbox kind (see 2), so what can go wrong :)
2) Consider

@black val x = 8 + "Y.z"
@white object Y { val z: Int = 4 }

where @black tries to eval "Y.z" and consequently typechecks Y.z. At this point, the Context should produce a compile-time error saying Illegal reference to a whitebox-annotated member “Y” in current scope.
What about a compiler plugin then? :) (I am no expert on these)
What about a compiler plugin?
@xeno-by I meant doing scala-workflow as one.
Without dirty hacks, compiler plugins can't change how the typer works (and with dirty hacks it's going to be roughly what an annotation-based solution is, I think). Analyzer plugins can do that, but they can only postprocess typechecking results, not change typechecking completely, as we need to do here.
Hey everyone,
I just wanted to ask whether there has been some progress regarding the scoping of the typeCheck method. I ran into the following use case:
I want to analyse the type of a wrapped class (using the pimp-my-library pattern) in order to generate some methods.
implicit class FooWrapper(@myMacro self: Foo) {}
Currently I use the c.typeCheck(q"(null.asInstanceOf[$tpt])").tpe trick to get my hands on the Type of the wrapped class, but this works neither when Foo is a type alias nor when it is type-parameterized.
@b-studios So far there hasn't been much progress unfortunately - this is quite a tricky issue. Could you elaborate on your use case (a standalone github project would be ideal), so that we could possibly come up with a workaround?
@xeno-by The use case in short: I use the Type of the annotated parameter (Foo in the above example) to search for Java-idiomatic getter and setter methods (getBar, setBar) in order to generate Scala-idiomatic ones (bar, bar_=).
I also published the code on GitHub. In this test file, lines 81ff. are the examples which currently do not work. The type of the wrapped class is resolved using Utils.toType.
Thanks a lot
@b-studios Thanks for taking time to publish the reproduction! I'll try to take a look today/tomorrow.
@xeno-by In order to save you some time I extracted the problem into a new branch. This way it is only two short files to take a look at: macrodefinition and usage
@b-studios The first problem is indeed https://github.com/scalamacros/paradise/issues/14, and you'll have to reformulate the affected code at the moment. How much of a blocker is that to you?
The second problem, I think, could be successfully handled within the current framework. How about crafting a more elaborate tree for c.typeCheck, something like { class Dummy[T] { type Dummy123 = $tpt }; () }?
Also Happy New Year for everyone checking out this thread right now :)
@xeno-by The type alias issue is not so much of a problem right now, since I currently have influence on the client code.
Regarding the type parameter problem: great idea to bind the missing parameter in the crafted type tree. I will try this approach, but I fear it is only applicable if the binding instance of T is known.
@b-studios Could you elaborate on potential problems with the proposed approach for handling type parameters?
@xeno-by In the example the macro is applied in the context of the class definition where the type parameters are bound. This way one could query the type parameters and manually craft a tree, binding those type parameters in order to typeCheck the annotated type constructor (as you described above).
If the type arguments of the type constructor are bound outside of the annotated class definition, it might be difficult to find the binding occurrence (one would have to reimplement the lookup of types). Please correct me; I just started using Scala macros and I don't know how such a lookup could be implemented. I'm happy if I am just wrong with this assumption.
Since I perform the typecheck to retrieve the Type of a tree, the above solution does not work anyway, because it always returns Unit. So I came up with a dirty solution to be evaluated in the next few days:
If the type tree is an AppliedTypeTree, typecheck null.isInstanceOf[$tpt[..$wildcards]] with the appropriate number of wildcards. This way I prevent access to the type variables and avoid the "not found: type T" typecheck error.
@b-studios Oh wow, this is something that I didn't expect:
class X[T <: String] {
class test[T <: Int] {
implicit class XWrapper(@accessors self: P[T]) {}
}
}
It prints the members of String, not Int. Yes, that's a problem - I'll try to fix it.
@b-studios Yes, the type of the returned tree will be Unit, but if you inspect the types of its inner trees, then you'll be able to get to the type of Dummy123.
@b-studios I have fixed the problem with typecheck not seeing type parameters declared in enclosing classes. In the morning (in ~8-10 hours), if all the builds are green, I'll publish 2.0.0-M2 with the fix.
@b-studios 2.0.0-M2 is now published for 2.10.3 and 2.11.0-M7. Please let me know whether it solves the problem with type parameters.
@xeno-by Sorry for the late message. Thanks a lot! Your fix actually works perfectly for member annotations like
trait Foo[T] {
@annotation def foo: T
}
I guess there is a reason why constructor argument annotations do not work?
class Foo[T](@annotation foo: Bar[T])
// scala.reflect.macros.TypecheckException: not found: type T
but if you inspect the types of its inner trees, then you'll be able to get to the type of Dummy123
I tried this, but extracting Dummy123 from the results of typeCheck and asking for the type still yielded null. I have to investigate a little more, but sadly won't find the time in the next few days.
1) Annotations on type and value parameters of classes and methods expand the enclosing class/method, not just the member itself. Therefore T doesn't exist yet when the ctor param annotation expands. However, you could grab it from the class being expanded and wrap the dummy as before.
2) Here's what I had in mind wrt Dummy123: https://gist.github.com/xeno-by/8255893
@xeno-by Ad 2) I was using dummy123.tpe on the typechecked tree, which always yielded <notype>. Switching to dummy123.symbol.typeSignature (as you did in your gist) works. Thanks a lot.
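Putting the thread's conclusion together, the whole recipe could be sketched like this (2.10-era macro API inside an impl with c.universe._ imported; tpt is the annotated type tree as above, Dummy/Dummy123 are arbitrary names):

// bind the enclosing type parameter with a dummy class, typecheck the block,
// then read the alias through its symbol's type signature (not .tpe, which
// yields <notype>)
val typed = c.typeCheck(q"{ class Dummy[T] { type Dummy123 = $tpt }; () }")
val dummy123Type = typed.collect {
  case td: TypeDef if td.name.toString == "Dummy123" => td.symbol.typeSignature
}.head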