Invalidate macro client when type parameter changes

Open Friendseeker opened this issue 1 year ago • 6 comments

Fixes #1171

Issue

When a macro takes a type parameter T, e.g. def hasAnyField[T]: Boolean = macro hasAnyFieldImpl[T], changes in T require the macro client to be invalidated if the macro implementation inspects T (e.g. fetches the members of T).
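For illustration, a Scala 2 macro of this shape might look like the following (a hedged sketch, not necessarily the code from the linked issue). Because the implementation calls weakTypeOf[T].decls, the expansion depends on T's members, even though nothing at the call site mentions them:

```scala
import scala.language.experimental.macros
import scala.reflect.macros.blackbox

object Macros {
  def hasAnyField[T]: Boolean = macro hasAnyFieldImpl[T]

  // Sketch: the expansion inspects the members of T, so editing T's body
  // changes the generated code even though the call site text is unchanged.
  def hasAnyFieldImpl[T: c.WeakTypeTag](c: blackbox.Context): c.Expr[Boolean] = {
    import c.universe._
    val hasField = weakTypeOf[T].decls.exists(d => d.isTerm && !d.isMethod)
    c.Expr[Boolean](Literal(Constant(hasField)))
  }
}
```

Regular dependency tracking records only that the client calls hasAnyField, so adding or removing a field of T would not invalidate the client.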

Fix

When the implementation of T is invalidated, invalidate all macro clients that use T as a type parameter, using the invalidation logic from https://github.com/sbt/sbt/pull/1778.
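Conceptually, the fix maintains a reverse index from macro type arguments to the clients that expanded a macro with them. The following is a toy sketch of that idea (MacroTypeArgDeps is a made-up name; the PR's actual implementation lives in Zinc's dependency machinery):

```scala
// Toy sketch of the inverse invalidation: record, for each type used as a
// macro type argument, which clients expanded a macro with it; when that
// type's implementation changes, look up exactly those clients.
final case class MacroTypeArgDeps(byTypeArg: Map[String, Set[String]]) {
  def record(client: String, typeArg: String): MacroTypeArgDeps =
    copy(byTypeArg.updated(typeArg, byTypeArg.getOrElse(typeArg, Set.empty) + client))

  // inverse invalidation: changed type -> macro clients to recompile
  def invalidate(changedType: String): Set[String] =
    byTypeArg.getOrElse(changedType, Set.empty)
}
```

The point of the dedicated dependency kind is exactly this reverse direction: a change to the type triggers the clients, not the other way around.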

Friendseeker avatar Dec 27 '23 04:12 Friendseeker

/cc @bishabosha who knows the most about the current state of the bridge in scala 3.

smarter avatar Jan 03 '24 15:01 smarter

I will test the added test case in Scala 3 and see what is needed on that side.

bishabosha avatar Jan 11 '24 16:01 bishabosha

@bishabosha ping

SethTisue avatar Feb 20 '24 22:02 SethTisue

I added the test cases in https://github.com/dotty-staging/dotty/commit/304c41a714a50016b443c87640081a0eb479682f, which fail, so we would need to reimplement the logic in Scala 3. Is there a special reason to introduce the new DependencyContext enum case? Is it just for nicer debugging, or can the inverse invalidation computation not be replicated without adding the case?

bishabosha avatar Feb 28 '24 14:02 bishabosha

Posting my 2 cents at @bishabosha's request: in general, you cannot solve this problem using the current approach. A macro can look at anything. Even if now you're solving the case where it looks at the members of a type given as a parameter, you're probably not solving the case where it looks at the members of the type of a local variable in a block that is passed to it as argument (to name a random example).

There are only 2 reasonable choices when it comes to macro expansion:

  • Assume macros behave as regular methods, and therefore record dependencies exactly like regular methods. If macros look at other things, there will be undercompilation.
  • Actually track exactly what any particular macro expansion looks at.

Anything else in-between places an arbitrary boundary on what you can and cannot track, and that boundary will be broken next month by the next macro that comes in.

In order to actually track exactly what macro expansions look at, you would need something very similar to what we use in the Scala.js optimizer and emitter: a so-called "knowledge guardian" that gates every call to the Quotes API and records all the calls made by a macro as its dependencies. If you want to know more about that, you can find all the details in my only research paper ever: https://lampwww.epfl.ch/~doeraene/publications/oopsla16-incremental-optimizer.pdf
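As a rough illustration of the gating idea (a hypothetical sketch, not the Scala.js implementation; KnowledgeGuardian and its methods are made up): every query a macro makes about a symbol goes through a gate that records who asked, so a later change to that symbol invalidates exactly the recorded askers.

```scala
import scala.collection.mutable

// Sketch: a gate in front of a reflection/Quotes-like API that records
// every query as a dependency of the asking client.
final class KnowledgeGuardian {
  // queried symbol name -> clients that asked about it
  private val deps = mutable.Map.empty[String, mutable.Set[String]]

  def membersOf(sym: String, askedBy: String): List[String] = {
    deps.getOrElseUpdate(sym, mutable.Set.empty[String]) += askedBy
    // ...would delegate to the real reflection API here
    Nil
  }

  // when `sym` changes, exactly the recorded askers are invalidated
  def invalidatedBy(sym: String): Set[String] =
    deps.getOrElse(sym, mutable.Set.empty[String]).toSet
}
```

The key property is that dependencies are recorded per actual query, not inferred from the macro's signature, so there is no arbitrary boundary on what can be tracked.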

sjrd avatar Feb 28 '24 16:02 sjrd

I added the test cases in dotty-staging/dotty@304c41a, which fail, so we would need to reimplement the logic in Scala 3. Is there a special reason to introduce the new DependencyContext enum case? Is it just for nicer debugging, or can the inverse invalidation computation not be replicated without adding the case?

The inverse invalidation indeed can't be replicated. It is similar to the inheritance case, where a special type of dependency is needed to encode the information necessary for invalidation.

Posting my 2 cents at @bishabosha's request: in general, you cannot solve this problem using the current approach

Indeed, typesTouchedDuringMacroExpansion is currently only an approximation. However, this is still a step beyond the status quo. I still need to read the paper in detail, but I believe future work on bringing a knowledge guardian to Dotty (if it isn't done yet) can directly reuse the PR's logic by replacing typesTouchedDuringMacroExpansion with the exact types gathered from Quotes API calls.

Friendseeker avatar Feb 28 '24 20:02 Friendseeker

@sjrd wrote:

There are only 2 reasonable choices when it comes to macro expansion:

  • Assume macros behave as regular methods, and therefore record dependencies exactly like regular methods. If macros look at other things, there will be undercompilation.
  • Actually track exactly what any particular macro expansion looks at.

Anything else in-between places an arbitrary boundary on what you can and cannot track, and that boundary will be broken next month by the next macro that comes in.

I think I agree with the overall points, but there's some subtlety due to the nature of Zinc. To prevent invalidation from spreading like wildfire, Zinc uses name hashing as a heuristic optimization to limit invalidation to the client code that uses the name of the changed API (https://www.slideshare.net/EugeneYokota/analysis-of-zinc-scalasphere-2019#24).
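To make the name-hashing heuristic concrete, here is a toy sketch (not Zinc's actual data structures; Api, changedNames, and invalidated are made up for illustration): each API member's signature is hashed under its name, and a client is recompiled only if a name it uses has a changed hash.

```scala
// Toy name-hashing model: an API is a map from member name to a hash of
// that member's signature.
final case class Api(nameHashes: Map[String, Int])

// Names whose hash differs (or that appeared/disappeared) between versions.
def changedNames(before: Api, after: Api): Set[String] =
  (before.nameHashes.keySet ++ after.nameHashes.keySet)
    .filter(n => before.nameHashes.get(n) != after.nameHashes.get(n))

// A client (mapped to the set of names it uses) is invalidated only if it
// uses at least one changed name.
def invalidated(clients: Map[String, Set[String]], changed: Set[String]): Set[String] =
  clients.collect { case (client, used) if (used & changed).nonEmpty => client }.toSet
```

A macro client breaks this model because the names it effectively depends on (e.g. the members of T) never appear at the call site, which is why the PR bypasses the filter for macro applications.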

I think adopting Strategy 1 and assuming that macros behave like regular methods, or like inlined methods in Scala 3, is fine. But what we should do is basically treat the lexical elements going into the macro application as input parameters of a tree generator. For the mainargs macro, the call site looks like:

  private[this] lazy val parser: ParserForClass[Config] = mainargs.ParserForClass[Config]

so the input in this case is just the type parameter [T] of def apply[T]: ParserForClass[T]. So I feel like the solution presented in this PR is still somewhat within the realm of Strategy 1, just undoing the name-hashing optimization for macro applications, similar to the special-casing of inheritance.

eed3si9n avatar Apr 08 '24 02:04 eed3si9n