Create override-extension-functions.md
A new proposal sprang from this one. This is a first sketch, which I hope to fill in through fruitful discussions.
Currently it's all about syntax, which is definitely not enough.
Should "dynamic dispatch" work for receivers of reified
type parameter type only?
How exactly should "dynamic dispatch" work? (you can skip implementation details here, for now just describe the intended behavior)
Can you provide a bit more detailed example, e.g., with a `copy` function based on `copyTo` or something?
Then you can use this example as a base to discuss the intended behavior in detail.
Should "dynamic dispatch" work for receivers of
reified
type parameter type only?
No, for all explicit extension receivers. I thought the example code already illustrated that, doesn't it?
How exactly should "dynamic dispatch" work?
I will elaborate on this topic. Are more code examples (using the `copy` example you mentioned), with comments on which method should be chosen and executed in each specific case, sufficient?
Can you provide a bit more detailed example [...]
Sure.
Can you describe in detail how the dispatch should be performed?
- Can a function (property) with a `dispatch` receiver parameter be invoked outside of an `inline` function with `reified` type parameters? What behavior is expected?
- Assume you have the following in module M1:
interface IA
fun (dispatch IA).foo() { ... }
At run-time, you also have the following definitions in module M2 (depending on M1):
interface IB : IA
fun (dispatch IB).foo() { ... }
Now, some code in M1 receives a value with run-time type T <: IB.
What implementation of `foo` should be used?
Can you describe in detail how the dispatch should be performed?
Yes. I was on vacation; I need to find some spare time. I will include this (and the last question/answer on #35) in the proposal.
A quick answer to your questions:
Can a function (property) with a dispatch receiver parameter be invoked outside of an inline function with reified type parameters? What behavior is expected?
Yes. It should just behave as if the extension property/method had been defined as a property/method on the dispatch receiver.
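For illustration, a minimal sketch of the intended behavior (reusing the `IA`/`IB`/`foo` names from the question, with today's member functions standing in for the proposed `dispatch` syntax, which is not valid Kotlin):

```kotlin
// Proposed (not valid Kotlin today):
//   fun (dispatch IA).foo() { ... }
//   fun (dispatch IB).foo() { ... }
// It should behave as if foo were a virtual member of IA/IB:

interface IA { fun foo() = println("IA.foo") }
interface IB : IA { override fun foo() = println("IB.foo") }

fun callFoo(x: IA) {   // an ordinary, non-inline function
    x.foo()            // dispatched on the run-time type of x
}

fun main() {
    callFoo(object : IB {})   // prints "IB.foo"
}
```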
Assume you have the following [...] What implementation of `foo` should be used?
In your example `fun (dispatch IA).foo()` should be executed, but in this slightly adapted snippet `fun (dispatch IB).foo()` should be executed:
// module M1
package m1
import m2.foo
interface IA
fun (dispatch IA).foo() { ... }
// some function body in M1
val someBasA: IA = getSomeIB()
someBasA.foo() // executes `fun (dispatch IB).foo()`, since the method has been imported.
// end function body
// module M2
package m2
interface IB : IA
fun (dispatch IB).foo() { ... }
So, it sounds more like an overloaded function resolved at call-site during inlining.
I'd rather treat those as regular overloaded functions (no `dispatch` keyword needed, really), but mark those reified overloaded calls somehow. This would, by the way, allow such reified dispatch on functions that were not designed with this feature in mind.
No, I don't think so. As I wrote in my answer, it should work for normal non-inline functions, too. I don't think this proposal should have anything to do with inline functions (although I use them in my example). But of course it is about overloading, since I cannot override outside the class (extension functions can only be overloaded, and only if they are not bound as a method to an implicit receiver), but overloading on the extension receiver (which is the first parameter of an extension function in languages like Xtend) is more or less "the same" as overriding member functions.
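For illustration (my own sketch, not part of the proposal): on the JVM, a top-level extension function compiles to a static function whose first parameter is the receiver, so overloading on the extension receiver is overloading on the first parameter, and is therefore resolved statically, unlike member overrides.

```kotlin
open class IA
open class IB : IA()

fun IA.describe() = "IA"   // roughly: static describe(receiver: IA): String
fun IB.describe() = "IB"   // roughly: static describe(receiver: IB): String

fun main() {
    val x: IA = IB()
    println(x.describe())  // prints "IA": chosen by the static type of x
}
```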
Resolving overloaded functions at call-site is the path of #35.
OK, I think I now see why you regard this as a special form of overloading. Consider the following snippet in Kotlin 1.0.3 (sorry, it is long):
open class IA {
fun foo(newObj: IA): IA {
println("IA.foo")
return newObj
}
fun foo(newObj: IB): IB {
println("IB.foo")
return newObj
}
fun foo(newObj: IC): IC {
println("IC.foo")
return newObj
}
fun bar(newObj: IA): IA {
println("IA.bar")
return newObj
}
}
open class IB: IA() {
fun bar(newObj: IB): IB {
println("IB.bar")
return newObj
}
}
open class IC: IB() {
fun bar(newObj: IC): IC {
println("IC.bar")
return newObj
}
}
fun IA.baz(newObj: IA): IA {
println("IA.baz")
return newObj
}
fun IA.baz(newObj: IB): IB {
println("IB.baz")
return newObj
}
fun IA.baz(newObj: IC): IC {
println("IC.baz")
return newObj
}
fun IA.bazz(newObj: IA): IA {
println("IA.bazz")
return newObj
}
fun IB.bazz(newObj: IB): IB {
println("IB.bazz")
return newObj
}
fun IC.bazz(newObj: IC): IC {
println("IC.bazz")
return newObj
}
fun main(args: Array<String>) {
val a: IA = IA()
val b: IA = IB()
val c: IA = IC()
a.foo(a)
b.foo(b)
c.foo(c)
println()
a.foo(IA())
b.foo(IB())
c.foo(IC())
println()
println()
a.bar(a)
b.bar(b)
c.bar(c)
println()
a.bar(IA())
b.bar(IB())
c.bar(IC())
println()
println()
a.baz(a)
b.baz(b)
c.baz(c)
println()
a.baz(IA())
b.baz(IB())
c.baz(IC())
println()
println()
a.bazz(a)
b.bazz(b)
c.bazz(c)
println()
a.bazz(IA())
b.bazz(IB())
c.bazz(IC())
}
It outputs:
IA.foo
IA.foo
IA.foo
IA.foo
IB.foo
IC.foo
IA.bar
IA.bar
IA.bar
IA.bar
IA.bar
IA.bar
IA.baz
IA.baz
IA.baz
IA.baz
IB.baz
IC.baz
IA.bazz
IA.bazz
IA.bazz
IA.bazz
IA.bazz
IA.bazz
So overloading works on the extension receiver as expected, given the somewhat odd overload semantics of Java (which Kotlin adopted): overloading within the same class (see `IA.foo`) evaluates the static type of the given argument, but in a class hierarchy (see `IA.bar`) only the methods visible on `IA` are considered, and `IB.bar` and `IC.bar` are ignored. From a compiler standpoint this is reasonable, but from the OO user's point of view it is, IMHO, not. The same holds for the extension functions `IA.baz` and `IA.bazz`, where the first defines its overloads on the parameter types (with a fixed receiver `IA`) and the second on the receiver types `IA`, `IB`, and `IC`, respectively. So, in order to be consistent with this overloading behavior, I have to change my example in the proposal, since I currently use the case of a method overload in a class hierarchy (see above). Since I would like the behavior of dispatch extension receivers and the virtual receivers of member functions (i.e. methods) to be as similar as possible, I will adapt this and elaborate on it.
But if I remove the overloaded parameter, so that the member functions actually override each other, the static dispatch of extension functions again leads to different behavior (than it would with a `dispatch` keyword):
open class IA {
// ...
open fun bar2(): IA {
println("IA.bar2")
return this
}
}
open class IB: IA() {
// ...
override fun bar2(): IB {
println("IB.bar2")
return this
}
}
open class IC: IB() {
// ...
override fun bar2(): IC {
println("IC.bar2")
return this
}
}
fun IA.bazzz(): IA {
println("IA.bazzz")
return this
}
fun IB.bazzz(): IB {
println("IB.bazzz")
return this
}
fun IC.bazzz(): IC {
println("IC.bazzz")
return this
}
fun main(args: Array<String>) {
// a, b, and c as declared in the previous snippet
a.bar2()
b.bar2()
c.bar2()
println()
println()
a.bazzz()
b.bazzz()
c.bazzz()
}
It results in:
IA.bar2
IB.bar2
IC.bar2
IA.bazzz
IA.bazzz
IA.bazzz
@dnpetrov
I tried out several examples and I agree that adding a `dispatch` keyword for this is not necessary as a first step. I updated/refined the proposal accordingly (and renamed it to `override-extension-functions`).
It now just proposes to allow overriding extension functions analogously to overriding member functions. This feature is completely additive, as the current behavior stays the same, although I think it is inconsistent with member functions: overloading an extension function with a function of the same signature is allowed (and statically dispatched), but the same is not allowed for member functions. It would be more consistent to allow this on member functions, too:
open class A {
fun foo() {
println("A")
}
}
class B: A() {
// should be allowed and overload A.foo since A.foo is not `open`.
// If A.foo were `open`, this would be denied and the
// keyword `override` would be required (and then it behaves like a normally
// overridden method).
fun foo() {
println("B")
}
}
val b: A = B()
b.foo() // prints "A"
val b2 = B()
b2.foo() // prints "B"
Should I add this (above) to the proposal?
I address your hint regarding "special handling of overloading" in section "Outlook".
@dnpetrov: I added a description regarding the scope (as you asked) and a section on a realization in a translational fashion, presenting a possible implementation (what the compiler would output) in pseudo-Kotlin.
A description of how `super` calls are realized in extension functions is still missing; I will add it later on.
OK, I added how `super` calls could be realized, what the Kotlin source code for the given "compiler output" in the realization chapter would look like, and a slightly crazy idea of how this could be used with type parameters as extension receivers.
I added this:
This feature is completely additive, as the current behavior stays the same, although I think it is inconsistent with member functions: overloading an extension function with a function of the same signature is allowed (and statically dispatched), but the same is not allowed for member functions. It would be more consistent to allow this on member functions, too.
...to section "Further Discussion".
"Module" in Kotlin is a compilation unit as defined by the build system. It is not "package" (which corresponds to package in JVM).
It's not clear to me how "overriding extension" members (functions and properties) should be located. Should they all be package-level members? Should they all belong to the same package as an overridden "open extension" member? Taking into account the whole module scope when resolving extension overrides doesn't look feasible. My previous question regarding local functions actually belongs here, too.
In the multiple modules example (M1, M2, and M3), M2 and M3 should see different definitions of 'foo' at run-time. How would this be achieved?
"Module" in Kotlin is a compilation unit as defined by the build system. It is not "package" (which corresponds to package in JVM).
OK I will change that.
It's not clear to me how "overriding extension" members (functions and properties) should be located. Should they all be package-level members?
There are only two possibilities, "package-level" and "class-level", right? Since "class-level" has an implicit receiver, I think it would get overly complex to allow "overriding extensions" there (if it is necessary, we can add it later on, since it would be additive).
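For illustration, a small sketch (hypothetical names) of a class-level ("member") extension function: it already has two receivers, the dispatch receiver (the containing class) and the extension receiver, which is why allowing overriding extensions there would add yet another dispatch dimension.

```kotlin
class Renderer(val prefix: String) {
    fun String.render() {
        // `this` is the String (extension receiver),
        // `this@Renderer` is the Renderer (dispatch receiver)
        println("${this@Renderer.prefix}: $this")
    }

    fun run(s: String) = s.render()   // callable only with a Renderer in scope
}

fun main() {
    Renderer("html").run("hello")     // prints "html: hello"
}
```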
Taking into account the whole module scope when resolving extension overrides doesn't look feasible.
If I read the Kotlin documentation correctly, the scope of extension methods is currently either "class-level" or "package-level", and there is no "module-level" scope for them. So this should certainly not be the case for "overriding extensions" either. I.e., you have to import an extension outside of its package (since "class-level" is not allowed, see above). But I think I described this in detail in the current version of the proposal, didn't I? What do you miss in this regard?
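A minimal sketch (hypothetical packages `p1`/`p2`) of the current scoping rule referred to here: a top-level extension function is only visible outside its package if it is imported explicitly.

```kotlin
// file: p1/Extensions.kt
package p1

class Box(val value: Int)

fun Box.describe() = "Box($value)"

// file: p2/Main.kt
package p2

import p1.Box
import p1.describe   // without this import, Box.describe is not in scope here

fun main() {
    println(Box(1).describe())   // prints "Box(1)"
}
```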
My previous question regarding local functions actually belongs here, too.
I will add a section for this.
In the multiple modules example (M1, M2, and M3), M2 and M3 should see different definitions of 'foo' at run-time. How would this be achieved?
The `foo` methods are imported "as is" in module M2 or M3 if no new cases for the dispatch are added (in this example, the override for `D` is already in M2):
// M3 (without override)
import m1.*
import m2.*
open class E: C()
val l = arrayOf(A(), B(), C(), D(), E())
// prints "A\nAB\nABC\nAD\nABC"
l.forEach { it.foo(); println() }
This leads to the following pseudo-Kotlin implementation:
// M3 (without override)
import m1.*
import m2.*
open class E: C()
// no need to implement dispatch method `A.foo` because it can be imported "as is".
val l = arrayOf(A(), B(), C(), D(), E())
// prints "A\nAB\nABC\nAD\nABC"
l.forEach { it.foo(); println() }
If, however, a new override is added, the static extension method `foo` (used for dispatching) is not imported but generated from scratch for the new module (e.g. M3). E.g., if we add a new class E in M3 with an override `E.foo`, this would look like this in Kotlin:
// M3 (with override)
import m1.*
import m2.*
open class E: C()
override fun E.foo() {
super.foo()
print("E")
}
val l = arrayOf(A(), B(), C(), D(), E())
// prints "A\nAB\nABC\nAD\nABCE"
l.forEach { it.foo(); println() }
Pseudo-Kotlin:
// M3 (with override)
import m1.*
import m2._foo
// **not** import m2.foo !!!
open class E: C()
// this is generated completely new since `A.foo` is **not** imported
fun A.foo() {
when(this) {
is E -> _foo(e = this, superFunction = {e: E -> _foo(c = e, superFunction = { c: C -> _foo(b = c, superFunction = { b: B -> _foo(a = b) }) })})
is D -> _foo(d = this, superFunction = { d: D -> _foo(a = d) })
is C -> _foo(c = this, superFunction = { c: C -> _foo(b = c, superFunction = { b: B -> _foo(a = b) }) })
is B -> _foo(b = this, superFunction = { b: B -> _foo(a = b) })
is A -> _foo(a = this)
}
}
fun _foo(e: E, superFunction: (E) -> Unit) {
superFunction(e)
print("E")
}
val l = arrayOf(A(), B(), C(), D(), E())
// prints "A\nAB\nABC\nAD\nABCE"
l.forEach { it.foo(); println() }
So, there is no need for a different definition of `foo` at run-time, since it is already there (generated) at compile time. If you have two different compilation units with different implementations of `foo` and import both in a third one, you do not import either of them but generate a new one, too.
I see one issue; do you mean this? If I change M1 and M2, I have to recompile M3 in order to update my local `A.foo` implementation. But this is an implementation problem of my realization. If you do not like this effect, a meta-structure could be introduced which contains the V-table at run-time (holding the information of the dispatch method's `when` expression, depending on the current scope). Then `A.foo` would not contain the `when` expression but would look up the run-time information in the V-table and call the `_foo` method the V-table returns.
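As a rough standalone sketch (my own pseudo-implementation with simplified classes `A` and `B`, not the proposal's exact output), such a V-table-based dispatch could look like this:

```kotlin
open class A
open class B : A()

object FooVTable {
    // later registrations take precedence here; real generated code would
    // order the entries by subtype specificity instead
    private val impls = ArrayDeque<Pair<(A) -> Boolean, (A) -> Unit>>()

    fun register(matches: (A) -> Boolean, impl: (A) -> Unit) {
        impls.addFirst(matches to impl)
    }

    fun dispatch(receiver: A) = impls.first { it.first(receiver) }.second(receiver)
}

// registration code that would be generated by M1 for `open fun A.foo()`
private val registerA = FooVTable.register({ true }) { println("A.foo") }   // base case matches every A

// registration code that would be generated by M2 for `override fun B.foo()`
private val registerB = FooVTable.register({ it is B }) { println("B.foo") }

// generated dispatch function; every call site calls it statically
fun A.foo() = FooVTable.dispatch(this)

fun main() {
    val x: A = B()
    x.foo()      // prints "B.foo": the implementation is chosen via the table at run time
    A().foo()    // prints "A.foo"
}
```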
I will change the section name to "Possible Realization" and add an "Alternative Realization" section.
I updated the proposal. It now describes an alternative realization and the issues with the realization in the proposal (which is used for ease of presentation). I added a paragraph describing local method override, as well as a corresponding question in the "Open Questions" section. Furthermore, I refined the usage of the notions "compilation unit" and "module" (both describe the same thing), and "package" (not to be confused with the two former notions).
@dnpetrov are there any open concerns?
Sure, there are some; sorry for the long delay. I need some free cycles to explain my technical concerns in detail.
Ok. Not a problem :)
So far, I find the following issues with this proposal (or maybe with the approach in general):
- Search scope for extension override resolution is probably too broad. For a "regular" override in a class/interface it is limited to the member scopes of the base types. For an extension function override, it is currently not defined, so it's rather hard to reason about it. If we assume that's a static scope of the corresponding file (which includes symbols defined in package, and symbols imported in file), it becomes very close to "unfeasible". If we consider the task of generating the dispatching function for an open extension function: it depends on every file in a project (which can import this function) AND every dependency (which can contain "base" function definition). That looks really bad. Other ideas?
- Separate modules will have separate implementations of a dispatching function, which can clash at run-time.
This works more or less OK in the desugared code you've provided, because this dispatching function is defined explicitly in code. The compiler has no such luxury of placing functions "somewhere".
Consider the following case: module M1 with a "base" function `foo`; modules M2 and M3 containing (module-specific) overrides for `foo`; module M4 depending on M2 and M3.
Regarding your 2nd point: I already considered this case in the current version of the proposal (0228a9d). The shown "implementation" is just for presentation purposes (to give an idea of the semantics). Of course you need a virtual table approach (as I mentioned there), so that every dispatch function (in each compilation unit) does a lookup on the same vtable (which might, e.g., be filled in a static code block). I described the same case you mentioned in my proposal (see Alternative Realization).
And I think this answers your 1st concern, too. If you have a dynamic vtable, which is filled during class/package load (e.g. via static code blocks; I am not totally aware of how you currently handle this in Kotlin), this vtable can contain the scope information too (I wrote an example in the proposal), which just has to be "looked up" in the dispatch function. The compiler can generate the needed dispatch functions, which are called statically; inside, they query the vtable, taking the scope into account.
This could even be optimized by generating different dispatch functions per "call-site scope", which need only very tailored vtable accesses (taking into account only the imports of the current call site). These dispatch functions should even be inlineable (or entirely unnecessary, if the corresponding lookups are generated directly at the call site).
On a lower level, calls to virtual functions (as all non-final methods on objects in Java are) are realized via a vtable lookup followed by a static call to the found method, with the instance as implicit first parameter.
Of course the lookup is more complex. Furthermore, there might be competing extension functions on the classpath that are in scope, but, like class loading, this could be resolved via classpath ordering (vtable entries would be "first come, first served"). There could even be runtime hints via stderr (like Java telling you that you are setting the max PermGen size although that option is not available anymore).
In Xtend there are explicit extension imports but implicit extension method definitions; in Kotlin it is the other way around. With the "Xtend way" you can use my "possible realization" (and they do it this way), because you import the extension methods directly from the class and not from the package. Therefore you cannot have another compilation unit that interferes, as the class loader does not load the same class from different compilation units (it loads the class that is higher on the class path).
Having two compilation units with classes in the same package is something that is (or at least should be) discouraged altogether (and therefore Java's security mechanism prevents it for signed JARs, IIRC). But you cannot be sure... so a vtable approach would be the way to go in Kotlin.
Ok, so, essentially this boils down to a dispatch table built at execution time.
This requires detailed high-level rules expressed in terms of class loading (NB there's no such thing as "package initialization") and class/type identity/equivalence.
However, since right now we are more or less sure what the "cost" of this overall solution is likely to be, I'd suggest taking a pause as we consider some other approaches to the "expression problem" in Kotlin.
Ok, so, essentially this boils down to a dispatch table built at execution time.
Yes.
This requires detailed high-level rules expressed in terms of class loading (NB there's no such thing as "package initialization") and class/type identity/equivalence.
IIRC, packages are realized as classes under the hood, aren't they? So there is a way to add a static initializer there. Or you could just generate a hidden class for such packages, which then contains the static initializer.
However, since right now we are more or less sure what the "cost" of this overall solution is likely to be, I'd suggest taking a pause as we consider some other approaches to the "expression problem" in Kotlin.
OK, can you point me to those discussions? I would really appreciate it if I could see what these considerations are about :).
Thanks in advance.
Sure, I'll provide the corresponding links as something becomes available. I'd also (try to) keep you updated if we have any major internal results. See below for more details.
Regarding packages in Kotlin - see http://kotlinlang.org/docs/reference/java-to-kotlin-interop.html#package-level-functions. Package-level functions and properties become members of the corresponding "file classes". "Package classes" were present in early versions of Kotlin, but were removed before 1.0 release, because in practice they do clash. There are some scenarios where identically named packages are present in different compilation units, even if that's discouraged. E.g.: main module + tests; small pluggable extensions. Unfortunately, that's exactly the reason why that can't be checked in a compiler: these packages belong to different compilation units.
So far, in Kotlin we tried to keep the language abstractions transparent to Java and JVM, and designing new language features is always a search for a reasonable compromise that would both provide expressive power and keep your JVM intuition in place. With the programmatically generated dispatch table you are basically programming JVM virtual dispatch inside a JVM. There are the following classes of technical problems to deal with:
- Dispatch table updates should be formalized in a way to minimize unpleasant surprises.
- In JVM, classes are processed in multi-threaded environment. JVM provides some synchronization guarantees for the class members initialization. However, the dispatch table initialized from possibly multiple class initializers should be thread-safe itself.
- It's rather unlikely that programmatic dispatching will be optimized by the JIT. We could use indy (invokedynamic) here, but not for Java 6 (read: Android). We could also use some bytecode postprocessing at run-time. These issues are not just JVM-specific; we'll have to deal with them in every managed run-time.
Note that you can emulate "Xtend way" (dispatching functions are (virtual) methods in classes) in Kotlin by using implicit receivers. You'll still have to write dispatch methods manually. Still, you'll be using available dispatch mechanism for extension providers instead of reimplementing it.
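A small sketch of this emulation (hypothetical `Node`/`render` names): the extension functions live in an "extension provider" class brought into scope as an implicit receiver, and the dispatch over the node types is still written by hand.

```kotlin
open class Node
class Leaf : Node()
class Branch(val left: Node, val right: Node) : Node()

open class RenderExtensions {
    open fun Node.render(): String = when (val n = this) {   // manual dispatch
        is Branch -> "(${n.left.render()} ${n.right.render()})"
        else -> "leaf"
    }
}

// another "module" overrides the extensions by subclassing the provider
class UppercaseRenderExtensions : RenderExtensions() {
    override fun Node.render(): String = when (val n = this) {
        is Branch -> "[${n.left.render()} ${n.right.render()}]"
        else -> "LEAF"
    }
}

fun main() {
    val tree = Branch(Leaf(), Branch(Leaf(), Leaf()))
    with(RenderExtensions()) { println(tree.render()) }          // prints "(leaf (leaf leaf))"
    with(UppercaseRenderExtensions()) { println(tree.render()) } // prints "[LEAF [LEAF LEAF]]"
}
```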
The alternative approaches we consider now are basically different ways to pack that dispatching in platform classes, e.g., type classes with "table-passing implementation" (that's basically extension provider as an implicit parameter in Xtend terms).
Package-level functions and properties become members of the corresponding "file classes".
So there you have a class where you could put the initializer code.
The alternative approaches we consider now are basically different ways to pack that dispatching in platform classes, e.g., type classes with "table-passing implementation"
Creating "table classes" automatically mimicking the original inheritance relations is a clever idea, especially because the JIT compiler will optimize this. I really do like the idea, but it makes it not as easy to add implementations dynamically from different compilation units (exactly the place where the "naive implementation" from my proposal fails) and having scopes. But if you use "table interfaces" instead you can hide (and therefore replace) the implementations and you can have very dedicated implementations for a given scope by "storing" a specific instance of the correct implementation in your scope at runtime.
If this helps, I can add a respective section to this proposal (or somewhere else) explaining some details.
The alternative approaches we consider now are basically different ways to pack that dispatching in platform classes, e.g., type classes with "table-passing implementation" (that's basically extension provider as an implicit parameter in Xtend terms).
@dnpetrov is there any news on your alternative approach?
I have another case where the possibility of (multiple) dynamic dispatch (in particular double dispatch for extension functions) would be very handy: writing DSLs/typesafe builders. If you try to reuse a DSL or builder for different outputs, you would not put the rendering code into the meta-model classes of your domain but into extension functions, so that you can change the implementation per context (and keep separation of concerns).
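A minimal sketch (hypothetical model and names) of this use case, showing how today's static dispatch gets in the way:

```kotlin
open class Element(val name: String)
class Text(val content: String) : Element("text")

// HTML output
fun Element.renderHtml(): String = "<$name/>"
fun Text.renderHtml(): String = content   // never chosen through an Element reference

// plain-text output
fun Element.renderText(): String = name
fun Text.renderText(): String = content

fun main() {
    val elements: List<Element> = listOf(Element("br"), Text("hello"))
    // static dispatch: the Text element is rendered as "<text/>", not "hello"
    println(elements.joinToString(" ") { it.renderHtml() })   // prints "<br/> <text/>"
}
```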
I am even thinking of realizing the feature via `kapt`.