
[loco] Allow node mixin usage with direct inheritance

Open ai-moiseev opened this issue 3 years ago • 10 comments

What

Currently, loco IR's node mixins (such as FixedArity<>::Mixin and others) use virtual inheritance, which avoids the 'diamond inheritance' problem that often appears with the mixin idiom. But virtual inheritance can introduce runtime performance costs (such as reduced cache-friendliness and the need to use dynamic_cast; see this question on Stack Overflow) in dependent backends which use loco.
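For illustration, here is a minimal sketch of the current pattern (simplified; the class names are illustrative, not the actual loco source):

```cpp
struct Node
{
  virtual ~Node() = default;
};

template <unsigned N> struct FixedArity
{
  // Virtual inheritance: if two mixins derived from the same Base are
  // stacked on one node, they share a single Base subobject instead of
  // forming a "diamond" with two copies of it.
  template <typename Base> class Mixin : public virtual Base
  {
  public:
    unsigned arity() const { return N; }
  };
};

// Applying the mixin: Add has exactly two inputs.
struct Add final : public FixedArity<2>::Mixin<Node>
{
};
```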

The proposal here is to introduce an optional template argument for mixins that allows parametrizing the inheritance mode via a 'wrapper' type.
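A minimal sketch of the idea, assuming hypothetical wrapper names (VirtualInheritance / DirectInheritance; draft #8528 may spell these differently):

```cpp
// 'Wrapper' types selecting the inheritance mode.
template <typename Base> struct VirtualInheritance : public virtual Base
{
};

template <typename Base> struct DirectInheritance : public Base
{
};

template <unsigned N> struct FixedArity
{
  // The new parameter defaults to the virtual wrapper, so all existing
  // users keep today's behavior; performance-sensitive backends can opt
  // into direct inheritance.
  template <typename Base, template <typename> class Inherit = VirtualInheritance>
  class Mixin : public Inherit<Base>
  {
  public:
    unsigned arity() const { return N; }
  };
};

struct Node
{
  virtual ~Node() = default;
};

// Unchanged call site: virtual inheritance, as before.
struct Add final : public FixedArity<2>::Mixin<Node>
{
};

// Opted-in call site: direct (non-virtual) inheritance.
struct FastAdd final : public FixedArity<2>::Mixin<Node, DirectInheritance>
{
};
```

Because the new parameter defaults to the virtual wrapper, existing call sites compile unchanged.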

Why

There are runtime optimization requirements in one of the backends that use loco as a library.

Draft

  • FixedArity mixin: #8528

ai-moiseev avatar Feb 28 '22 12:02 ai-moiseev

Please get approval from @parjong

seanshpark avatar Mar 01 '22 22:03 seanshpark

@ai-moiseev , please share how to reproduce the problem you encountered.

seanshpark avatar Mar 01 '22 23:03 seanshpark

@ai-moiseev , I have little understanding of this. (1) What exactly is your problem? (Please share code, for example.) (2) How does draft #8528 solve your problem?

seanshpark avatar Mar 01 '22 23:03 seanshpark

@seanshpark The problem with the current implementation of the FixedArity mixin is that it adds a 'virtual' link to the inheritance chain, which leads to various runtime performance limitations. The solution here is to provide the ability to switch between 'virtual' and 'direct' variants with an optional template type parameter, so it maintains backward compatibility with the existing code base while providing a means to optimize other parts.
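To make the limitation concrete, here is a self-contained sketch of the casting difference (illustrative types, not loco code): downcasting across a virtual base must use dynamic_cast, which needs RTTI and runs at runtime, while static_cast there is ill-formed; with direct inheritance, static_cast compiles and is essentially free.

```cpp
struct Node
{
  virtual ~Node() = default;
};

struct VirtualDerived : public virtual Node
{
};

struct DirectDerived : public Node
{
};

void downcast(Node *n)
{
  // static_cast<VirtualDerived *>(n);         // ill-formed: virtual base
  auto *v = dynamic_cast<VirtualDerived *>(n); // needs RTTI, resolved at runtime
  auto *d = static_cast<DirectDerived *>(n);   // resolved at compile time
  (void)v;
  (void)d;
}
```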

I can share the backend issue from the private repo with you, which can give a little more context. I'll send you an e-mail shortly.

ai-moiseev avatar Mar 02 '22 05:03 ai-moiseev

which leads to various runtime performance limitations.

OK. So what's the performance gain? And in which module, for which model?

I'll send you an e-mail shortly.

Please do not share it by e-mail. It would be better to share the information in the backend repo.

seanshpark avatar Mar 02 '22 05:03 seanshpark

OK. So what's the performance gain? And in which module, for which model?

Around 10-17% compile-time reduction for medium-sized models. I've shared some benchmarks in the backend repository and plan to add more for comparison.

Please do not share it by e-mail. It would be better to share the information in the backend repo.

Sure. I've mentioned you in the backend repository issue; it contains a code example. I can partially duplicate the description in this issue to make the intention clearer.

ai-moiseev avatar Mar 02 '22 05:03 ai-moiseev

I've shared some benchmarks in the backend repository and plan to add more for comparison.

Thanks!

I can partially duplicate the description in this issue to make the intention clearer.

No need :)

seanshpark avatar Mar 02 '22 05:03 seanshpark

I've roughly read the related issues in the backend repo. Q) Do you have any plans to revise the code in the compiler modules too?

seanshpark avatar Mar 02 '22 05:03 seanshpark

Q) Do you have any plans to revise the code in the compiler modules too?

I'm currently focused on backend optimization, but it can certainly bring optimizations to the compiler modules too :) I suppose the intention is to speed up the entire toolchain, so I can work on this.

ai-moiseev avatar Mar 02 '22 05:03 ai-moiseev

I suppose the intention is to speed up the entire toolchain, so I can work on this.

OK, sounds nice :) Please take your time. I'll follow the changes in the backend so as to understand them :)

seanshpark avatar Mar 02 '22 06:03 seanshpark