[RFC] Establish TVM Unity Connection -- A Technical Strategy
This RFC summarizes the overall technical strategy for the Establish TVM Unity Connection milestone. It is supplementary to the technical RFCs about specific components.
Thanks @tqchen!!! I'm excited to see the pre-RFC become this formal RFC.
The Unity Connection is a great step from multi-level lowering to a flexible, unified abstraction for end-to-end model compilation. I'd like to summarize the discussion thread here for readers who did not participate.
Modularized Compilation Flow
TVM Unity uses a cross-layer abstraction to represent:
- Graph IR: how to organize ops/kernels
- Tensor IR/Libraries/FFI: how to execute the ops/kernels
Based on this abstraction, we can build a module at any stage as long as the module is legal. In contrast, the current multi-stage lowering approach forces every build through a fixed GraphIR -> TensorIR -> RuntimeModule pipeline.
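As a concrete illustration, the sketch below shows a single IRModule holding both levels side by side: a graph-level Relax function that calls into a tensor-level TIR kernel. The TVMScript syntax follows recent unity/Relax releases, and the shapes and names are purely illustrative.

```python
import tvm
from tvm.script import ir as I, relax as R, tir as T


@I.ir_module
class MixedModule:
    # Tensor level: how the kernel is executed.
    @T.prim_func
    def tir_add(A: T.Buffer((8,), "float32"),
                B: T.Buffer((8,), "float32"),
                C: T.Buffer((8,), "float32")):
        for i in range(8):
            with T.block("add"):
                vi = T.axis.spatial(8, i)
                C[vi] = A[vi] + B[vi]

    # Graph level: how the kernels are organized and called.
    @R.function
    def main(x: R.Tensor((8,), "float32"),
             y: R.Tensor((8,), "float32")) -> R.Tensor((8,), "float32"):
        cls = MixedModule
        with R.dataflow():
            z = R.call_tir(cls.tir_add, (x, y), out_sinfo=R.Tensor((8,), "float32"))
            R.output(z)
        return z
```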
Easy to customize
Fast customization is critical during research and prototyping, and the modularized workflow enables it natively. Here I'd like to share two cases:
Ex1: Adding support for a new operator
Instead of the 7-step tutorial, the unity connection needs only two steps (a sketch follows the list):
- implement how the op is computed (either in TIR or via a library),
- directly call that implementation from the unified abstraction.
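For example, here is a minimal sketch of the library route, assuming a hypothetical packed function `my_ext_gelu` registered from Python; `R.call_dps_packed` is the Relax construct used here to call such destination-passing-style externals, and all names and shapes are illustrative.

```python
import numpy as np
import tvm
from tvm.script import ir as I, relax as R


# Step 1: implement the op. Here it is a "library" routine registered as a
# TVM packed function in destination-passing style (the last argument is the
# pre-allocated output buffer). The name "my_ext_gelu" is hypothetical.
@tvm.register_func("my_ext_gelu", override=True)
def my_ext_gelu(x: tvm.nd.NDArray, out: tvm.nd.NDArray):
    data = x.numpy()
    res = 0.5 * data * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (data + 0.044715 * data ** 3)))
    out.copyfrom(res.astype("float32"))


# Step 2: call the implementation directly from the graph-level IR.
@I.ir_module
class GeluModule:
    @R.function
    def main(x: R.Tensor((4,), "float32")) -> R.Tensor((4,), "float32"):
        y = R.call_dps_packed("my_ext_gelu", (x,), out_sinfo=R.Tensor((4,), "float32"))
        return y
```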
Ex2: Customizable operator fusion
Each pass is decoupled, which means we can fuse operators (or, more generally, optimize the module) across multiple passes. For example, we can write a customized pass that fuses two convs while letting the built-in fusor fuse the following element-wise ops, as sketched below.
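A minimal sketch of such a composed pipeline, assuming the built-in `relax.transform.FuseOps`/`FuseTIR` passes on the unity branch; the custom conv-pair pass is hypothetical and its rewriting logic is omitted.

```python
import tvm
from tvm import relax


# Hypothetical user-defined pass; the actual conv-pair rewriting is omitted.
@tvm.transform.module_pass(opt_level=0, name="MyConvPairFusion")
def my_conv_pair_fusion(mod: tvm.IRModule, ctx: tvm.transform.PassContext) -> tvm.IRModule:
    # ... match two back-to-back convs and rewrite them into one fused call ...
    return mod


seq = tvm.transform.Sequential(
    [
        my_conv_pair_fusion,          # custom fusion for the conv pair
        relax.transform.FuseOps(),    # built-in fusor for remaining element-wise ops
        relax.transform.FuseTIR(),    # merge fused groups into single TIR functions
    ]
)
# fused_mod = seq(mod)  # apply to an IRModule
```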
Cross-layer optimization opportunities
Layout optimization is a typical cross-layer optimization that TVM Unity makes possible, and we already have prototype results showing that it works. I'm also glad to see that community members working on different backends are all looking forward to this feature.
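As a rough sketch of the idea, assuming the `ConvertLayout` and `LegalizeOps` transforms available on the unity branch (the op key and layout strings below are illustrative, not a definitive recipe):

```python
import tvm
from tvm import relax

layout_seq = tvm.transform.Sequential(
    [
        # Graph level: rewrite conv2d (and propagate through its neighbors)
        # into the layout preferred by the target backend.
        relax.transform.ConvertLayout({"relax.nn.conv2d": ["NHWC", "OHWI"]}),
        # Tensor level: lower the rewritten graph ops to TIR, so scheduling
        # and tuning see the already-transformed layout.
        relax.transform.LegalizeOps(),
    ]
)
# mod = layout_seq(mod)  # apply to an IRModule with conv workloads
```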
Note that this RFC is a technical strategy, which is a bit different from the Relax Upstream RFC https://github.com/apache/tvm-rfcs/pull/89. Please turn to that thread if you have specific comments on Relax itself.
Love to hear ideas from the community.
Opened a voting thread: https://github.com/apache/tvm/issues/12651
Thanks, everyone, for putting effort into making unity development happen.
Today, we come to the one-year mark of the unity connection proposal. It is amazing to see how the landscape of AI/ML has changed and how some of the emerging needs fit right into the strategy we put forward one year ago. Because of the time delays, however, the strategy as written is no longer as timely as it was one year ago, so we have decided to withdraw the proposal and work on coming up with new ones.
It is now obvious that, had we taken the default option of not pursuing the strategy or delaying it at last year's point, we would have missed the opportunity to empower the community in the age of genAI.
Luckily, we did not let that opportunity slip away and pushed forward the development of the unity branch. Many of you may have seen the recent news about how it is bearing fruit in enabling real applications, such as bringing LLMs and other foundation models to various devices.
The good news is that we have clarified the strategy decision process to empower strategic proposals. There are also new emerging opportunities in the age of genAI and foundation models, which the majority of the community supports.
Please follow the unity branch, continue participating in community discussions, and let us bring TVM forward to empower everyone. Thank you to the many community members who supported the proposal and believed in what it can bring. Let us do our best to push unity development forward. We will also collect input and come up with new versions of the strategy that reflect the current needs.
Thanks everyone for the year-long discussion.
I'd like to note that, in retrospect, we would have already missed the boat of generative AI, and Apache TVM would have lost the momentum to empower the community on the workloads they are interested in today, if we had decided to follow the default option of not pursuing a real solution and stagnating in endless debates without taking concrete action.
I'm super glad to have a set of decisive PMC members who care and a broad, vibrant community who contribute and help. We are working collaboratively on the unity branch, a hugely valuable technical direction, catching up with and empowering generative AI together!