TensorComprehensions
Support for user-provided cache key to reuse autotuning results on different machines
@ttheodor I autotuned the function on a server with 4 GPUs and got a cache file. Then I tried to reuse the generated cache file on a different server, also with 4 GPUs, but it throws an error because the two servers have different GPUs. @nicolasvasilache said it is a key-related problem. Could you give me some instructions on how to reuse cache files?
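For reference, the workflow being described is roughly the one below. This is a minimal sketch based on the TC Python frontend as documented for the 0.x releases; the TC language snippet, tensor sizes, and exact keyword arguments (`cache`, `options`, `generations`, `pop_size`) are assumptions and may differ between versions.

```python
import torch
import tensor_comprehensions as tc

# TC definition of a plain matmul (canonical example from the TC docs).
lang = """
def matmul(float(M,N) A, float(N,K) B) -> (output) {
  output(m, k) +=! A(m, nn) * B(nn, k)
}
"""
matmul = tc.define(lang, name="matmul")

A = torch.randn(100, 400).cuda()
B = torch.randn(400, 500).cuda()

# On the autotuning server: tune and persist the best options to a cache file.
matmul.autotune(A, B, cache="matmul_100_400_500.tc",
                options=tc.Options("mlp"), generations=5, pop_size=20)

# Later (same machine / same GPU model): run using the cached tuned options.
out = matmul(A, B, cache="matmul_100_400_500.tc")
```

The cache lookup includes an automatically derived hardware key, which is why loading the same file on a server with different GPUs currently fails; this issue tracks letting the user override that key.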
@ttheodor for more context, I am thinking we could override the automatic HW key with some user-specified string for load / store. The user is then responsible for ensuring things are proper.
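As a purely hypothetical illustration of that proposal (this parameter does not exist in TC today, and the name `cache_key` is made up here), the user-facing side could look like this:

```python
# HYPOTHETICAL sketch: `cache_key` is not a real TC parameter; it illustrates
# the idea of replacing the automatic HW key with a user-chosen string.

# On the autotuning machine: store tuned options under a user-chosen key
# instead of the detected GPU identifier.
matmul.autotune(A, B, cache="matmul_100_400_500.tc",
                cache_key="my-cluster-4xV100")

# On a different machine: the user asserts the key matches, so the tuned
# options load even though the detected GPU differs. Correctness of that
# assertion is the user's responsibility.
out = matmul(A, B, cache="matmul_100_400_500.tc",
             cache_key="my-cluster-4xV100")
```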
@seongwook-ham note that performance on a machine you did not autotune for will not be great, and may even be bad. If you want good results, autotune on the target HW (for the foreseeable future).
@ttheodor when will it be possible to use #233, and how can I use it?
@ttheodor is there any progress?