eppaneamd

3 comments by eppaneamd

@akaitsuki-ii I think this fix needs to be revisited, due to the following:

- When flashinfer is installed on AMD GPUs, `HAS_FLASHINFER` gets set to `True` in [globals](https://github.com/feifeibear/long-context-attention/blob/main/yunchang/globals.py#L99-L110) when e.g. MI300X returns...
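
A minimal sketch of the kind of stricter guard this suggests: gate `HAS_FLASHINFER` not only on the flashinfer import succeeding, but also on running a CUDA (non-ROCm) build of PyTorch. This is an assumption about the intended fix, not the actual `yunchang/globals.py` code, and the helper `_flashinfer_usable` is hypothetical:

```python
import importlib.util

import torch

def _flashinfer_usable() -> bool:
    # Hypothetical stricter check; the real detection in yunchang/globals.py
    # may only test whether flashinfer is importable.
    if importlib.util.find_spec("flashinfer") is None:
        return False
    # torch.version.hip is a version string on ROCm builds (e.g. MI300X)
    # and None on CUDA builds; flashinfer's kernels target NVIDIA GPUs,
    # so a successful install on ROCm should not enable the flashinfer path.
    if torch.version.hip is not None:
        return False
    return torch.cuda.is_available()

HAS_FLASHINFER = _flashinfer_usable()
```

With a guard like this, installing flashinfer on an MI300X would leave `HAS_FLASHINFER` as `False` rather than silently enabling an unsupported code path.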

This issue is likely present on ROCm/pytorch release branches 2.8 and 2.9 as well.