Results: 2 issues of eileen2003-w
I have read the text and found that I have to install flash-attn 1.x to fit my Turing GPU, so I got the source package from GitHub: https://github.com/Dao-AILab/flash-attention/releases?page=6. Then I...
I have already downloaded flash-attention 1.x (specifically flash-attn 1.0.8) because I currently only have a GPU with the Turing architecture (a TITAN RTX). But for my needs (running a demo of a multimodal LLM), ...
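
Both issues hinge on the same hardware constraint: flash-attn 2.x targets Ampere (SM 8.0) and newer GPUs, while Turing cards such as the TITAN RTX report compute capability 7.5 and therefore need the 1.x line. Below is a minimal sketch, not taken from the issues themselves, of how one might check this before picking a flash-attn version; it assumes PyTorch is installed, and the pinned version `1.0.8` in the comment simply mirrors the release the reporter mentions.

```python
# Sketch: decide between flash-attn 1.x and 2.x based on GPU compute capability.
# Assumption (not from the original issues): 2.x requires SM >= 8.0 (Ampere+),
# so Turing (SM 7.5) falls back to the 1.x line.
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"GPU compute capability: {major}.{minor}")

if (major, minor) < (8, 0):
    # Turing and older: install the 1.x line, e.g.
    #   pip install flash-attn==1.0.8
    # or build from the source release linked in the issue above.
    print("flash-attn 2.x is unsupported here; use flash-attn 1.x (e.g. 1.0.8)")
else:
    print("flash-attn 2.x should work on this GPU")
```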