Haotian Cui
Thank you for the suggestion! I will try to add these features soon, probably via additional kwargs.
Hi, thank you for the question and for sharing your environment info. It looks like you have `flash-attn 1.0.4` installed, so the warning at model.py:21 basically says the `FlashMHA` class...
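For context, a minimal sketch of such a guarded import, assuming the flash-attn 1.x module path `flash_attn.flash_attention.FlashMHA` (the actual check in model.py may differ):

```python
import warnings

try:
    # flash-attn 1.x exposes FlashMHA at this path; 2.x removed this module.
    from flash_attn.flash_attention import FlashMHA
    flash_attn_available = True
except ImportError:
    warnings.warn("flash-attn is not available; falling back to standard attention.")
    flash_attn_available = False
```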
Hi, thank you for your interest and question. I have also seen related questions from others, so I would love to provide further explanation for this. Honestly speaking, I...
Thank you! I will test the docker image. In the meantime, note that the current repo requires running on a GPU. Can the current docker image access GPUs? Is the NVIDIA container toolkit required?
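As a quick sanity check inside the container, a minimal sketch (assuming PyTorch is installed in the image; GPU access typically requires the host's NVIDIA drivers to be exposed to the container, e.g. via the NVIDIA Container Toolkit):

```python
import torch

# Verify the container can actually see a CUDA device.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```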
Hi, yes, the generative attention masking is implemented in the dev-temp branch. The specific code is here: https://github.com/bowang-lab/scGPT/blob/dev-temp/scgpt/model/flash_layers.py#L19. It takes two parts of input, pcpt_total_embs and gen_total_embs. The first relates...
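To illustrate the general idea only (this is a hypothetical sketch, not the actual code in flash_layers.py): a two-part attention mask where perception (pcpt) tokens attend only to each other, and each generation (gen) token attends to the perception block plus itself, might look like:

```python
import torch

def build_generative_mask(n_pcpt: int, n_gen: int) -> torch.Tensor:
    """Hypothetical mask: True marks positions a query may attend to."""
    total = n_pcpt + n_gen
    mask = torch.zeros(total, total, dtype=torch.bool)
    # Every token may attend to the perception (pcpt) block.
    mask[:, :n_pcpt] = True
    # Each generation token may additionally attend to itself.
    gen_idx = torch.arange(n_pcpt, total)
    mask[gen_idx, gen_idx] = True
    return mask
```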
Hi, this looks like an issue with pip install. I couldn't tell which package is involved. Can you show several more lines before the copied messages? It should say where this happens...
Hi, what GPU are you using? This looks like a known compatibility issue with flash-attn, which only supports newer GPUs. You can find the list of supported ones here: https://github.com/HazyResearch/flash-attention#installation-and-features
BTW, we have also noticed a lot of reported issues when people install flash-attn. My current plan is to:

- [ ] provide a working docker image asap, hopefully, that can...
Hi @mjstrumillo, flash_attn currently requires Turing or later GPUs. Please see this comment: https://github.com/bowang-lab/scGPT/issues/39#issuecomment-1635989348. The current issue when running without flash_attn is about loading the pretrained weights. Flash_attn...
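To check whether your GPU meets the Turing-or-later requirement, a small sketch (Turing corresponds to compute capability 7.5):

```python
import torch

# flash-attn requires Turing (compute capability 7.5) or newer GPUs.
major, minor = torch.cuda.get_device_capability(0)
print(f"GPU: {torch.cuda.get_device_name(0)} (sm_{major}{minor})")
if (major, minor) < (7, 5):
    print("This GPU is older than Turing; flash-attn will not run on it.")
```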
I think after preprocessing and peak calling, ATAC data can be mapped to small windows of genomic regions. Here is a related paper that may help; it suggests a lot of tools...
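As a toy illustration of the windowing step only (the window size and mapping here are my own assumptions, not a recommendation from the paper), a hypothetical sketch that maps a called peak to the fixed-size genomic bins it overlaps:

```python
def peak_to_windows(chrom: str, start: int, end: int, window_size: int = 5000):
    """Map a called ATAC peak to the fixed-size genome windows it overlaps."""
    first = start // window_size
    last = (end - 1) // window_size
    return [(chrom, w * window_size, (w + 1) * window_size)
            for w in range(first, last + 1)]

# e.g. a peak at chr1:4800-10200 overlaps windows [0, 5000), [5000, 10000), [10000, 15000)
print(peak_to_windows("chr1", 4800, 10200))
```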