Haotian Cui
Thanks for the great question. We should have stated it more clearly. The binning was mainly used in pretraining and in fine-tuning tasks where you don't really care about the...
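For context, here is a minimal sketch of how expression values could be binned per cell before tokenization. The number of bins, the per-cell quantile strategy, and the function name are assumptions for illustration, not the released preprocessing code:

```python
import numpy as np

def bin_expression(values: np.ndarray, n_bins: int = 51) -> np.ndarray:
    """Map one cell's expression values to integer bins; 0 is reserved for zero counts."""
    binned = np.zeros_like(values, dtype=np.int64)
    nonzero = values > 0
    if not nonzero.any():
        return binned
    # quantile edges computed per cell over its non-zero values
    edges = np.quantile(values[nonzero], np.linspace(0, 1, n_bins - 1))
    # expressed genes land in bins 1 .. n_bins - 1
    binned[nonzero] = np.digitize(values[nonzero], edges, right=True) + 1
    return binned
```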
Hi @kzkedzierska , sorry for the delay! We did use a customized implementation during pre-training. The release of the data collection and pretraining code took longer than we expected, and I was...
Hi @kzkedzierska , thank you sincerely for your patience. The current challenge is simply the workload of merging some legacy code with inconsistent naming and other miscellaneous differences, while we are...
Hi, I have uploaded the code in the dev-temp branch. Regarding your initial questions:
1. Yes, the design would make each query independent of all other unknown genes.
2. We...
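As a rough illustration of point 1, here is a sketch of an attention mask in which each unknown (query) gene can attend to all known genes and to itself, but not to the other unknown genes. The function name and the True-means-blocked convention are assumptions for illustration; the dev-temp branch code is authoritative:

```python
import torch

def query_independent_mask(n_known: int, n_unknown: int) -> torch.Tensor:
    """Boolean attention mask (True = blocked); rows are queries, columns are keys.

    Known genes attend to all known genes. Each unknown gene attends to the
    known genes and to itself only, so its prediction does not depend on the
    other unknown genes.
    """
    n = n_known + n_unknown
    mask = torch.zeros(n, n, dtype=torch.bool)
    # block attention to unknown-gene keys by default
    mask[:, n_known:] = True
    # let every unknown query see itself
    idx = torch.arange(n_known, n)
    mask[idx, idx] = False
    return mask

# e.g. 3 known genes and 2 unknown query genes
print(query_independent_mask(3, 2))
```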
Sure, we will add the tutorial asap. The update is going to be ready in a few days.
Hi @naveedaq91 , sorry for the wait. To answer your questions: 1. I have not used the particular environment or g5 cluster so far, but were you running any one...
Yes, the pretrained model has 12 transformer blocks and 8 attention heads. As stated in the comments of the code snippet you copied, the `batch_size`, `layer_size`, `nlayers`, and `nhead` will be...
I also have a quick fix here to load the model args into `hyperparameter_defaults` when you are training from scratch without loading a pretrained model https://github.com/bowang-lab/scGPT/commit/dcaf1e382cd0b74e474783be94532eb915fff302#diff-76b1a198348cad2e01186fc1103195323e45fb75388dccbad08b00d9e16c0db8R163
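The general pattern of that fix is sketched below: take the architecture arguments from the pretrained checkpoint's saved config when one is loaded, and fall back to `hyperparameter_defaults` when training from scratch. The file name, key names, and helper function here are assumptions for illustration; the linked commit is the authoritative change:

```python
import json
from pathlib import Path
from typing import Optional

hyperparameter_defaults = dict(layer_size=128, nlayers=4, nhead=4)

def resolve_model_args(load_model: Optional[str]) -> dict:
    """Use the pretrained model's saved args if a checkpoint dir is given,
    otherwise fall back to the defaults used for training from scratch."""
    args = dict(hyperparameter_defaults)
    if load_model is not None:
        with open(Path(load_model) / "args.json") as f:
            saved = json.load(f)
        # the architecture must match the checkpoint, so these keys win
        for key in ("layer_size", "nlayers", "nhead"):
            if key in saved:
                args[key] = saved[key]
    return args
```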
Thank you!
Thank you so much! This is super helpful. BTW, if not limited to Python, is there any other tool you would suggest as a better choice? Or would you think...