MicrobeLab

60 comments by MicrobeLab

GPU memory is a major restriction on the batch size. In theory, using a large batch size should help when training on many classes. But a...
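
A common workaround for this memory ceiling (not part of DeepMicrobes itself, which is TensorFlow-based) is gradient accumulation, which emulates a larger effective batch by summing gradients over several small batches. A minimal PyTorch-style sketch, assuming a standard model/loader/optimizer setup:

```python
import torch

def train_epoch(model, loader, optimizer, loss_fn, accum_steps=4):
    """Emulate an effective batch of (loader batch size * accum_steps)
    by accumulating gradients before each optimizer step."""
    model.train()
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        # Scale the loss so the accumulated gradient equals the
        # average over the whole virtual batch.
        loss = loss_fn(model(x), y) / accum_steps
        loss.backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```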

I generated a very large training set, and if the model did not converge, I generated more data rather than training on the same dataset again. That's why the epoch...

Hi, the issue seems to be caused by parallel, not DeepMicrobes. Please try installing parallel first (rather than using the parallel provided here) and make sure that the installation is fine.

Hi, I'm not sure how to parse the seq_id in your file. This issue is not related to the DeepMicrobes code. The index error shows that the code failed to get...

Hi, 1. Sorry, I'm not sure what you mean by "in batches"; 2. I would recommend formatting your fastq headers the same way as ours.
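
A minimal sketch of point 2, assuming the goal is to rewrite every fastq header into one uniform pattern; the @<sample>_<n> pattern used here is hypothetical, so substitute whatever convention the DeepMicrobes docs specify:

```python
def reformat_fastq(in_path, out_path, sample="sample"):
    """Rewrite each fastq header (every 4th line) into a uniform,
    hypothetical @<sample>_<n> pattern, leaving other lines untouched."""
    with open(in_path) as fin, open(out_path, "w") as fout:
        for i, line in enumerate(fin):
            if i % 4 == 0:  # header line of each 4-line fastq record
                fout.write(f"@{sample}_{i // 4}\n")
            else:
                fout.write(line)
```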

I think you can feel free to change the code as long as no bugs occur :)

Hi, the model was not fine-tuned on the benchmark dataset.

Hi, simulated reads should be put in a fasta file and shuffled. The large fasta file can then be split into many small fasta files and thereby a...
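
A minimal sketch of this shuffle-then-split step, assuming the whole fasta fits in memory; the reads_per_file default is an arbitrary placeholder:

```python
import random

def shuffle_and_split(fasta_path, reads_per_file=500_000, prefix="subset"):
    """Shuffle a fasta file at the read level, then write the records
    back out in fixed-size chunks (subset_0000.fa, subset_0001.fa, ...)."""
    records, current = [], []
    with open(fasta_path) as f:
        for line in f:
            if line.startswith(">") and current:
                records.append("".join(current))  # flush previous record
                current = []
            current.append(line)
    if current:
        records.append("".join(current))
    random.shuffle(records)  # shuffle whole reads, not individual lines
    for n in range(0, len(records), reads_per_file):
        with open(f"{prefix}_{n // reads_per_file:04d}.fa", "w") as out:
            out.writelines(records[n:n + reads_per_file])
```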

1. Yes. 2. Since we have already shuffled at the read level (step 1), the large fasta file does not have to be randomly split. 3. Yes. 4. Yes. Sorry I...

The computational environment was mentioned in the paper (a GPU with 40 GB of memory and 8 CPU cores). It took roughly 1-2 days to train the genus model.