
Reproducing the validation experiment of the paper (scRNA and scATAC)

Open jules-samaran opened this issue 3 years ago • 0 comments

Hello, I've just started a PhD at the ENS computational biology department in Paris, where I'll be advised by Laura Cantini. As my PhD will be centered on multimodal data integration, I was very interested in your recent paper "Multi-domain translation between single-cell imaging and sequencing data using autoencoders".

I tried reproducing the first experiment, where you integrate scRNA-seq and scATAC-seq, but I didn't manage to obtain satisfying results. When I used the same data and the same preprocessing described in the Supplementary files of your paper and projected the latent representations of both modalities with UMAP, they were completely separated. I used the NN architecture in this repository and slightly adapted the code of train_rna_image.py to replace the image modality with ATAC, while keeping the hyper-parameters described in your Supplementary materials. I therefore don't understand why the adversarial loss doesn't seem to work, even though I tried increasing the weight of this loss term (maybe more ascent steps of the discriminator are needed). Do you have a script for reproducing this experiment that you could share, please? Do you plan on releasing a software package implementing your method?
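For concreteness, here is a minimal sketch of the kind of adversarial training loop I mean. The module names, dimensions, `ADV_WEIGHT`, and `D_STEPS` are all placeholders of my own, not your repository's actual classes or hyper-parameters; the point is only to show where the adversarial weight and the extra discriminator ascent steps enter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 50  # assumed shared latent dimension

class Encoder(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, 512), nn.ReLU(),
                                 nn.Linear(512, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, output_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, output_dim))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Predicts which modality a latent code came from (0 = RNA, 1 = ATAC)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, z):
        return self.net(z).squeeze(-1)

rna_dim, atac_dim = 1000, 2000  # assumed feature counts after preprocessing
enc_rna, dec_rna = Encoder(rna_dim), Decoder(rna_dim)
enc_atac, dec_atac = Encoder(atac_dim), Decoder(atac_dim)
disc = Discriminator()

opt_ae = torch.optim.Adam(
    list(enc_rna.parameters()) + list(dec_rna.parameters())
    + list(enc_atac.parameters()) + list(dec_atac.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

ADV_WEIGHT = 1.0  # weight on the adversarial term (the knob I tried increasing)
D_STEPS = 5       # discriminator ascent steps per autoencoder update

def train_step(x_rna, x_atac):
    # 1) Several discriminator steps so it stays informative for the encoders.
    for _ in range(D_STEPS):
        z_rna = enc_rna(x_rna).detach()
        z_atac = enc_atac(x_atac).detach()
        logits = torch.cat([disc(z_rna), disc(z_atac)])
        labels = torch.cat([torch.zeros(len(x_rna)), torch.ones(len(x_atac))])
        d_loss = F.binary_cross_entropy_with_logits(logits, labels)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Autoencoder step: reconstruction plus fooling the discriminator.
    z_rna, z_atac = enc_rna(x_rna), enc_atac(x_atac)
    recon = F.mse_loss(dec_rna(z_rna), x_rna) + F.mse_loss(dec_atac(z_atac), x_atac)
    logits = torch.cat([disc(z_rna), disc(z_atac)])
    # Flipped labels push the encoders to make the two latent clouds indistinguishable.
    flipped = torch.cat([torch.ones(len(x_rna)), torch.zeros(len(x_atac))])
    adv = F.binary_cross_entropy_with_logits(logits, flipped)
    loss = recon + ADV_WEIGHT * adv
    opt_ae.zero_grad(); loss.backward(); opt_ae.step()
    return recon.item(), adv.item(), d_loss.item()

# Quick smoke test on random data, just to show the expected tensor shapes.
x_rna, x_atac = torch.randn(32, rna_dim), torch.randn(32, atac_dim)
print(train_step(x_rna, x_atac))
```

Even with `ADV_WEIGHT` raised and `D_STEPS` increased in a loop like this, the two modalities stay separated in my UMAP of the latent space, which is why I suspect I'm missing something about the exact setup you used.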

Any help would be greatly appreciated!

Thanks,

Jules

jules-samaran · Oct 26 '21 09:10