deepRAM
About one-hot encoding
Thanks for the nice work. We are trying to make your work into an ML problem set for our class.
I have a question about one-hot encoding code here https://github.com/MedChaabane/deepRAM/blob/87014f400d9bdeb96f04f7ae977071188deaebeb/deepRAM.py#L35
It seems that the one-hot encoding code sets the first few and last few positions of each sequence to 0.25, and the number of positions set to 0.25 equals motiflen. I wonder what the reason for that is. I also read the paper, but did not see an explanation for this choice. Is this something standard to do, and where can I read more about it?
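For reference, here is a minimal sketch of what I understand the encoding to be doing. The function name and the exact padding behavior (pad each end with motiflen uniform rows rather than overwrite existing positions) are my assumptions from reading the linked code, not necessarily what deepRAM does:

```python
import numpy as np

def one_hot_with_flank(seq, motiflen):
    """Sketch of the encoding in question: one-hot encode a DNA
    sequence and fill motiflen positions at each end with 0.25,
    i.e. a uniform distribution over A, C, G, T.
    (Hypothetical helper, not deepRAM's actual function.)"""
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    # Initialize everything to 0.25; the motiflen flanks on each
    # side keep this uniform value.
    mat = np.full((len(seq) + 2 * motiflen, 4), 0.25)
    for i, base in enumerate(seq):
        if base in mapping:
            mat[motiflen + i] = 0.0
            mat[motiflen + i, mapping[base]] = 1.0
        # Unknown bases (e.g. 'N') keep the uniform 0.25 row.
    return mat

m = one_hot_with_flank("ACGT", motiflen=3)
print(m.shape)  # (10, 4): 4 bases plus 3 flank positions on each side
```

My guess is that the 0.25 rows act as a neutral flank so a convolutional filter of width motiflen can be scored at every position, including where a motif only partially overlaps the sequence edge, but I would appreciate confirmation.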