Harishankar G

Results: 10 comments by Harishankar G

Ah, okay! We would be using Fairseq for the student model. For faster inference, we wanted to check if we can convert it to CTranslate2 with vmap support. I would...
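For context, a minimal sketch of what that conversion and vmap-based decoding could look like with the CTranslate2 Python API, assuming a standard Fairseq Transformer checkpoint. The paths, the example tokens, and the precomputed `vmap.txt` are placeholders, and the exact converter arguments may differ between CTranslate2 versions:

```python
# Sketch only: convert a Fairseq checkpoint to CTranslate2 and decode with a
# vocabulary map. Paths and the example tokens below are hypothetical.
from ctranslate2 import Translator
from ctranslate2.converters import FairseqConverter

# Conversion (the ct2-fairseq-converter CLI wraps the same converter class).
converter = FairseqConverter(
    model_path="checkpoints/checkpoint_best.pt",  # Fairseq checkpoint
    data_dir="data-bin",                          # directory with the Fairseq dictionaries
)
converter.convert(
    output_dir="ct2_model",
    vmap="vmap.txt",      # precomputed vocabulary map, copied into the model dir
    quantization="int8",  # optional: quantize weights for faster inference
)

# Inference: restrict the target vocabulary to the vmap entries.
translator = Translator("ct2_model", device="cpu")
results = translator.translate_batch([["▁Hello", "▁world"]], use_vmap=True)
print(results[0].hypotheses[0])
```

The vocabulary map itself has to be produced separately; CTranslate2 only consumes it at translation time when `use_vmap=True` is passed.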

Hi @guillaumekln, so currently we are using a Transformer for both the encoder and the decoder. We want to go with a hybrid Transformer (encoder) / RNN (decoder) network to further reduce the inference latency and...

Hi @Andrewlesson, no. We most probably intend to go with a custom Fairseq model, where we define a custom architecture in Fairseq. If that doesn't work out, we would have to...
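As a rough illustration of that route, the sketch below registers a custom architecture on top of Fairseq's stock Transformer and only overrides the layer counts (a deep encoder with a shallow decoder). The architecture name is made up for illustration, and a genuine Transformer-encoder/RNN-decoder hybrid would instead need its own model class registered with `@register_model`:

```python
# Sketch only: a custom Fairseq architecture that keeps the standard
# Transformer model but moves most of the depth into the encoder.
from fairseq.models import register_model_architecture
from fairseq.models.transformer import base_architecture


@register_model_architecture("transformer", "transformer_deep_enc_shallow_dec")
def transformer_deep_enc_shallow_dec(args):
    # Deep encoder, shallow decoder: the encoder runs once per sentence,
    # while the decoder runs once per generated token, so keeping the
    # decoder shallow is what cuts the per-token latency.
    args.encoder_layers = getattr(args, "encoder_layers", 12)
    args.decoder_layers = getattr(args, "decoder_layers", 1)
    base_architecture(args)  # fill in the remaining Transformer defaults
```

If this lives in a user module, it can be picked up at training time with `fairseq-train ... --user-dir <module> --arch transformer_deep_enc_shallow_dec`, where `<module>` is whatever package holds the snippet.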

Yeah, we are already using a deep-encoder, shallow-decoder architecture. We want to experiment with the performance of such architectures where the encoder and decoder stacks have separate architectures of...

Actually, I guess not. The specific package that does not install on the Mac M1 is **pyonmttok**, which is a dependency of OpenNMT-py, and because of this the installation...

Hi @guillaumekln, sure, I will check that while installing from source. Will update in a couple of days.

Hi @guillaumekln, I can confirm that this has worked. But I presume we are excluding the `pyonmttok` dependency on Mac M1 based systems. I would like to know what is...

Hi @NilSet, thanks for the information. Will check it and get back.

Hi @NilSet, I can confirm that I was able to convert the model using the `--skip_op_check` option and also load it using `loadGraphModel` from the TFJS SDK. Thanks for...

Any progress on this? My initial thought was to just pass the appropriate data type in the call to `torch.cuda.amp.autocast` in `fairseq_task.py`, shown below. ![image](https://user-images.githubusercontent.com/3500976/235360873-774203d7-e38c-4549-bb5a-c2e427eccb51.png) Not sure if...
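For what it's worth, `torch.cuda.amp.autocast` does accept a `dtype` argument (PyTorch 1.10+), so the change would roughly look like the sketch below. The `use_bf16` switch is just a placeholder, since how Fairseq would expose such an option is exactly the open question here:

```python
# Sketch only: selecting the autocast compute dtype instead of relying on the
# float16 default. use_bf16 is a hypothetical flag, not an existing Fairseq option.
import torch

def forward_with_autocast(model, batch, use_bf16=False):
    # bfloat16 keeps the float32 exponent range, which avoids the overflow
    # issues fp16 can hit, at the cost of some mantissa precision.
    amp_dtype = torch.bfloat16 if use_bf16 else torch.float16
    with torch.cuda.amp.autocast(dtype=amp_dtype):
        return model(**batch)
```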