Amir H. Jadidinejad

10 comments by Amir H. Jadidinejad

So the weights are not used in the current version of Spotlight (0.1.5)? Specifically, what is the role of **weights** in [the `Interactions` class](https://github.com/maciejkula/spotlight/blob/master/spotlight/interactions.py)?
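For context, a minimal sketch of where the question comes from (the toy arrays and sizes below are hypothetical; the keyword arguments follow my reading of `interactions.py`): per-interaction weights can be passed to the constructor, but it is unclear whether fitting a model ever uses them.

```python
import numpy as np

from spotlight.interactions import Interactions
from spotlight.factorization.implicit import ImplicitFactorizationModel

# Toy interaction data (hypothetical), one weight per interaction.
user_ids = np.array([0, 0, 1, 2], dtype=np.int32)
item_ids = np.array([1, 2, 0, 3], dtype=np.int32)
weights = np.array([1.0, 0.5, 2.0, 1.0], dtype=np.float32)

interactions = Interactions(user_ids, item_ids,
                            weights=weights,
                            num_users=3, num_items=4)

model = ImplicitFactorizationModel(n_iter=1)
# The question: does fitting take interactions.weights into account
# anywhere in the loss, or are the weights only stored on the object?
model.fit(interactions)
```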

or translation experiments?

I have my own dataset with a vocabulary of 10,000 BPE tokens. Changing the model to `nmt_large` does not solve the problem: for the large model, GPU utilization stays under 30-35%.

Enlarging the model (increasing the vocabulary size to 40K, adding more buckets, and increasing the sequence lengths) leads to better GPU utilization (around 80-90%). Thank you. Why does GPU utilization fluctuate between 5% and...

According to [the standard TF documentation](https://www.tensorflow.org/performance/performance_guide#utilize_queues_for_reading_data):

> Another simple way to check if a GPU is underutilized is to run `watch nvidia-smi`, and if GPU utilization is not approaching 100%...
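As a concrete way to run that check, here is a small polling sketch (it only wraps standard `nvidia-smi` query flags; the two-second interval is an arbitrary choice):

```python
import subprocess
import time

# Poll nvidia-smi every couple of seconds and print GPU utilization;
# a starved input pipeline shows up as long stretches well below 100%.
while True:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    print("GPU utilization (%):", out.decode().strip())
    time.sleep(2)
```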

I think there are two options: 1. Load the model, add an input placeholder to the graph, remove the input pipeline from the graph, and freeze it (sketched below). As mentioned by...
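A rough sketch of option 1 under TF 1.x, assuming a checkpoint named `model.ckpt`, an input-pipeline tensor `batch_inputs:0`, and an output node `predictions` (all three names are hypothetical and must be replaced by the ones in your own graph):

```python
import tensorflow as tf

# Hypothetical names -- replace with the ones in your own graph.
PIPELINE_TENSOR = "batch_inputs:0"  # output tensor of the queue-based pipeline
OUTPUT_NODE = "predictions"         # node to keep in the frozen graph
CHECKPOINT = "model.ckpt"

with tf.Graph().as_default() as graph:
    # Feedable placeholder that stands in for the removed input pipeline.
    inputs = tf.placeholder(tf.int32, shape=[None, None], name="inputs")
    saver = tf.train.import_meta_graph(CHECKPOINT + ".meta",
                                       input_map={PIPELINE_TENSOR: inputs})

    with tf.Session(graph=graph) as sess:
        saver.restore(sess, CHECKPOINT)
        # Freezing keeps only nodes reachable from OUTPUT_NODE,
        # so the queue/pipeline ops are pruned away.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), [OUTPUT_NODE])
        tf.train.write_graph(frozen, ".", "frozen_model.pb", as_text=False)
```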

Input pipelines are [the standard way](https://www.tensorflow.org/performance/performance_guide#utilize_queues_for_reading_data) of feeding TF models, so I think it's indeed possible. But the current documentation of the TF Serving module is not clear.
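For reference, this is the kind of queue-based pipeline I mean (a minimal TF 1.x sketch; the TFRecord file pattern and the `tokens` feature are made-up placeholders):

```python
import tensorflow as tf

# Hypothetical file pattern and feature name -- adapt to your own data.
filename_queue = tf.train.string_input_producer(
    tf.train.match_filenames_once("train-*.tfrecord"))

reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)
example = tf.parse_single_example(
    serialized, features={"tokens": tf.VarLenFeature(tf.int64)})

# Background queue-runner threads fill the batches, so training never
# goes through feed_dict. At run time, initialize local variables and
# call tf.train.start_queue_runners(sess) before pulling batches.
tokens = tf.sparse_tensor_to_dense(example["tokens"])
batch = tf.train.batch([tokens], batch_size=64, dynamic_pad=True)
```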

It's strange if TF Serving is not compatible with input queues. It's not good design to use different feeding mechanisms for training and inference. For example, one application of...

I have the same question. Any new update?