
question regarding `_get_parallel_step_context`

Open · jingweiz opened this issue 4 years ago · 1 comment

Hi, thanks for making the code public! I have a question about the function `_get_parallel_step_context`. Here, https://github.com/wouterkool/attention-learn-to-route/blob/c66da2cfdc9ae500150bfc34d597a33631d2ceb3/nets/attention_model.py#L378, `num_steps` will always be 1, since `current_node` reads `prev_a` from the TSP state. That means https://github.com/wouterkool/attention-learn-to-route/blob/c66da2cfdc9ae500150bfc34d597a33631d2ceb3/nets/attention_model.py#L427 will always be hit, and lines 436 to 449 will never be used. Is this correct, or am I missing something? Thanks in advance, and looking forward to your reply!

Jingwei
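For concreteness, here is a minimal shape trace of what I mean (a standalone sketch with illustrative values, not the actual repo code; `current_node` stands in for what `state.get_current_node()` returns):

```python
import torch

# During autoregressive decoding, the state only carries the last visited
# node, so the "steps" dimension of current_node is always 1.
batch_size = 4
prev_a = torch.zeros(batch_size, 1, dtype=torch.long)  # (batch, num_steps=1)

current_node = prev_a            # conceptually, state.get_current_node()
num_steps = current_node.size(1)
assert num_steps == 1            # so the num_steps == 1 branch is always taken
```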

jingweiz · Dec 04 '20 08:12

Hi!

This is correct. The reason there is a steps dimension is that it can be used to evaluate the model on a given tour in a single forward pass, which is much more efficient than decoding one step at a time. This could be useful for, e.g., supervised training (teacher forcing) or things like experience replay. This code is a leftover from some early experiments in that direction, which I thought might still be useful to somebody.
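For context, a minimal sketch of how such a single-pass evaluation could look, assuming the decoder emits per-step log-probabilities for all steps at once (the function name `tour_log_likelihood` and the tensor shapes are illustrative, not the repo's actual API):

```python
import torch

def tour_log_likelihood(log_p: torch.Tensor, tour: torch.Tensor) -> torch.Tensor:
    """Score a fixed tour in one pass (teacher forcing).

    log_p: (batch, num_steps, graph_size) log-probabilities over the next
           node, produced by a single decoder forward with num_steps > 1.
    tour:  (batch, num_steps) node indices of the given tour.
    """
    # Pick the log-probability the model assigned to each visited node
    ll_per_step = log_p.gather(2, tour.unsqueeze(-1)).squeeze(-1)  # (batch, num_steps)
    return ll_per_step.sum(dim=1)  # total log-likelihood per instance
```

With per-step log-probabilities for the whole tour available at once, a supervised (teacher-forced) loss would then just be the negative of this sum.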

Wouter

wouterkool · Dec 04 '20 11:12