Jimmy Wei
Using the "##" assumption for alignment breaks down when you run it on a language like Chinese (which is what I tried to do), which uses character-level tokenization. Interestingly...
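For context, the "##" heuristic merges WordPiece continuation pieces back onto the previous token. A minimal sketch (the helper name is hypothetical, not from the codebase) shows why character-tokenized Chinese defeats it: no token ever carries the "##" marker, so nothing merges:

```python
def merge_wordpieces(tokens):
    """Reassemble words from WordPiece tokens using the '##' prefix heuristic."""
    words = []
    for tok in tokens:
        if tok.startswith("##") and words:
            # continuation piece: glue it onto the previous word
            words[-1] += tok[2:]
        else:
            words.append(tok)
    return words

# English subwords carry '##' markers, so words are recovered
print(merge_wordpieces(["un", "##believ", "##able"]))  # -> ['unbelievable']

# Chinese is tokenized one character per token with no '##' markers,
# so every character stays its own "word" and alignment falls apart
print(merge_wordpieces(["我", "喜", "欢", "猫"]))  # -> ['我', '喜', '欢', '猫']
```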
Hi, my internship ended yesterday; you can redirect further questions to Mojtaba or Kurt, thanks! The commit just updated the test fixtures by running `pytest --force-regen test.py`, so it should fix...
> Looks great. You added the minimal amount of code needed and refactored well. This is a good example of building on top of high-level code. Just make sure that...
Another question I had after reading through the code again is whether we want to modify how we handle observations in [TorchAgent](https://github.com/facebookresearch/ParlAI/blob/dff9aabb5024c30c81e146cebffbc88bc6431b61/parlai/core/torch_agent.py#L417). Currently, [act](https://github.com/facebookresearch/ParlAI/blob/dff9aabb5024c30c81e146cebffbc88bc6431b61/parlai/core/torch_agent.py#L2142) calls [batch_act](https://github.com/facebookresearch/ParlAI/blob/dff9aabb5024c30c81e146cebffbc88bc6431b61/parlai/core/torch_agent.py#L2152) with the observation...
> Another question I had after reading through the code again is whether we want to modify how we handle observations in [TorchAgent](https://github.com/facebookresearch/ParlAI/blob/dff9aabb5024c30c81e146cebffbc88bc6431b61/parlai/core/torch_agent.py#L417). > > Currently, [act](https://github.com/facebookresearch/ParlAI/blob/dff9aabb5024c30c81e146cebffbc88bc6431b61/parlai/core/torch_agent.py#L2142) calls [batch_act](https://github.com/facebookresearch/ParlAI/blob/dff9aabb5024c30c81e146cebffbc88bc6431b61/parlai/core/torch_agent.py#L2152)...
From the failing tests it looks like adding `silence_token` to the dictionary will break models trained without `silence_token` when they restore from checkpoints, due to a mismatch in embedding size. We...
> From the failing tests it looks like adding `silence_token` to the dictionary will break models trained without `silence_token` when they restore from checkpoints due to mismatch in embedding size....
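A quick sketch of why the restore fails (the numbers are made up for illustration): the checkpoint's embedding matrix has one row per token of the *old* dictionary, so once `silence_token` grows the vocabulary by one, the freshly built model's embedding no longer matches the saved tensor and a strict state-dict load errors out on the shape:

```python
embed_dim = 64

# embedding saved in the checkpoint: one row per token of the old dictionary
old_vocab_size = 1000
checkpoint_embedding_shape = (old_vocab_size, embed_dim)

# after adding silence_token, the rebuilt model allocates one extra row
new_vocab_size = old_vocab_size + 1
model_embedding_shape = (new_vocab_size, embed_dim)

# strict checkpoint loading compares tensor shapes and refuses the mismatch
print(checkpoint_embedding_shape == model_embedding_shape)  # -> False
```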
> Looks good thank you. I had a few small comments on the code. Also an extra major comment about the tests: it looks like a lot of your tests...
Another bug I found is in the `batch_observe` function:

```python
if batch_actions[i] is None:
    # shouldn't send None, should send empty observations
    batch_actions[i] = [{}] * len(self.worlds)
```

Later in...
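As an aside, the `[{}] * n` pattern in that snippet has a well-known Python pitfall worth keeping in mind: it repeats a reference to one and the same dict rather than creating `n` independent dicts, so mutating one slot mutates all of them:

```python
# `[{}] * 3` makes three references to a SINGLE dict object
batch_actions = [{}] * 3
batch_actions[0]["text"] = "hello"
print(batch_actions)  # all three slots now show {'text': 'hello'}

# independent empty observations require a comprehension instead
batch_actions = [{} for _ in range(3)]
batch_actions[0]["text"] = "hello"
print(batch_actions[1])  # -> {} (other slots are untouched)
```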
> @chiehminwei do we still need changes or your work on TGA is going to cover it? This is just a "bug" that I spotted when reading through the codebase,...