
Shape broadcast error

SapanaChaudhary opened this issue on May 12, 2023 · 0 comments

When running the HalfCheetah example (transformer_ppo_halfcheetah.py), I get the following error:

Traceback (most recent call last):
  File "./transformer_ppo_halfcheetah.py", line 277, in <module>
    transformer_ppo_halfcheetah()
  File "transformer-metarl/lib/python3.7/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "transformer-metarl/lib/python3.7/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "transformer-metarl/lib/python3.7/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "transformer-metarl/lib/python3.7/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "transformers-metarl/src/garage/experiment/experiment.py", line 369, in __call__
    result = self.function(ctxt, **kwargs)
  File "./transformer_ppo_halfcheetah.py", line 274, in transformer_ppo_halfcheetah
    meta_batch_size)
  File "transformers-metarl/src/garage/trainer.py", line 526, in train
    average_return = self._algo.train(self)
  File "transformers-metarl/src/garage/torch/algos/rl2.py", line 439, in train
    self._meta_evaluator.evaluate(self)
  File "transformers-metarl/src/garage/experiment/meta_evaluator.py", line 212, in evaluate
    deterministic=True
  File "transformers-metarl/src/garage/sampler/local_sampler.py", line 118, in obtain_samples
    batch = worker.rollout(deterministic, idx_offset=idx_offset)
  File "transformers-metarl/src/garage/torch/algos/rl2.py", line 166, in rollout
    while not self.step_episode(deterministic):
  File "transformers-metarl/src/garage/torch/algos/rl2.py", line 182, in step_episode
    a, agent_info, aug_obs, hidden_states = self.agent.get_action(self._prev_obs, deterministic) #augment_obs = obs + hidden_states
  File "transformers-metarl/src/garage/torch/algos/rl2.py", line 290, in get_action
    return self._policy.get_action(obs, deterministic)
  File "transformers-metarl/src/garage/torch/policies/gaussian_transformer_encoder_policy.py", line 379, in get_action
    actions, agent_infos, aug_obs, prev_hiddens = self.get_actions([observation])
  File "transformers-metarl/src/garage/torch/policies/gaussian_transformer_encoder_policy.py", line 412, in get_actions
    self._update_prev_observations(observations)
  File "transformers-metarl/src/garage/torch/policies/gaussian_transformer_encoder_policy.py", line 435, in _update_prev_observations
    self._prev_observations[:, self._working_memory_index(), :] = observations
ValueError: could not broadcast input array from shape (1,10) into shape (1,25)
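For reference, the ValueError itself is just NumPy refusing to assign a (1, 10) array into a (1, 25) slice. A minimal standalone snippet reproduces the same message; the widths 25 and 10 come from the traceback, every other size and name here is made up for illustration:

import numpy as np

# Made-up sizes; only the widths 25 and 10 match the traceback above.
horizon, buffer_obs_dim, env_obs_dim = 5, 25, 10

prev_observations = np.zeros((1, horizon, buffer_obs_dim))  # working-memory buffer
observation = np.zeros((1, env_obs_dim))                    # observation that actually arrives

# Raises: ValueError: could not broadcast input array from shape (1,10)
# into shape (1,25), the same failure as in _update_prev_observations.
prev_observations[:, 0, :] = observation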

The error comes from the following method:

def _update_prev_observations(self, observations):
    #import pdb; pdb.set_trace()
    if self._step < self._obs_horizon:  # fits in memory: just keep updating the right index
        self._prev_observations[:, self._working_memory_index(), :] = observations
    else:  # more observations than working memory length
        self._prev_observations = np.concatenate(
            (self._prev_observations[:, 1:, :], observations), axis=1)

    self._step += 1
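As far as I can tell, the method is meant to fill the working-memory buffer slot by slot for the first self._obs_horizon steps and then slide the window by dropping the oldest observation. A toy version with made-up sizes (obs_dim=10, horizon=3), only to illustrate that intended behaviour, not the actual class:

import numpy as np

obs_dim, horizon = 10, 3                    # made-up sizes for illustration
prev_obs = np.zeros((1, horizon, obs_dim))  # working-memory buffer

for step in range(5):
    obs = np.full((1, 1, obs_dim), float(step))
    if step < horizon:
        # Fits in memory: write into the slot for this step.
        prev_obs[:, step, :] = obs[:, 0, :]
    else:
        # Buffer full: drop the oldest observation, append the newest.
        prev_obs = np.concatenate((prev_obs[:, 1:, :], obs), axis=1)

print(prev_obs[0, :, 0])  # [2. 3. 4.]: only the last `horizon` steps remain

Both branches assume the incoming observation has the same width the buffer was allocated with, so my (unverified) guess is that self._prev_observations is sized for a 25-wide observation, perhaps the RL2-augmented one (observation plus action, reward and done flag), while the meta-evaluator passes a 10-wide raw observation. I am not sure which side needs to change.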

Could you please suggest an easy fix for this?

SapanaChaudhary · May 12 '23 22:05