
Issue while executing the example notebook

e-dinesh opened this issue on Oct 23, 2022 · 3 comments

After executing the third cell, the following error pops up: "TypeError: run() takes 5 positional arguments but 8 were given". Any leads, please?


Runner defined DONE!

Episodes: 0%| | 0/1000 [00:00, reward=0.00, ts/ep=0, sec/ep=0.00, ms/ts=0.0, agent=0.0%]

/home/dinesh/Documents/DRLinFluids/examples/active_flow_control/env01/0.005/U /home/dinesh/Documents/DRLinFluids/examples/active_flow_control/env01/0.005/U /home/dinesh/Documents/DRLinFluids/examples/active_flow_control/env01/0.005/U


TypeError                                 Traceback (most recent call last)
Cell In [3], line 11
      8 print('Runner defined DONE!')
     10 # runner.run(episodes=500, max_episode_timesteps=80)
---> 11 runner.run(
     12     num_episodes=num_episodes,
     13     save_best_agent ='best_model'
     14 )
     15 runner.close()
     17 for environment in environments:

File ~/local/anaconda3/envs/drlinfluids/lib/python3.8/site-packages/tensorforce/execution/runner.py:516, in Runner.run(self, num_episodes, num_timesteps, num_updates, batch_agent_calls, sync_timesteps, sync_episodes, num_sleep_secs, callback, callback_episode_frequency, callback_timestep_frequency, use_tqdm, mean_horizon, evaluation, save_best_agent, evaluation_callback)
    512     break
    514 else:
    515     # Check whether environment is ready, otherwise continue
--> 516     observation = self.environments[n].receive_execute()
    517     if observation is None:
    518         self.terminals[n] = self.prev_terminals[n]

File ~/local/anaconda3/envs/drlinfluids/lib/python3.8/site-packages/tensorforce/environments/environment.py:328, in Environment.receive_execute(self)
    326 self._expect_receive = None
    327 assert self._actions is not None
--> 328 states, terminal, reward = self.execute(actions=self._actions)
    329 self._actions = None
    330 return states, int(terminal), reward

File ~/local/anaconda3/envs/drlinfluids/lib/python3.8/site-packages/tensorforce/environments/environment.py:380, in EnvironmentWrapper.execute(self, actions)
    376 raise TensorforceError(
    377     message="An environment episode has to be initialized by calling reset() first."
    378 )
    379 assert self._max_episode_timesteps is None or self._timestep < self._max_episode_timesteps
--> 380 states, terminal, reward = self._environment.execute(actions=actions)
    381 if isinstance(states, dict):
    382     states = states.copy()

Cell In [1], line 99, in FlowAroundCylinder2D.execute(self, actions)
     96 self.foam_params['verbose'] = True
     98 simulation_start_time = time()
---> 99 drlinfluids.runner.run(
    100     self.foam_root_path,
    101     self.foam_params, self.agent_params['interaction_period'], self.agent_params['purgeWrite_numbers'], self.agent_params['writeInterval'],
    102     self.agent_params['deltaT'],
    103     start_time_float, end_time_float
    104 )
    105 simulation_end_time = time()
    107 self.probe_velocity = utils.read_foam_file(
    108     self.foam_root_path + f'/postProcessing/probes/{self.start_time_filename}/U',
    109     dimension=self.foam_params['num_dimension']
    110 )

TypeError: run() takes 5 positional arguments but 8 were given
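
For reference, a minimal way to see why the argument counts disagree (not part of the example notebook) is to print the signature of the installed drlinfluids.runner.run and compare it with the eight positional arguments passed from the environment's execute() above:

import inspect

import drlinfluids.runner

# Show how many parameters the installed run() actually accepts;
# the call in Cell [1], line 99 passes 8 positional arguments to it.
print(inspect.signature(drlinfluids.runner.run))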

e-dinesh · Oct 23, 2022

Hi @e-dinesh, we will run a quick test on your issue and give you feedback soon.

venturi123 · Oct 24, 2022

Hi @e-dinesh! Which Tensorforce version are you using? I suggest Tensorforce==0.6.0 :)
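
In case it helps, a quick way to check what is installed in the active environment (a minimal sketch using only the Python standard library):

# Report the Tensorforce version installed in the current conda environment
from importlib.metadata import version

print(version("tensorforce"))  # if this is not 0.6.0, reinstall with: pip install tensorforce==0.6.0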

1900360 · Oct 24, 2022

Maybe this is a good indication that providing a conda environment file would be useful? See "Creating an environment.yml file" at https://carpentries-incubator.github.io/introduction-to-conda-for-data-scientists/04-sharing-environments/index.html. Otherwise, it may be better to run the notebook in a container (Docker or Singularity) so that all versions are exactly the same as expected. :)
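
For illustration, such a file could look roughly like the sketch below; only python=3.8 (from the traceback paths) and tensorforce==0.6.0 (suggested above) are taken from this thread, while the name, channel, and any remaining dependencies are placeholders the maintainers would need to fill in:

# Hypothetical environment.yml sketch, not shipped with DRLinFluids
name: drlinfluids
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - tensorforce==0.6.0

It could then be recreated on another machine with conda env create -f environment.yml.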

jerabaul29 · Oct 24, 2022