
ValueError: Could not find trained model in model_dir: ./output/sa_nn1.

Open mamengyiyi opened this issue 5 years ago • 6 comments

Hi, thanks for your amazing work. When I tried to run main.py, it raised this error. How can I resolve it? Below is the traceback:

    Traceback (most recent call last):
      File "main_with_actor_in_it.py", line 162, in <module>
        main()
      File "main_with_actor_in_it.py", line 97, in main
        actions = env.act(state)
      File "/home/my/.conda/envs/tensorflow1.8/lib/python3.6/site-packages/pommerman/envs/v0.py", line 137, in act
        return self.model.act(agents, obs, self.action_space)
      File "/home/my/.conda/envs/tensorflow1.8/lib/python3.6/site-packages/pommerman/forward_model.py", line 122, in act
        ret.append(act_ex_communication(agent))
      File "/home/my/.conda/envs/tensorflow1.8/lib/python3.6/site-packages/pommerman/forward_model.py", line 101, in act_ex_communication
        return agent.act(obs[agent.agent_id], action_space=action_space)
      File "/home/my/playground/magnet/models/ddpg_agent.py", line 98, in act
        input_to_ddpg = self.__input_to_ddpg__(prev_state, curr_state)
      File "/home/my/playground/magnet/models/ddpg_agent.py", line 83, in __input_to_ddpg__
        graph_predictions = np.asmatrix(list(itertools.islice(y_generator, prev_state.shape[0]))[0]['graph'])
      File "/home/my/.conda/envs/tensorflow1.8/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 488, in predict
        self._model_dir))
    ValueError: Could not find trained model in model_dir: ./output/sa_nn1.

mamengyiyi avatar Mar 28 '19 06:03 mamengyiyi

Thank you for your interest in this repo. The current code does not produce the reported error; only deprecation warnings appear. Please check the updated code and try running it via python main.py.
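
If the error still appears after updating, it usually means tf.estimator.Estimator.predict could not find a checkpoint in model_dir, i.e. nothing has been saved under ./output/sa_nn1 yet. A minimal way to check, as a sketch assuming TF 1.x and the directory from the traceback:

    import os
    import tensorflow as tf

    model_dir = "./output/sa_nn1"  # path from the traceback

    # Estimator.predict restores the latest checkpoint in model_dir;
    # if this returns None, training has not written any checkpoint yet.
    print("latest checkpoint:", tf.train.latest_checkpoint(model_dir))
    print("dir contents:", os.listdir(model_dir) if os.path.isdir(model_dir) else "missing")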

tegg89 avatar Mar 28 '19 08:03 tegg89


Thank you. It runs well until it tries to restore parameters from ./output/sa_nn1/model.ckpt-3, where it reports an error (screenshot omitted). It went well when restoring parameters from ./output/sa_nn1/model.ckpt-2 and model.ckpt-1 (screenshot omitted). Why did this happen?
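
For reference, when one checkpoint restores fine but a newer one does not, the newer files are sometimes incomplete (for example, the run was interrupted while saving). A minimal sketch for inspecting what is actually on disk, assuming TF 1.x and the same output directory:

    import tensorflow as tf

    model_dir = "./output/sa_nn1"

    # The checkpoint state file lists the checkpoints TF believes exist.
    state = tf.train.get_checkpoint_state(model_dir)
    if state is not None:
        print(state.all_model_checkpoint_paths)

    # Listing the variables stored in a specific checkpoint fails loudly
    # if its index/data files are truncated or missing.
    for name, shape in tf.train.list_variables(model_dir + "/model.ckpt-3"):
        print(name, shape)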

mamengyiyi avatar Mar 28 '19 09:03 mamengyiyi

@mamengyiyi I also encountered this issue. What I did was modify the act method of ddpg_agent.py as below:

    def act(self, obs, action_space):
        action = action_space.sample()

        self.prev_state = self.curr_state
        if self.pr_action is not None:
            self.curr_state = state_to_matrix_with_action(obs, action=self.pr_action)

        if self.prev_state is not None:
            curr_state = self.curr_state
            prev_state = self.prev_state

            # The original call below is what raised the ValueError; it is
            # superseded by the inlined prediction that follows, so comment it out.
            # input_to_ddpg = self.__input_to_ddpg__(prev_state, curr_state)

            curr_state_matrix = self.curr_state
            prev_state_matrix = self.prev_state

            # modify start
            pred_input_NN1 = tf.estimator.inputs.numpy_input_fn(
                x={"state1": prev_state_matrix.astype("float32"),
                   "state2": curr_state_matrix.astype("float32"),
                   "y": np.asmatrix(self.graph.flatten())},
                y=np.asmatrix(self.graph.flatten()),
                batch_size=1,
                num_epochs=None,
                shuffle=False)
            # modify end

            # Predict with the estimator
            y_generator = self.estimator_nn1.predict(input_fn=pred_input_NN1)

            graph_predictions = np.asmatrix(list(itertools.islice(y_generator, prev_state_matrix.shape[0]))[0])
            input_to_ddpg = np.concatenate([self.curr_state, graph_predictions], axis=1)
            print(input_to_ddpg.shape)
            # action = self.actor.predict(np.expand_dims(input_to_ddpg, 0))[0, 0]

        self.pr_action = action

        return action
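
A side note beyond the change above (not part of the original fix): Estimator.predict also accepts an explicit checkpoint_path argument, so if only the newest checkpoint in model_dir is problematic, a known-good one can be pinned instead. A sketch under that assumption:

    # Hypothetical variant: restore a specific, known-good checkpoint
    # instead of the latest one found in model_dir.
    y_generator = self.estimator_nn1.predict(
        input_fn=pred_input_NN1,
        checkpoint_path="./output/sa_nn1/model.ckpt-2")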

XiongrenChen avatar May 30 '19 04:05 XiongrenChen


Thank you, it works!

GELIELEO avatar Jul 27 '19 02:07 GELIELEO


Excuse me, were you able to run the project successfully with that fix? I still have the problem described above. I've sent you an e-mail; would you be willing to discuss it with me, please?

ZhaoMingYang-tju avatar Dec 24 '19 02:12 ZhaoMingYang-tju


Hi, were you able to solve it? I have the same question.

Amanda2024 avatar Mar 10 '21 15:03 Amanda2024