
Vectorization of environments using CPU on Windows

j-thapa opened this issue on Sep 26, 2024

Hello,

I am trying to train RL models using vectorized environments on the CPU on Windows. Is this feasible? When I try, I get `RuntimeError: Some environments have an observation space different`.

I am using a custom environment, but I get the same error even with an existing environment like PushCube.
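To narrow this down, a minimal diagnostic sketch like the one below could compare the observation spaces of two independently constructed environments in the same process (this is only a sketch: it assumes the same kwargs as in my snippet further down, import paths taken from the ManiSkill docs linked at the end, and a Dict observation space for `obs_mode="rgbd"`):

```python
import gymnasium as gym
import mani_skill.envs  # registers the ManiSkill environments
from mani_skill.utils.wrappers.gymnasium import CPUGymWrapper

kwargs = dict(obs_mode="rgbd", reward_mode="normalized_dense",
              control_mode="pd_ee_delta_pos", robot_uids="panda")

# Build two envs the same way each vectorized worker would, then diff their
# spaces; gymnasium's AsyncVectorEnv raises the RuntimeError above when the
# spaces reported by its workers are not all equal.
envs = [CPUGymWrapper(gym.make("PickCube-v1", **kwargs)) for _ in range(2)]
a, b = (e.observation_space for e in envs)
if a != b:
    # Assumption: the rgbd observation space is a Dict space
    for key in a.spaces:
        if a[key] != b[key]:
            print("mismatch at", key, a[key], b[key])
else:
    print("observation spaces match in-process")
for e in envs:
    e.close()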

For reference, here is the section of code where I create the vectorized environments:

```python
import gymnasium as gym
import mani_skill.envs  # registers the ManiSkill environments
from mani_skill.utils.wrappers.gymnasium import CPUGymWrapper

# Define the environment
env_id = "PickCube-v1"              # Your chosen environment
obs_mode = "rgbd"                   # Observation mode
control_mode = "pd_ee_delta_pos"    # Control mode
reward_mode = "normalized_dense"    # Reward mode
robot_uids = "panda"                # Robot type

def cpu_make_env(env_id, env_kwargs=dict()):
    def thunk():
        # Note: env_kwargs is currently unused; the module-level settings above are used instead
        env = gym.make(env_id, obs_mode=obs_mode, reward_mode=reward_mode,
                       control_mode=control_mode, robot_uids=robot_uids)
        env = CPUGymWrapper(env)
        return env
    return thunk

if __name__ == "__main__":
    import multiprocessing as mp
    mp.freeze_support()  # Necessary for Windows

    num_eval_envs = 8
    env_kwargs = dict(obs_mode="rgbd")  # Modify your env_kwargs here

    vector_cls = (
        gym.vector.SyncVectorEnv
        if num_eval_envs == 1
        else lambda x: gym.vector.AsyncVectorEnv(x, context="spawn")
    )

    eval_envs = vector_cls([cpu_make_env(env_id, env_kwargs) for _ in range(num_eval_envs)])

    # Evaluation loop
    obs, _ = eval_envs.reset(seed=0)

    for _ in range(400):
        action = eval_envs.action_space.sample()  # Replace with your policy action
        obs, rew, terminated, truncated, info = eval_envs.step(action)
        print(rew)
```
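As a related sanity check (a sketch reusing the definitions from the snippet above, not something from the docs), the same factory functions could be run through `SyncVectorEnv` in a single process; if that works while the spawn-based `AsyncVectorEnv` fails, the mismatch is likely introduced when the worker processes rebuild the environments on Windows:

```python
# Sanity-check sketch: the same factories, run synchronously in one process.
# If this passes but AsyncVectorEnv above raises the RuntimeError, the space
# mismatch likely comes from per-process environment creation under spawn.
sync_envs = gym.vector.SyncVectorEnv(
    [cpu_make_env(env_id, env_kwargs) for _ in range(num_eval_envs)]
)
obs, _ = sync_envs.reset(seed=0)
sync_envs.close()
```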

I followed the example at https://maniskill.readthedocs.io/en/latest/user_guide/reinforcement_learning/setup.html#evaluation and made some changes as required.

Thanks
