Fei Xia


It seems that you have multiple versions of CUDA installed in `/usr/local`; you can expose the version reported by `nvidia-smi` by symlinking `cuda` to it. For example: ![image](https://user-images.githubusercontent.com/5158896/127429832-cc9ff20e-9e13-4e3e-95f0-f505436c94b6.png) ![image](https://user-images.githubusercontent.com/5158896/127429804-825f49d7-8930-4a62-8b42-882f05176d52.png)
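A minimal sketch of that symlink, assuming the toolkit matching `nvidia-smi` is CUDA 11.1 (substitute your installed version):

```
# list the installed toolkits and see where `cuda` currently points
ls -l /usr/local | grep cuda

# re-point /usr/local/cuda at the toolkit matching nvidia-smi (11.1 is an assumption)
sudo ln -sfn /usr/local/cuda-11.1 /usr/local/cuda
```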

@rainprob did you build the code again? You would need to run `./build.sh build_local` in the GibsonEnv folder.
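Spelled out, assuming you start from the directory containing the repository:

```
# rebuild GibsonEnv's local binaries after pulling new code
cd GibsonEnv
./build.sh build_local
```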

@susu3621 It is implemented in a newer version of Gibson that is not released yet. Stay tuned :)

@botforge you can find it here: https://github.com/StanfordVL/GibsonEnvV2 The code that supports scan observations is in these lines: https://github.com/fxia22/gibsonv2/blob/master/gibson2/envs/locomotor_env.py#L266-L302

@botforge yes, that's what I mean.

Yes, this is expected. The distinction between the flagrun experiment and the navigate experiment is that in the flagrun experiment the target is given, while in the navigate experiment the target is learned from...

We are observing similar issues with the latest version. I suspect this is due to a change in the robot coordinate system in a recent update. Sorry for the inconvenience, we...

We have fixed this issue in the master branch; it turned out to be a data loading issue. Now you can run the example with the following commands:

```
mkdir gibson/utils/models
python examples/train/train_husky_navigate_ppo2.py...
```

GibsonEnv for manipulation is being actively developed and will be released soon.

@jhpenger One example of using the Gibson goggle can be found in https://github.com/StanfordVL/GibsonEnv/blob/master/examples/ros/gibson-ros/goggle.py. It requires a Kinect camera and ROS, as the goggle accepts RGBD images and adapts them to Gibson...
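A minimal usage sketch; the Kinect driver and exact invocation below are assumptions, not from the original comment:

```
# publish the Kinect's RGB-D topics with a common driver (assumption: openni_launch)
roslaunch openni_launch openni.launch

# run the goggle script from the GibsonEnv repo; it consumes the RGB-D stream
cd examples/ros/gibson-ros
python goggle.py
```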