VRKitchen
Controlling the agent and recording data
Dear VRKitchen development group,
Greetings from the Vision for Robotics group at the University of Technology Vienna, Austria (https://www.acin.tuwien.ac.at/en/vision-for-robotics/)!
We came across your VRKitchen project and it looks very interesting to us. More specifically, we would like to use the "prepare dish" scenario with atomic actions to create traces of object positions and object-to-object contacts, which we would feed as input to our own plan recognition algorithm. Your environment seems well suited to this task and would save us a lot of time.
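For reference, the kind of trace we have in mind could be sketched roughly as below. This is a minimal, hypothetical format of our own, not VRKitchen's actual logging API; `record_frame` and all field names are our assumptions:

```python
import json

def record_frame(trace, t, poses, contacts):
    """Append one frame to the trace (hypothetical format, not VRKitchen API).

    poses:    mapping object name -> (x, y, z) position
    contacts: iterable of (object_a, object_b) contact pairs
    """
    trace.append({
        "t": t,
        # store positions as lists so the trace is JSON-serializable
        "poses": {name: list(p) for name, p in poses.items()},
        # sort each pair so contacts compare equal regardless of order
        "contacts": [sorted(pair) for pair in contacts],
    })

trace = []
record_frame(trace, 0.0, {"knife": (0.1, 0.9, 0.3)}, [("knife", "cutting_board")])
record_frame(trace, 0.1, {"knife": (0.1, 0.8, 0.3)}, [])
print(json.dumps(trace))
```

A per-frame record like this would be enough for our plan recognizer, which only needs object poses and contact events over time.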
We would therefore kindly ask you to clarify how to use your system. We tried to run it, with some success:
- The simulation starts and a window appears on screen, but there is no HTC Vive support.
- If we do not comment out the call that simulates the steps of a plan, the agent moves randomly and an error message reports that data is missing.
- We could not find the code that would let us act in the simulation ourselves through VR controller input.
We would be very grateful for some clarification and further insight into your project, so that we can use it to create test data for our plan recognition work.
Best regards,
Michael Koller
I'm also interested in similar functionality. @michaelkoller have you heard back from the authors about this?
Hello! Yes, but they are currently working on different things. Also, they have suggested that the proper dataset will only be available in a few months. It will apparently be some time before we can use it in the way I asked.
Best regards,
Michael