conduit
Remaining stuff
How does this sound?
What's Left to Do
ASAP, since these involve waiting on other people
- If we want to use latedays or GHC machines to stream, ensure software is installed now.
- Find the location of the GHC machines with the best graphics cards, and test that I'm able to use them
- Test our entire project code so far, with the Oculus, on GHC machines; if software is missing, ask for it to be installed. Make sure we use the high-end graphics card
Reading/Research
- Foveated rendering
- Creating a client-server setup
- VR video streaming
Efficient local reference implementation
- Profile video decoding to find out how long it takes
- Try ffmpeg and see if it's faster
- Better way to stream video to OpenGL -- framebuffers? (a sketch using pixel buffer objects follows this list)
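To sketch the "better way to stream to OpenGL" idea: pixel buffer objects (PBOs) let the copy of the next decoded frame overlap with rendering of the current one instead of stalling on glTexSubImage2D. A rough, untested sketch, assuming RGBA frames at a fixed 1920x1080, GLEW for loading, and a GL context already created:

```cpp
// Sketch: double-buffered PBOs for asynchronous texture uploads.
// Frame size, format, and the caller that supplies `pixels` are assumptions.
#include <GL/glew.h>
#include <cstdint>
#include <cstring>

const int W = 1920, H = 1080;   // assumed frame size
GLuint tex, pbo[2];
int frame_index = 0;

void init_upload() {
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, W, H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; i++) {
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, nullptr,
                     GL_STREAM_DRAW);
    }
}

// Copy frame N into one PBO while the texture update reads from the other,
// so the memcpy and the GPU transfer happen on different frames.
void upload_frame(const uint8_t* pixels) {
    int cur = frame_index % 2, next = (frame_index + 1) % 2;
    glBindTexture(GL_TEXTURE_2D, tex);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[cur]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr); // source = bound PBO
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo[next]);
    void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        std::memcpy(dst, pixels, W * H * 4);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    frame_index++;
}
```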
Basic view-optimization
- Only stream the parts of the video near the current view direction
- Stream regions outside the fovea at lower bitrate or resolution (nearest-neighbor or bilinear downsampling; a sketch follows this list)
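For the lower-resolution-outside-the-fovea idea, here's a minimal CPU-side sketch, assuming an RGBA frame and a gaze point in pixel coordinates. The gaze point, radius, and block size are made-up placeholders; in practice they'd come from the headset orientation:

```cpp
// Sketch: nearest-neighbor downsampling outside a circular fovea region,
// applied in place to an RGBA frame buffer.
#include <cstdint>
#include <cmath>

void foveate(uint8_t* rgba, int w, int h,
             int gaze_x, int gaze_y,
             int fovea_radius, int block /* e.g. 4 */) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float dx = float(x - gaze_x), dy = float(y - gaze_y);
            if (std::sqrt(dx * dx + dy * dy) <= fovea_radius)
                continue;  // inside the fovea: keep full resolution
            // Outside: replicate the top-left pixel of each block x block
            // tile, which mimics a lower-resolution nearest-neighbor stream.
            int sx = (x / block) * block, sy = (y / block) * block;
            for (int c = 0; c < 4; c++)
                rgba[4 * (y * w + x) + c] = rgba[4 * (sy * w + sx) + c];
        }
    }
}
```

The same idea done properly would happen before encoding on the server, so the bitrate savings show up on the wire rather than just on screen.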
Server-client split
- Choose a target server (latedays or another cluster machine) and a target client. We may want them physically close on the same LAN for now, but we should also test over longer distances
- Write server and client code (a minimal sketch follows this list)
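A minimal sketch of the server side, assuming plain POSIX TCP and a naive [4-byte length][payload] framing. The port and framing are placeholders, not a settled protocol; the client would do the mirror-image read:

```cpp
// Sketch: TCP server that pushes length-prefixed frame packets to one client.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

// Listen on `port` and block until a single client connects.
int accept_client(uint16_t port) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int yes = 1;
    setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(port);
    bind(s, (sockaddr*)&addr, sizeof addr);
    listen(s, 1);
    return accept(s, nullptr, nullptr);
}

// Send one encoded frame as [4-byte big-endian length][payload].
bool send_frame(int fd, const std::vector<uint8_t>& frame) {
    uint32_t len = htonl((uint32_t)frame.size());
    if (write(fd, &len, 4) != 4) return false;
    size_t off = 0;
    while (off < frame.size()) {
        ssize_t n = write(fd, frame.data() + off, frame.size() - off);
        if (n <= 0) return false;
        off += (size_t)n;
    }
    return true;
}
```

TCP keeps things simple for a first pass; if head-of-line blocking turns out to hurt latency, UDP with our own packetization is the obvious next experiment.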
Advanced view-optimization
- Extrapolate missing pixels if head turns too fast (see time warp)
- Predict where the head will be by the time the client receives the packet (prediction should be available via the Oculus SDK; a sketch of the idea follows this list)
- Other stuff Carmack said
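The prediction item could start as simple dead reckoning: extrapolate the current orientation by the measured angular velocity over the expected network latency. The Oculus SDK exposes its own predicted poses, which are presumably better tuned; this sketch is just the idea in plain code, with made-up types:

```cpp
// Sketch: constant-velocity head prediction over one expected latency window.
struct Pose  { float yaw, pitch; };           // radians
struct Rates { float yaw_vel, pitch_vel; };   // radians/second

Pose predict(Pose now, Rates vel, float latency_s) {
    // Assume the head keeps turning at its current rate for latency_s
    // seconds (e.g. one measured network round trip).
    Pose p;
    p.yaw   = now.yaw   + vel.yaw_vel   * latency_s;
    p.pitch = now.pitch + vel.pitch_vel * latency_s;
    // Clamp pitch so we never predict past straight up/down.
    const float HALF_PI = 1.5707963f;
    if (p.pitch >  HALF_PI) p.pitch =  HALF_PI;
    if (p.pitch < -HALF_PI) p.pitch = -HALF_PI;
    return p;
}
```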
How to measure performance: using the waterfall target video
- Change in motion-to-photon latency versus streaming the entire video (a timing sketch follows this list)
- How fast can you turn your head and have the stream catch up? (More qualitative with prediction)
- How long does the "view-optimization" take? Can it be pipelined? How long would streaming normally take in terms of latency?
- How much bandwidth do you now need?
- Framerate
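For the motion-to-photon number, one crude approach is to timestamp the head-pose sample and the buffer swap that first shows a frame built from it. This only measures our software pipeline, not the display itself, but it's enough to compare against streaming the whole video. A sketch:

```cpp
// Sketch: crude motion-to-photon measurement for the software pipeline.
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

Clock::time_point pose_sampled;

void on_pose_sample() {
    pose_sampled = Clock::now();  // call when reading the head pose
}

void on_frame_presented() {
    // Call right after the swap that shows the frame built from that pose.
    auto dt = std::chrono::duration_cast<std::chrono::microseconds>(
                  Clock::now() - pose_sampled).count();
    std::printf("motion-to-photon (pipeline only): %.2f ms\n", dt / 1000.0);
}
```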
Following Kavon's suggestion, I think it'd also be good to create a testing harness that simulates the Oculus, programmatically follows a series of motions, and prints out some stats.
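A sketch of what that harness might look like: replay a scripted head path instead of real Oculus input, call into the pipeline, and print simple timing stats. Here render_with_pose is a hypothetical stub standing in for our pipeline entry point:

```cpp
// Sketch: headless test harness that replays a scripted head sweep.
#include <chrono>
#include <cstdio>
#include <vector>

struct Pose { float yaw, pitch; };

// Hypothetical hook: replace the body with a call into the real pipeline.
void render_with_pose(const Pose&) { /* decode + foveate + render */ }

int main() {
    // Scripted motion: sweep yaw from -1 to +1 radian over 300 frames.
    std::vector<Pose> path;
    for (int i = 0; i < 300; i++)
        path.push_back({ -1.0f + i * (2.0f / 300.0f), 0.0f });

    using Clock = std::chrono::steady_clock;
    double worst_ms = 0, total_ms = 0;
    for (const Pose& p : path) {
        auto t0 = Clock::now();
        render_with_pose(p);
        double ms = std::chrono::duration<double, std::milli>(
                        Clock::now() - t0).count();
        total_ms += ms;
        if (ms > worst_ms) worst_ms = ms;
    }
    std::printf("frames: %zu  avg: %.2f ms  worst: %.2f ms\n",
                path.size(), total_ms / path.size(), worst_ms);
    return 0;
}
```

The nice part is this runs without a headset, so it can live on latedays/GHC and double as a regression check while we tune the view-optimization.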