Add motion blur with frame stacking to demo recorder

Open sjrc6 opened this issue 1 year ago • 4 comments

Since videos above 60 fps are not really supported by YouTube or most other online video streaming platforms, it would be very convenient for the client to apply "frame rate downsampling" or "motion blur" or "frame stacking", whatever you want to call it. Basically you render at 60*n fps and average every n frames into 1 output frame; the more frames you average, the better it looks. The output video file should be 60 fps.

It's possible to implement with a simple shader and 2 frame buffers: just add each frame to the accumulation buffer, then divide by the number of frames every n frames. It might also be possible with ffmpeg magic trickery, but I'm not familiar with ffmpeg, so I'm not sure whether that's reasonable.
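For illustration, a minimal CPU-side sketch of the accumulate-and-divide idea (hypothetical code, not from the DDNet client; in practice this would live in a shader writing into an accumulation framebuffer):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical frame stacker: accumulate each rendered RGB frame into a
// float buffer; every n frames, divide by n and emit one averaged frame.
class CFrameStacker
{
public:
	CFrameStacker(size_t PixelCount, int FramesPerOutput) :
		m_Accum(PixelCount * 3, 0.0f), m_FramesPerOutput(FramesPerOutput) {}

	// Feed one rendered RGB frame (PixelCount * 3 bytes). Returns true when
	// pOut has been filled with a finished averaged frame for the encoder.
	bool AddFrame(const uint8_t *pRgb, uint8_t *pOut)
	{
		for(size_t i = 0; i < m_Accum.size(); i++)
			m_Accum[i] += pRgb[i];
		if(++m_FrameCount < m_FramesPerOutput)
			return false;
		for(size_t i = 0; i < m_Accum.size(); i++)
		{
			// Divide by n and round to the nearest 8-bit value
			pOut[i] = (uint8_t)(m_Accum[i] / m_FramesPerOutput + 0.5f);
			m_Accum[i] = 0.0f;
		}
		m_FrameCount = 0;
		return true;
	}

private:
	std::vector<float> m_Accum;
	int m_FramesPerOutput;
	int m_FrameCount = 0;
};
```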

sjrc6 avatar May 04 '24 05:05 sjrc6

I wonder if the ddnet client, with its knowledge of what a tee is and what its velocity is, could do something external ffmpeg can't.

If, in the end, it makes the code more complex and gives no better result than using ffmpeg externally, I don't think it is worth it.

I personally usually record at max fps, even with 0.5x or slower playback speeds, and then use video editing software to merge the frames. This worked very smoothly with Sony Vegas. I have yet to find an ffmpeg command that matches its quality, but I have some ffmpeg command flying around (I would have to search for it).
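One untested candidate (an assumption on my part, not a verified recipe) is ffmpeg's tmix filter, which averages a sliding window of frames, combined with framestep to keep every n-th result:

```
# Untested candidate, assuming the demo was rendered at 480 fps (8 * 60):
# tmix averages a window of 8 frames, framestep keeps every 8th frame.
ffmpeg -i input_480fps.mkv -vf "tmix=frames=8,framestep=8" -r 60 output_60fps.mp4
```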

I assume the best result could be achieved with the projects @Patiga is working on: exporting demos to Blender projects and then letting Blender do the motion blur.

ChillerDragon avatar May 04 '24 07:05 ChillerDragon

Since videos above 60 fps are not really supported by YouTube or most other online video streaming platforms, it would be very convenient for the client to apply "frame rate downsampling" or "motion blur" or "frame stacking", whatever you want to call it.

Is that actually desirable? If you pause the video, it always looks quite buggy.

heinrich5991 avatar May 04 '24 07:05 heinrich5991

Is that actually desirable? If you pause the video, it always looks quite buggy.

Yes, it is desirable. If moving parts are blurred nicely, it looks noticeably better.

ChillerDragon avatar May 04 '24 07:05 ChillerDragon

If, in the end, it makes the code more complex and gives no better result than using ffmpeg externally, I don't think it is worth it.

The benefit is that you avoid processing very large intermediate video files. The client only needs to store one accumulated frame at a time, so you don't need to produce a 1 GB file and then process it down to 50 MB. (For example, a 1920x1080 RGB float accumulation buffer is only about 24 MB, no matter how many frames are averaged.)

I wonder if the ddnet client, with its knowledge of what a tee is and what its velocity is, could do something external ffmpeg can't.

I think it's better not to bother: 1200 fps or more (20 samples per 60 fps output frame) already gives nearly perfect motion blur, and the client has no trouble rendering that fast, so analytic motion blur isn't really necessary. Doing something like drawing each tee 50 times into the frame buffer, instead of rendering the whole frame 50 times, is not that big of a performance gain for TW.

I assume the best result could be achieved with the projects @Patiga is working on: exporting demos to Blender projects and then letting Blender do the motion blur.

You only need a simple average to achieve perfect motion blur. Some advanced video plugins try to use frame interpolation techniques to synthesize motion blur from lower-fps videos, but TW can render so fast that this isn't relevant.

sjrc6 avatar May 04 '24 07:05 sjrc6