Practical-RIFE
Issue with new models
I'm not sure if this is the correct place to raise this issue, and tbh I'm pretty sure a lot of people have raised it already. But why are the results of the new models still consistently worse than 2.3? It seems that quality peaked around that point, and 3.1, 3.8, and 3.9 all performed worse than 2.3 quality-wise, although they are much better speed-wise. Is it an issue with training or a change in the code?
The v2.3 model is unexpectedly good. v4.0 balances quality and performance; according to the latest large-scale evaluation report, v4.0 should be a suitable choice.
Version 4 really does have better quality and good performance, but it still falls short of 2.3 as the reference.
May I also ask about the direction of future development? I saw you added a depth map in the RIFE repo; does this mean RIFE will soon have an awareness of depth as an attempt to increase quality? And is it true that you will be implementing multi-frame input to improve quality as well?
I am developing a new algorithm that can be used as a supplement to any video frame synthesis algorithm. For example, it can be used to repair bad frames generated by RIFE. It is expected to be released in three months.
That sounds very interesting! Looking forward to it. Please do update us with more details if you can in the future, and happy holidays!
Where can I get the v4 and v2.3 models?
@Q8sh2ing https://github.com/hzwer/Practical-RIFE#usage and https://github.com/megvii-research/ECCV2022-RIFE/issues/41
Thanks, I can't believe I missed that...
@hzwer Side note: is training for RIFE still ongoing? And are the plans you outlined previously, like taking multi-frame input, still being worked on?
Hi, I looked around a little; is there an all-in-one place where all RIFE models are located? Thanks in advance!
@Q8sh2ing https://drive.google.com/drive/folders/1lPdn7VqT-8dMG5YfXxz9zIGuBBBJKIcg?usp=sharing
https://drive.google.com/drive/folders/11u79nnOagqzo5cqza9myh--TCyg6OLBn
These are the inputs and outputs from testing different models; do y'all get the same results? The v4 models seem to produce a lot of distortion. I mean, the results are pretty accurate, but the distortions are just out of place.
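For anyone reproducing this kind of side-by-side comparison, here is a minimal sketch of how one might queue the same clip through several model versions. It assumes the Practical-RIFE layout described in the README, where `inference_video.py` loads whichever model's files (`*.py`, `flownet.pkl`) sit in `train_log/`; the `models/` directory names here are my own convention, not part of the repo.

```python
# Sketch: build the shell steps to run the same clip through several RIFE
# model versions so the outputs can be compared side by side.
def build_commands(versions, video="input.mp4"):
    """Return the shell steps needed to test each model version in turn."""
    steps = []
    for v in versions:
        # Swap the desired version's files into train_log/ before running.
        steps.append(["cp", "-r", f"models/{v}", "train_log"])
        steps.append(["python3", "inference_video.py", "--multi=2", f"--video={video}"])
    return steps

for step in build_commands(["v2.3", "v4.0", "v4.6"]):
    print(" ".join(step))
```

The dry-run print makes it easy to sanity-check the sequence before actually executing it (e.g. via `subprocess.run`); remember to clear or rename each run's output so versions don't overwrite each other.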
v4.6 can produce sharp and accurate results, but it's not usable for some kinds of footage, so you may want to try older models. For panning shots, v4.1 often works better. For footage with patterns (nets/grass/fences/etc.), v4.0 has better 2D pattern resilience than any other v4.x model, including the latest v4.6.
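These rules of thumb can be encoded as a tiny helper for batch pipelines. This is purely a hypothetical sketch of the advice above; the footage tags are my own labels, not anything RIFE detects itself.

```python
# Hypothetical model chooser encoding the rules of thumb above:
# v4.0 for 2D patterns, v4.1 for panning shots, otherwise the latest v4.6.
def pick_model(footage_tags):
    """Pick a RIFE model version given a set of manually assigned tags."""
    if "pattern" in footage_tags:   # nets, grass, fences, etc.
        return "v4.0"
    if "panning" in footage_tags:
        return "v4.1"
    return "v4.6"

print(pick_model({"pattern"}))   # pattern resilience wins
print(pick_model({"panning"}))
print(pick_model(set()))
```

Pattern footage is checked first since, per the comment above, v4.0's pattern resilience matters more than v4.1's panning behavior when both apply.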
I hope there is more focus on this case in future models, as these artifacts are very noticeable and distracting, and most live-action content will have at least one scene that triggers the issue.