Compatibility with RTX 5090?
Hi Dan, congratulations on the coloring product you made. I would really like to try it out, but may I ask a few questions first:
- Is it compatible, in combination with Hybrid, with an NVIDIA RTX 5090 or an AMD 9070 XT?
- Apart from Hybrid_dev_2025.09.18-153823 and VapoursynthR72_torch_2025.09.17, do I need to install anything else? It is not very clear from the manual you posted whether Python needs to be installed or not. At the moment nothing works for me with just these two files installed. If you can help me, I would be very grateful. Regards, Alex
The encoding problem was solved after installing to another partition on the hard drive; apparently installing in Program Files is a problem for now. Once again, congratulations on the wonderful development. Regards, Alex
On page 6 I wrote: It is suggested to install Hybrid in a writable path, like “C:\Hybrid” or “D:\Programs\Hybrid”. In the User Guide I didn't explain the reason, but the main reason is that some coloring models download their neural networks when needed, or when they have been updated with respect to the version released with Hybrid. So Hybrid must be able to write to the installation directory, and since running Hybrid as administrator was disabled by Selur for security reasons, it is not possible to install Hybrid in "C:\Program Files".
I updated the User Guide to explain the Hybrid installation better. Thanks for reporting this issue.
Yes, it's my fault for not reading this information carefully. At the risk of being rude, may I ask one more thing: I made all the necessary settings according to the instructions and I get the following error:
# Imports
import vapoursynth as vs
# getting Vapoursynth core
import logging
import sys
import os
core = vs.core
# Import scripts folder
scriptPath = 'E:/Hybrid/64bit/vsscripts'
sys.path.insert(0, os.path.abspath(scriptPath))
# Force logging to std:err
logging.StreamHandler(sys.stderr)
# loading plugins
core.std.LoadPlugin(path="E:/Hybrid/64bit/vsfilters/ColorFilter/TimeCube/vscube.dll")
core.std.LoadPlugin(path="E:/Hybrid/64bit/vsfilters/Support/akarin.dll")
core.std.LoadPlugin(path="E:/Hybrid/64bit/vsfilters/MiscFilter/MiscFilters/MiscFilters.dll")
core.std.LoadPlugin(path="E:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/LSMASHSource.dll")
# Import scripts
import color
import vsdeoldify as havc
import validate
# Source: 'E:\11111111.mp4'
# Current color space: YUV420P8, bit depth: 8, resolution: 1440x1080, frame rate: 24fps, scanorder: progressive, yuv luminance scale: limited, matrix: 709, format: AVC
# Loading E:\11111111.mp4 using LibavSMASHSource
clip = core.lsmas.LibavSMASHSource(source="E:/11111111.mp4")
frame = clip.get_frame(0)
# setting color matrix to 709.
clip = core.std.SetFrameProps(clip, _Matrix=vs.MATRIX_BT709)
# setting color transfer (vs.TRANSFER_BT709), if it is not set.
if validate.transferIsInvalid(clip):
  clip = core.std.SetFrameProps(clip=clip, _Transfer=vs.TRANSFER_BT709)
# setting color primaries info (to vs.PRIMARIES_BT709), if it is not set.
if validate.primariesIsInvalid(clip):
  clip = core.std.SetFrameProps(clip=clip, _Primaries=vs.PRIMARIES_BT709)
# setting color range to TV (limited) range.
clip = core.std.SetFrameProps(clip=clip, _ColorRange=vs.RANGE_LIMITED)
# making sure frame rate is set to 24fps
clip = core.std.AssumeFPS(clip=clip, fpsnum=24, fpsden=1)
# making sure the detected scan type is set (detected: progressive)
clip = core.std.SetFrameProps(clip=clip, _FieldBased=vs.FIELD_PROGRESSIVE) # progressive
# adjusting color space from YUV420P8 to RGBH for vsDDColor
clip = core.resize.Bicubic(clip=clip, format=vs.RGBH, matrix_in_s="709", range_in_s="limited", range_s="full")
# adding colors using DDColor
from vsddcolor import ddcolor
clip = ddcolor(clip=clip, model=1)
# adjusting color space from RGBH to RGB24 for vsHAVC
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, dither_type="error_diffusion")
# adding colors using HAVC
clip = havc.HAVC_main(clip=clip, Preset="slower", ColorFix="magenta/violet", ColorTune="medium", BlackWhiteTune="light", EnableDeepEx=True, DeepExMethod=0, DeepExRefMerge=1, ScFrameDir=None, ScThreshold=0.09, DeepExModel=0, DeepExEncMode=0, DeepExMaxMemFrames=0)
# stabilize colors using ColorAdjust (HAVC)
clip = havc.HAVC_ColorAdjust(clip=clip, BlackWhiteTune="light", Strength=1)
# adjusting color using Levels on RGB24 (8 bit)
clip = core.std.Levels(clip=clip, min_in=16, max_in=235, min_out=16, max_out=235, gamma=0.95)
# adjusting color space from RGB24 to YUV444PS for vsTweak
clip = core.resize.Bicubic(clip=clip, format=vs.YUV444PS, matrix_s="709", range_in_s="full", range_s="limited") # additional resize to allow target color sampling
# adjusting color using Tweak
clip = color.Tweak(clip=clip, hue=8.00, sat=1.15, cont=1.01, coring=True)
# adjusting color space from YUV444PS to RGB48 for vsRGBAdjust
clip = core.resize.Bicubic(clip=clip, format=vs.RGB48, matrix_in_s="709", range_in_s="limited", range_s="full", dither_type="error_diffusion")
# color adjustment using RGBAdjust
clip = color.RGBAdjust(rgb=clip, g=0.950)
# color adjustment using TimeCube
clip = core.timecube.Cube(clip=clip, cube="E:/Hybrid/64bit/vsfilters/ColorFilter/TimeCube/color/Presetpro - Hollywood.cube")
# cropping to 1436x1080
clip = core.std.Crop(clip=clip, left=2, right=2, top=0, bottom=0)
# adjusting output color from: RGB48 to YUV420P10 for x265Model
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_in_s="full", range_s="limited", dither_type="error_diffusion") # additional resize to allow target color sampling
# set output frame rate to 24fps (progressive)
clip = core.std.AssumeFPS(clip=clip, fpsnum=24, fpsden=1)
# output
clip.set_output()
Failed to evaluate the script: Python exception: module 'vsdeoldify' has no attribute 'HAVC_ColorAdjust'
Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 3378, in vapoursynth._vpy_evaluate
  File "src/cython/vapoursynth.pyx", line 3379, in vapoursynth._vpy_evaluate
  File "C:\Users\nfoga\AppData\Local\Temp\tempPreviewVapoursynthFile19_41_04_999.vpy", line 51, in <module>
    clip = havc.HAVC_ColorAdjust(clip=clip, BlackWhiteTune="light", Strength=1)
           ^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'vsdeoldify' has no attribute 'HAVC_ColorAdjust'
The video is not encoded because of this error; I don't know if that's the problem:
added new job with id 2025-09-19@19_39_42_3810
starting 2025-09-19@19_38_52_9010_01_audio@19:39:42.405 - E:\11111.mkv
2025-09-19@19_38_52_9010_01_audio finished after 00:00:00.134
starting 2025-09-19@19_38_52_9010_03_video@19:39:42.547 - E:\11111.mkv
2025-09-19@19_38_52_9010_03_video finished after 00:00:12.830 with exitCode -1
Crashed with exit status 0 -> 2025-09-19@19_38_52_9010_03_video crashed: Crashed with exit status 0
Failed to evaluate the script:
Python exception: module 'vsdeoldify' has no attribute 'HAVC_ColorAdjust'
Traceback (most recent call last):
  File "src/cython/vapoursynth.pyx", line 3378, in vapoursynth._vpy_evaluate
  File "src/cython/vapoursynth.pyx", line 3379, in vapoursynth._vpy_evaluate
  File "C:\Users\nfoga\AppData\Local\Temp\tempPreviewVapoursynthFile19_41_04_999.vpy", line 51, in <module>
    clip = havc.HAVC_ColorAdjust(clip=clip, BlackWhiteTune="light", Strength=1)
           ^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'vsdeoldify' has no attribute 'HAVC_ColorAdjust'
Regards. Alex
From what I can see, you are attempting to use DDColor together with HAVC. DDColor is already included in HAVC, please read the documentation. If you want to use DDColor only, in HAVC you can select Coloring = DDColor(Artistic). Moreover, it seems that HAVC_ColorAdjust is not available. Please download the current experimental version of Hybrid; HAVC 5.5.0 is currently only available in the experimental folder.
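To illustrate (just a sketch based on the script you posted, not necessarily the exact script Hybrid generates): once DDColor is selected inside HAVC, the separate vsddcolor step and the RGBH conversion can be dropped, and the coloring section reduces to the conversion to RGB24 plus the HAVC_main call you already have:

# adjusting color space from YUV420P8 to RGB24 for vsHAVC
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="709", range_in_s="limited", range_s="full")
# adding colors using HAVC; DDColor runs inside HAVC, so no separate vsddcolor call is needed
clip = havc.HAVC_main(clip=clip, Preset="slower", ColorFix="magenta/violet", ColorTune="medium", BlackWhiteTune="light", EnableDeepEx=True, DeepExMethod=0, DeepExRefMerge=1, ScFrameDir=None, ScThreshold=0.09, DeepExModel=0, DeepExEncMode=0, DeepExMaxMemFrames=0)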
Please attach the script that you are using.
Selur reported that HAVC 5.5.0 is missing from the experimental version: (1145)
Please read chapter 3.13 on page 15 of the User Guide. The filter "B&W Tune" is already included in HAVC, so it is not necessary to apply the filter HAVC_ColorAdjust. As explained in the Guide, the filter HAVC_ColorAdjust was added to improve movies that are already colored. Suppose you already have a colored movie, for example test_deoldify.mp4: you can use HAVC_ColorAdjust to improve the colors of test_deoldify.mp4, but in that case you don't have to use HAVC.
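A minimal sketch of that second use case, assuming test_deoldify.mp4 is a movie that has already been colored (the calls are the ones from the script posted above; the filename and the plugin path are just examples):

import vapoursynth as vs
import vsdeoldify as havc
core = vs.core
core.std.LoadPlugin(path="E:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/LSMASHSource.dll")
# load a movie that was already colored (for example by a previous HAVC/DeOldify pass)
clip = core.lsmas.LibavSMASHSource(source="E:/test_deoldify.mp4")
# convert to RGB24 as in the script above (HAVC_ColorAdjust is applied on RGB24 there)
clip = core.resize.Bicubic(clip=clip, format=vs.RGB24, matrix_in_s="709", range_in_s="limited", range_s="full")
# improve the existing colors; no HAVC_main coloring pass is needed here
clip = havc.HAVC_ColorAdjust(clip=clip, BlackWhiteTune="light", Strength=1)
# for encoding, convert back to YUV as in the full Hybrid script
clip = core.resize.Bicubic(clip=clip, format=vs.YUV420P10, matrix_s="709", range_in_s="full", range_s="limited", dither_type="error_diffusion")
clip.set_output()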
I got it; I'll remove DDColor and HAVC_ColorAdjust and wait for Selur to fix the wrong file. Many thanks for the help. Regards, Alex
P.S. If everything works out, I'm thinking of coloring my collection of black-and-white football matches. If you've already tried coloring such footage, could you share a script that would work best?
Try the settings suggested on pages 29-30 for a high-powered GPU. Then you can experiment by changing "B&W tune", "B&W blend", "Stabilize" and "Denoise" to get the result that best suits your needs. If you find that the colors are still not good enough for you, read chapter 3.0.3 on pages 12-13.
P.S. Selur reported that the new torch add-on is up.
Thanks for everything. The first attempt is not very successful: the colors are a bit dark and gray, but I need them bright and saturated, so I probably have to test a lot. Still, the software is incredible; congratulations and a bow, I never believed that someone could create something like this. Regards, Alex
Try the "Tweak" parameters shown on page 12, saturation=1.15 and hue=10. If that is not enough, you can try increasing them.
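In the generated script this corresponds to the existing Tweak line; for example (the same call as in the script you posted, with the hue raised from 8 to 10):

# adjusting color using Tweak: more saturation and a hue shift for brighter colors
clip = color.Tweak(clip=clip, hue=10.00, sat=1.15, cont=1.01, coring=True)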
P.S. If you like HAVC put a star in the project's page.
I put a star - you deserve ten, but I can only give one
Hi Dan, I want to thank you again for the wonderful software you have invented. From all the tests I have done so far, the best results were obtained with the reference-frame method for video enhancement. If it is not too impudent of me, I would like to ask two questions:
- Is it possible to remove the restriction on extracting all frames from the video? Currently I can only extract every second frame, but I would like to improve all of them.
- After extracting the reference images, I started looking for the best possible method to improve them. Of all the automated methods I tried, the best results, at least for poor-quality images, were given by the ComfyUI Flux Kontext method. Is it possible, if you like it too, to implement this process in your software?
I don't know if it is allowed to post a YouTube video here that explains the process and what resources are needed; if you are interested, I can share it.
Of course, it can also be used separately.
Chapter 4.2, "Advanced coloring using adjusted reference frames", describes how to extract the reference frames. Adjusting the reference frames manually is a time-consuming task, so I suggest adjusting only the most important ones.
There is no restriction on the number of reference frames to be extracted. For example, if you set SC min freq = 5, at least one reference frame will be extracted every 5 frames (please don't go below 2).
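As a rough sketch of how this maps onto the HAVC_main call from the script above (note: the ScMinFreq parameter name and the reference-frame folder are my assumptions here, please check chapter 4.2 of the User Guide for the exact names used by Hybrid):

# sketch only: extract at least one reference frame every 5 frames into a folder
# ScMinFreq and the folder path are assumptions, not taken from the posted script
clip = havc.HAVC_main(clip=clip, Preset="slower", EnableDeepEx=True, DeepExMethod=0, ScFrameDir="E:/ref_frames", ScThreshold=0.09, ScMinFreq=5)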
From what I can see, ComfyUI Flux Kontext is a tool for improving images based on "prompts"; there is no way to add it inside an automatic video colorizer.