Results 65 comments of csxmli2016

> I found that this module didn't work well for English, and I want to retrain it, so I'm wondering how to train it? which project is it based on?...

> i am unable to load screen. this is the code i used
>
> import cv2
>
> cap = cv2.VideoCapture('rtsp://admin:[email protected]:554/1')
> facedetect=cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
>
> while True: ret, frame =...
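
The quoted code is cut off in this listing; for context, a minimal sketch of the capture-and-detect loop it appears to be building might look like the following (the RTSP URL is the placeholder from the question, and a failed cap.read() is the usual reason the window never shows anything):

```
# Minimal sketch of an RTSP capture + Haar-cascade face detection loop.
# The cascade is loaded from OpenCV's bundled data path instead of a local file.
import cv2

cap = cv2.VideoCapture('rtsp://admin:[email protected]:554/1')
facedetect = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

while True:
    ret, frame = cap.read()
    if not ret:  # stream not opened or frame dropped: do not call imshow on an empty frame
        print('Failed to grab frame; check the RTSP URL and network.')
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = facedetect.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('faces', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```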

> name 'fused_act_ext' is not defined Error in real_lq12. Continue...

Kindly refer to the instructions of BasicSR or https://github.com/TencentARC/GFPGAN/issues/5

> The input requires an HQ reference image, which still does not fit general use cases; for most faces no HQ reference image can be obtained.

Take phone photo albums as an example: they already support grouping photos by person, so the high-quality images already in the album can be used at capture time to add identity-related texture. This may be somewhat useful in extreme cases such as low light or front-camera selfies. The main goal is to preserve identity: in some scenarios, generic face restoration models or face-prior-based methods easily lose the identity and the result no longer looks like the person; in such cases the only option is to use existing HQ images as references to introduce identity-related texture.

> Thank you for your encouragement all along. When I execute the .sh file directly in my virtual environment, the code trains normally. However, when I debug it in VS...

> Hello! Thank you very much for your excellent work. As a newcomer, I don't quite understand which script file to run during the inference editing stage. How should I...

> "Thank you for your response. When I run train_wild.py, I don't understand how the 'f = open('./Face/ffhq_wild_names_with_caption.txt', 'r')' part is supposed to be constructed. In the txt file I...

> "Thank you very much for your excellent work. Could you please share the code or readme for 'utilizing BLIP2 to obtain captions'?" A simple example: ``` from PIL import...

> The statement at the end of the WPlusAttnProcessor Class defines the residual connection. Are you defining the initial hidden_states, which is the input from the previous step, as residual,...
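
The question is cut off here; as background, the usual residual pattern in an attention block (a generic sketch, not the project's actual WPlusAttnProcessor code) saves the incoming hidden_states as residual at the start and adds it back to the attention output at the end:

```
# Generic sketch of the residual pattern discussed above, NOT WPlusAttnProcessor:
# the incoming hidden_states are kept as `residual` before attention and added
# back to the attention output at the end of the call.
import torch
import torch.nn as nn

class ToyAttnBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        residual = hidden_states           # input from the previous step
        x = self.norm(hidden_states)
        attn_out, _ = self.attn(x, x, x)   # self-attention on the normalized input
        return attn_out + residual         # residual connection: output + original input

x = torch.randn(2, 16, 64)                 # (batch, tokens, dim)
print(ToyAttnBlock(64)(x).shape)           # torch.Size([2, 16, 64])
```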

> no ckpt file is found in /home/cooper/.cnstd/1.2/db_resnet34
>
> It seems that some files are missing, how can I download them?

Have you successfully installed the python package of...
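
The reply is truncated here; judging from the ~/.cnstd path, the package in question is presumably cnstd, and a quick sanity check (based on cnstd's documented usage, treat it as a sketch) is:

```
# Sanity check for the cnstd install: on first use, CnStd() downloads its model
# files (e.g. db_resnet34) into ~/.cnstd, so a missing-ckpt error usually means
# the package is absent/outdated or the automatic download was interrupted.
import cnstd
print(cnstd.__version__)

from cnstd import CnStd
std = CnStd()                        # triggers the model download if it is missing
box_infos = std.detect('test.jpg')   # 'test.jpg' is a placeholder local image
print(box_infos)
```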