multiple-attention
Code for multi-attention deepfake detection
Hi, thank you for open-sourcing your code. I've read your paper and I really appreciate the attention-based data augmentation implemented for deepfake detection. I do have...
Sorry to bother you. I'm trying to run multi-attention as described in the README, but I get errors saying these modules are missing. How can I resolve this?
Hello. In AGDA.py there is a function `mod_func`, and I can't work out the mathematical meaning of its last two lines. Has anyone figured it out? bottom=torch.sigmoid((torch.tensor(0.)-thres)*zoom) return (torch.sigmoid((x-thres)*zoom)-bottom)/(1-bottom)
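A possible reading of those two lines: they rescale a shifted, zoomed sigmoid so that the output is exactly 0 at x = 0 and approaches 1 for large x, i.e. a soft threshold at `thres` with steepness `zoom`. A minimal pure-Python restatement (torch-free, for illustration only; the parameter defaults here are assumptions, not the repo's values):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mod_func(x, thres=0.5, zoom=10.0):
    # Value of the shifted/zoomed sigmoid at x = 0; subtracting it
    # pins mod_func(0) to exactly 0 ...
    bottom = sigmoid((0.0 - thres) * zoom)
    # ... and dividing by (1 - bottom) rescales the range so the
    # output still approaches 1 as x grows: a soft threshold at
    # `thres` with steepness controlled by `zoom`.
    return (sigmoid((x - thres) * zoom) - bottom) / (1.0 - bottom)
```

Under this reading, `mod_func(0.0)` is 0, values well below `thres` stay near 0, and values well above `thres` map near 1, with a smooth monotonic transition in between.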
Everyone, I refactored the current project. Although I don’t know how effective it is, the code can run and be trained. [https://github.com/yblir/multiple-attention-modify](https://github.com/yblir/multiple-attention-modify)
Thanks for sharing. But I want to ask how to load the pretrained weights, e.g., "ff_c23.pth", since "FF-checked.pkl" can't be loaded. BTW, could you please provide a simple...
Hello, is the code for the key modules mentioned in the paper included?
Has anyone been able to make this program work ?
Hi, when I try to run the evaluation code, I need to load "config.pkl", as shown here: https://github.com/yoctta/multiple-attention/blob/bb069ccde5174eab59c9be25c254b7db34a03793/evaluation.py#L13 I didn't find any evaluation config parameters in "config.py" except training...
Could you please share the training process and techniques?
@yoctta We get `RuntimeError: expected scalar type Byte but found Float` in the `Conv2dStaticSamePadding` class's forward method, w.r.t. `self.weight`, which is of type `Parameter`. Please change the return statement in...
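This error typically means a uint8 (Byte) image tensor reached a convolution whose weights are float. One common workaround is to cast the input to the weight's dtype inside `forward` before convolving. A hedged sketch using a hypothetical `PatchedConv2d` (a simplified stand-in, not the actual `Conv2dStaticSamePadding` from the repo, which also applies static same-padding first):

```python
import torch
import torch.nn.functional as F
from torch import nn

class PatchedConv2d(nn.Conv2d):
    """Hypothetical sketch: cast Byte inputs to the weight's dtype so
    uint8 image tensors no longer trigger the dtype mismatch error."""

    def forward(self, x):
        # Cast the input to match self.weight (e.g. uint8 -> float32)
        x = x.to(self.weight.dtype)
        return F.conv2d(x, self.weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

The cleaner long-term fix is to convert images to float (and normalize) in the data pipeline, so the model never sees Byte tensors at all.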