FeatherNets_Face-Anti-spoofing-Attack-Detection-Challenge-CVPR2019

May I ask: given an input face image, how do I predict whether it is a photo of a real (live) person?

[Open] zhishao opened this issue 5 years ago • 20 comments

zhishao • Oct 31 '19 00:10

After feeding an image into the FeatherNet network I get a 1024-dimensional vector. What should I do with it next?

zhishao • Oct 31 '19 01:10

Look at the test code; it is really not much different from ordinary prediction code. You can also just print the network's output and you will see.

SoftwareGift • Oct 31 '19 01:10

Thank you! But I tried several real and several fake images, and the output is always [0].

image1 = Image.open('./image/f01.jpg')
image1 = transform(image1).unsqueeze(0)
output1 = net(image1)
soft_output = torch.softmax(output1, dim=-1)
preds = soft_output.to('cpu').detach().numpy()
_, predicted = torch.max(soft_output.data, 1)
predicted = predicted.to('cpu').detach().numpy()
print(predicted)

zhishao • Oct 31 '19 02:10

It's amazing, it works! Thanks. I used the code below: it returns 1 for a fake photo and 0 for a real face (a cropped face image is required).

The code prints the following output:

./images/fake.jpg: FAKE!
./images/real.jpg: REAL

P.S.: Could this be exported to tensorflow.js? Any recommendations? Thanks.

import sys
sys.path.insert(0,'.')
sys.path.insert(0,'..')
import torch
from PIL import Image
import torchvision.transforms as transforms
from models import FeatherNet

# The imports below are only needed for the tensorflow.js export experiment, not for inference:
import tensorflow as tf
from pytorch2keras.converter import pytorch_to_keras
from tensorflow.keras.models import Model
from torchsummary import summary


def check_spoofing(image_path):
    # Run one image through the model and print REAL or FAKE based on the argmax class.
    image1 = Image.open(image_path).convert('RGB')   # force three channels to match the transform
    image1 = transform(image1).unsqueeze(0)
    output1 = model(image1)
    #print(output1)
    soft_output = torch.softmax(output1, dim=-1)
    #print(soft_output,soft_output.data)
    preds = soft_output.to('cpu').detach().numpy()
    #print(preds)
    _, predicted = torch.max(soft_output.data, 1)
    predicted = predicted.to('cpu').detach().numpy()
    if predicted[0] == 1:
        print(image_path+": FAKE!")
    else:
        print(image_path + ": REAL")

if __name__=='__main__':
    input_size=224
    img_size=224
    ratio = 224.0 / float(img_size)
    global normalize, transform
     # Data loading code
    normalize = transforms.Normalize(mean=[0.14300402, 0.1434545, 0.14277956],   # computed on the CASIA-SURF validation set
                                     std=[0.10050353, 0.100842826, 0.10034215])
    name = 'FeatherNetB'
    model = FeatherNet(se = True,avgdown=True)
    #print(model)
    model_path = './checkpoints/FeatherNetB_bs32/_47_best.pth.tar'
    checkpoint = torch.load(model_path,map_location = 'cpu')
    print('load model:',model_path)
    model_dict = {}
    state_dict = model.state_dict()
    #print(checkpoint)
    for (k, v) in checkpoint['state_dict'].items():
        # Strip the 'module.' prefix that nn.DataParallel adds to parameter names.
        if k[7:] in state_dict:
            model_dict[k[7:]] = v
    state_dict.update(model_dict)
    model.load_state_dict(state_dict)
    #model.load_state_dict(checkpoint['state_dict'])
    model.eval()
    transform = transforms.Compose([
        transforms.Resize(int(256 * ratio)),
        transforms.CenterCrop(img_size),
        transforms.ToTensor(),
        normalize,
    ])

    check_spoofing('./images/fake.jpg')
    check_spoofing('./images/real.jpg')
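
One possible route for the tensorflow.js export asked about above is to go through ONNX rather than pytorch2keras; a minimal sketch, assuming the model has been loaded and switched to eval mode exactly as in the script above (the output file name is just an example):

# Export the loaded FeatherNet to ONNX; conversion toward TensorFlow / tensorflow.js
# can then be attempted with onnx-tensorflow and the tensorflowjs converter.
dummy_input = torch.randn(1, 3, 224, 224)   # same shape the transform above produces
torch.onnx.export(model,
                  dummy_input,
                  'feathernet_b.onnx',      # example output path
                  input_names=['input'],
                  output_names=['output'],
                  opset_version=11)

Whether every FeatherNet operator survives the ONNX-to-TensorFlow-to-tensorflow.js chain is not guaranteed and would need to be verified.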

msvargas • Nov 06 '19 03:11

(Quoting the earlier report above that several real and fake images all produced the output [0], together with the same prediction snippet.)

See the comment in this thread with the working test (msvargas's); their test succeeded. You may have forgotten to switch the model to eval mode.
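
For reference, a minimal sketch of the inference pattern in question, assuming model and transform have been built as in the script above (the image path is only a placeholder):

model.eval()                                  # put BatchNorm/Dropout into inference mode
with torch.no_grad():                         # gradients are not needed for prediction
    x = transform(Image.open('face.jpg').convert('RGB')).unsqueeze(0)
    probs = torch.softmax(model(x), dim=-1)
    print(probs.shape, probs.argmax(dim=1).item())   # inspect the raw output shape and the argmax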

SoftwareGift • Nov 06 '19 03:11

I did call model.eval(), and I swapped in many different images, yet the output is still always 0. My scenario is this: I use a phone to photograph a face shown on a computer screen, then classify that phone photo. Can this case be detected?

zhishao • Nov 06 '19 05:11

@zhishao Hi, have you solved the problem of the output always being 0? I am running into the same issue now.

wangzhen6309271 • Nov 13 '19 06:11

I suspect this model simply cannot correctly recognize a face photographed off a screen. I tested with images from the test set and the three pretrained models, and the results were basically no better than guessing.

zhishao • Nov 13 '19 06:11

@zhishao Did you ever get a "fake" result? Could you share your test set with me?

wangzhen6309271 • Nov 13 '19 06:11

@zhishao I got training to run and the reported training results look good, but the test results are not great. Have you solved this?

S130111 • Nov 14 '19 10:11

@punisher97 Can you please share the fake and real images that you used? I am getting "real" for every image I have tested.

Rakesh-Chekuri • Dec 04 '19 11:12

@SoftwareGift Hello. For prediction, do you use depth maps from a depth camera or ordinary RGB images? Looking at your code, it seems that training only uses the depth and label data. Is that right?

zhishao • Dec 05 '19 09:12

@SoftwareGift Hello, and thank you for sharing. Two questions:

  1. Is it OK to use RGB as the input to FeatherNetB? I see your input has three channels, or does the depth map also have three channels?
  2. Following your description above, doesn't this only look at indices 0/1 of the 1024-dimensional feature vector?

ckcraig01 • Dec 24 '19 04:12

The pretrained models the author provides are all depth models (except FeatherNetB_bs32-ir/_54.pth.tar); there is no RGB one, so testing with RGB images naturally gives results no better than guessing.
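
If you do want to try one of the depth checkpoints, the depth frame has to be presented as a three-channel image so the transform and normalization above apply. A minimal sketch, under the assumption that the depth frame is stored as a single-channel image and that replicating it across three channels matches how the training data was loaded (worth checking against the repo's dataset code; the file name is just an example):

from PIL import Image

depth = Image.open('depth_frame.png').convert('L')      # single-channel depth map (example path)
depth_rgb = Image.merge('RGB', (depth, depth, depth))   # replicate to three channels
depth_input = transform(depth_rgb).unsqueeze(0)         # reuse the transform from the script above
# depth_input can now be fed to a depth-trained FeatherNet checkpoint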

archwine • May 04 '20 13:05

(Quoting msvargas's comment and full script from above.)

thank you

damvantai • Jun 10 '20 08:06

@Epimenides7 You are absolutely right.

sunjunlishi • Jul 27 '20 09:07

(Quoting msvargas's comment and full script from above.)

I think real is "1" while fake is "0". If you check the txt files present in the data folder, you can work out the actual label values.
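
A quick way to settle the label mapping is to read one of those annotation lists directly. A minimal sketch, assuming a CASIA-SURF style list where each line holds the sample paths followed by the 0/1 label (the file path is only a placeholder):

with open('./data/your_label_list.txt') as f:        # placeholder path; use one of the txt files in the data folder
    for line in list(f)[:10]:                        # peek at the first few entries
        *paths, label = line.split()                 # sample path(s) first, label last
        print(paths[0], '->', label)                 # compare against samples you know are real or fake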

piyushlife • Aug 24 '20 09:08

I don't know what the path "./checkpoints/FeatherNetB_bs32/_47_best.pth.tar" refers to. That location only contains a README, and when I download the links given in the README, one of them is not a tar but a dmg (I cannot even read the language of the download page), and the other one is not usable.

FatemeGhasemi • Oct 19 '20 11:10

The faces in the training data are all cropped out. Doesn't that mean faces also need to be cropped out in a real application?
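
Since the training data is face-cropped, it is reasonable to crop the face the same way before running inference. A minimal sketch using OpenCV's bundled Haar cascade as a stand-in for whatever face detector is actually used (the helper name is just illustrative):

import cv2
from PIL import Image

def crop_largest_face(image_path):
    # Detect faces, keep the largest one, and return it as a PIL image (None if no face is found).
    detector = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return Image.fromarray(cv2.cvtColor(img[y:y + h, x:x + w], cv2.COLOR_BGR2RGB))

# The returned crop can then go through the same transform as in the script above.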

yangjian1218 • Nov 15 '21 10:11

(Quoting the earlier exchange above: the output was always [0], and the suggested fix was to switch the model to eval mode.)

Hello author. Following this code I can indeed get a prediction, but why is the output of shape [1, 1024]? Isn't this a binary classification? Also, looking at the FeatherNet definition, the n_class=2 argument does not seem to be used at all.
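
If the released model really does end at the 1024-dimensional embedding, one way to get an explicit 2-class prediction is to treat FeatherNet as a backbone and train a small head on top of it. A minimal sketch (the wrapper class is illustrative only; its head would still have to be trained and is not part of the released weights):

import torch.nn as nn

class FeatherNetWithHead(nn.Module):
    # Wraps a FeatherNet backbone that emits an (N, 1024) embedding with a trainable 2-class head.
    def __init__(self, backbone, embed_dim=1024, n_class=2):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(embed_dim, n_class)

    def forward(self, x):
        feat = self.backbone(x)                       # (N, 1024), per the observation above
        return self.head(feat.view(feat.size(0), -1))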

yangjian1218 • Nov 16 '21 06:11