FeatherNets_Face-Anti-spoofing-Attack-Detection-Challenge-CVPR2019
Will a TensorFlow version be provided?
Will TensorFlow code be provided?
Same question here. Will there be a Keras version?
Input image size: 224, test size: 224
- Number of FLOPs: 83.05M
- Number of params: 351220 (total_params 351220)

DataParallel(
  (module): FeatherNet(
    (features): Sequential(
      (0): Sequential(
        (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU6(inplace)
      )
      (1): InvertedResidual(
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
        )
        (conv): Sequential(
          (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=32, bias=False)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (2): SELayer(
        (avg_pool): AdaptiveAvgPool2d(output_size=1)
        (fc): Sequential(
          (0): Linear(in_features=16, out_features=2, bias=True)
          (1): ReLU(inplace)
          (2): Linear(in_features=2, out_features=16, bias=True)
          (3): Sigmoid()
        )
      )
      (3): InvertedResidual(
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): Conv2d(16, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
        )
        (conv): Sequential(
          (0): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
          (4): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(96, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (4): InvertedResidual(
        (conv): Sequential(
          (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(192, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=192, bias=False)
          (4): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (5): SELayer(
        (avg_pool): AdaptiveAvgPool2d(output_size=1)
        (fc): Sequential(
          (0): Linear(in_features=32, out_features=4, bias=True)
          (1): ReLU(inplace)
          (2): Linear(in_features=4, out_features=32, bias=True)
          (3): Sigmoid()
        )
      )
      (6): InvertedResidual(
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): Conv2d(32, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
        )
        (conv): Sequential(
          (0): Conv2d(32, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(192, 192, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=192, bias=False)
          (4): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(192, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (7): InvertedResidual(
        (conv): Sequential(
          (0): Conv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(288, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=288, bias=False)
          (4): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (8): InvertedResidual(
        (conv): Sequential(
          (0): Conv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(288, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=288, bias=False)
          (4): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (9): InvertedResidual(
        (conv): Sequential(
          (0): Conv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(288, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=288, bias=False)
          (4): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (10): InvertedResidual(
        (conv): Sequential(
          (0): Conv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(288, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=288, bias=False)
          (4): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (11): InvertedResidual(
        (conv): Sequential(
          (0): Conv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(288, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=288, bias=False)
          (4): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(288, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (12): SELayer(
        (avg_pool): AdaptiveAvgPool2d(output_size=1)
        (fc): Sequential(
          (0): Linear(in_features=48, out_features=6, bias=True)
          (1): ReLU(inplace)
          (2): Linear(in_features=6, out_features=48, bias=True)
          (3): Sigmoid()
        )
      )
      (13): InvertedResidual(
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): Conv2d(48, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        )
        (conv): Sequential(
          (0): Conv2d(48, 288, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(288, 288, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=288, bias=False)
          (4): BatchNorm2d(288, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(288, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (14): InvertedResidual(
        (conv): Sequential(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (4): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (15): InvertedResidual(
        (conv): Sequential(
          (0): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (1): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): ReLU6(inplace)
          (3): Conv2d(384, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=384, bias=False)
          (4): BatchNorm2d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (5): ReLU6(inplace)
          (6): Conv2d(384, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (16): SELayer(
        (avg_pool): AdaptiveAvgPool2d(output_size=1)
        (fc): Sequential(
          (0): Linear(in_features=64, out_features=8, bias=True)
          (1): ReLU(inplace)
          (2): Linear(in_features=8, out_features=64, bias=True)
          (3): Sigmoid()
        )
      )
    )
    (final_DW): Sequential(
      (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False)
    )
  )
)
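If you want to regenerate the numbers above from the PyTorch code, something like the sketch below should do it. The import path and constructor arguments are assumptions about this repo's layout, not verified; point them at the actual FeatherNet definition in your checkout.

```python
# Minimal sketch for reproducing the printed summary and parameter count.
# NOTE: the import path and the no-argument constructor are assumptions about
# this repository's layout; adjust to the real model definition.
import torch
from models.FeatherNet import FeatherNet  # hypothetical import path

model = torch.nn.DataParallel(FeatherNet())   # DataParallel wrapper, as in the printout
print(model)                                  # prints the module tree shown above

total_params = sum(p.numel() for p in model.parameters())
print('total_params', total_params)           # should match the 351220 reported above
# (The 83.05M FLOPs figure requires a separate profiling tool; it is not computed here.)
```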
This is the summary from print(); if you have time, you could rewrite one yourself.
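There is no official TensorFlow/Keras port in this repo, but as a starting point for anyone who wants to rewrite it, here is a rough tf.keras sketch of the two building blocks visible in the summary above: the SE layer and an inverted-residual block with the AvgPool + BN + 1x1 Conv downsample branch. Everything below is reconstructed from the printout, not the authors' code; the helper names `se_layer` and `inverted_residual` are mine, and how the downsample branch is merged with the main branch (addition is assumed here) should be checked against the PyTorch model definition in this repo.

```python
# Rough tf.keras sketch of the FeatherNet building blocks seen in the summary
# above -- an assumption-based starting point for a port, not the authors' code.
import tensorflow as tf
from tensorflow.keras import layers


def se_layer(x, reduction=8):
    """Squeeze-and-Excitation: global pooling -> two FC layers -> channel-wise scaling.
    reduction=8 matches the 16->2, 32->4, 48->6, 64->8 Linear pairs in the printout."""
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    s = layers.Dense(channels // reduction, activation='relu')(s)
    s = layers.Dense(channels, activation='sigmoid')(s)
    s = layers.Reshape((1, 1, channels))(s)
    return layers.Multiply()([x, s])


def inverted_residual(x, out_channels, stride, expand_ratio):
    """MobileNetV2-style block. For stride == 2 the shortcut is AvgPool + BN + 1x1 Conv,
    mirroring the (downsample) branch in the printed summary; merging by addition is an
    assumption, since the forward pass is not visible in a module printout."""
    in_channels = x.shape[-1]
    hidden = in_channels * expand_ratio

    y = x
    if expand_ratio != 1:
        y = layers.Conv2D(hidden, 1, use_bias=False)(y)          # pointwise expansion
        y = layers.BatchNormalization()(y)
        y = layers.ReLU(max_value=6.0)(y)
    y = layers.DepthwiseConv2D(3, strides=stride, padding='same', use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(max_value=6.0)(y)
    y = layers.Conv2D(out_channels, 1, use_bias=False)(y)        # pointwise projection
    y = layers.BatchNormalization()(y)

    if stride == 2:
        shortcut = layers.AveragePooling2D(pool_size=2, strides=2)(x)
        shortcut = layers.BatchNormalization()(shortcut)
        shortcut = layers.Conv2D(out_channels, 1, use_bias=False)(shortcut)
        return layers.Add()([y, shortcut])
    if in_channels == out_channels:
        return layers.Add()([y, x])                              # identity residual
    return y


# Example: stem + first stage of the summary above (Conv 3x3/2 -> block (1) -> SE).
inputs = tf.keras.Input((224, 224, 3))
x = layers.Conv2D(32, 3, strides=2, padding='same', use_bias=False)(inputs)
x = layers.BatchNormalization()(x)
x = layers.ReLU(max_value=6.0)(x)
x = inverted_residual(x, out_channels=16, stride=2, expand_ratio=1)
x = se_layer(x)
stem = tf.keras.Model(inputs, x)
```

The remaining stages (32, 48, 64 channels with SE layers between them) and the final depthwise layer can be stacked the same way by following the channel widths and strides in the printout.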