Piotr Kawa

10 comments by Piotr Kawa

Hi! Thank you very much! :) Yes - it is possible to use only one of the available datasets; however, this will likely result in less generalized models, as...

Hi! `generate_adversarial_samples.py` is the old name of the `evaluate_models_on_adversarial_attacks.py` script. Thank you for pointing it out - I have updated the README accordingly! :)

Hi, unfortunately we do not provide pre-trained models; however, the training procedure makes it straightforward to recreate the results. Alternatively, if the computational requirements are an issue, you could try using a smaller batch size,...

Hi, could you provide a detailed stack trace? One possible cause of the errors you could get is that, at the time the paper was written, there were no exact...

Could you please provide information on where exactly `KeyError: 'attack type'` happens? It is used in a couple of places and I cannot find its location in this stack trace. To only...
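For reference, a generic way to capture the full stack trace of such a failure and see exactly where the lookup happens; the `full_traceback_of` helper and the sample dictionary are illustrative, not part of the repository:

```python
import traceback

def full_traceback_of(fn):
    """Call fn and return the full traceback text, or "" if it succeeds."""
    try:
        fn()
        return ""
    except Exception:
        return traceback.format_exc()

# Example: reproduce the failing lookup and print file/line information.
print(full_traceback_of(lambda: {"label": "spoof"}["attack type"]))
```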

Hi! The following repository contains only the architecture of the model and some benchmarks we used in our paper. The repository related to our other work ("Improved DeepFake Detection Using...

Hello! Regarding torchaudio - unfortunately I did not encounter the problem you describe, but I suspect it might be an issue with torchaudio's backend, i.e. ffmpeg or sox....
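If anyone hits the same problem, a quick stdlib-only check of whether the external tools those backends rely on are actually available on PATH; the `missing_audio_tools` helper name is mine, not part of the repository:

```python
import shutil

def missing_audio_tools(required=("ffmpeg", "sox")):
    """Return the command-line audio tools that are not available on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

# Any tool listed here must be installed before the corresponding
# torchaudio backend can decode audio files.
print(missing_audio_tools())
```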

Hi! Unfortunately I overlooked this script when releasing our codebase. The script we used looked similar to this:

```python
from pathlib import Path
from moviepy.editor import *

FAKEAVCELEB_DATASET_PATH = ""
# ...
```
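Until the full script is restored, here is a minimal sketch of what such an extraction loop might look like; the `audio_output_path` and `extract_all_audio` helpers are my illustration, not the original script, and it assumes moviepy 1.x (`moviepy.editor`) with ffmpeg installed:

```python
from pathlib import Path

def audio_output_path(video_path: Path, dataset_root: Path, output_root: Path) -> Path:
    """Map a dataset video to a .wav path that mirrors the dataset layout."""
    return (output_root / video_path.relative_to(dataset_root)).with_suffix(".wav")

def extract_all_audio(dataset_root: Path, output_root: Path) -> None:
    """Dump the audio track of every .mp4 under dataset_root to a .wav file."""
    # moviepy 1.x API; requires ffmpeg. Imported lazily so the path helper
    # above stays usable without the dependency.
    from moviepy.editor import VideoFileClip

    for video in sorted(dataset_root.rglob("*.mp4")):
        target = audio_output_path(video, dataset_root, output_root)
        target.parent.mkdir(parents=True, exist_ok=True)
        with VideoFileClip(str(video)) as clip:
            clip.audio.write_audiofile(str(target))
```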

The results presented in the article should be achievable using the repository with default settings (among others, 5 epochs). We ran our experiments again using the repository code and...

I'm glad to see you achieved better results. Our training also used 286,014 samples. AAD is by design a framework for various DeepFake datasets, but I will surely add this...