michiyosony
@slashstar @rizkiarm How were you able to get the "Lipreading in the Wild" dataset? My impression from [this](https://www.robots.ox.ac.uk/~vgg/data/lip_reading_sentences/) was that it wasn't available.
Ah, thank you. I was confusing "Lipreading in the Wild" with "Lipreading sentences in the Wild".
@crazygirl1992 I don't know why it stopped--you'd probably need to include logs for anyone to be able to help you. However, whenever my training crashed, I would restart it to...
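In case it's useful, this is roughly how I restarted after a crash: a minimal sketch assuming the repo's compiled Keras model and training generator are already set up. The checkpoint file name and the `crashed_at_epoch` variable are hypothetical placeholders.

```python
# Sketch of resuming a crashed Keras training run. Assumes `model`,
# `train_generator`, `steps_per_epoch`, and `total_epochs` come from the
# repo's training script; 'results/weights-latest.h5' is a hypothetical
# checkpoint left behind by a ModelCheckpoint callback.
model.load_weights('results/weights-latest.h5')

# `initial_epoch` keeps the epoch counter (and anything keyed to it,
# such as checkpoint file names) consistent with the interrupted run.
model.fit_generator(
    generator=train_generator,
    steps_per_epoch=steps_per_epoch,
    epochs=total_epochs,
    initial_epoch=crashed_at_epoch,
)
```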
If you watch the video `lrarzn.mpg`, it starts with a grey screen. In the LipNet paper, they write "The videos for speaker 21 are missing, and a few others are...
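A quick way to flag such files is to check for near-constant leading frames; here's a sketch using OpenCV. The threshold is a guess, so tune it on a known-bad file like `lrarzn.mpg`.

```python
import cv2
import numpy as np

def count_grey_leading_frames(path, std_threshold=5.0):
    """Count leading frames with near-zero pixel variance, which is how
    the grey 'missing' frames show up in the affected GRID videos."""
    cap = cv2.VideoCapture(path)
    grey = 0
    while True:
        ok, frame = cap.read()
        if not ok or np.std(frame) >= std_threshold:
            break
        grey += 1
    cap.release()
    return grey

# Skip (or repair) any video where this returns > 0.
```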
@rad182 I was getting a similar error, but after deleting `datasets.cache` (and possibly some other change that I'm not aware of?) the stack trace changed to

```
Process Process-1: Epoch...
```
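For anyone else hitting this, the cache-clearing step is just deleting that one file before relaunching training. A sketch, assuming the path is relative to wherever training is launched (that's how it behaved for me):

```python
import os

# Force the dataset list to be rebuilt on the next run by removing the
# cache file the repo writes out.
CACHE = 'datasets.cache'
if os.path.exists(CACHE):
    os.remove(CACHE)
```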
I am able to train on mouth crops. I was not able to figure out how to train on videos directly.
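For reference, the crops themselves can be extracted per frame with dlib's 68-point landmarks (points 48-67 outline the mouth). A sketch follows: the 100x50 size matches LipNet's input, but the padding is a guess, and the predictor file has to be downloaded separately.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

def mouth_crop(frame, size=(100, 50), pad=10):
    """Return a fixed-size mouth crop from a BGR frame, or None if no
    face is found."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    # Landmarks 48-67 outline the mouth region.
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(48, 68)])
    x0, y0 = pts.min(axis=0) - pad
    x1, y1 = pts.max(axis=0) + pad
    crop = frame[max(y0, 0):y1, max(x0, 0):x1]
    return cv2.resize(crop, size)
```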
@rizkiarm When you trained `weights368.h5`, did you leave out speakers 1, 2, 20, and 22 (as the LipNet authors did)?
@rizkiarm I see, thank you. I wasn't paying enough attention to the different types of training. Is "unseen speakers" the case where a particular speaker's videos are either all in...
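To make the distinction concrete, here's how I'd express the unseen-speakers split in code; a sketch assuming the videos live under `GRID/s1 ... GRID/s34` (the directory layout is an assumption about the local setup):

```python
import glob
import os

# "Unseen speakers": every video from a held-out speaker goes to
# validation, never training. Speakers 1, 2, 20, and 22 are the ones
# the LipNet paper holds out.
HELD_OUT = {1, 2, 20, 22}

def split_by_speaker(root='GRID'):
    train, val = [], []
    for path in glob.glob(os.path.join(root, 's*', '*.mpg')):
        speaker = int(os.path.basename(os.path.dirname(path))[1:])
        (val if speaker in HELD_OUT else train).append(path)
    return train, val
```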
@rizkiarm Great, thanks. With regards to testing using the "real validation sets", how do I know which videos were in your validation set and not in your training set? I'm...
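In case it helps with the same question later: a random split can be made checkable after the fact by fixing the seed and writing both lists out. A sketch (the file names are hypothetical):

```python
import random

def make_split(paths, val_fraction=0.2, seed=42):
    """Deterministic random split; writes both lists to disk so the
    exact held-out videos can be verified later."""
    rng = random.Random(seed)
    paths = sorted(paths)
    rng.shuffle(paths)
    cut = int(len(paths) * val_fraction)
    val, train = paths[:cut], paths[cut:]
    with open('val_list.txt', 'w') as f:
        f.write('\n'.join(val))
    with open('train_list.txt', 'w') as f:
        f.write('\n'.join(train))
    return train, val
```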
My problem seems to have been the input video--quite possibly the lighting. Another set of videos is giving rather good results; here's a frame: 
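If anyone else sees garbage output on dim footage, evening out the lighting before the mouth-crop step is worth a try. A sketch using CLAHE on the luma channel (the clip limit and tile size are guesses):

```python
import cv2

def normalize_lighting(frame):
    """Apply CLAHE to the luma channel of a BGR frame to reduce the
    effect of dim or uneven lighting."""
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```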