
Question about wav content

Open · emphasize opened this issue 5 years ago · 8 comments

Hi el-tocino,

I'm struggling a bit to find a German dataset to speed up the process of finding fake words.

There are some sets, but they contain almost exclusively spoken sentences (or half-sentences). Some are short, but I'm not certain that this even qualifies as training material. Is precise-train-incremental restricted to spoken words?

emphasize · Jul 15 '20 18:07

You can actually train Precise to recognize sneezes, if so inclined.

Using sox you can trim longer clips down based on silence between words. Aim for 3s or less per clip, then dump them into the not-wake-word (nww) folders as appropriate. It's still better to use false-activation words and noises where possible. Random speech will help to an extent, but you also want to fine-tune the model to be as accurate as possible at both matching the wake word and rejecting not-wake-word audio.
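A minimal sketch of that sox workflow: the file names are placeholders, and the silence thresholds (1% level, 0.1s/0.3s durations) are starting-point guesses to tune per recording:

```bash
# Split long-recording.wav at silences into clip-001.wav, clip-002.wav, ...
sox long-recording.wav clip-.wav silence 1 0.1 1% 1 0.3 1% : newfile : restart

# Hard-cap any clip that is still longer than 3 seconds.
for f in clip-*.wav; do sox "$f" "short-$f" trim 0 3; done
```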

el-tocino · Jul 15 '20 20:07

Thanks, that's not meant as a replacement, more an addition to the word-finder methods you suggest.

Mozilla's Common Voice dataset is an acceptable source then. Sadly not single words, but short sentences of six or fewer words. And a hefty amount of data that's at least somewhat "peer-reviewed".

Do you recommend any ambient sound sources besides the tuxfamily.org suggestion?

Short additional question: what does the -b (batch) option flag of precise-train mean?

Cheers, Swen

emphasize · Jul 15 '20 20:07

The Precise community data has a not-wake-word section including some noises. The Google Speech Commands dataset is an ideal addition to the not-wake-word data (though it's large and will significantly increase training time). Recording ambient noise is pretty easy with a cell phone as well.
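As a rough sketch, folding a slice of Speech Commands into the training data could look like the following, assuming Precise's usual wake-word/not-wake-word folder layout; the paths (hey-samira/, speech_commands/) and the sample size are placeholders:

```bash
# Sample 2000 random Speech Commands clips into the not-wake-word folder.
# GSC reuses file names across its per-word subfolders, so prefix each copy
# with its folder name to keep the names unique.
mkdir -p hey-samira/not-wake-word
find speech_commands -name '*.wav' | shuf -n 2000 | while read -r f; do
  cp "$f" "hey-samira/not-wake-word/gsc-$(basename "$(dirname "$f")")-$(basename "$f")"
done
```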

Batch size is useful for making a wider pass over the data in each epoch. I tend to use pretty large sizes (5000?); some experimentation would be useful.
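For reference, an invocation might look like the line below; the model and data paths are placeholders, and precise-train --help is the authority on the exact flags for your version:

```bash
# -b sets the batch size used per training pass; paths are placeholders.
precise-train -b 5000 -e 60 hey-samira.net hey-samira/
```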

The latest Common Voice release now has a large subset of single-word entries.

el-tocino · Jul 15 '20 22:07

Google Research has a lot of datasets for different languages (Nepali, who would have guessed), but unfortunately no German one. Or are you suggesting that the language itself plays a lesser role? GSC v2 is already downloaded, but then I realized: there's not much spoken English around here ;)

I think I will train them in a Raspbian virtual machine, if that's possible, or turn to Windows completely for that process. My Pi buddies are already sweatin'.

emphasize · Jul 15 '20 22:07

The language isn't as important as the phonemes and the patterns of the words.

I'd train on a desktop rather than a Pi with that volume of data. ;)

el-tocino · Jul 15 '20 23:07

After reviewing the Common Voice dataset more closely, I think I'll have to trim down parts

> based on silence between words

Do you mind sharing some useful sox commands?

Cheers

emphasize · Jul 17 '20 18:07

I have a suggestion myself.

https://d-rhyme.de/worte-verdrehen/

In general it's more for our German audience, but this particular section "twists words" by replacing the middle part of the name/word with random syllable(s)/letters while keeping the word length constant, which makes it language-agnostic.

Let's say the wake word is "Samira": it spits out Salisa, Savita, Saliga, Sakita, ...

In my understanding, that should be a great addition to the word-finder/rhyme methods given in your howto.
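A rough bash sketch of the same idea, purely illustrative: it keeps a two-letter prefix and one-letter suffix and randomizes the middle, whereas the site presumably works on syllables rather than raw letters:

```bash
# Generate same-length near-misses of "samira" by randomizing the middle.
word="samira"
for i in $(seq 1 8); do
  mid=$(LC_ALL=C tr -dc 'a-z' </dev/urandom | head -c 3)
  echo "${word:0:2}${mid}${word: -1}"
done
```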

emphasize · Jul 17 '20 20:07

Try it and see?

Google "sox silence"; I don't have it handy, and the documentation will explain the parameters better.

el-tocino · Jul 17 '20 21:07