Feat: Add normalize option to CER and WER metrics for normalized score calculation
This pull request introduces a `normalize` option to the `compute()` function of both the CER and WER metrics. When set to `True`, the metrics calculate and return normalized scores.
This addresses the feature request raised in issue #161, which was opened in 2022 and has remained unaddressed since. With this change, users can compute CER and WER scores bounded between 0 and 100%, as requested in the issue.
The normalized CER is calculated as:
CER_normalized = (Insertions + Substitutions + Deletions) / (Insertions + Substitutions + Deletions + Correct Characters)
The normalized WER is calculated similarly, at the word level.
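To illustrate the formula above, here is a minimal pure-Python sketch (not the PR's actual diff; the helper names `align_counts` and `error_rates` are hypothetical). It counts hits, substitutions, deletions, and insertions from a minimum-edit-distance alignment, then contrasts the standard rate, which divides errors by the reference length and can exceed 100%, with the normalized rate, which is bounded by construction:

```python
def align_counts(ref, hyp):
    """Count hits (H), substitutions (S), deletions (D), and insertions (I)
    along a minimum-edit-distance alignment of ref against hyp.
    ref/hyp are sequences: lists of words for WER, strings for CER."""
    m, n = len(ref), len(hyp)
    # Each DP cell stores (edit_cost, H, S, D, I).
    dp = [[None] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = (0, 0, 0, 0, 0)
    for i in range(1, m + 1):  # first column: all deletions
        c = dp[i - 1][0]
        dp[i][0] = (c[0] + 1, c[1], c[2], c[3] + 1, c[4])
    for j in range(1, n + 1):  # first row: all insertions
        c = dp[0][j - 1]
        dp[0][j] = (c[0] + 1, c[1], c[2], c[3], c[4] + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if ref[i - 1] == hyp[j - 1]:  # match: a hit, no added cost
                c = dp[i - 1][j - 1]
                dp[i][j] = (c[0], c[1] + 1, c[2], c[3], c[4])
            else:
                sub, dele, ins = dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1]
                dp[i][j] = min(
                    (sub[0] + 1, sub[1], sub[2] + 1, sub[3], sub[4]),   # substitution
                    (dele[0] + 1, dele[1], dele[2], dele[3] + 1, dele[4]),  # deletion
                    (ins[0] + 1, ins[1], ins[2], ins[3], ins[4] + 1),   # insertion
                    key=lambda t: t[0],
                )
    return dp[m][n][1:]  # (H, S, D, I)

def error_rates(ref, hyp):
    """Return (standard_rate, normalized_rate) for one reference/hypothesis pair."""
    h, s, d, i = align_counts(ref, hyp)
    errors = s + d + i
    standard = errors / len(ref)            # (S + D + I) / reference length
    normalized = errors / (errors + h)      # (S + D + I) / (S + D + I + H)
    return standard, normalized
```

For example, with reference `"a"` and hypothesis `"b c d"` (one substitution, two insertions), the standard WER is 3.0 (300%), while the normalized WER is 3 / (3 + 0) = 1.0, i.e. capped at 100%.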
Hello @lhoestq,
I hope you're doing well.
I'm writing to gently follow up on this PR. It's a small, straightforward change that introduces normalized versions of WER and CER for ASR evaluation.
The goal is to provide a metric that is more robust to outliers, which can heavily skew the standard scores (standard WER can exceed 100% when a hypothesis contains many insertions relative to the reference). Although the implementation is minimal, I believe this addition offers significant value to researchers.
Since the change is quite small, I hope it will be quick to review. Please let me know if you have any feedback.
Thank you!
I'd love this to be implemented!