just_audio
Normalize audio track while playing
Normalizing an audio track while playing is an advanced feature that automatically adjusts the track's volume level in real time during playback. This is particularly useful in situations where a song's volume level may need to be adjusted on the fly.
Is your feature request related to a problem? Please describe. No, it's not a problem.
Describe the solution you'd like Users could specify the desired normalization level in decibels (dB) when enabling this feature.
Describe alternatives you've considered There is already an architecture for supporting audio effects, so this could be added to the player.
Additional context Overall, this feature is powerful for anyone who needs to adjust the volume levels of audio tracks on the fly, and its inclusion in an audio playback app can greatly enhance the app's functionality and usability.
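For illustration only, the requested API might look something like the sketch below. The normalization methods shown in comments are hypothetical and do not exist in just_audio today; only the existing player APIs are real.

```dart
import 'package:just_audio/just_audio.dart';

Future<void> playNormalized() async {
  final player = AudioPlayer();
  await player.setUrl('https://example.com/song.mp3'); // placeholder URL

  // Hypothetical API sketch (not part of just_audio):
  // await player.setNormalizationEnabled(true);
  // await player.setNormalizationTargetDb(-14.0);

  await player.play();
}
```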
allows users to adjust the volume level of an audio track in real time
This is not what I would think of as normalisation, since I would think it implies having an algorithm to determine the right level rather than letting the "user" adjust the level, and you don't really need a new feature for this. The setVolume method already allows you to adjust the volume while the audio is being played.
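For reference, a minimal sketch of adjusting the volume mid-playback with setVolume (the URL is just a placeholder):

```dart
import 'package:just_audio/just_audio.dart';

Future<void> demo() async {
  final player = AudioPlayer();
  await player.setUrl('https://example.com/song.mp3'); // placeholder URL
  player.play(); // not awaited; play() completes when playback finishes

  // setVolume can be called at any time while the track is playing.
  await player.setVolume(0.5); // drop to half volume
  await Future.delayed(const Duration(seconds: 5));
  await player.setVolume(1.0); // restore full volume
}
```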
This will help https://www.youtube.com/watch?v=IMQdSTlTXjA
I have edited the issue.
Is your feature request related to a problem? Please describe. No, it's not a problem.
Presumably the problem this feature is intended to address is that some songs may have a dynamic range that is too extreme to be comfortably listened to.
I think this is possible to implement natively using an AudioProcessor on Android, and using the TAP on iOS.
But until then, since it's probably not a simple matter to implement, you may be able to work around it using existing features. Here are a couple of approaches:
- Using the visualizer branch, use the real-time audio visualization data to get an indication of the current gain levels, and use that to guide how you call setVolume.
- Scan the entire audio file in advance (e.g. using just_waveform), pre-compute the volume levels you will use along the song's timeline, and then execute that with calls to setVolume (see the sketch after this list).
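As a rough sketch of the second approach, assuming you have already derived a volume envelope (a map from timeline positions to volume levels) from an offline scan of the file, you could apply it during playback by listening to positionStream:

```dart
import 'package:just_audio/just_audio.dart';

// `envelope` maps points on the song's timeline to volume levels (0.0–1.0),
// pre-computed from an offline analysis of the file (e.g. with just_waveform).
Future<void> playWithEnvelope(String url, Map<Duration, double> envelope) async {
  final player = AudioPlayer();
  await player.setUrl(url);

  // Order the pre-computed points by time.
  final points = envelope.entries.toList()
    ..sort((a, b) => a.key.compareTo(b.key));

  var next = 0;
  player.positionStream.listen((position) {
    // Apply each volume level once its timestamp has been reached.
    while (next < points.length && position >= points[next].key) {
      player.setVolume(points[next].value);
      next++;
    }
  });

  await player.play();
}
```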
Note that setVolume can be called repeatedly to good effect. For example, people have used this approach to implement dynamic fade-ins and fade-outs.
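For example, a simple fade-in built from repeated setVolume calls might look like the sketch below; the step count and duration are arbitrary choices, not values from the plugin.

```dart
import 'package:just_audio/just_audio.dart';

// Ramp the volume from 0.0 to 1.0 in small steps while playback runs.
Future<void> fadeIn(AudioPlayer player,
    {Duration duration = const Duration(seconds: 3), int steps = 30}) async {
  final interval = duration ~/ steps;
  await player.setVolume(0.0);
  player.play();
  for (var i = 1; i <= steps; i++) {
    await Future.delayed(interval);
    await player.setVolume(i / steps);
  }
}
```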