SDRPlusPlus
SSB AGC click on strong signals
Hardware
- CPU: Mac M1
- RAM: 32GB
- GPU: M1
- SDR: Remote Airspy HF+, via SpyServer, running Float32 sample mode.
Software
- Operating System: macOS 12.6.5
- SDR++: May 2nd 2023, 19:29, Intel build
Bug Description
The SSB demodulator AGC seems to behave a little oddly with very strong SSB signals. In the example I've provided, you hear distinct 'clicks' when the station starts talking. Maybe I haven't got the AGC settings tuned correctly, and unfortunately I've long since lost the default settings (is there a button to reset these to defaults?).
Looking at the raw IQ data, it's not clipping there; it just seems that the signal in question is very strong and the band noise is very low. Any suggestions on how to reduce this effect would be appreciated!
Steps To Reproduce
I've put a baseband recording of the signal in question here: https://www.dropbox.com/s/kpl2js5yxvpafk3/baseband_7137179Hz_10-05-59_14-05-2023.wav?dl=0 This is a float32 recording, and the 'strong' signal is at 7135 kHz LSB.
I don't think there's really any setting that can be adjusted to get around this. The signal level jumps up 60 dB instantly once the AGC has had time to bring the gain back up (because the guy stops talking for over a second); it's gonna sound like a click no matter what you do, because that's what it is.
You could try to increase attack and decrease decay to make the speed at which the gain is lowered faster and the speed at which it's raised back up slower, but it doesn't change that much (the SSB defaults are 50 for attack and 5 for decay).
I might see later if I can modify the AGC to handle this but I doubt it, especially with the very little free time I've got at the moment...
I'm wondering how those AGC parameters compare to what the AGC in my commercial amateur rig is doing, then? (With similarly strong signals, I don't get that kind of clicking or popping sound.) In my IC-7610, I don't have attack/decay controls, just an AGC time constant in seconds. The defaults are:
- Fast: 0.3 seconds
- Mid: 2 seconds
- Slow: 6 seconds (which is also the maximum the time constant can be set to; I will often use this when there's a lot of band noise, like static crashes).

I'm wondering if the controls currently available in SDR++ let me adjust to that kind of extent?
The attack and decay are simply separate time constants for raising and lowering the gain because you usually want the gain to drop faster than to increase.
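To illustrate what that means in DSP terms, here's a minimal sketch of a generic two-rate AGC, assuming a simple one-pole envelope follower (hypothetical code, not taken from SDR++):

```cpp
#include <cmath>
#include <cstddef>

// Minimal two-rate AGC sketch: the envelope follower uses a fast
// coefficient when the level rises (attack) and a slow one when it
// falls (decay), so the gain drops quickly but recovers slowly.
struct SimpleAgc {
    float attack;       // per-sample coefficient for rising levels (larger = faster)
    float decay;        // per-sample coefficient for falling levels (smaller = slower)
    float env = 1e-6f;  // current envelope estimate

    void process(float* buf, size_t count) {
        for (size_t i = 0; i < count; i++) {
            float mag = std::fabs(buf[i]);
            float rate = (mag > env) ? attack : decay;
            env += rate * (mag - env);  // one-pole envelope follower
            buf[i] /= (env + 1e-6f);    // gain = 1/envelope, normalizing the output
        }
    }
};
```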
So that suggests that with the current maximums (200 for attack, 50 for decay, both in ms), there's no way to get anywhere near the mid or slow settings that I have in my other SSB rigs?
The values are not in any particular unit (internally they actually represent a fraction of the sample rate that the demodulator is running at).
No matter how much you raise that time constant, you're still gonna get a click. Analog rigs are analog, so you don't really get the same kind of harsh clipping, and digital rigs cheat by doing some tricks with the AGC which I haven't bothered looking into, because for most real-world signals this is not really much of an issue.
OK, so based on this new information, how do I convert from those values to a time constant in seconds for the attack/decay? What sample rate are we talking about here? (Noting that the baseband sample rate in this case is 24 kHz).
Having those values be a value in seconds would make this a lot more intuitive for users...
I still think being able to set the decay longer would improve matters; however, it appears that with the current settings I don't have the ability to test this.
> OK, so based on this new information, how do I convert the values to a time constant in seconds for the attack/decay?

You don't, just pick something that sounds right.

> Having those values be a value in seconds would make this a lot more intuitive for users...

As I said, they're not seconds. The DSP doesn't simulate electronics directly, so the exact value is meaningless.

> I still think being able to set the decay longer would improve matters

It doesn't, I tried.
I know they are not values in seconds; you made this abundantly clear further up the thread.
What I'm asking is how to convert the values to a time constant in seconds so I can try and set values that somewhat match what I can set in my other SSB radios (analog, digital, SDR in other software, etc...), so I can try and do some kind of meaningful comparison.
There's no easy conversion between the two. I could probably figure it out with a pen and paper and 30 minutes of calculus, but I don't have the time for this at the moment.
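For what it's worth, if the loop really is a plain one-pole exponential smoother (which the "fraction of the samplerate" comment suggests, though this is an assumption), the standard relation between a per-sample coefficient a and an RC-style time constant is tau = -1 / (Fs * ln(1 - a)), which for small a reduces to tau ≈ 1 / (a * Fs). A quick sketch:

```cpp
#include <cmath>
#include <cstdio>

// Rough conversion from a per-sample one-pole smoothing coefficient to
// an equivalent RC-style time constant. Exact: tau = -1 / (Fs * ln(1 - a));
// for small a this is approximately 1 / (a * Fs). This assumes a plain
// exponential smoother, which may not match SDR++'s actual AGC loop.
double coeffToSeconds(double a, double sampleRate) {
    return -1.0 / (sampleRate * std::log(1.0 - a));
}

int main() {
    // At the 24 kHz demodulator rate in this case, a coefficient of
    // 0.001 per sample works out to a time constant of roughly 42 ms.
    printf("tau = %.4f s\n", coeffToSeconds(0.001, 24000.0));
    return 0;
}
```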
If you don't mind a little more latency in the signal, the undesirable effects can be avoided by sampling the strength and applying AGC to a delayed version of the samples. In effect, you get to "look ahead" by a few milliseconds and start reducing the gain early. About 5 ms should be sufficient. It is a common feature of editors, but also works for live signals with intentional latency applied.
I am not sure exactly what the best practices are for doing it. I would guess that a buffer is used - sampling strength on the input, calculating the necessary gain change, then applying it to samples emerging from the buffer. So the actual delay would need to be about 5 ms plus sufficient time to calculate the gain change.
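A minimal sketch of that idea, measuring the envelope on the live input but applying the resulting gain to a delayed copy of the samples (hypothetical code, not SDR++'s implementation):

```cpp
#include <cmath>
#include <cstddef>
#include <deque>

// Lookahead AGC sketch: the envelope is tracked on the incoming samples,
// but the gain is applied to samples delayed by `lookahead` samples, so
// the gain reduction lands before the loud transient reaches the output.
class LookaheadAgc {
public:
    LookaheadAgc(size_t lookahead, float attack, float decay)
        : delay(lookahead, 0.0f), attack(attack), decay(decay) {}

    float process(float in) {
        // Update the envelope from the *undelayed* input.
        float mag = std::fabs(in);
        env += ((mag > env) ? attack : decay) * (mag - env);

        // Push the new sample in, pull the oldest one out.
        delay.push_back(in);
        float delayed = delay.front();
        delay.pop_front();

        // Apply the already-reduced gain to the delayed sample.
        return delayed / (env + 1e-6f);
    }

private:
    std::deque<float> delay;
    float attack, decay;
    float env = 1e-6f;
};
```

At the 24 kHz rate of this recording, 5 ms of lookahead is only 120 samples of delay.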
If you look at the code you'll notice this is already the case within the currently processed buffer.
darksidelemm,
I have been running your IQ file over here in SDR++ and looking at some features up close in Audacity. I've also been looking at the code for this software's AGC loop. It is interesting, and there appears to be more than one issue between the transmitter and SDR++. I believe four things can make this clean, clear, and a pleasure to hear.
First off, the station transmitting is putting out a strong signal. Great, except he is overdriving the transmitter on voice peaks, generating splatter visible on the waterfall (and audible above and below his signal). If he lowers his drive to the final amp, that should stop.
Next, the AGC in SDR++ doesn't decrease the gain soon enough. I'm going to assert to AlexandreRouma that it is well and good to look ahead to catch signals above the max amplitude. It is also good to do these three things:
- Look ahead and apply AGC for all signals, or at least for signals considered "strong" but not enough to exceed full strength. Adding 15 ms to 20 ms of look ahead time would catch the pops I found.
- When looking ahead and applying AGC early, it is good to apply a hang time to make sure the AGC does not decay early. In fact, even more hang time could prevent background noise pumping between syllables. For example, keeping the gain reduced for lookahead + 1/3 of the decay time could work nicely.
- Less gain on the ambient background noise would improve the overall user experience and require a smaller gain reduction when a large signal appears. That would result in SDR++ behaving as a "quiet receiver" as long as the user is judicious with their receiver hardware gain.
Even when setting up a low noise floor from the hardware, SDR++ applies so much gain that things get sporty upon the arrival of a large signal. A limit to the added gain on low level signals would make the strong stations less shocking when they come up.
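A sketch of the hang-time and gain-ceiling ideas combined, under the same one-pole assumption as above (hypothetical code, not the actual SDR++ loop):

```cpp
#include <cmath>
#include <cstddef>

// AGC sketch with hang time and a gain ceiling: after each gain
// reduction the gain is held for `hangSamples` before decay is allowed
// to raise it again, and the gain never exceeds `maxGain`, so quiet
// band noise isn't amplified all the way to full scale.
struct HangAgc {
    float attack;        // per-sample coefficient for gain reduction
    float decay;         // per-sample coefficient for gain recovery
    float maxGain;       // ceiling on the gain applied to weak signals
    size_t hangSamples;  // how long to hold the gain down after a peak

    float env = 1e-6f;
    size_t hang = 0;

    float process(float in) {
        float mag = std::fabs(in);
        if (mag > env) {
            env += attack * (mag - env);  // fast reduction on a peak
            hang = hangSamples;           // re-arm the hang timer
        } else if (hang > 0) {
            hang--;                       // hold: no recovery during hang
        } else {
            env += decay * (mag - env);   // slow recovery once hang expires
        }
        float gain = std::fmin(1.0f / (env + 1e-6f), maxGain);
        return in * gain;
    }
};
```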
Here are some more images showing comparisons between the audio output and the IQ recording.
Alright, gents, there's something to consider. LOL I guess I should learn some C and start poking around that AGC loop. More look ahead, balanced by hang time, and a little less gain on the background could be just the feature which fixes this issue.
Cheers, Phil / AB9IL
This is one of the 'downsides' of receiving from a site with a very low noise floor. Every strong station looks like it has terrible sidebands. In this case I think they are about 30 dB down, which isn't great, but I've certainly seen a lot worse. It does look bad on the waterfall though!