Does it make sense to do IQ imbalance correction after decimation?
Just a general RF DSP question. I doubt we'll be able to do IQ imbalance correction at 500ksps, especially on RP2040, but does it make sense to do this after decimation?
Yes, it still makes sense after decimation. I've been meaning to do some measurements of mirror suppression, so we can do a before and after.
What exactly needs to be done? Maybe I could help.
I think the best way to measure the image rejection is to tune in a strong signal (as strong as possible without saturating the op-amp), note the power in dBm, then temporarily turn on "swap IQ" and see how much the signal is attenuated. Whenever I have tried this, the signal has disappeared completely below the noise floor, which suggests that the image rejection is already quite good. As a sense check, I temporarily disconnected the I or Q input from the ADC, and the image signal isn't rejected at all (as expected).

To get an accurate measurement I was going to try injecting a signal with a signal generator, to give the largest possible signal with no noise from the antenna. Using CW mode with the narrowest bandwidth setting will reduce the noise floor as far as possible. I was then going to repeat the measurement over a few different bands to see if there is any drop-off in performance as the frequency increases.
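The arithmetic behind this measurement is simple enough to write down. A minimal sketch (function names are mine, purely for illustration): when the image vanishes into the noise, the difference between the signal reading and the noise floor only gives a lower bound on the rejection, not the rejection itself.

```cpp
#include <cassert>

// Image rejection is the difference between the wanted signal's power
// and the residual image's power, both measured in dBm.
double image_rejection_db(double signal_dbm, double image_dbm)
{
    return signal_dbm - image_dbm;
}

// If the image is buried in noise, we can only bound the rejection:
// it is at least (signal - noise floor) dB.
double rejection_lower_bound_db(double signal_dbm, double noise_floor_dbm)
{
    return signal_dbm - noise_floor_dbm;
}
```

This is why narrowing the bandwidth matters: every dB the noise floor drops extends the range over which the rejection can actually be measured rather than merely bounded.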
I was thinking of implementing something along the lines described here: https://github.com/df8oe/UHSDR/wiki/IQ---correction-and-mirror-frequencies. They achieved an image rejection of 60-65dB by implementing IQ imbalance correction.
Thanks for the detailed explanation. So the idea is to have the correction coefficients fixed (or have a few for different bands)? I was thinking about continuous software estimation and compensation. I was looking at this, but even computing all the needed moving averages seems to be too expensive in our case.
I was thinking of implementing automatic correction similar to the method you linked. The Moseley paper they followed in the UHSDR project describes an efficient method of estimating the imbalance and calculating the correction coefficients. It basically uses accumulators to estimate the phase and magnitude differences between I and Q. We wouldn't need to calculate the correction coefficients very often; once every few blocks would be plenty. Applying the correction should be achievable if we apply it after decimation. It should be slightly simpler than the existing frequency shift, only a couple of multiplies and adds for each sample.
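For reference, the feed-forward estimator from the Moseley paper can be sketched in floating point roughly as below. The struct and member names are mine, and the choice of correcting the Q channel (leaving I as the reference) is one common algebraic arrangement, not necessarily what PicoRX ends up doing; a real implementation would be fixed point.

```cpp
#include <cassert>
#include <cmath>

// Sketch of a Moseley-style feed-forward IQ imbalance estimator.
// Model: I = cos(wt), Q = g*sin(wt + phi) with small gain error g
// and phase error phi. Three accumulators estimate the imbalance.
struct IqCorrector {
    double t1 = 0, t2 = 0, t3 = 0; // accumulators
    double c1 = 0, c2 = 1;         // correction coefficients

    void accumulate(double i, double q) {
        double s = (i >= 0) ? 1.0 : -1.0;
        t1 += -s * q;        // averages to -g*sin(phi)*E|I|
        t2 += std::fabs(i);  // averages to E|I|
        t3 += std::fabs(q);  // averages to g*E|I|
    }
    void update() {          // run once per block, not per sample
        c1 = t1 / t2;                                    // -g*sin(phi)
        c2 = std::sqrt((t3 * t3 - t1 * t1) / (t2 * t2)); // g*cos(phi)
        t1 = t2 = t3 = 0;
    }
    void apply(double &i, double &q) const {
        (void)i;
        q = (q + c1 * i) / c2; // remove phase leakage, normalise gain
    }
};
```

With no imbalance (g = 1, phi = 0) the update converges to c1 = 0 and c2 = 1, which matches the "c1 small, close to zero, c2 close to one" observation later in this thread.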
I managed to write something that is efficient enough and might actually work. The computed values seem to be stable and close to what is expected (c1 small, close to zero, c2 close to one). Just listening to stations it's hard to say what the effect is and I don't have equipment to test. The code is here if you're interested.
Nice! Looks like a very efficient implementation. I will try some tests and see if it improves the performance.
Here is a branch with some more adjustments and UI controls for the correction: https://github.com/mryndzionek/PicoRX/tree/iq_imb6
Something's not quite right at the moment: enabling IQ correction is making the images worse rather than better. I expect it's probably a simple fix somewhere. For interest, I have done some baseline measurements (a bit crude, only a single data point at each frequency) of the image rejection with no correction: [email protected], [email protected], 34dB@7MHz, 37dB@14MHz, 38dB@28MHz. I'm excited to see how much improvement we can get with IQ correction.
I think we're running out of int32_t range on some accumulator variables...
I added one more commit to my branch. Now it should be less obviously worse after enabling correction :smile: (I still don't see a clearly positive effect).
On this branch I added some more adjustments and a first-order DC block on the IQ signals. This seems to have a positive effect, and it's mostly due to the DC blocking, not the imbalance corrector.
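For reference, a first-order DC blocker is usually the one-pole, one-zero recurrence y[n] = x[n] - x[n-1] + a*y[n-1] with a just below 1. A fixed-point sketch (the pole position, scaling, and class name are my assumptions, not the branch's actual values):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// First-order DC blocker: y[n] = x[n] - x[n-1] + a*y[n-1], a = 1 - 2^-6.
// Sketch only: assumes the input has enough headroom that the transient
// x[n] - x[n-1] term doesn't overflow on the way back to int16_t.
class DcBlocker {
    int32_t x_prev = 0;
    int32_t y_prev = 0;
public:
    int16_t process(int16_t x) {
        // a*y implemented as y - y/64, avoiding a multiply
        int32_t y = (int32_t)x - x_prev + (y_prev - (y_prev >> 6));
        x_prev = x;
        y_prev = y;
        return (int16_t)y;
    }
};
```

Note the truncation in `y_prev >> 6` leaves a small DC residual (up to one LSB times the time constant); keeping extra fractional bits in `y_prev` would remove it at the cost of a little more arithmetic.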
Here is the final version. The impact is hard to judge. It's at least not obviously bad :wink: I'm not sure I'll be able to come up with something better.
Cool, testing it now!
I have made a simple (and fixed point) simulation in Python:
So yeah, it works. At least for clean IQ signals.
I tried a few tests with the iq_corr_mosely branch, but I still couldn't see any improvements to the image rejection. I have spent a bit of time trying to see if I could get it working. I wasn't able to put my finger on the issue, but I worked through testing each stage in turn and I came up with this.
It seems to work very well, improving the image rejection by at least 10-20dB, but it is now hard to measure the image rejection because the images are either below the noise floor, or too small to measure.
I really like your idea of putting the DC removal after the decimation: it makes it much faster, and it works on I and Q individually. It turns out that the CIC decimator doesn't care about the DC offset, so I was able to take out the earlier DC removal on the ADC inputs, which saves a significant amount of CPU. I was also able to remove the rounding from the CIC decimator, which is now redundant, although the effect was much less noticeable.
Looks and works great! I would say shipit :shipit:
One more thing. I've been playing with CIC compensation filters. A FIR needs at least 15 taps, so it's too expensive (gets me to 130% CPU load). However, I managed to derive an IIR filter. Here is the implementation:
```cpp
void __not_in_flash_func(rx_dsp::comp_filt)(int16_t &i, int16_t &q)
{
#define COMPF_B0 (-89274L)
#define COMPF_A1 (42593L)
#define COMPF_A2 (13914L)
  static int32_t i_yprev = 0;
  static int32_t q_yprev = 0;
  static int32_t i_ypprev = 0;
  static int32_t q_ypprev = 0;

  // second-order all-pole IIR in Q15: y[n] = b0*x[n] - a1*y[n-1] - a2*y[n-2]
  i = ((COMPF_B0 * i) - (COMPF_A1 * i_yprev) - (COMPF_A2 * i_ypprev)) >> 15;
  q = ((COMPF_B0 * q) - (COMPF_A1 * q_yprev) - (COMPF_A2 * q_ypprev)) >> 15;

  // shift the delay line
  i_ypprev = i_yprev;
  q_ypprev = q_yprev;
  i_yprev = i;
  q_yprev = q;
}
```
Seems to make the spectrum flatter and boost the tuned signal by ~6dB. What do you think?
That's a very neat implementation. I always struggle a bit with IIR filters, especially in fixed point. Would be very interested to see how this was derived.
It's fairly easy to compensate for the curve of the CIC filter in the frequency domain, I have tried this and it works well. There is a Python simulation of the CIC filter, I just took the reciprocal of this to work out the gain for each frequency point in the FFT. I then simply scale each frequency bin by the appropriate gain in the FFT filter. For an efficient implementation, it's only necessary to scale the frequency bins in the pass band. The whole spectrum is also flattened using this technique, but it is only necessary to apply the correction once, each time the display is updated.
I have a branch that includes most of this, I will push it and post a link when I get a spare moment.
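The per-bin gain table described above might look something like the sketch below (floating point for clarity; the parameter names and unity-DC normalisation are my assumptions, and a real implementation would precompute the table in fixed point). It uses the standard CIC magnitude response |H(f)| = |sin(pi*f*R) / (R*sin(pi*f))|^N with unit differential delay, and takes its reciprocal at each passband bin.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Reciprocal-of-CIC gain table for FFT-domain droop compensation.
// R = decimation factor, N = number of CIC stages, fft_size = FFT length
// at the decimated rate; only the first passband_bins bins are corrected.
std::vector<double> cic_compensation_gains(int R, int N, int fft_size,
                                           int passband_bins)
{
    const double pi = std::acos(-1.0);
    std::vector<double> gains(passband_bins);
    gains[0] = 1.0; // DC: CIC gain is exactly R^N, normalised to 1 here
    for (int k = 1; k < passband_bins; ++k) {
        // bin k sits at f = k/(fft_size*R) of the pre-decimation rate
        double f = (double)k / ((double)fft_size * R);
        double h = std::sin(pi * f * R) / (R * std::sin(pi * f));
        gains[k] = 1.0 / std::pow(std::fabs(h), N);
    }
    return gains;
}
```

Since the droop only grows towards the passband edge, the gains are monotonically increasing, and scaling each FFT bin by its table entry flattens both the demodulated audio and the displayed spectrum in one place.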
The Python script is very messy. I used my own least-squares FIR design function and Prony's method to derive the IIR. I can only show frequency responses for now:
The only potential problem might be nonlinear phase response:
If I remember correctly we're tuning around 6kHz, so we're still in the linear range.
Regarding moving stuff to the frequency domain, is this how frequency shifting is supposed to be done?
Yes, that's right: the frequency shift is effectively a rotation of the spectrum in the frequency domain. The difficulty comes when we need a resolution finer than 1 bin.
It's quite unusual to see frequency shifting implemented in the frequency domain. I think this is because frequency shifting is actually slower in the frequency domain than in the time domain.
When we are filtering, a convolution operation in the time domain becomes a multiplication operation in the frequency domain, dramatically reducing the number of operations (even after the FFT and IFFT are taken into account).
When we are frequency shifting, the same principle works against us, a multiplication operation in the time domain becomes a convolution operation in the frequency domain, increasing the number of operations.
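To illustrate the two cases side by side: a whole-bin shift is just a circular rotation of the spectrum (no multiplies at all), while a fractional shift needs one complex multiply per sample in the time domain. A sketch, assuming the usual 0..N-1 bin ordering with negative frequencies in the upper half:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Whole-bin shift: rotate the spectrum so old bin b lands on bin b+bins.
void shift_by_bins(std::vector<std::complex<float>> &spectrum, int bins)
{
    int n = (int)spectrum.size();
    int k = ((bins % n) + n) % n; // normalise shift to 0..n-1
    std::rotate(spectrum.begin(), spectrum.begin() + (n - k), spectrum.end());
}

// Fractional shift (df in cycles/sample): multiply by a complex
// exponential in the time domain, one complex multiply per sample.
void shift_time_domain(std::vector<std::complex<float>> &x, float df)
{
    const float pi = std::acos(-1.0f);
    for (int n = 0; n < (int)x.size(); ++n) {
        float ph = 2.0f * pi * df * (float)n;
        x[n] *= std::complex<float>(std::cos(ph), std::sin(ph));
    }
}
```

The rotation is indeed just memory shuffling; it's only the sub-bin case where the time-domain multiply (and its per-sample cost) becomes unavoidable.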
But how exactly is it slower? We are just shuffling memory without multiplications. Or are you talking about the resolution finer than 1 bin case?
Would be good to have a more specific TODO list of things that need to be moved to frequency domain.
Yes, I meant if it was finer than 1 bin. I'm not sure if there is anything existing that needs to be moved to the frequency domain, but we can exploit it when we add new features.
One thing I would like to implement is frequency based noise reduction. Essentially this works like having a squelch function for each frequency bin that blanks "noise" below a threshold. The difficult bit is estimating the noise level in each bin in order to set the thresholds, and to avoid distorting the signal.
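A per-bin squelch along these lines might be sketched as below. The slow-rise/fast-fall noise-floor tracker and all the names are illustrative assumptions, not an existing implementation; the slow rise is what stops a genuine signal from dragging its own threshold up.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Per-bin "squelch": track a noise-floor estimate for each frequency bin
// and blank bins whose magnitude stays below threshold * floor.
class SpectralSquelch {
    std::vector<float> floor_; // per-bin noise floor estimate
public:
    explicit SpectralSquelch(size_t bins) : floor_(bins, 0.0f) {}

    void process(std::vector<std::complex<float>> &spectrum, float threshold)
    {
        for (size_t k = 0; k < spectrum.size(); ++k) {
            float mag = std::abs(spectrum[k]);
            // floor rises slowly and falls quickly, so signal bursts
            // don't pull the estimate up with them
            if (mag > floor_[k]) floor_[k] += 0.001f * (mag - floor_[k]);
            else                 floor_[k] += 0.1f * (mag - floor_[k]);
            if (mag < threshold * floor_[k])
                spectrum[k] = 0.0f; // blank bins indistinguishable from noise
        }
    }
};
```

As noted above, the hard part is exactly this floor estimate: too aggressive and the thresholds distort the signal, too timid and the noise comes through untouched.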
Actually regarding squelch, I would like to improve it. I made #140 and I think having this state machine and adjustable timeout makes sense, but what is missing is automatic squelch, without explicitly setting the threshold. I'm planning to explore this. Should be efficient even on floats.
Yes, really like the idea of automatic squelch. I have pushed my TFT branch here. It is still a work in progress, but has support for a separate TFT waterfall, CAT control and a few other bits and pieces. I have implemented CIC correction in the frequency domain.
What is the plan with TFT branch? Will it be merged to testing? I really would like to have CIC correction in testing. Should I cherry-pick?
I think I have merged most features from testing (not the new squelch change yet), but I took a different path early on, so there are some differences in approach. I was thinking of making a new release based on the TFT branch (fairly) soon. I probably won't add any new features to it for now, but will focus on testing, documentation and bug fixes.
My plan was to keep using the testing branch as a place to try out and share new features and ideas, and incorporate the more stable changes into releases periodically. I could merge the TFT branch back into testing now, or I'm happy to cherry-pick, whichever you prefer.
Will the TFT branch also support OLED displays and u8g2?