Offset in accumulated carrier phase?
When examining the "dump" from the GPS_L1_CA_DLL_PLL_Tracking loop, I have noticed that there is an offset in the accumulated carrier phase (acc_carrier_phase_rad) compared to the carrier phase of the input signal.
I have a short simulated stretch of data with four PRNs: 5, 7, 8, and 13. To illustrate what I am talking about, I prepared a plot below. At an arbitrary sample time, 13 seconds into the data set, I grab the complex sample value for each PRN from the simulated data and plot them in the left-most panel. I also retrieved the estimated amplitude and carrier phase from the gnss-sdr dump, constructed y = Ahat * exp(j * phihat), and plotted that in the middle panel. I have developed another tracking loop myself, and I show its outputs in the right-most panel. The right-most panel is what I expect to see from gnss-sdr: the carrier phase matches that of the input signal (or is off by pi).
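For anyone who wants to reproduce the comparison, here is a minimal sketch of the phase check I am doing. The dump field names and how you load them are up to your own conversion pipeline; only the wrapped-phase comparison itself is shown:

```python
import numpy as np

def phase_residual(z_input, A_hat, phi_hat):
    """Wrapped phase difference (radians) between an input complex sample
    z_input and the reconstructed estimate y = A_hat * exp(1j * phi_hat)."""
    y = A_hat * np.exp(1j * phi_hat)
    d = np.angle(z_input) - np.angle(y)
    # Re-wrap the difference into (-pi, pi] so a half-cycle slip shows up as +/- pi.
    return np.angle(np.exp(1j * d))

# Synthetic check: an estimate that is off by exactly pi wraps to +/- pi.
z = 2.0 * np.exp(1j * 0.3)
print(phase_residual(z, 2.0, 0.3 + np.pi))
```

If gnss-sdr were tracking with no offset, this residual would sit near 0 (or near +/- pi) for every PRN; what I see instead is a different constant per PRN.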
Side note: the tracking-loop dump from gnss-sdr does not explicitly output the sample numbers, but (at least for the ishort data type) they can be reconstructed as `s = (h5pydata["aux2"] + h5pydata["aux1"]) * 2`.
gnss-sdr tracks the carrier phase well, except that each of the PRNs has a different unknown offset. Can someone explain the offset to me? Is this a bug, or is the carrier phase initialized in a different way than I am used to? I am hoping to remove the offset so that the gnss-sdr carrier phase lines up with the true carrier phase. Thank you.
Expected behavior: acc_carrier_phase_rad matches the input carrier phase (or is off by pi).
Actual behavior: acc_carrier_phase_rad tracks the input carrier phase, but has an unexplained offset relative to the input carrier phase that differs between PRNs.
Update: I poked around the source code a bit more, particularly dll_pll_veml_tracking.cc. When I comment out the line below, the issue goes away. Is there any detailed documentation or algorithmic description of the DLL/PLL implemented here? I'm curious about the purpose of this line:
`d_acc_carrier_phase_rad -= d_carrier_phase_step_rad * static_cast<double>(samples_offset);`
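A hedged sketch of why such a correction could produce exactly the symptom above (this is my reading, not gnss-sdr code): if the loop re-aligns its integration window by some `samples_offset` samples once and subtracts `carrier_phase_step * samples_offset` from the accumulator, then the accumulated phase carries a constant bias proportional to `samples_offset`. Since each PRN locks at a different code phase, `samples_offset` differs per PRN, and so would the bias:

```python
import numpy as np

fs = 2.6e6                      # sample rate, Hz (value from this thread)
f_if = 1.0e3                    # illustrative carrier (IF + Doppler), Hz
step = 2 * np.pi * f_if / fs    # carrier phase advance per sample, rad

def acc_phase(n_samples, samples_offset):
    """Accumulated carrier phase after n_samples, with a one-time
    subtraction of step * samples_offset (mimicking the quoted line)."""
    return step * n_samples - step * samples_offset

# Two "PRNs" with different alignment offsets tracking the same carrier
# end up with a constant phase difference of -step * (250 - 17):
a = acc_phase(100000, samples_offset=17)
b = acc_phase(100000, samples_offset=250)
print(b - a)
```

That would match the observation that the offset is constant in time but different for each PRN.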
If this is indeed the case, it would explain an effect we have observed: the pseudorange, which includes carrier phase, differs between virtual satellites located at the same point but with different PRNs.
Any updates from anyone?
What do you mean by "the pseudorange includes the carrier phase"? By default the code loop is aided by the carrier loop (the carrier_aiding option in the tracking config), but the pseudorange is constructed from the time of reception, which is based on the code loop. You could try disabling carrier_aiding in your config, but my hunch is that it won't solve your problem.
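For reference, a minimal config sketch for turning that option off (the `Tracking_1C` section name assumes a GPS L1 C/A receiver chain; adapt it to whatever your configuration uses):

```ini
; in your gnss-sdr .conf file
Tracking_1C.carrier_aiding=false
```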
Hi, I have been working on the basis that the code loop gives the 'coarse' pseudorange estimate and the carrier loop the 'fine' one. However, I have just run the gnss-simulator which is part of the extended test code. It generates a 300-second, 2.6 Ms/s, 8-bit IQ file which gnss-sdr can read. I used a patched file where four of the visible satellites were copied/pasted with the same navigation section, so they are at the same point in space, moving together. I would expect the same pseudorange but actually get a 13 m difference. As you say, this is caused by a 43 ns difference in the computed TOW_at_current_symbol. The gnss-simulator correctly generates the four satellites with identical code phase (chips), which should therefore match to about 1 ns. Any idea why this happens? Only the PRN is different.
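The two numbers are consistent with each other, assuming the discrepancy is purely a timing offset, since a time-of-reception error maps to pseudorange through the speed of light:

```python
c = 299_792_458.0   # speed of light, m/s
dt = 43e-9          # 43 ns difference in TOW_at_current_symbol
print(dt * c)       # ~12.9 m, matching the observed ~13 m pseudorange gap
```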
One more result. Using the test file, which is also compatible with a HackRF One, the output was fed to a u-blox F9T. It also reported substantial differences in the pseudoranges, so this is completely independent of gnss-sdr. Tests using a £20K simulator some months ago gave the expected, essentially perfect results. Does anyone know of a cheap way to simulate GNSS signals?
I'm not able to find the gnss-simulator that you're talking about. 2.6 Ms/s sounds a little low for GPS L1; you won't capture the full main lobe of the C/A signal. I usually use 6.25 Ms/s.
The GPS satellite clocks have some offset from GPS time though. When solving for the position solution, the receiver will account for this, but the raw measurements will probably contain that offset. You might be able to change that in the simulator. Or if the simulator uses real broadcast ephemerides, you can look those up on CDDIS to see if that matches the offset that you're seeing.
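To make the satellite-clock point concrete, here is a sketch of the broadcast SV clock polynomial from IS-GPS-200 (relativistic and group-delay terms omitted for brevity; the coefficient values below are made-up illustration values, not from any real ephemeris):

```python
# dt_sv = af0 + af1*(t - toc) + af2*(t - toc)^2  (IS-GPS-200 clock model)
def sv_clock_offset(t, toc, af0, af1, af2):
    """Satellite clock offset from GPS time, in seconds."""
    dt = t - toc
    return af0 + af1 * dt + af2 * dt * dt

# Even a few-microsecond af0 maps to kilometres of raw pseudorange bias:
c = 299_792_458.0
print(sv_clock_offset(100.0, 0.0, 1e-5, 1e-12, 0.0) * c)  # roughly 3 km
```

So if the simulator assigns different clock coefficients to the cloned PRNs, the raw pseudoranges will differ even though the geometry is identical.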
As for cheap ways of simulating GNSS signals, I know that Safran has some software that I think is free to universities. Spirent sells simulators too, but I wouldn't call them "cheap" when the device will probably be more expensive than the building you put it in. I also have a Python simulator that I've built myself, which I could polish up a little. If you want to talk about that part, you can email me: dawson dot beatty at gmail.
https://bitbucket.org/jarribas/gnss-simulator? It gets built with GNSS-SDR if ENABLE_GNSS_SIM_INSTALL is enabled at build time.
I found it, thanks. I'm curious if anyone has thoughts on my original question.