NONMEM and nlmixr readers should use `NPD` instead of `NPDE` to match Monolix
Describe the bug
The best-practice goodness-of-fit plots use NPD instead of NPDE. In Monolix, NPDE = NPD since no decorrelation step is performed. However, in nlmixr and NONMEM the NPDE calculation does perform a decorrelation step. So, to match best practice and the Monolix method, these readers should import NPD rather than NPDE.
In the NONMEM reader:
https://github.com/ggPMXdevelopment/ggPMX/blob/de10f6168dd814b1dad62ecaea65bd5821f9119f/R/nonmem_reader.R#L233-L235
In the nlmixr reader:
https://github.com/ggPMXdevelopment/ggPMX/blob/02eca6d2069e741be630cf061545692a9558097e/R/nlmixr.R#L37
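For illustration only (this is not the actual reader code): a minimal sketch of the kind of change being proposed, preferring an NPD column over NPDE when building the diagnostics data. The helper name and column names are assumptions.

```r
# Hypothetical helper, not ggPMX internals: pick NPD when the output table
# provides it, and fall back to NPDE otherwise.
choose_npd_column <- function(tab) {
  candidates <- c("NPD", "NPDE")  # prefer NPD over NPDE
  found <- intersect(candidates, names(tab))
  if (length(found) == 0) stop("no NPD or NPDE column found in the output table")
  found[1]
}

# Fake NONMEM/nlmixr-style output table
tab <- data.frame(ID = 1, TIME = 0:2, NPDE = rnorm(3), NPD = rnorm(3))
choose_npd_column(tab)  # "NPD"
```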
This also makes me wonder: should we name these plots NPD vs. xxx to be clear across imports from multiple software tools?
See http://doi.org/10.1002/psp4.12161 for a small discussion about this.
Thanks for raising this point @mattfidler
Questions for clarification
- which of the plotting functions would this apply to: all involving NPDE, or a subset?
- would it be relevant to rename the functions as well as the labels?
  e.g. `ggPMX::pmx_plot_npde_time` replaced with `ggPMX::pmx_plot_npd_time` (or an alias, with a note explaining the reason for the alias)?
We could simply provide a function `ggPMX::pmx_plot_npd_time` which would work for both.
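Something along these lines could serve as the alias. This is only a sketch, assuming the usual ggPMX pattern of a controller as the first argument; it simply forwards to the existing `pmx_plot_npde_time()`:

```r
# Sketch of a possible alias (not part of ggPMX): forward everything to the
# existing NPDE-vs-time plot, documenting that the imported values are NPD
# to match Monolix.
pmx_plot_npd_time <- function(ctr, ...) {
  ggPMX::pmx_plot_npde_time(ctr, ...)
}
```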
Hi Matt,
I was thinking about this. Generally speaking, both NPDE and NPD suffer from dependency issues, but more so NPD than NPDE. I remember we had this discussion back in summer '21. I think (though I'm not sure) that NPDE should be less biased than NPD. Thoughts?
There is a paper that says the other way around: NPD is less biased than NPDE, and NPDE should not be used.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5321813/
> For graphical evaluation, it may sometimes be better to use npd instead of NPDE because decorrelation can induce artifacts (i.e., create trends or make trends less apparent in the scatterplots of NPDE vs. time or predictions), as we can see in Figure 3 d,e.
Monolix uses NPD for its NPDE output.
Also, as a note, CWRESI is a bit biased as well, and CWRES should be preferred over CWRESI.
Thanks Matt. But shouldn't NPD inflate the type I error?
I use it for visual diagnostics.
A type I error is rejecting the null hypothesis when it is actually true. So, rejecting what? I don't see any hypothesis being tested here, so no, not in my opinion. There are no explicit hypotheses.
If the NPDE is biased, then that could lead to inaccurate conclusions. I believe the paper says the odd NPDE behavior is artifactual and shouldn't be considered.
So the artifact would be produced by the model simulation itself (model misspecification?), since the decorrelation step involves subtracting E(Ysim) and dividing by the std(Ysim). Thanks for sharing your thoughts!!
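For anyone following along, here is a self-contained R sketch (fake data for a single subject, not any package's actual implementation) of the difference being discussed: npd is the normal quantile of the observation's rank within its own simulated distribution, while npde first decorrelates the observation and the simulations using the empirical mean and the Cholesky factor of var(Ysim).

```r
set.seed(42)
n_obs <- 5
n_sim <- 1000

# Fake simulated replicates for one subject (rows = replicates, cols = observations),
# with some within-subject correlation so npd and npde actually differ
sigma <- 0.5 + 0.5 * diag(n_obs)
y_sim <- matrix(rnorm(n_sim * n_obs), nrow = n_sim) %*% chol(sigma)
y_obs <- y_sim[1, ] + rnorm(n_obs, sd = 0.1)  # pretend observation

# npd: normal quantile of the observation's rank within its simulated distribution
pd  <- sapply(seq_len(n_obs), function(j) mean(y_sim[, j] < y_obs[j]))
npd <- qnorm(pmin(pmax(pd, 1 / n_sim), 1 - 1 / n_sim))  # guard against +/- Inf

# npde: decorrelate observation and simulations first, using the empirical mean
# and Cholesky factor of var(Ysim), then take ranks as above
mu <- colMeans(y_sim)
U  <- chol(var(y_sim))                               # var(Ysim) = t(U) %*% U
decorrelate <- function(y) backsolve(U, y - mu, transpose = TRUE)
y_obs_dec <- decorrelate(y_obs)
y_sim_dec <- t(apply(y_sim, 1, decorrelate))
pde  <- sapply(seq_len(n_obs), function(j) mean(y_sim_dec[, j] < y_obs_dec[j]))
npde <- qnorm(pmin(pmax(pde, 1 / n_sim), 1 - 1 / n_sim))

round(rbind(npd = npd, npde = npde), 2)
```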