David Benkeser
The easiest solution is to construct confidence intervals on the logit scale and back-transform, i.e., $\text{expit}\left(\text{logit}(\hat{F}) \pm 1.96 \times \widehat{\text{se}}(\text{logit}(\hat{F}))\right)$, where $\widehat{\text{se}}(\text{logit}(\hat{F}))$ is the estimated standard error...
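A minimal sketch of that back-transformation (not package code): the helper `logit_ci` and the inputs `est` and `se_logit` are hypothetical names, where `est` is the estimate on the probability scale and `se_logit` is the estimated standard error of its logit, e.g., obtained from the probability-scale standard error via the delta method.

```r
# Back-transform a logit-scale Wald interval to the probability scale.
# plogis() is expit and qlogis() is logit in base R.
logit_ci <- function(est, se_logit, level = 0.95) {
  z <- qnorm(1 - (1 - level) / 2)
  c(lower = plogis(qlogis(est) - z * se_logit),
    upper = plogis(qlogis(est) + z * se_logit))
}

# Example: delta-method SE on the logit scale from an SE on the probability
# scale, se(logit(F)) ~= se(F) / (F * (1 - F)).
est <- 0.12; se_prob <- 0.03
logit_ci(est, se_prob / (est * (1 - est)))
```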
I assume you mean for situations when `method = "hazard"`? And I guess you'd want the actual number of iterations taken (which could be less than `maxIter`)?
Why don't we just not worry about it for now and instead include an example in the vignette of how to do it with direct calls to `hazard_tmle` (or `mean_tmle`, which I suspect may...
Thanks for the comment. Adding weights that are known or can be estimated using nonparametric maximum likelihood would be a straightforward addition. If the weights are estimated (especially nonparametrically) using a...
Thanks for the report. I’ll look into it a bit more closely in the coming week. Off the top of my head it looks like it’s just a highly inconsiderate...
Commit 41a22a1546427fd6934062502484baa24306f3e6 fixes a bug in how p-values are generated for tests of counterfactual means. Based on the previous code, I'd guess that the goal was to test the null hypothesis...
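For reference, here is a hypothetical sketch (not the committed code) of a two-sided Wald test that two counterfactual means are equal; the names `wald_test_diff`, `est`, and `cov_mat` are assumptions, with `est` a length-2 vector of estimated counterfactual means and `cov_mat` their estimated 2x2 covariance matrix.

```r
# Two-sided Wald test of H0: est[1] - est[2] = 0.
wald_test_diff <- function(est, cov_mat) {
  diff_est <- est[1] - est[2]
  se_diff <- sqrt(cov_mat[1, 1] + cov_mat[2, 2] - 2 * cov_mat[1, 2])
  z <- diff_est / se_diff
  p <- 2 * pnorm(-abs(z))  # two-sided p-value
  c(diff = diff_est, se = se_diff, z = z, p = p)
}
```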
Commit 83b100bfafb831a31d7e504574bcde5f72170c31 fixes a bug in `summary.ltmle` induced by 41a22a1546427fd6934062502484baa24306f3e6. It also adds stability checks to `SuperLearner`. Specifically:

* For binary outcomes, if there are fewer than 10 outcomes, change SuperLearner to `V=2` fold...
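A hypothetical illustration of that kind of stability check (not the committed code); the wrapper `fit_outcome_sl`, the event-count threshold, and the library `c("SL.mean", "SL.glm")` are assumptions for the sketch.

```r
# With very few observed binary outcomes, fall back to 2-fold cross-validation
# in SuperLearner so that validation folds are less likely to contain no events.
library(SuperLearner)

fit_outcome_sl <- function(Y, X, sl_lib = c("SL.mean", "SL.glm")) {
  n_events <- sum(Y)
  v_folds <- if (n_events < 10) 2 else 10  # fewer folds when events are scarce
  SuperLearner(Y = Y, X = X, family = binomial(),
               SL.library = sl_lib,
               cvControl = list(V = v_folds))
}
```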
Thanks for working on this. Yes, it's a weird edge case, because basically I need it to estimate a propensity that is known to equal 1 irrespective of past covariates....