Add safety measures for failures in external models
I'm thinking we should wrap external solver calls in a try-catch and return missing on failures. This way you don't lose the whole analysis because of one failure.
Obviously we should properly log the failure (spit out a huge warning!).
We'll make this a flag of the ExternalModel and set it to false by default.
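To make the idea concrete, here is a minimal sketch of what the wrapper could look like. The names `evaluate_sample`, `run_solver` and the `allow_failures` keyword are placeholders for illustration, not the existing `ExternalModel` interface:

```julia
# Hypothetical wrapper around a single external solver call.
# `run_solver` and `allow_failures` are placeholder names, not the real API.
function evaluate_sample(run_solver::Function, sample; allow_failures::Bool=false)
    try
        return run_solver(sample)
    catch e
        # Flag off (default): keep the current behaviour and abort the analysis.
        allow_failures || rethrow()
        # Flag on: log loudly and return `missing` so the remaining samples still run.
        @warn "External solver failed for sample; returning missing" sample exception=(e, catch_backtrace())
        return missing
    end
end
```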
I think this is a good idea. One thing I would worry about is that the first time someone tries the algorithm, they get a result back without realising their model has crashed.
Also, when returning missing values and computing the failure probability, you need to account for the fact that you ran random samples and some of them crashed. Crashes are usually not independent of the inputs, i.e. the histogram of the accepted samples will not follow your original input distribution, and your probability estimate will be biased!
We might, however, be able to return upper and lower bounds on the failure probability from a Monte Carlo simulation that accounts for the missing samples. Something like:
Let $N$ = total runs, $n_{obs}$ = number of successful runs, $n_{fail} = N - n_{obs}$, and $m$ = number of observed failures among the successful runs.
Lower bound (optimistic): assume none of the crashed runs would have been failures: $p_{low} = m / N$
Upper bound (pessimistic): assume every crashed run would have been a failure: $p_{high} = (m + n_{fail}) / N$
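A minimal sketch of how those bounds could be computed, assuming the per-sample results are collected as `true` (failure), `false` (no failure) or `missing` (solver crashed); the function name is a placeholder:

```julia
# Placeholder helper: computes the optimistic/pessimistic bounds described above.
function failure_probability_bounds(results::AbstractVector)
    N      = length(results)                 # total runs
    n_fail = count(ismissing, results)       # crashed runs
    m      = count(x -> x === true, results) # observed failures among successful runs
    p_low  = m / N                           # optimistic: no crashed run is a failure
    p_high = (m + n_fail) / N                # pessimistic: every crashed run is a failure
    return p_low, p_high
end

# Example: 10 runs, 2 crashes, 1 observed failure -> (0.1, 0.3)
failure_probability_bounds([false, true, missing, false, false, missing, false, false, false, false])
```

If the two bounds end up far apart, that in itself is a useful signal that too many samples crashed for the estimate to be trusted.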