Likelihood incorrectly calculated error
Hello,
I'm encountering this error for some files with Beast v2.7.6. I found that there had been a similar issue in Beast 2.0.2: https://groups.google.com/g/beast-users/c/xrvcxeU4wwA
```
At sample 81010000
Likelihood incorrectly calculated: -4005.621548446859 != -4019.4435423486443(13.82199390178539) Operator: kernel.BactrianNodeOperator(CoalescentConstantUniformOperator.t:file1)
Too many corrections. There is something seriously wrong that cannot be corrected
```
I've tried running with and without BEAGLE; the result is the same.
@jkimsis It is quite hard to say what is going on without further information. Do you have some XML available I can have a look at so I can investigate the problem?
I've attached the XMLs and log output for two runs that failed like this (C and I) and one that ran all the way through (NO). I ran several processes in parallel with the command below; could that be a problem?
```
# one BEAST run per line of the file 'names'
parallel "~/install/beast/bin/beast -working -overwrite {}/{}.xml &> {}/{}.runlog" :::: names
```
I understand that BEAST has its own multithreading, but the server I'm working on has some issue with it.

files.zip
Thanks, that is very helpful. I can reproduce the problem with these files without using threads, so I don't think that running the processes in parallel is the problem. I will look further into what is causing this.
It looks like the problem is caused by numerical instability: the clock rate explodes to a value around 1e304, close to the largest value a double can represent (about 1.8e308) and unrealistically high. At that magnitude, intermediate terms in the likelihood underflow or overflow, so the incrementally updated likelihood and the full recalculation no longer agree.
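For illustration, here is a minimal Java sketch (not BEAST code; the rate and branch time are made-up values) of how a rate of that magnitude breaks the likelihood arithmetic:

```java
public class RateOverflowSketch {
    public static void main(String[] args) {
        double clockRate = 1e304; // roughly the magnitude seen in the failing runs
        double branchTime = 0.1;  // hypothetical branch length in time units

        // Under a simple substitution model, the probability of seeing no
        // change along a branch decays like exp(-rate * time). At this rate
        // the probability underflows to exactly 0.0 ...
        double pNoChange = Math.exp(-clockRate * branchTime);
        System.out.println(pNoChange);           // prints 0.0

        // ... and its log contribution to the likelihood becomes -Infinity,
        // at which point incremental and recalculated values cannot match.
        System.out.println(Math.log(pNoChange)); // prints -Infinity
    }
}
```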
Currently, I.xml has improper priors on the clock rate and the pop sizes. Putting a much narrower prior on the clock rate, for example Uniform(0, 1e-5), and an upper bound of, say, 100 on the pop size will fix the problem (see the sketch below). There does not seem to be much signal in the sequences, so it is necessary to keep these parameters in check for the analysis to succeed. Note that these values are just suggestions; you have to make sure they are reasonable given where the data came from.
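In the XML this amounts to something like the following sketch; the ids here are guesses based on BEAUti defaults, so match them to the actual ids in I.xml:

```xml
<!-- Narrow uniform prior on the clock rate (ids are assumed, check your file) -->
<prior id="ClockPrior.c:I" name="distribution" x="@clockRate.c:I">
    <Uniform id="Uniform.clock" name="distr" lower="0.0" upper="1e-5"/>
</prior>

<!-- Upper bound on the population size, set where the parameter is declared -->
<parameter id="popSize.t:I" spec="parameter.RealParameter"
           name="stateNode" lower="0.0" upper="100.0">0.3</parameter>
```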
Thank you, I'll experiment with that.