EasyTemporalPointProcess
Configuration for 'eval' and 'gen', Sampling in Intensity-Free Models, and compacted event times in Multi-Step Inference
Hello. Thank you for your efforts in TPP benchmarking.
I have a few questions.
Some models have all of train, eval, and gen in examples/configs/experiment_config.yaml, but for models that are missing one of these sections (eval or gen), how should that case be handled?
For Intensity Free (IFTPP), it seems that thinning is not used because the intensity is not modeled. In that case, how should sampling be done when only the density is known? Looking at the EasyTPP paper, it seems you have addressed this somehow, given that RMSE and ACC are reported.
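For context on how sampling can work without thinning: the IFTPP paper (Shchur et al., "Intensity-Free Learning of Temporal Point Processes") parameterizes the inter-event-time density as a log-normal mixture, which can be sampled from directly. A minimal sketch of that idea, with illustrative parameter names and shapes (not EasyTPP's API):

```python
import torch

def sample_lognormal_mixture(weights, means, log_stds, n_samples):
    """Draw inter-event times directly from a log-normal mixture density.

    weights:  (K,) mixture weights (non-negative, sum to 1)
    means:    (K,) means of log(dt) for each component
    log_stds: (K,) log standard deviations of log(dt)
    """
    # 1) pick a mixture component for each sample
    comp = torch.multinomial(weights, n_samples, replacement=True)
    # 2) sample log(dt) from the chosen Gaussian component
    eps = torch.randn(n_samples)
    log_dt = means[comp] + eps * log_stds[comp].exp()
    # 3) exponentiate to obtain positive inter-event times
    return log_dt.exp()

# illustrative two-component mixture
w = torch.tensor([0.3, 0.7])
mu = torch.tensor([-1.0, 0.5])
ls = torch.tensor([-0.5, -0.2])
dts = sample_lognormal_mixture(w, mu, ls, 1000)
```

Because the density is sampled directly, no upper-bound intensity or rejection step is needed, which is why this path does not fit the thinning-based framework.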
In the case of multi-step inference, most events seem to be clustered around the initial event. Is this a natural phenomenon? I observed the same behavior in both ODETPP and NHP.
I would really appreciate it if you could provide answers.
Hi,
For IFTPP, we have to follow the original author's approach to do the sampling, which is in fact not compatible with our current framework. That's why the master branch has no such code yet. We are considering pushing it to a new branch in the future. For the moment, you can use the author's code directly.
For multi-step sampling, we have indeed noticed similar things, but we have not found any bug yet (if you find one, please let us know). The only difference between our version and the original versions (e.g., https://github.com/ant-research/hypro_tpp/blob/main/hypro_tpp/lib/thinning.py, https://github.com/yangalan123/anhp-andtt/blob/master/anhp/esm/thinning.py) is that we perform batch-wise prediction. We are committed to closely testing this part of the code again.
We will look at the multi-step generation code and get back to you shortly.
Thank you for the response.
I have spent the past week trying to identify the cause of the consistently small values in the sampled delta times.
I discovered that there is no accumulation step when sampling from Exp(lambda*) in the thinning algorithm.
In my opinion, the sampled dt values should be accumulated.
https://github.com/ant-research/EasyTemporalPointProcess/blob/1776ab5e929fb2d0fe6b353f042d7f572271bc77/easy_tpp/model/torch_model/torch_thinning.py#L179
After the above line, I think we need to add the following line:
exp_numbers = torch.cumsum(exp_numbers, dim=-1)
Could you please review this once?
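To illustrate why the missing accumulation clusters samples near the last event, here is a minimal, self-contained sketch (the tensor shape and the name `sample_rate` are illustrative, not taken from the repo, though `exp_numbers` matches the variable in torch_thinning.py):

```python
import torch

torch.manual_seed(0)
sample_rate = 2.0  # illustrative upper-bound rate of the dominating process
# i.i.d. Exp(sample_rate) waiting gaps (shape: batch x num_candidates)
exp_numbers = torch.empty(4, 10).exponential_(sample_rate)

# Without accumulation, every candidate dt is an independent gap with
# mean 1/sample_rate, so accepted events all land close to the last event.
# Accumulating turns the gaps into increasing arrival offsets of a
# homogeneous Poisson process, which is what thinning expects:
exp_numbers = torch.cumsum(exp_numbers, dim=-1)
```

After the cumsum, each row is non-decreasing, so later candidates can reach further from the last observed event.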
Thanks for pointing this out. Let me test it.
I added this line, `exp_numbers = torch.cumsum(exp_numbers, dim=-1)`, but I found the results become even more clustered.
I am still working on this issue and will get back to you when I fix it.
The more clustered result seems somewhat unusual, because the sampled deltas, once accumulated, should evidently have a larger variance.
Actually, with the code change in place, we obtained results that approximate the true delta distribution more closely than before.
In the figures, the orange line denotes the true distribution of inter-event times (i.e., delta) and the blue line denotes the distribution of sampled deltas.
- Before (figure)
- After (figure)
And, as seen in the pseudocode below, accumulating the delta in the thinning algorithm is common practice.
I didn't tidy it up thoroughly because I was a bit lazy, but we are fairly confident about these results.
Hi,
Thanks for the analysis. Great work.
I do notice that the accumulation of the dt sampling is missing in the code. I'm working on checking this bug these days and have also run some tests.
Besides this potential bug, there is also a padding problem in the multi-step generation code.
We hope to fix all of these in the next version.