causal-learn
Granger causality
Update
- Add an auto test for Granger causality. The ground truth `p_value_matrix_truth` and `adj_matrix_truth` are computed from the current version 1ebf232.
Test plan
python -m unittest tests/TestGranger.py # should pass
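For reference, here is a minimal sketch of what such an auto test could look like. The module path `causallearn.search.Granger.Granger`, the method name `granger_test_2d`, and the data file paths are assumptions for illustration; the actual test in this PR may load and compare its benchmarks differently.

```python
# Hedged sketch of an auto test for Granger causality.
# Assumptions: module path, method name, and data file locations are illustrative.
import unittest

import numpy as np

from causallearn.search.Granger.Granger import Granger  # assumed module path


class TestGranger(unittest.TestCase):
    def test_granger_test_2d(self):
        # Input data and ground-truth p-values computed at commit 1ebf232 (paths assumed).
        data = np.loadtxt("tests/TestData/granger_data.txt")
        p_value_matrix_truth = np.loadtxt("tests/TestData/granger_p_value_matrix_truth.txt")

        G = Granger()
        p_value_matrix = G.granger_test_2d(data)  # assumed method name and return value

        # The test passes if the freshly computed p-values reproduce the stored benchmark.
        np.testing.assert_allclose(p_value_matrix, p_value_matrix_truth, rtol=1e-7)


if __name__ == "__main__":
    unittest.main()
```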

TODO
- Add tests for more complicated data, such as non-Gaussian data and a mix of continuous and discrete data.
And btw, did you check that the p-values / coefficients produced by the codebase make sense? I remember we figured out a bug by noticing p-values not making sense.
To add comments on how p-values are generated, here is an example: https://github.com/cmu-phil/causal-learn/pull/59/files
Something like:
"# All the benchmark results of loaded files (e.g. "./TestData/benchmark_returned_results/")
# are obtained from the code of causal-learn as of commit
# https://github.com/cmu-phil/causal-learn/commit/129dcdf (07-14-2022).
# We are not sure whether the results are completely "correct" (reflect the ground truth graph) or not.
# So if you find your tests fail, it means that your modified code is logically inconsistent
# with the code as of 129dcdf, but it does not necessarily mean that your code is "wrong".
# If you are sure that your modification is "correct" (e.g. fixed some bugs in 129dcdf),
# please report it to us. We will then modify these benchmark results accordingly. Thanks :)"
Basically, the idea is to make sure others know how the p-values were generated (instead of them looking like magic numbers in the code); you just need to say that the code at commit xxxx was used to generate the p-values.
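As a hedged illustration (the file name and loading code below are hypothetical, not from this PR), such a provenance note could sit right next to the code that loads the benchmark files:

```python
import numpy as np

# All the benchmark results of loaded files (e.g. "./TestData/benchmark_returned_results/")
# are obtained from the code of causal-learn as of commit
# https://github.com/cmu-phil/causal-learn/commit/129dcdf (07-14-2022).
# If a test fails, the modified code is logically inconsistent with 129dcdf,
# which does not necessarily mean the modified code is wrong.
p_value_matrix_truth = np.loadtxt(
    "./TestData/benchmark_returned_results/granger_2d_p_value_matrix.txt"  # hypothetical file name
)
```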
> And btw, did you check that the p-values / coefficients produced by the codebase make sense? I remember we figured out a bug by noticing p-values not making sense.

Yes, they make sense! Could you please report that bug? The current version works well for me.
Oh, I mean whether you checked that the resulting p-values make sense or not. Previously we found that the KCI p-values didn't make sense, and that's how we figured out a bug in KCI, right?
Once you have checked the p-values and think they make sense, I think we can push this PR.
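One hedged way to do such a sanity check is to simulate series with a known causal direction and confirm that the p-value for the true direction is small while the reverse direction is not systematically significant; the Granger call below uses the same assumed module path and method name as the test sketch above:

```python
import numpy as np

from causallearn.search.Granger.Granger import Granger  # assumed module path

rng = np.random.default_rng(0)
T = 1000

# Simulate: x1 Granger-causes x2 with lag 1; x2 has no effect on x1.
x1 = rng.normal(size=T)
x2 = np.zeros(T)
for t in range(1, T):
    x2[t] = 0.8 * x1[t - 1] + 0.1 * rng.normal()

data = np.column_stack([x1, x2])
p_value_matrix = Granger().granger_test_2d(data)  # assumed method name

# Expectation: the entry for x1 -> x2 should be close to zero,
# while the entry for x2 -> x1 should not be systematically small.
print(p_value_matrix)
```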
Yes, they make sense. I think we can push this PR. @tofuwen @kunwuz Could you please help push this PR? Thanks!
@kunwuz could you help merge this PR?