[DOCUMENTATION] Is there any documentation that describes the input/output for predict/fit/predict_proba
Hi, I've been trying to use your package for some small experiments. I'm not sure whether I'm providing my data in the right format (I've tried to copy your Tuebingen and Sachs examples), but it returns an error on predict or predict_proba (or sometimes both). Do you happen to have any documentation that explains how to construct the inputs for pairwise analysis?
Kind regards, Navid
Hello,
You are right, I should be a bit more explicit on this one: for the pairwise models, we are using the Kaggle Cause-Effect Pair Challenge format (link here).
Here is a function we coded to read files in this format:
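As a rough illustration (assuming the helper meant here is `cdt.utils.io.read_causal_pairs`, with a hypothetical file `my_pairs.csv` in the challenge format):

```python
# Hedged sketch: assumes the reader is cdt.utils.io.read_causal_pairs and that
# "my_pairs.csv" follows the Kaggle Cause-Effect Pair Challenge layout
# (one pair per row: SampleID, then the two series A and B written as
# space-separated numbers in single cells).
from cdt.utils.io import read_causal_pairs

data = read_causal_pairs("my_pairs.csv")  # pandas.DataFrame, one pair per row
print(data.head())
```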
In short: the required format is a pandas.DataFrame with three columns "SampleID", "A", and "B", where each cell of "A" and "B" contains a full array (see the sketch below the table):
| SampleID | A | B |
|---|---|---|
| sample1 | np.array([1,2.2,1,...]) | np.array([3,4.2,2,...]) |
| sample2 | np.array([0,2.2,0.3,...]) | np.array([0.2,3.2,1,...]) |
| ... | ... | ... |
| samplek | np.array([3,1.3,4,...]) | np.array([1,2.2,1,...]) |
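For illustration, here is a minimal sketch of building such a DataFrame by hand and scoring it with one of the pairwise models (the synthetic data, the choice of ANM, and the set_index call are my assumptions, not something taken from the bundled examples):

```python
import numpy as np
import pandas as pd
from cdt.causality.pairwise import ANM  # any pairwise model is used the same way

rng = np.random.RandomState(0)

# Two illustrative pairs; each cell of "A" and "B" holds a full 1-D array.
rows = []
for i in range(2):
    a = rng.uniform(size=500)
    b = a ** 2 + 0.1 * rng.normal(size=500)  # B is generated from A
    rows.append({"SampleID": "sample{}".format(i + 1), "A": a, "B": b})

df = pd.DataFrame(rows, columns=["SampleID", "A", "B"])
# The format reader above sets SampleID as the index; if your version expects
# it as a plain column instead, drop this line.
df = df.set_index("SampleID")

model = ANM()
scores = model.predict(df)  # one causation score per pair; > 0 suggests A -> B
print(scores)
```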
I will add all of this to the documentation. Thanks for pointing this out!
Best, Diviyan
Hi Diviyan, Many thanks for clarifying this. Best, Navid
I'll keep this issue open to keep track of the documentation update!
Okay, thanks. N
Could you provide a code example showing how to use predict_proba? I tried using the dataset from the example and the format function, but I still get an error: cannot convert string to float: 'A'
Thanks a lot!