
How can I build confidence that my strategy's returns are better?

Open weiguang-zz opened this issue 4 years ago • 2 comments


For example, I have two return series: R1 from my strategy and R2 from the benchmark:

R1: [r_11, r_12, r_13, r_14, ..., r_1n]
R2: [r_21, r_22, r_23, r_24, ..., r_2n]

I have read a good paper: Robust Performance Hypothesis Testing with the Sharpe Ratio. It describes a method to test whether the Sharpe ratios of two return series differ significantly, and it handles heavy-tailed distributions and the time-series properties of returns (such as autocorrelation) well. I have studied it for a long time and hope to implement it in Python. Before I do, I would like your opinions on a few questions:

1. Have you encountered this problem?
2. Are there other, better ways to evaluate a strategy?
3. Do you have any other thoughts?
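For reference, here is a rough sketch of the kind of test being discussed. This is **not** the paper's HAC-based estimator; it swaps in a simpler circular block bootstrap of the same null hypothesis (equal Sharpe ratios), which also tolerates autocorrelation to some degree. All function names are my own, and the series are assumed to be paired and equal length:

```python
import numpy as np

def sharpe(r):
    # Per-period Sharpe ratio: mean return over sample standard deviation.
    return r.mean() / r.std(ddof=1)

def sharpe_diff_pvalue(r1, r2, n_boot=5000, block=10, seed=0):
    """Two-sided p-value for H0: Sharpe(r1) == Sharpe(r2), via a
    circular block bootstrap.  Resampling the two series with the
    SAME block indices keeps them paired and preserves short-range
    autocorrelation within each block."""
    rng = np.random.default_rng(seed)
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    n = len(r1)
    observed = sharpe(r1) - sharpe(r2)
    n_blocks = int(np.ceil(n / block))
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n, n_blocks)
        # Each block wraps around the end of the sample (circular).
        idx = ((starts[:, None] + np.arange(block)) % n).ravel()[:n]
        diffs[b] = sharpe(r1[idx]) - sharpe(r2[idx])
    # Center the bootstrap distribution at the observed difference,
    # then count how often the centered statistic exceeds |observed|.
    p = np.mean(np.abs(diffs - observed) >= np.abs(observed))
    return observed, p
```

For publication-quality inference the paper's studentized HAC approach is preferable; this sketch is only meant to show the shape of the hypothesis test.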

Looking forward to your reply! Thank you.

weiguang-zz avatar Aug 06 '20 05:08 weiguang-zz

Good question, and still no answer here?

  1. Yes. When a strategy overfits the data, we get a super-good return in backtesting but a super-bad result in live trading.
  2. For research, the paper above is a good starting point for handling this case.
  3. This method will probably not be implemented in this backtesting repo; it would fit better in alphalens or pyfolio. Could you take a look at those repos for this problem and post a review here? Thank you.

Thank you for your reply @88d52bdba0366127fffca9dfa93895. I also raised an issue in alphalens, but sadly no one responded. I have implemented the algorithm from the paper, although I don't fully understand some of the formulas.

The first problem you mentioned is also discussed in the Quantopian lectures; I think it is a very good article: p-Hacking and Multiple Comparisons Bias.
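The multiple-comparisons problem is easy to demonstrate with a small simulation (my own illustration, not from the article): test many strategies that have no real edge, and a predictable fraction will still look "significant" at the 5% level purely by chance.

```python
import numpy as np
from scipy import stats

# Simulate 200 "strategies" whose true mean daily return is zero.
rng = np.random.default_rng(42)
n_strategies, n_days = 200, 252
returns = rng.normal(0.0, 0.01, size=(n_strategies, n_days))

# Test each strategy against H0: mean return == 0.
pvals = stats.ttest_1samp(returns, 0.0, axis=1).pvalue
n_false = int((pvals < 0.05).sum())
print(f"{n_false} of {n_strategies} zero-edge strategies look "
      f"'significant' at the 5% level")
```

With enough candidate strategies, picking the best backtest is almost guaranteed to select a false positive, which is exactly why an out-of-sample test or a correction such as Bonferroni is needed.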

weiguang-zz avatar Aug 16 '20 09:08 weiguang-zz