
ISSUE #20 #27 #51 fix, MPC bug fix and its performance

Open quito418 opened this issue 6 years ago • 6 comments

The MPC performance was as follows in the simulation environment. (I also checked this in a real-world experiment with mpc_server.py, but I don't have a nice graph of it.)

[Figure: CDF chart comparing mpc_bug, mpc_bug_fix, and pretrained pensieve]

quito418 avatar Aug 02 '18 06:08 quito418

Hi quito418, how did you plot the above chart? I did not find a plot.py file for this chart in the repo.

dixitabhanderi avatar Apr 28 '20 03:04 dixitabhanderi

Hi, I drew it using Microsoft Excel! I modified the CDF code in pensieve.

quito418 avatar Apr 28 '20 05:04 quito418

Hi @quito418, @hongzimao,
How exactly did you plot average QoE vs. CDF? What formula was used? For average QoE, did you use the same formula as the QoE/total reward? In my case the CDF range does not stay between 0 and 1; it has to be 0 to 142 to plot the chart linearly. Thanks.

dixitabhanderi avatar May 16 '20 09:05 dixitabhanderi

Hi, to plot the CDF I modified this code: https://github.com/hongzimao/pensieve/blob/master/test/plot_results.py

I normalized the QoE (total reward divided by the number of video chunks), so the axis name should be changed to normalized QoE; sorry for the confusion.
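A minimal sketch of what I mean (the helper names and sample values here are hypothetical, not from the pensieve repo):

```python
import numpy as np

def normalized_qoe(chunk_rewards, video_len):
    # Normalized QoE: total reward over one trace divided by the number
    # of video chunks (the first chunk is skipped, as in plot_results.py).
    return np.sum(chunk_rewards[1:video_len]) / video_len

def empirical_cdf(values):
    # Sort the per-trace QoE values; the cumulative probability of the
    # i-th smallest value is (i + 1) / N, so the y-axis stays in [0, 1].
    x = np.sort(values)
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y
```

Plotting x against y (in Excel or matplotlib) then gives a CDF whose y-axis runs from 0 to 1 regardless of how many traces you have.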

quito418 avatar May 16 '20 10:05 quito418

@quito418 How do you normalize the QoE and CDF in order to get figures like Fig. 7 and Fig. 8 in the paper? Would you mind sharing your modified plot code? (https://github.com/hongzimao/pensieve/blob/master/test/plot_results.py)

xuzhiyuan1528 avatar Aug 13 '20 03:08 xuzhiyuan1528

@xuzhiyuan1528 Hi, changing line 130 of plot_results.py should be enough.

Before:

```python
reward_all[scheme].append(np.sum(raw_reward_all[scheme][l][1:VIDEO_LEN]))
```

After:

```python
reward_all[scheme].append(np.sum(raw_reward_all[scheme][l][1:VIDEO_LEN]) / VIDEO_LEN)
```
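As a self-contained sketch of that change (the `VIDEO_LEN` value and the helper name are assumptions for illustration; in the repo the line operates on the `raw_reward_all` dictionary):

```python
import numpy as np

VIDEO_LEN = 48  # chunks per test video; value assumed for illustration

def trace_qoe(raw_rewards, normalize=True):
    # The original line sums the per-chunk rewards of one trace (skipping
    # chunk 0); the fix divides that sum by VIDEO_LEN, turning the y-value
    # into normalized (per-chunk) QoE instead of total reward.
    total = np.sum(raw_rewards[1:VIDEO_LEN])
    return total / VIDEO_LEN if normalize else total
```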

quito418 avatar Aug 13 '20 07:08 quito418