a-deep-rl-approach-for-sdn-routing-optimization

Cannot get the paper result

Open softmicro929 opened this issue 6 years ago • 12 comments

Has anyone gotten the Fig. 2 result from the paper? My model doesn't converge.

softmicro929 avatar Jan 20 '19 11:01 softmicro929

I have the same question. What's your specific setup? When I train the model, the reward barely changes, and when I test on the TMs it looks as if the training never learned anything.

etleader avatar Jan 25 '19 15:01 etleader

Yes, it learns nothing. Also, when you test after the training stage you should fix the TM instead of resampling it.
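
For what it's worth, here is a minimal sketch of what "fixing" the TM could mean: sample one traffic matrix up front with a fixed seed and reuse it for every evaluation episode instead of drawing a new one each time. `N_NODES` and the TM shape are assumptions, not taken from the repo.

```python
import numpy as np

N_NODES = 14  # assumed topology size, adjust to the repo's graph

# Draw a single traffic matrix with a fixed seed and reuse it for all
# evaluation episodes, so results are comparable across runs.
rng = np.random.RandomState(0)
fixed_tm = rng.uniform(low=0.0, high=1.0, size=(N_NODES, N_NODES))
np.fill_diagonal(fixed_tm, 0.0)  # no traffic from a node to itself
```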

softmicro929 avatar Feb 10 '19 13:02 softmicro929

Sorry to bother you, I don't really understand what you mean. How do I fix the TMs?

etleader avatar Feb 10 '19 13:02 etleader

The author's code doesn't do any testing, so you have to write the test code yourself to reproduce the figure in the paper.
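
To make that concrete, here is a rough sketch of a hand-written evaluation loop over a fixed list of TMs. The `agent`, `env`, and attribute names are placeholders for the repo's DDPG actor and environment, so treat every interface here as an assumption rather than the actual API.

```python
import numpy as np

def evaluate(agent, env, test_tms, episode_len=100):
    """Run the trained policy (no exploration noise) on a fixed list of
    traffic matrices and return the mean per-step reward."""
    scores = []
    for tm in test_tms:
        env.tm = tm                  # assumed: the environment exposes its current TM
        state = env.reset()          # assumed Gym-style reset()/step() interface
        total = 0.0
        for _ in range(episode_len):
            action = agent.predict(state)             # deterministic actor output
            state, reward, done, _ = env.step(action)
            total += reward
            if done:
                break
        scores.append(total / episode_len)
    return float(np.mean(scores))
```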

softmicro929 avatar Feb 11 '19 07:02 softmicro929

Sorry to bother you! Have you gotten the Fig. 1 result? I still can't understand how to use the TMs mentioned in the paper to train this DRL agent. Can you explain the whole training process? In the given code I did not find any correlation between the previous state and the new state; they all seem to be generated randomly with np.random.

Lui-Chiho avatar Apr 24 '19 14:04 Lui-Chiho

Excuse me, I have a similar question. I can't understand why the state (TM) and the new state (TM) are both randomly generated in the step function in Environment.py. That doesn't follow the logic of DRL.
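
Roughly what is being described, reduced to a toy snippet. This only paraphrases the reported behaviour; it is not the repo's actual code, and the reward is a placeholder.

```python
import numpy as np

class RandomTMEnv:
    """Toy version of the reported behaviour: the next state is an
    independent random traffic matrix, not a function of the current
    state and the chosen action."""

    def __init__(self, n_nodes=14):
        self.n = n_nodes

    def step(self, action):
        # `action` is ignored: s' is drawn from np.random on every call
        next_state = np.random.uniform(size=(self.n, self.n))
        reward = -float(next_state.max())  # placeholder reward
        return next_state, reward
```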

wqhcug avatar Apr 26 '19 14:04 wqhcug

Can anyone get the same result as in the paper? My model is not converging.

FaisalNaeem1990 avatar May 16 '19 18:05 FaisalNaeem1990

I have the same question. I don't understand why the old state and the new state are randomly generated in Environment.py.

CZMG avatar May 29 '19 13:05 CZMG

Did you run the whole simulation or not?

FaisalNaeem1990 avatar May 29 '19 14:05 FaisalNaeem1990

Excuse me, I ran the whole simulation. In my studies, the state in reinforcement learning is usually changed by the action, but in this paper's code we can see in Environment.py that the new state and the old state are both randomly generated, which does not seem to follow the logic of reinforcement learning. Can anyone clear up my confusion? Thank you very much.
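
For contrast, a textbook transition computes the next state from the current state and the action. The sketch below is purely illustrative, with toy dynamics and a toy reward, and nothing in it comes from the paper or the repo.

```python
import numpy as np

class TransitionEnv:
    """Illustrative environment where step() derives s' from (s, a)."""

    def __init__(self, n_nodes=14):
        self.state = np.random.uniform(size=(n_nodes, n_nodes))  # initial load

    def step(self, action):
        # The routing action reshapes the observed load: the environment,
        # not an unconditional np.random draw, drives the s -> s' transition.
        next_state = self.state * (1.0 + 0.1 * np.tanh(action))
        reward = -float(next_state.max())  # toy objective: minimise max load
        self.state = next_state
        return next_state, reward
```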

wqhcug avatar May 29 '19 14:05 wqhcug

I've run into the same question. I think the author needs to provide some explanation; it disobeys the basic logic of reinforcement learning. @gissimo

ljh14 avatar Dec 05 '19 06:12 ljh14

Hello, could you please tell me how to run the whole simulation? The approximate steps would be very helpful, thank you very much!

slblbwl avatar Nov 02 '23 05:11 slblbwl