
How can I get the value of the loss function when using the L-BFGS method?

Open pep-pig opened this issue 6 years ago • 4 comments

I want to plot the convergence curves of different solver methods, but I can't get the value of the loss function when using L-BFGS; it only prints the value to the console. I need to save it into a list. How can I do that?

pep-pig · May 30 '18 04:05

Hello,

I'm actually trying to do the same thing. Have you found anything?

I'm just doing some tests, so this is NOT good code:

To start, I created a global list (`LOSS_TOTAL`) and set a callback in the minimize_with_lbfgs function: `optimizer.minimize(sess, loss_callback=save_loss, fetches=[L_total])`

save_loss is defined as follows:

```python
def save_loss(loss):
    # receives the evaluated fetches ([L_total]) at every loss evaluation
    LOSS_TOTAL.append(loss)
```
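
For reference, here is a minimal, self-contained sketch of this callback mechanism (TensorFlow 1.x, where `optimizer.minimize` takes `loss_callback` and `fetches`). The quadratic toy loss and the variable `x` are placeholders I made up to keep the example runnable; only the loss_callback/fetches wiring mirrors what is described above:

```python
# Toy sketch of the loss_callback approach (assumes TensorFlow 1.x with tf.contrib).
# The quadratic loss below is only a stand-in for L_total in neural-style-tf.
import tensorflow as tf

LOSS_TOTAL = []  # global list collecting the loss at every L-BFGS evaluation

def save_loss(loss):
    LOSS_TOTAL.append(loss)

x = tf.Variable([3.0, -2.0])
loss = tf.reduce_sum(tf.square(x))  # placeholder for L_total

optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    loss, method='L-BFGS-B', options={'maxiter': 100})

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # loss_callback is called with the evaluated fetches at every
    # loss/gradient evaluation, not just once per outer iteration
    optimizer.minimize(sess, loss_callback=save_loss, fetches=[loss])

print('{} loss values recorded'.format(len(LOSS_TOTAL)))
```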

To plot it, I added a few lines after the elapsed-time print in the render_single_image function:

```python
# assumes `import matplotlib.pyplot as plt` at the top of the file
print('Single image elapsed time: {}'.format(tock - tick))
plt.plot(LOSS_TOTAL)
plt.savefig('fig.jpg')
```

So now I can see how the loss evolves, but the plot isn't very readable, because the initial loss value is much higher than the loss after a few iterations.
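
One thing that might help readability (my suggestion, not part of the original setup): draw the same list on a logarithmic y-axis so the large initial loss doesn't flatten the rest of the curve.

```python
# same LOSS_TOTAL list as above, drawn with a log-scaled y-axis
plt.semilogy(LOSS_TOTAL)
plt.xlabel('loss evaluation')
plt.ylabel('total loss (log scale)')
plt.savefig('fig_log.jpg')
```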

My ultimate goal is to stop the computation once the loss reaches its asymptotic value.
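
One possible way to get that behaviour without extra code, assuming the optimizer is built with tf.contrib.opt.ScipyOptimizerInterface as above: the options dict is forwarded to scipy.optimize.minimize, and L-BFGS-B already stops once the relative decrease of the loss drops below ftol, so that tolerance can simply be loosened. In the sketch below, 1e-6 is only an illustrative value and max_iterations stands for whatever iteration limit the script uses:

```python
# Stop earlier by relaxing L-BFGS-B's built-in relative-improvement tolerance.
# L-BFGS-B stops when (f_k - f_{k+1}) / max(|f_k|, |f_{k+1}|, 1) <= ftol.
optimizer = tf.contrib.opt.ScipyOptimizerInterface(
    L_total,
    method='L-BFGS-B',
    options={'maxiter': max_iterations, 'ftol': 1e-6})  # 1e-6 is an example value
```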

chamoulox · Jun 04 '18 15:06

I added some code to lbfgs.py (located in scipy\optimize), so I can plot the curve during the computation. Here is my project
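
For anyone who would rather not patch SciPy itself (a sketch of an alternative, not what the comment above did): the same loss_callback from earlier in this thread can drive a live matplotlib figure during the run.

```python
# Live-updating loss curve driven from the loss_callback; no SciPy changes needed.
# Reuses the global LOSS_TOTAL list from earlier in this thread.
import matplotlib.pyplot as plt

plt.ion()                 # interactive mode: the figure updates without blocking
fig, ax = plt.subplots()

def save_loss(loss):
    LOSS_TOTAL.append(loss)
    ax.clear()
    ax.semilogy(LOSS_TOTAL)
    ax.set_xlabel('loss evaluation')
    ax.set_ylabel('total loss')
    plt.pause(0.001)      # give the GUI event loop a chance to redraw
```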

pep-pig · Jun 05 '18 02:06

@vonlippmann Nice. Do you know why this happens?

cysmith · Jun 05 '18 08:06

I'm sorry, I don't know why this happens either.

pep-pig · Jun 13 '18 02:06