tensorboardX
Protobuf failure
I'm occasionally getting an error of the following form:
[libprotobuf FATAL google/protobuf/wire_format.cc:830] CHECK failed: (output->ByteCount()) == (expected_endpoint): : Protocol message serialized to a size different from what was originally expected. Perhaps it was modified by another thread during serialization?
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (output->ByteCount()) == (expected_endpoint): : Protocol message serialized to a size different from what was originally expected. Perhaps it was modified by another thread during serialization?
Is this some underlying tensorboard issue, or is it due to tensorboard-pytorch?
I can't tell from that alone. Which program emitted that message? Did it stop your training?
This occurs when performing a hyperparameter search, where a new writer is created (and closed) for each set of parameters. It doesn't seem to occur at any obvious point - e.g. it isn't always when the second writer is created. It does stop training, yes. I can try to make a minimal example if that would help!
A reproducible example would be a great help! Thanks.
There is a writer.close() method. Did you close the old writer before opening a new one?
OK, I'll write something! Yes, I used writer.close().
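For concreteness, the pattern under discussion looks roughly like this - a minimal sketch, not the original code; the run directories, search space, and the stand-in loss value are made up for illustration:

```python
import random
from tensorboardX import SummaryWriter

learning_rates = [1e-2, 1e-3, 1e-4]   # hypothetical search space

for lr in learning_rates:
    # One writer per parameter set, each logging to its own run directory.
    writer = SummaryWriter("runs/lr_{}".format(lr))
    for epoch in range(10):
        loss = random.random() * lr   # stand-in for a real training loss
        writer.add_scalar("loss", loss, epoch)
    # Flush and close this run's writer before the next one is created.
    writer.close()
```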
Hi, what is your protobuf version?
Same here. I occasionally get the same error, but the error disappears if I do not use tensorboard.
@TengdaHan Can you provide more info?
Hey there @lanpa,
I think I tracked down at least one possible root cause. I get this exception whenever my events file explodes in size (around 827 MB); at the same time tensorboard itself crashes as well. For me, the origin of this huge size was parameters which I stored as histograms using numpy and writer.add_histogram(name, param.data.cpu().numpy(), epoch, bins="auto"). This seems to cause problems when the distribution is very sharp around 0 (too strong weight decay). Changing it to bins="doane" solves it for me. I hope that helps some people track down their problems as well :)
Cheers,
Jendrik
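To make the difference between the two calls concrete, here is a minimal sketch assuming a small hypothetical PyTorch model; only the bins argument changes between the problematic and the working variant:

```python
import torch.nn as nn
from tensorboardX import SummaryWriter

model = nn.Linear(100, 10)            # hypothetical model for illustration
writer = SummaryWriter("runs/hist_demo")

for epoch in range(5):
    for name, param in model.named_parameters():
        values = param.data.cpu().numpy()
        # bins="auto" can produce a very large number of bins (and a huge
        # events file) when the values are sharply peaked around 0:
        # writer.add_histogram(name, values, epoch, bins="auto")

        # bins="doane" keeps the bin count, and hence the file size, reasonable:
        writer.add_histogram(name, values, epoch, bins="doane")

writer.close()
```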
@jendrikjoe Thanks for the investigation and nice catch. I am curious whether using the default tensorflow binning solves your problem too. If you would like to help test it, please install with pip install git+https://github.com/lanpa/tensorboardX. BTW, 827 MB of histograms is pretty large; did you log histograms very often?
Hey @lanpa, sorry for the long silence. The tensorflow binning seems to solve the issue as well. 827 MB is indeed a lot, but it was coming from the binning method. I don't know why, though; maybe one of the methods that "auto" selects has an issue binning lots of numbers close to zero. Not sure. With the tensorflow binning the histograms are around 1 MB :)
Cheers,
Jendrik
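For completeness, a small sketch of the binning that was reported to work; the log directory, tag, and synthetic data below are made up for illustration:

```python
import numpy as np
from tensorboardX import SummaryWriter

writer = SummaryWriter("runs/tf_binning_demo")
values = np.random.randn(100000) * 1e-6   # sharply peaked around zero, like the case above

# Use the 'tensorflow' binning scheme; in recent tensorboardX versions this is
# also what add_histogram uses when the bins argument is omitted.
writer.add_histogram("weights", values, 0, bins="tensorflow")
writer.close()
```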