vfdev
@jdavidd sorry for replying very very late. Normally, we should not get a negative loss value. It is strange that you get a negative value. Let me refresh the repo...
Otherwise, you can also use the "Dark dimmed" GitHub theme.
A CMake version to compile the Python leveldb wrapper under Windows using Caffe-builder dependencies: https://github.com/vfdev-5/py-leveldb-windows
If you modify the file cmake/packages/leveldb.cmake

```cmake
set(leveldb_CMAKE_ARGS
    -DBUILD_SHARED_LIBS=OFF
    -DCMAKE_DEBUG_POSTFIX=d
    -DCMAKE_BUILD_TYPE=Release
    -DBoost_USE_STATIC_LIBS=ON
    -DBoost_USE_MULTITHREAD=ON
    -DBoost_USE_STATIC_RUNTIME=OFF
)
```

does it not work?
The way to choose 32- or 64-bit mode can be set, for example, with vcvarsall.bat (https://msdn.microsoft.com/en-us/library/x4d2c09s.aspx), e.g. `vcvarsall.bat x86` or `vcvarsall.bat amd64`. Somewhere in the Visual Studio folder you probably have a folder with .bat files...
Hi @kazuma0606 ,

Please check the following docs and examples:
- https://pytorch.org/ignite/generated/ignite.handlers.checkpoint.Checkpoint.html#ignite.handlers.checkpoint.Checkpoint.load_objects
- https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10#check-resume-training

In a few lines of code, you can do the following:
https://github.com/pytorch/ignite/blob/315b6b98012f636034453beb8c3c334229575918/examples/contrib/cifar10/main.py#L334
https://github.com/pytorch/ignite/blob/315b6b98012f636034453beb8c3c334229575918/examples/contrib/cifar10/main.py#L351-L357
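For reference, here is a minimal resume sketch (untested; `checkpoint_path` and `to_load` are placeholders for your own setup, and `to_load` must use the same keys as `to_save` used at checkpoint time):

```python
import torch

from ignite.handlers import Checkpoint

# Load the serialized checkpoint from disk and restore every object
# registered in `to_load` in-place (models, optimizers, trainer, ...).
checkpoint = torch.load(checkpoint_path, map_location="cpu")
Checkpoint.load_objects(to_load=to_load, checkpoint=checkpoint)
```

HTH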
@kazuma0606 Yes, you are correct. In order to save the epoch and iteration, we need to save `trainer` as well:

```python
to_save = {
    "generator_A2B": generator_A2B,
    "discriminator_B": discriminator_B,
    "generator_B2A": generator_B2A,
    "discriminator_A": discriminator_A,
    "optimizer_G": optimizer_G,
    # ... (other optimizers/schedulers from your setup)
    "trainer": trainer,
}
```
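And for completeness, a sketch of how the saving side could be attached (`/tmp/checkpoints` and `n_saved=2` are arbitrary choices here):

```python
from ignite.engine import Events
from ignite.handlers import Checkpoint, DiskSaver

# Because `trainer` is included in `to_save`, its state (epoch, iteration)
# is serialized together with the models and optimizers.
checkpoint_handler = Checkpoint(
    to_save, DiskSaver("/tmp/checkpoints", create_dir=True), n_saved=2
)
trainer.add_event_handler(Events.EPOCH_COMPLETED, checkpoint_handler)
```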
Hi @kazuma0606

> I am a little curious, is the following feature valuable from an academic point of view?

I'm not sure about the academic PoV, but if it is about...
Hi @kazuma0606

> Functions run_evaluation() and log_generated_images() are called automatically at the start of training

The complete code is the following:

```python
@trainer.on(Events.EPOCH_STARTED)
def run_evaluation(engine):
    evaluator.run(eval_train_loader)
    evaluator.run(eval_test_loader)

def log_generated_images(engine, logger, event_name):
    ...
```
Hi @kazuma0606

> By the way, is the TerminateOnNan flag a function to suppress over-learning?

When the loss becomes NaN, learning is no longer possible, as the weights become NaN as well...
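In case it helps, attaching the handler looks like this (assuming the trainer's `engine.state.output` contains the loss):

```python
from ignite.engine import Events
from ignite.handlers import TerminateOnNan

# TerminateOnNan checks engine.state.output after each iteration and
# terminates the run if it contains NaN or infinite values.
trainer.add_event_handler(Events.ITERATION_COMPLETED, TerminateOnNan())
```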