anpark
@seiriosPlus 1.5.1
Stack trace:
#0 0x00007f00f3f6ccbb in paddle::memory::allocation::Allocator::FreeImpl(paddle::memory::allocation::Allocation*) () from /home//tools/paddle_release_home/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/core_avx.so
(gdb) bt
#0 0x00007f00f3f6ccbb in paddle::memory::allocation::Allocator::FreeImpl(paddle::memory::allocation::Allocation*) () from /home//tools/paddle_release_home/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/core_avx.so
#1 0x00007f00f1f523c9 in std::_Sp_counted_base::_M_release() () from /home//tools/paddle_release_home/paddle_gpu/lib/python2.7/site-packages/paddle/fluid/core_avx.so
#2 0x00007f00f1f53308 in paddle::framework::Variable::PlaceholderImpl::~PlaceholderImpl() () from...
That's a big mess; the learning rate (lr) is very important! #205
Same problem here. If I use batching.map_and_batch, it's much slower than batching first and then mapping. Example code:

if use_map_and_batch:
    # x: serialized_example, y: index in current batch
    dataset = dataset.apply(...
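For context, a minimal sketch of the two pipelines being compared; the file name, batch size, and feature spec below are assumptions for illustration, not taken from the issue:

```python
import tensorflow as tf  # TF 1.x, matching the tf.contrib era of this thread

# Assumed placeholders; the issue does not show the real schema or files.
FEATURE_SPEC = {"label": tf.FixedLenFeature([], tf.int64)}
batch_size = 32
dataset = tf.data.TFRecordDataset(["train.tfrecord"])

use_map_and_batch = False
if use_map_and_batch:
    # Fused map+batch: parses serialized records one at a time
    # inside the fused op.
    dataset = dataset.apply(tf.contrib.data.map_and_batch(
        lambda x: tf.parse_single_example(x, FEATURE_SPEC),
        batch_size=batch_size))
else:
    # Batch first, then parse the whole batch at once with the
    # vectorized tf.parse_example; the variant reported above as faster.
    dataset = dataset.batch(batch_size)
    dataset = dataset.map(lambda x: tf.parse_example(x, FEATURE_SPEC))
```

The batch-then-map variant can win here because tf.parse_example handles a whole batch of serialized records in one vectorized op, instead of one tf.parse_single_example call per record.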
@reedwm I hope you keep working on distributed performance. tf.contrib.distribute is developing quickly now, so this repo should move quickly too.
#151 Same problem here: shift_ratio is not used when use_dataset is enabled.
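For readers hitting this: shift_ratio is meant to stagger each worker's starting offset into the input data so the workers don't all read the same records in the same order. A simplified sketch of the idea, not tf_cnn_benchmarks' actual implementation:

```python
def shifted_read_order(num_records, shift_ratio):
    """Rotate the read order so a worker starts shift_ratio of the way in.

    Simplified illustration of what a shift_ratio-style knob is for;
    the real tf_cnn_benchmarks code differs.
    """
    shift = int(num_records * shift_ratio) % num_records
    return list(range(shift, num_records)) + list(range(shift))

# e.g. worker i of n workers could use shift_ratio = i / n:
print(shifted_read_order(8, 0.25))  # [2, 3, 4, 5, 6, 7, 0, 1]
```

The report here is that this staggering is silently skipped on the use_dataset code path.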
@reedwm what's your roadmap for this repo? could you share it?
Thanks. PyTorch 1.0 is about to be released with Caffe2 merged in for good performance, and more and more papers are written with PyTorch. I hope TF can push further on performance, thanks!
Why not merge it into master?