Sarah
I followed your advice: activate quantization by simply setting "--quantize-bits 16" after training one model normally. The command to start quantization looks like this: "nohup ./marian/build/marian -d 3 -w 12000...
What is the correct way to get an 8-bit model? The doc says that adding the following switch to the marian command would work: "--quantize-bits 8". Q1: Using the project called...
Has this bug been fixed? Thanks.
line 28: #include "spdlog/spdlog.h"
line 29: #include "spdlog/sinks/basic_file_sink.h"
line 30: #include "spdlog/sinks/daily_file_sink.h"

Using built-in specs.
COLLECT_GCC=g++
COLLECT_LTO_WRAPPER=/usr/local/libexec/gcc/x86_64-pc-linux-gnu/7.3.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: ./configure --enable-checking=release --enable-languages=c,c++ --disable-multilib
Thread model: posix
gcc version 7.3.0 (GCC)
You mean I can get a fixed-point quantized (16-bit or 8-bit) model by adding the switch "--quantize-bits 16", using marian master?
So I need two steps to get the 8-bit model:
Step 1: follow the doc https://github.com/browsermt/students/tree/master/train-student#5-optional-8bit-quantization to get the 8-bit model.
Step 2: finetune the 8-bit model as described in https://github.com/browsermt/students/tree/master/train-student/finetune...
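To make the two-step flow above concrete, here is a minimal command sketch. The config and model paths (student.yml, model.npz) are assumptions for illustration, not from the thread; only the "--quantize-bits" switch comes from the discussion and the linked doc.

```shell
# Step 1 (hypothetical paths): train the student model normally,
# with no quantization switches.
./marian/build/marian -c student.yml -m model.npz

# Step 2: continue training (finetune) the trained model with
# fixed-point 8-bit quantization enabled, by adding the switch
# mentioned in the thread.
./marian/build/marian -c student.yml -m model.npz --quantize-bits 8
```

The point of the two steps is that quantization-aware finetuning starts from an already-converged full-precision model rather than training from scratch with quantization on.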
> I optimized CppJieba's memory usage with darts; it can be reduced to 1/100 of the original: https://byronhe.com/post/2019/11/25/cppjieba-darts-DAT-memory_optimize/
>
> The code is at: https://github.com/byronhe/cppjieba

I tried your code, but it fails to make:
[ 6%] Building CXX object deps/gtest/CMakeFiles/gtest.dir/src/gtest-all.cc.o
[ 12%] Linking CXX static library libgtest.a
[ 12%] Built...
> Use test/make_demo.sh to make; I split deps/limonp/Md5.hpp out into deps/limonp/Md5.cpp.

After integrating it, my memory usage dropped from 106 MB to 40 MB. Why didn't it drop to 1% of the original, i.e. about 1 MB?
> > > Use test/make_demo.sh to make; I split deps/limonp/Md5.hpp out into deps/limonp/Md5.cpp.
> > >
> > > After integrating it, my memory usage dropped from 106 MB to 40 MB. Why didn't it drop to 1% of the original, i.e. about 1 MB?
> >
> > You can use jemalloc's heap profiler to check which data structures are occupying the memory.

Then may I ask how you arrived at the 1% figure?
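For anyone following the jemalloc suggestion above, a minimal sketch of how the heap profiler is typically invoked. The library path and the "./demo" binary name are assumptions for illustration; jemalloc must have been built with profiling support.

```shell
# Hypothetical sketch: heap-profile a cppjieba demo binary with jemalloc.
# prof:true enables the profiler; prof_final:true dumps a profile at exit.
export MALLOC_CONF="prof:true,prof_final:true,prof_prefix:jeprof.out"
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so ./demo

# Summarize which call sites (and hence which data structures)
# are holding the live memory:
jeprof --text ./demo jeprof.out.*.heap
```

This should show whether the remaining 40 MB is held by the darts trie itself or by other dictionary structures, which would explain why the total does not shrink by the full 1/100 factor.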