# CTranslate
Lightweight C++ translator for OpenNMT Torch models (deprecated)
This project is considered obsolete as the Torch framework is no longer maintained. For compatibility with OpenNMT-tf or OpenNMT-py, please check out CTranslate2.
CTranslate is a C++ implementation of OpenNMT's `translate.lua` script with no LuaTorch dependencies. It facilitates the use of OpenNMT models in existing products and on various platforms using Eigen as a backend.
CTranslate provides optimized CPU translation and optionally offloads matrix multiplication to a CUDA-compatible device using cuBLAS. It only supports OpenNMT models released with the `release_model.lua` script.
## Dependencies

### Optional
- CUDA for matrix multiplication offloading on a GPU
- Intel® MKL for an alternative BLAS backend
## Compiling
CMake and a compiler that supports the C++11 standard are required to compile the project.
```
git submodule update --init
mkdir build
cd build
cmake ..
make
```
It will produce the dynamic library `libonmt.so` (or `.dylib` on Mac OS, `.dll` on Windows) and the translation client `cli/translate`.
CTranslate also bundles OpenNMT's Tokenizer, which provides the tokenization tools `lib/tokenizer/cli/tokenize` and `lib/tokenizer/cli/detokenize`.
### Options
- To give hints about Eigen location, use the `-DEIGEN3_ROOT=<path to Eigen library>` option.
- To compile only the library, use the `-DLIB_ONLY=ON` flag.
- To disable OpenMP, use the `-DWITH_OPENMP=OFF` flag.
- To enable optimization through quantization in matrix multiplications, use the `-DWITH_QLINEAR=AVX2|SSE` flag (`OFF` by default) and set the appropriate extended instruction set via `-DCMAKE_CXX_FLAGS` (a combined invocation is sketched after this list):
  - `-DWITH_QLINEAR=AVX2` requires at least `-mavx2`
  - `-DWITH_QLINEAR=SSE` requires at least `-mssse3`
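For instance, assuming Eigen lives under a custom prefix and the target CPU supports AVX2, these options could be combined as follows; the paths are placeholders to adapt to your environment.

```
# Hypothetical configuration combining the options above; adjust the Eigen
# path and the instruction set flags to your machine.
cmake -DEIGEN3_ROOT=/opt/eigen3 \
      -DWITH_QLINEAR=AVX2 \
      -DCMAKE_CXX_FLAGS="-mavx2" \
      ..
```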
### Performance tips
- Use extended instruction sets:
  - if you are not cross-compiling, add `-DCMAKE_CXX_FLAGS="-march=native"` to the `cmake` command above to optimize for speed;
  - otherwise, select a recent SIMD extension set to improve performance while meeting portability requirements.
- Consider installing Intel® MKL when you are targeting Intel®-powered platforms. If found, the project will automatically link against it.
- Consider using the quantization options described above.
- When using `cli/translate`, consider fine-tuning the level of parallelism (see the example after this list):
  - the `--parallel` option enables concurrent translation of `--batch_size` sentences;
  - the `--threads` option enables each translation to use multiple threads;
  - bottom line: if you want optimal throughput for a collection of sentences, increase `--parallel` and set `--threads` to 1; if you want minimal latency for a single batch, set `--parallel` to 1 and increase `--threads`.
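As a rough illustration of the throughput-oriented setting, an invocation could look like the sketch below, with placeholder file names. Only `--parallel`, `--threads`, and `--batch_size` are taken from this section; the `--model` and `--src` option names are assumptions to verify with `cli/translate --help`.

```
# Throughput-oriented sketch: translate batches of 32 sentences, 4 batches
# concurrently, each translation running single-threaded.
# --model and --src are assumed option names; check --help for the exact interface.
cli/translate --model enfr_model_release.t7 --src input.tok.en \
              --batch_size 32 --parallel 4 --threads 1
```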
## Using

### Clients
See `--help` on the clients to discover available options and usage. They have the same interface as their Lua counterparts.
### Library
This project is also a convenient way to load OpenNMT models and translate texts in existing software.
Here is a very simple example:
```cpp
#include <iostream>

#include <onmt/onmt.h>

int main()
{
  // Create a new Translator object.
  auto translator = onmt::TranslatorFactory::build("enfr_model_release.t7");

  // Translate a tokenized sentence.
  std::cout << translator->translate("Hello world !") << std::endl;

  return 0;
}
```
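To compile this example against the library built earlier, a command along the following lines should work; the include and library paths are placeholders for your checkout and build directory, and the Eigen headers may also need to be on the include path.

```
# Hypothetical compile/link command; adjust the paths to your setup.
g++ -std=c++11 example.cc \
    -I/path/to/CTranslate/include -I/path/to/eigen3 \
    -L/path/to/CTranslate/build -lonmt \
    -o example
```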
For more advanced usage, see:
- `include/onmt/TranslatorFactory.h` to instantiate a new translator
- `include/onmt/ITranslator.h` (the `Translator` interface) to translate sequences or batches of sequences
- `include/onmt/TranslationResult.h` to retrieve results and attention vectors
- `include/onmt/Threads.h` to programmatically control the number of threads to use (see the sketch after this list)
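As a minimal sketch of how these headers might be used together, the snippet below limits translation to two threads and reuses the string-based `translate` call from the example above. The `onmt::Threads::set` call is an assumption about the interface declared in `include/onmt/Threads.h`; check the header for the actual name and signature.

```cpp
#include <iostream>
#include <vector>

#include <onmt/onmt.h>

int main()
{
  // Assumed interface from include/onmt/Threads.h: cap the number of threads
  // used by each translation. Verify the exact name and signature in the header.
  onmt::Threads::set(2);

  auto translator = onmt::TranslatorFactory::build("enfr_model_release.t7");

  // Translate a few pre-tokenized sentences one by one.
  std::vector<std::string> sentences = {"Hello world !", "How are you ?"};
  for (const auto& sentence : sentences)
    std::cout << translator->translate(sentence) << std::endl;

  return 0;
}
```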
Also see the headers available in the Tokenizer, which are accessible when linking against CTranslate.
## Supported features
CTranslate focuses on supporting model configurations that are likely to be used in production settings. It covers models trained with the default options, plus some variants:
- additional input or output word features
- `brnn` encoder (with `sum` or `concat` merge policy)
- `dot` attention
- residual connections
- no input feeding
Additionally, CTranslate lacks some of the advanced features of `translate.lua`:
- gold data score
- hypotheses filtering
- beam search normalization