Trouble Installing torch_glow
I installed Glow following the instructions on GitHub, but I've been having trouble installing torch_glow. I tried running python setup.py install in the torch_glow folder, but I get the following error:
/home/ubuntu/glow/build/lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o):/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:24: multiple definition of `interpreterMaxMem'
../../lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o):/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:24: first defined here
/home/ubuntu/glow/build/lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o): In function `glow::runtime::createInterpreterDeviceManager(glow::runtime::DeviceConfig const&)':
/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:32: multiple definition of `glow::runtime::createInterpreterDeviceManager(glow::runtime::DeviceConfig const&)'
../../lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o):/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:32: first defined here
/home/ubuntu/glow/build/lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o): In function `glow::runtime::InterpreterDeviceManager::getMaximumMemory() const':
/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:42: multiple definition of `glow::runtime::InterpreterDeviceManager::getMaximumMemory() const'
../../lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o):/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:42: first defined here
/home/ubuntu/glow/build/lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o): In function `glow::runtime::InterpreterDeviceManager::getAvailableMemory() const':
/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:46: multiple definition of `glow::runtime::InterpreterDeviceManager::getAvailableMemory() const'
../../lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o):/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:46: first defined here
/home/ubuntu/glow/build/lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o): In function `glow::runtime::InterpreterDeviceManager::isMemoryAvailable(unsigned long) const':
/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:50: multiple definition of `glow::runtime::InterpreterDeviceManager::isMemoryAvailable(unsigned long) const'
../../lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o):/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:50: first defined here
/home/ubuntu/glow/build/lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o): In function `glow::runtime::InterpreterDeviceManager::getDeviceInfo() const':
/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:54: multiple definition of `glow::runtime::InterpreterDeviceManager::getDeviceInfo() const'
../../lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o):/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:54: first defined here
/home/ubuntu/glow/build/lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o): In function `llvm::ErrorList::join(llvm::Error, llvm::Error)':
/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:67: multiple definition of `glow::runtime::InterpreterDeviceManager::addNetworkImpl(glow::Module const*, std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, glow::CompiledFunction*, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, glow::CompiledFunction*> > >, std::function<void (glow::Module const*, llvm::Error)>)'
../../lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o):/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:67: first defined here
/home/ubuntu/glow/build/lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o): In function `glow::runtime::InterpreterDeviceManager::evictNetworkImpl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::function<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, llvm::Error)>)':
/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:114: multiple definition of `glow::runtime::InterpreterDeviceManager::evictNetworkImpl(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::function<void (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, llvm::Error)>)'
../../lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o):/home/ubuntu/glow/lib/Backends/Interpreter/InterpreterDeviceManager.cpp:114: first defined here
/home/ubuntu/glow/build/lib/Backends/Interpreter/libInterpreter.a(InterpreterDeviceManager.cpp.o): In function `glow::runtime::InterpreterDeviceManager::runFunctionImpl(unsigned long, std::__cxx11::basic_string<char, std::char_traits
I'm running an Ubuntu 18.04 instance on AWS with the latest PyTorch nightly build. Any idea what might be wrong?
What Glow commit are you on? There was a similar issue that was fixed yesterday, I think, so this may already be fixed if you update to the latest.
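For reference, something like this should show the current commit and bring everything up to date (assuming the checkout lives at /home/ubuntu/glow and tracks upstream master):
$cd /home/ubuntu/glow
#Show the current commit
$git rev-parse HEAD
#Update to the latest and refresh submodules
$git pull
$git submodule update --init --recursive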
I just updated to the latest, and it fixed the issue with python setup.py install, but when I tried to import torch_glow, I got the following error: ImportError: /home/ubuntu/anaconda3/lib/python3.7/site-packages/torch_glow-0.0.0-py3.7-linux-x86_64.egg/torch_glow/_torch_glow.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZNK5torch3jit5Value13debugNameBaseB5cxx11Ev
Does it persist with python setup.py test --run_cmake?
Yeah, I got similar errors when I ran python setup.py test --run_cmake:
running pytest
running egg_info
writing torch_glow.egg-info/PKG-INFO
writing dependency_links to torch_glow.egg-info/dependency_links.txt
writing top-level names to torch_glow.egg-info/top_level.txt
reading manifest file 'torch_glow.egg-info/SOURCES.txt'
writing manifest file 'torch_glow.egg-info/SOURCES.txt'
running build_ext
running cmake_build
-- Cannot find glog automatically. Using legacy find.
-- Found glog (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libglog.so)
-- Found LLVM 8.0.0
-- Using LLVMConfig.cmake in: /usr/lib/llvm-8/cmake
Adding CPU backend.
Adding Interpreter backend.
-- ******** Summary ********
-- CMake version : 3.10.2
-- CMake command : /usr/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 7.4.0
-- CXX flags : -Wall -Wnon-virtual-dtor -fno-exceptions -fno-rtti -Wno-psabi -Wnon-virtual-dtor
-- Build type : Debug
-- Compile definitions : GIT_SHA1="bc9df224";GIT_DATE="2019-08-01";WITH_PNG;GLOW_WITH_LLVMIRCODEGEN=1;GLOW_WITH_CPU=1;GOOGLE_PROTOBUF_NO_RTTI;ONNX_NAMESPACE=glow_onnx
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
-- CMAKE_MODULE_PATH : /home/ubuntu/glow/cmake/modules
-- ONNX version : 1.5.0
-- ONNX NAMESPACE : glow_onnx
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_USE_LITE_PROTO : OFF
-- ONNXIFI_DUMMY_BACKEND : OFF
-- ONNXIFI_ENABLE_EXT : OFF
-- Protobuf compiler : /usr/bin/protoc
-- Protobuf includes : /usr/include
-- Protobuf libraries : /usr/lib/x86_64-linux-gnu/libprotobuf.so;-pthread
-- BUILD_ONNX_PYTHON : OFF
-- pybind11 v2.3.dev1
-- Using pytorch dir /home/ubuntu/anaconda3/lib/python3.7/site-packages/torch
-- Failed to find LLVM FileCheck
-- git Version: v1.5.0
-- Version: 1.5.0
-- Performing Test HAVE_STD_REGEX -- success
-- Performing Test HAVE_GNU_POSIX_REGEX -- failed to compile
-- Performing Test HAVE_POSIX_REGEX -- success
-- Performing Test HAVE_STEADY_CLOCK -- success
Skipping adding test en2gr_cpu_test because it requires a models directory. Configure with -DGLOW_MODELS_DIR.
Skipping adding test en2gr_quantization_test because it requires a models directory. Configure with -DGLOW_MODELS_DIR.
Skipping adding test en2gr_cpu_partition_test because it requires a models directory. Configure with -DGLOW_MODELS_DIR.
Skipping adding test en2gr_cpu_config_test because it requires a models directory. Configure with -DGLOW_MODELS_DIR.
Skipping adding test resnet_runtime_test because it requires a models directory. Configure with -DGLOW_MODELS_DIR.
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ubuntu/glow/build
[ 0%] Built target MemberType [ 1%] Built target CPURuntimeNative [ 1%] Built target include-bin [ 3%] Built target Support [ 4%] Built target gen_onnx_proto [ 5%] Built target ConvBench [ 5%] Built target gtest [ 7%] Built target GemmBench [ 12%] Built target benchmark [ 12%] Built target CodeGen [ 15%] Built target Base [ 16%] Built target NodeGen [ 17%] Built target CPURuntime [ 18%] Built target InstrGen [ 21%] Built target onnx_proto [ 21%] Built target Testing [ 22%] Built target TestMain [ 22%] Built target TensorPool [ 22%] Built target AutoGenInstr [ 22%] Built target AutoGenNode [ 23%] Built target UtilsTest [ 24%] Built target MemoryAllocatorTest [ 24%] Built target Float16Test [ 25%] Built target ImageTest [ 25%] Built target AutoGen [ 25%] Built target ThreadPoolTest [ 27%] Built target QuantizationBase [ 28%] Built target png2bin [ 31%] Built target Graph [ 32%] Built target GraphOptimizerPipeline [ 32%] Built target ExecutionContext [ 32%] Built target Converter [ 36%] Built target IR [ 37%] Built target TensorPoolTest [ 37%] Built target Executor [ 40%] Built target Importer [ 41%] Built target Backend [ 41%] Built target IROptimizer [ 42%] Built target BasicIRTest [ 42%] Built target GraphSchedulerTest [ 43%] Built target Quantization [ 44%] Built target Exporter [ 44%] Built target TensorsTest [ 45%] Built target IROptTest [ 48%] Built target Interpreter [ 51%] Built target LLVMIRCodeGen [ 56%] Built target CPUBackend [ 57%] Built target Backends [ 57%] Built target LLVMIRGenTest [ 58%] Built target Provisioner [ 60%] Built target ThreadPoolExecutorTest [ 62%] Built target GraphOptimizer [ 63%] Built target ExecutionEngine [ 65%] Built target Partitioner [ 67%] Built target tracing-compare [ 68%] Built target resnet-verify [ 68%] Built target resnet-training [ 69%] Built target BackendTestUtils [ 69%] Built target GraphOptzTest [ 69%] Built target ProvisionerTest [ 70%] Built target _torch_glow [ 71%] Built target HostManager [ 72%] Built target ExecutionEngine2 [ 75%] Built target onnxifi-glow-lib [ 76%] Built target text-translator [ 77%] Built target onnxifi-glow [ 77%] Built target resnet-runtime [ 80%] Built target image-classifier [ 80%] Built target model-runner [ 80%] Built target mnist [ 81%] Built target fr2en [ 82%] Built target ptb [ 82%] Built target cifar10 [ 83%] Built target char-rnn [ 84%] Built target lenet-loader [ 85%] Built target TypeAToTypeBFunctionConverterTest [ 87%] Built target GemmTest [ 87%] Built target GraphGradTest [ 87%] Built target QuantizationTest [ 87%] Built target BackendTestUtils2 [ 88%] Built target Caffe2ImporterTest [ 89%] Built target MLTest [ 89%] Built target BackendTest [ 90%] Built target GradCheckTest [ 90%] Built target GlowOnnxifiManagerTest [ 90%] Built target GraphTest [ 91%] Built target HostManagerTest [ 91%] Built target OnnxExporterTest [ 91%] Built target HyphenTest [ 92%] Built target OnnxImporterTest [ 92%] Built target OperatorGradTest [ 92%] Built target PartitionerTest [ 94%] Built target OperatorTest [ 95%] Built target DeviceManagerTest [ 96%] Built target RecommendationSystemTest [ 96%] Built target ParameterSweepTest [ 97%] Built target RuntimeBench [ 97%] Built target SparseLengthsSumTest [ 98%] Built target TraceEventsTest [100%] Built target BackendCorrectnessTest
dst /home/ubuntu/glow/torch_glow/build/lib.linux-x86_64-3.7/torch_glow/_torch_glow.cpython-37m-x86_64-linux-gnu.so
copying build/lib.linux-x86_64-3.7/torch_glow/_torch_glow.cpython-37m-x86_64-linux-gnu.so -> torch_glow
===================================================== test session starts =====================================================
platform linux -- Python 3.7.3, pytest-5.0.1, py-1.8.0, pluggy-0.12.0 -- /home/ubuntu/anaconda3/bin/python
cachedir: .pytest_cache
rootdir: /home/ubuntu/glow/torch_glow, inifile: setup.cfg, testpaths: tests
plugins: doctestplus-0.3.0, openfiles-0.3.2, remotedata-0.3.1, arraydiff-0.3
collected 0 items / 15 errors
=========================================================== ERRORS ============================================================
__________________________________ ERROR collecting tests/nodes/adaptive_avg_pool2d_test.py ___________________________________
ImportError while importing test module '/home/ubuntu/glow/torch_glow/tests/nodes/adaptive_avg_pool2d_test.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/nodes/adaptive_avg_pool2d_test.py:3: in
What do you get when you run python -c "import torch; print(torch.__version__)"?
1.2.0.dev20190728+cpu
Hmm, torch::jit::Value::debugNameBase (the missing symbol) was added over a month ago, so it seems like this is being built against a different version of PyTorch than the one listed there, which is strange. Other than blowing away your build directory and rebuilding everything, I'm not sure what would help.
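One way to sanity-check the mismatch is to look at the libtorch the extension actually links against; the libtorch path below is only a guess based on the Anaconda install mentioned above:
#Check whether the installed libtorch exports the missing symbol
$nm -D /home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/lib/libtorch.so | grep debugNameBase
#See which torch libraries the built extension is linked against
$ldd /home/ubuntu/anaconda3/lib/python3.7/site-packages/torch_glow-0.0.0-py3.7-linux-x86_64.egg/torch_glow/_torch_glow.cpython-37m-x86_64-linux-gnu.so | grep torch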
I tried rebuilding, but I actually got the error I was having issues with earlier:
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
File "setup.py", line 233, in
Any idea why, or what other things I can try? Thank you so much!
Ah, I'm sorry, I don't know what the issue could be.
Is your build directory located at glow/build? This is an assumption made by setup.py that would cause issues if it's not true.
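If it isn't, a clean rebuild in that exact location might be worth trying. This is just a sketch, assuming the checkout is at /home/ubuntu/glow and using the same Debug build type shown in the CMake summary above:
$cd /home/ubuntu/glow
#Wipe the old build and configure a fresh one at glow/build
$rm -rf build && mkdir build && cd build
$cmake -DCMAKE_BUILD_TYPE=Debug ..
$make -j"$(nproc)"
#Then reinstall the Python extension
$cd ../torch_glow && python setup.py install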
I actually also got another error with PyTorchModelLoader.cpp, and I'm not sure how to fix it:
/home/ubuntu/glow/torch_glow/src/PyTorchModelLoader.cpp: In function ‘llvm::Expected<glow::Tensor> glow::{anonymous}::ptTensorToGlowTensor(const at::Tensor&)’:
/home/ubuntu/glow/torch_glow/src/PyTorchModelLoader.cpp:54:10: error: could not convert ‘glowT’ from ‘glow::Tensor’ to ‘llvm::Expected<glow::Tensor>’
return glowT;
^~~~~
In file included from /home/ubuntu/glow/torch_glow/src/PyTorchModelLoader.cpp:19:0:
/home/ubuntu/glow/torch_glow/src/PyTorchModelLoader.cpp: In instantiation of ‘llvm::Error glow::{anonymous}::contractParamIfNeeded(const glow::Handle<ElemTy>&, OutT&) [with T = int; OutT = unsigned int]’:
/home/ubuntu/glow/torch_glow/src/PyTorchModelLoader.cpp:681:3: required from here
/home/ubuntu/glow/torch_glow/src/PyTorchModelLoader.cpp:142:37: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
RETURN_ERR_IF_NOT(handle.raw(i) == firstElem,
/home/ubuntu/glow/include/glow/Support/Error.h:289:11: note: in definition of macro ‘RETURN_ERR_IF_NOT’
if (!(p)) {
^
In file included from /home/ubuntu/glow/include/glow/Graph/Graph.h:19:0,
from /home/ubuntu/glow/torch_glow/src/PyTorchModelLoader.h:24,
from /home/ubuntu/glow/torch_glow/src/PyTorchModelLoader.cpp:17:
/home/ubuntu/glow/include/glow/Base/Type.h: In static member function ‘static bool glow::Type::isType(glow::ElemKind) [with ElemTy = int]’:
/home/ubuntu/glow/include/glow/Base/Type.h:497:3: warning: control reaches end of non-void function [-Wreturn-type]
}
^
/home/ubuntu/glow/include/glow/Base/Type.h: In static member function ‘static unsigned int glow::Type::getElementSize(glow::ElemKind)’:
/home/ubuntu/glow/include/glow/Base/Type.h:542:3: warning: control reaches end of non-void function [-Wreturn-type]
}
^
/home/ubuntu/glow/include/glow/Base/Type.h: In static member function ‘static bool glow::Type::isType(glow::ElemKind) [with ElemTy = bool]’:
/home/ubuntu/glow/include/glow/Base/Type.h:497:3: warning: control reaches end of non-void function [-Wreturn-type]
}
^
/home/ubuntu/glow/include/glow/Base/Type.h: In static member function ‘static bool glow::Type::isType(glow::ElemKind) [with ElemTy = float]’:
/home/ubuntu/glow/include/glow/Base/Type.h:497:3: warning: control reaches end of non-void function [-Wreturn-type]
}
^
torch_glow/src/CMakeFiles/_torch_glow.dir/build.make:114: recipe for target 'torch_glow/src/CMakeFiles/_torch_glow.dir/PyTorchModelLoader.cpp.o' failed
Hi All,
I was able to install the Glow compiler on Ubuntu 16.04 by following the steps below. I faced many compilation issues due to low swap and RAM memory in Ubuntu. You can check the swap memory in Ubuntu using the lsblk command in a terminal (a short sketch for checking and adding swap space follows the steps below).
Note: recommended space is 12GB to 25GB for Ubuntu.
Sometimes the OS will crash due to low swap and RAM memory.
We need to protect the OS by following these steps:
- 1. sudo apt-get install overlayroot
- 2. Enable or add overlayroot="tmpfs:swap=1,recurse=0" in /etc/overlayroot.conf
- 3. sudo reboot
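For reference, a minimal sketch for checking and adding swap space (the 16G size is only an example; adjust it to your machine):
#Check existing partitions and swap
$lsblk
$swapon --show
#Create and enable a swap file
$sudo fallocate -l 16G /swapfile
$sudo chmod 600 /swapfile
$sudo mkswap /swapfile
$sudo swapon /swapfile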
GLOW INSTALLATION STEPS On Ubuntu (16.04)
Glow Compiler Prerequisites:
Operating system : Ubuntu 16.04 LTS
RAM : Minimum 16GB to 32GB
Swap memory : Minimum 12GB to 20GB
Disk space needed : 70GB
Total disk space needed : Minimum 150GB (LLVM & Glow)
Glow Compiler Dependencies:
LLVM 8.0.1
Clang 8.0.1
Anaconda 3
PyTorch (if a GPU is used, you also need to install CUDA 10.1 and cuDNN 7.1)
Glow Compiler Process
Step1:
#Download the Glow repository from GitHub
$git clone https://github.com/pytorch/glow.git
$cd glow
Step2:
#Glow depends on a few submodules: googletest, onnx, and a library for FP16 conversions.
#To get them, from the glow directory, run:
$git submodule update --init --recursive
Step3:
#If Protobuf is not installed, install it using the provided shell script
#The version should be 2.6.1
#PATH: glow/utils/
#Run the shell script
$./install_protobuf.sh
Step4:
#Create a build directory in glow
$mkdir build
#Change working directory to build
$cd build
#Now run cmake in Release mode, providing the Glow source directory as the path
$cmake -DCMAKE_BUILD_TYPE=Release ../
#This will generate the build files in the build directory (it can take 4 to 8 hours or more depending on RAM and swap memory)
#If cmake is not installed, install it by running the following command
$sudo apt install cmake
Step5:
#Run make to compile the source code
$make
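#On machines with limited RAM, it can help to limit the number of parallel jobs, for example:
$make -j2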
Step6:
#Run make install to install the library
$make install
Testing Glow:
#A few test programs that use Glow's C++ API are found under the examples/ subdirectory. The mnist, cifar10, fr2en and ptb programs train and run digit recognition, image classification, translation, and language modeling benchmarks, respectively.
#To run these programs, build Glow in Release mode, then run the following commands to download the cifar10, mnist and ptb databases.
$python ../utils/download_datasets_and_models.py --all-datasets
#Now run the examples. Note that the databases should be in the current working directory.
$./bin/mnist
$./bin/cifar10
$./bin/fr2en
$./bin/ptb
$./bin/char-rnn
#If everything goes well you should see:
mnist: pictures from the mnist digits database
cifar10: image classifications that steadily improve
fr2en: an interactive French-to-English translator
ptb: decreasing perplexity on the dataset as the network trains
char-rnn: generates random text based on some document
Regards, Srinivas
I was also facing the same error, but my aim was to install torch_glow. If the objective is just to install torch_glow, then the python3 setup.py test --run_cmake step can be skipped; running python3 setup.py install directly will install torch_glow. (Don't forget to restart the system before checking the import torch_glow command.)
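As a quick sanity check afterwards (just an example command), you can confirm the module imports and see where it was installed:
$python3 -c "import torch; import torch_glow; print(torch.__version__); print(torch_glow.__file__)"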