vits
Something wrong happened to my interpreter? RuntimeError
I ran this code on Linux and followed every step, but when running train.py I got the errors below and I have almost no idea how to deal with them. Is it a C error? Is the Python version I'm running (3.8) too high, or are some of the package versions too high? This problem has given me a headache for days; could someone help me?
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: __nv_nvrtc_builtin_header.h(78048): error: function "operator delete(void *, size_t)" has already been defined
__nv_nvrtc_builtin_header.h(78049): error: function "operator delete[](void *, size_t)" has already been defined
2 errors detected in the compilation of "default_program".
nvrtc compilation failed:

#define NAN __int_as_float(0x7fffffff)
#define POS_INFINITY __int_as_float(0x7f800000)
#define NEG_INFINITY __int_as_float(0xff800000)
template<typename T> __device__ T maximum(T a, T b) { return isnan(a) ? a : (a > b ? a : b); }
template<typename T> __device__ T minimum(T a, T b) { return isnan(a) ? a : (a < b ? a : b); }
#define __HALF_TO_US(var) *(reinterpret_cast<unsigned short *>(&(var)))
#define __HALF_TO_CUS(var) *(reinterpret_cast<const unsigned short *>(&(var)))
#if defined(__cplusplus)
struct __align__(2) __half {
  __host__ __device__ __half() { }
protected:
  unsigned short __x;
};
/* All intrinsic functions are only available to nvcc compilers */
#if defined(__CUDACC__)
/* Definitions of intrinsics */
__device__ __half __float2half(const float f) { __half val; asm("{ cvt.rn.f16.f32 %0, %1;}\n" : "=h"(__HALF_TO_US(val)) : "f"(f)); return val; }
__device__ float __half2float(const __half h) {
  float val;
  asm("{ cvt.f32.f16 %0, %1;}\n" : "=f"(val) : "h"(__HALF_TO_CUS(h)));
  return val;
}
I changed my Python version to 3.6 and installed the packages listed in requirements.txt, but I still get the same error.
Have you considered that your CUDA and PyTorch versions might be incompatible? You can run
nvcc -V
to check your CUDA version, then reinstall PyTorch (in particular the cudatoolkit it ships with) for that CUDA version, following the official PyTorch installation instructions.
A fresh virtual environment may also help.
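As a quick cross-check (a minimal sketch, assuming PyTorch is already installed in the environment), you can print the CUDA version your PyTorch build expects and compare it with what nvcc -V reports:

import torch

# PyTorch version and the CUDA toolkit it was built against;
# this should match (at least in major version) the toolkit reported by `nvcc -V`.
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

If torch.version.cuda disagrees with the toolkit installed on the machine, reinstalling a matching torch/cudatoolkit build is the usual fix.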
Thanks for helping! I checked my CUDA version today and it indeed did not match my PyTorch build (even though torch.cuda was available before). I changed my torch version and it now works fine on my previous server. Thanks so much again!
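For anyone who lands here with the same symptom: torch.cuda.is_available() only checks that the driver and a device can be reached; it does not exercise the nvrtc/JIT-fusion path that failed above. Below is a rough smoke test, a sketch not taken from this thread; whether the fuser actually compiles a kernel depends on your PyTorch version and fuser settings, and the function name fused_op is just an illustration:

import torch

@torch.jit.script
def fused_op(x: torch.Tensor) -> torch.Tensor:
    # element-wise chain that the TorchScript fuser may JIT-compile via nvrtc on CUDA
    return torch.tanh(x) * torch.sigmoid(x) + x

x = torch.randn(1024, device="cuda")
for _ in range(3):  # run a few times so the fuser has a chance to kick in
    y = fused_op(x)
print("fuser smoke test OK:", y.shape)

If this raises the same nvrtc error, the CUDA/PyTorch mismatch is still present even though torch.cuda.is_available() returns True.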