
Problem when loading pretrained model parameters (Issue #328)

Open · qinxiaodeng opened this issue 2 years ago · 4 comments

I ran into this problem while modifying demo7-nerf in jrender. I want to use its render-only option, which requires loading the parameters of an already trained model. Loading the model parameters fails: when I call jt's load_state_dict function I get the following error: [screenshot of the error]
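(Note: for context, a minimal, self-contained sketch of the save/restore round trip in question. `TinyNeRF` and the checkpoint name are hypothetical stand-ins, not the actual demo7-nerf code; only the `jt.load` / `load_state_dict` usage is the point here.)

```python
import jittor as jt
from jittor import nn

# Stand-in network; demo7-nerf defines its own (much larger) NeRF module.
class TinyNeRF(nn.Module):
    def __init__(self):
        self.linear = nn.Linear(3, 4)
    def execute(self, x):
        return self.linear(x)

# Hypothetical checkpoint path; the demo uses its own naming scheme.
ckpt_path = "ckpt_200000.pkl"

model = TinyNeRF()
jt.save(model.state_dict(), ckpt_path)          # stand-in for the trained checkpoint

restored = TinyNeRF()
restored.load_state_dict(jt.load(ckpt_path))    # the call that fails in the report
restored.eval()                                 # render-only / inference mode
```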

qinxiaodeng · May 21 '22 02:05

Thank you for the report. Could you send us the generated file that triggers the error, the one named hash_xxxx_op.cc?
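(Note: a sketch of how one might locate such generated files, assuming they sit under Jittor's JIT cache directory exposed via `jt.flags.cache_path`; the exact subdirectory layout may differ between Jittor versions.)

```python
import glob
import os

import jittor as jt

# Assumption: JIT-compiled op sources are written under jt.flags.cache_path.
pattern = os.path.join(jt.flags.cache_path, "**", "*hash_*_op.cc")
for path in glob.glob(pattern, recursive=True):
    print(path)
```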

Jittor · May 21 '22 03:05

Hello, is this the file you mean?

qinxiaodeng · May 21 '22 07:05

Hello, we cannot see the file. Could you send it again?

li-xl · May 23 '22 06:05

Hello, I have attached the file to this reply. In case it still doesn't come through, its name is `_opkey0_array_T_float32___opkey1_broadcast_to_Tx_float32__DIM_2__BCAST_1___opkey2_binary_T___hash_98f62ce8b14bb9ac_op.cc` and its contents are:

```cpp
#define JIT 1
#define JIT_cuda 1
#include "ops/array_op.h"
#include <cmath>
#include <algorithm>
#include "var.h"
#include "ops/broadcast_to_op.h"
#include "ops/op_register.h"
#include <cmath>
#include "var.h"
#include "ops/binary_op.h"
#include "ops/broadcast_to_op.h"
#include "ops/op_register.h"
#include <assert.h>
#include "fused_op.h"
#define op1_Tx float32
#define op1_DIM 2
#define op1_BCAST 1
#define op1_index_t int32
#define op2_Tx float32
#define op2_Ty float32
#define op2_Tz float32
#define op2_OP add
#define op2_index_t int32
#include "misc/cuda_atomic.h"
#include "misc/cuda_limits.h"
#include "helper_cuda.h"
using namespace jittor;
#define INLINE_FUNC inline static void

__launch_bounds__(1024) __global__ void func_98f62ce8b14bb9ac_0(float32 op0_outputv, op2_index_t range0, op2_index_t range1, float32* op0_outputp, op2_Tx* __restrict__ op2_xp, op2_Tz* __restrict__ op2_zp, int tn1, int tn0) {
    int thread_id = blockIdx.x * blockDim.x + threadIdx.x;
    int tn2 = 0;
    int tnum0 = 1<<(tn0-tn1);
    int tid0 = (thread_id>>tn1) & (tnum0-1);
    int tnum1 = 1<<(tn1-tn2);
    int tid1 = (thread_id>>tn2) & (tnum1-1);
    op0_outputp[0] = op0_outputv;
    op2_index_t op2_zstride1 = 1;
    auto op2_zstride0 = op2_zstride1 * range1;
    for (op2_index_t id0 = tid0; id0<range0; id0+=tnum0) {
        for (op2_index_t id1 = tid1; id1<range1; id1+=tnum1) {
            auto op0_outputid = + 0 * op0_outputstride0 + id1 * op0_outputstride1;
            auto op1_zd = op0_outputp[op0_outputid];
            op2_index_t op2_i = + id0 * op2_zstride0 + id1 * op2_zstride1;
            op2_zp[op2_i] = ((op2_xp[op2_i])+(op1_zd));
        }
    }
}

#pragma GCC diagnostic ignored "-Wunused-function"
inline static int get_thread_range_log(int& thread_num, int64 range) {
    int nbits = NanoVector::get_nbits(std::min((int64)thread_num, range)) - 2;
    thread_num >>= nbits;
    return nbits;
}

void jittor::FusedOp::jit_run() {
    Var* op0_output = ((ArrayOp*)(ops[0]))->output;
    float32 op0_outputv = ((ArrayOp*)(ops[0]))->ptr<float32>()[0];
    auto op2_x = ((BinaryOp*)(ops[2]))->x;
    auto op2_z = ((BinaryOp*)(ops[2]))->z;
    op2_index_t range0 = op2_x->shape[0];
    op2_index_t range1 = op2_x->shape[1];
    float32* op0_outputp = op0_output->ptr<float32>();
    op0_outputp[0] = op0_outputv;
    auto* __restrict__ op2_xp = op2_x->ptr<op2_Tx>();
    auto* __restrict__ op2_zp = op2_z->ptr<op2_Tz>();
    {
        int thread_num = 2097152;
        int thread_num_left = thread_num;
        int tn1 = get_thread_range_log(thread_num_left, range1);
        int tn0 = get_thread_range_log(thread_num_left, range0);
        tn0=tn0+tn1;
        tn0=std::max(tn0, 5);
        thread_num=1<<tn0;
        int p1 = std::max(thread_num/1024, 1);
        int p2 = std::min(thread_num, 1024);
        func_98f62ce8b14bb9ac_0<<<p1,p2>>>(op0_outputv,range0,range1,op0_outputp,op2_xp,op2_zp,tn1,tn0);
    }
}
```
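(Note: judging from the defines in this file, an `array` op producing a float32 scalar, a `broadcast_to` over a 2-D tensor, and a `binary` add, the fused kernel corresponds to adding a broadcast scalar constant to a 2-D float32 tensor. A minimal sketch that should trigger the same kind of fused op, assuming a CUDA build; the shape here is arbitrary.)

```python
import jittor as jt

jt.flags.use_cuda = 1      # the reported file was generated with JIT_cuda defined

x = jt.random((4, 8))      # 2-D float32 tensor (op1_DIM 2, op2_Tx float32)
z = x + 1.0                # scalar constant: array -> broadcast_to -> binary add
z.sync()                   # forces JIT compilation of the fused op
```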

qinxiaodeng · May 23 '22 08:05