TensorRT-For-YOLO-Series
Garbled bounding boxes when running YOLOv8 with TensorRT
Hello, my environment is Ubuntu 18.04, CUDA 10.2, cuDNN 8.1.1, TensorRT 7.2.3.4. When I run inference with norm/yolo.cpp using the official YOLOv8s model converted to a TensorRT engine, the output images are full of garbled boxes. Could you help me figure out why?
@Linaom1214
Please share the command you used to export the model.
Hi, I wrote my own ONNX-to-TensorRT conversion code; please take a look:
// Build a TensorRT engine from the YOLOv8 ONNX model and cache the serialized plan.
void Yolo::onnxYoloEngine(const nvinfer1::DataType dataType) {
    // Reuse the cached engine if it already exists.
    if (fileExists(m_EnginePath)) return;

    // Make sure the ONNX file is readable before handing it to the parser.
    std::ifstream onnxFile(m_WtsFilePath.c_str(), std::ios::binary | std::ios::in);
    if (!onnxFile.is_open()) {
        std::cerr << "Error: Failed to open ONNX file: " << m_WtsFilePath << std::endl;
        return;
    }

    // Create the builder, an explicit-batch network, and the ONNX parser.
    auto builder = nvinfer1::createInferBuilder(gLogger);
    const auto explicitBatch =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = builder->createNetworkV2(explicitBatch);
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);

    if (!parser->parseFromFile(m_WtsFilePath.c_str(),
                               static_cast<int>(nvinfer1::ILogger::Severity::kINFO))) {
        for (int i = 0; i < parser->getNbErrors(); ++i) {
            std::cerr << parser->getError(i)->desc() << std::endl;
        }
        parser->destroy();
        network->destroy();
        builder->destroy();
        return;
    }
    std::cout << "Successfully loaded the ONNX model" << std::endl;

    // Configure and build the engine.
    builder->setMaxBatchSize(batchSize);
    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 24); // 16 MiB workspace
    // Precision: FP32 by default; enable FP16 when requested.
    // (An INT8 path would additionally need BuilderFlag::kINT8 and a calibrator.)
    if (dataType == nvinfer1::DataType::kHALF) {
        config->setFlag(nvinfer1::BuilderFlag::kFP16);
    }
    std::cout << "Building the TensorRT Engine..." << std::endl;
    engine = builder->buildEngineWithConfig(*network, *config);
    if (engine) {
        // Serialize the engine and cache it at m_EnginePath.
        BuildOnnxEngine();
    } else {
        std::cerr << "Error: Failed to build the TensorRT engine" << std::endl;
    }

    // Release builder-side objects; the engine itself is kept as a class member.
    config->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
}

void Yolo::BuildOnnxEngine() {
    std::cout << "Serializing the TensorRT Engine..." << std::endl;
    assert(engine && "Invalid TensorRT Engine");
    assert(!m_EnginePath.empty() && "Engine path is empty");

    trtModelStream = engine->serialize();
    assert(trtModelStream && "Unable to serialize engine");

    // Write the serialized plan to the output file.
    std::ofstream outFile(m_EnginePath, std::ios::binary);
    if (!outFile.is_open()) {
        std::cerr << "Error: Failed to open engine output file: " << m_EnginePath << std::endl;
        return;
    }
    outFile.write(static_cast<const char*>(trtModelStream->data()),
                  static_cast<std::streamsize>(trtModelStream->size()));
    outFile.close();
    std::cout << "Serialized plan file cached at location: " << m_EnginePath << std::endl;
}
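For reference, a minimal sketch of how such a cached plan could be read back with the same TensorRT 7 API; the helper name loadCachedEngine and its signature are illustrative only, not code from the repository:

#include <fstream>
#include <iostream>
#include <string>
#include <vector>
#include "NvInfer.h"

// Illustrative helper: deserialize a cached engine plan from disk.
// The caller owns both the returned engine and the runtime it came from,
// and should destroy the engine before destroying the runtime.
static nvinfer1::ICudaEngine* loadCachedEngine(const std::string& enginePath,
                                               nvinfer1::IRuntime*& runtime,
                                               nvinfer1::ILogger& logger)
{
    std::ifstream planFile(enginePath, std::ios::binary);
    if (!planFile.is_open()) {
        std::cerr << "Error: Failed to open engine file: " << enginePath << std::endl;
        return nullptr;
    }

    // Read the whole serialized plan into memory.
    planFile.seekg(0, std::ios::end);
    const size_t size = static_cast<size_t>(planFile.tellg());
    planFile.seekg(0, std::ios::beg);
    std::vector<char> blob(size);
    planFile.read(blob.data(), size);

    // TensorRT 7 deserialization (the third argument is an optional plugin factory).
    runtime = nvinfer1::createInferRuntime(logger);
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}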
First check whether inference directly in Python gives correct results; the model itself may be the problem.
On your side, converting the official model to TensorRT does not produce garbled boxes, right?
That's right, but make sure the exported ONNX model is full precision.
You mean FP32 precision?
Yes.
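For reference, exporting with the ultralytics CLI, e.g. yolo export model=yolov8s.pt format=onnx, should already produce an FP32 ONNX model, since full precision is the default (just don't enable half-precision export); exact arguments may vary with the ultralytics version.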
OK, thanks for the answer. I'll check whether the model itself is the problem; if anything else comes up, I'd appreciate your advice.
Hi, I also tried converting with trtexec and running inference, but the same issue occurs: trtexec --onnx=/workspace/robot/ultralytics/yolov8s.onnx --saveEngine=yolov8s.engine --explicitBatch
Could you tell me the conversion command you use? I'd like to try your conversion method and verify it.
Just follow what this repository provides, or use trtexec directly.
What I used was in fact trtexec: trtexec --onnx=/workspace/robot/ultralytics/yolov8s.onnx --saveEngine=yolov8s.engine --explicitBatch
I re-converted the model, and the same garbled-box issue described above still appears.
Converting with the repository's tooling also produces garbled boxes.
@SHOUshou0426 For YOLOv8, this repository only supports the end2end path.
I'm using the variant with NMS, norm/yolo.cpp; that one is not end2end.
I looked at the two folders under cpp: end2end has no NMS step while norm does, and the one I'm using is from the norm folder.
OK, I'll find some time to test it later.
I'll go through the post-processing tonight; if I end up modifying it, I'll let you know.
OK.
Hi, has this issue been resolved? I also get garbled boxes when running inference with the norm code. How should I deal with it?
I'll send you the modified version.
@Leoyed Give me an email address and I'll send it to you.
Could you tell me which part is at fault? I've been reading the code for the past couple of days and the post-processing looks a bit off to me, but I'm not sure.
You could open a PR and I'll fix it on my side.
@Linaom1214 Let @Leoyed submit the PR; I have some work of my own to take care of.
Hi, is this a post-processing problem?
Hi, a question about the YOLOv8 post-processing in norm: the generate_yolo_proposals function contains this code:

float box_objectness = feat_blob[basic_pos + 4];
// std::cout << feat_blob << std::endl;
for (int class_idx = 0; class_idx < num_class; class_idx++) {
    float box_cls_score = feat_blob[basic_pos + 5 + class_idx];
    float box_prob = box_objectness * box_cls_score;
    if (box_prob > prob_threshold) { ...... }
}

It reads the objectness from feat_blob[basic_pos + 4], but YOLOv8 has no box objectness output, so this actually picks up the first class's classification score. Filtering low-confidence detections by box_objectness * box_cls_score afterwards is then wrong, and this may be what causes the garbled boxes in the v8 norm path.
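A minimal sketch of a proposal loop adjusted for that layout, assuming the head output has already been transposed to [num_anchors, 4 + num_class]; the function name generate_yolov8_proposals and the Object fields are illustrative only, not the repository's actual definitions:

#include <vector>

// Illustrative detection record.
struct Object {
    float cx, cy, w, h;   // box center and size in network-input coordinates
    int label;
    float prob;
};

// Sketch of proposal generation for a YOLOv8 head: each row is
// [cx, cy, w, h, class_0 ... class_{num_class-1}], with no objectness term.
static void generate_yolov8_proposals(const float* feat_blob, int num_anchors,
                                      int num_class, float prob_threshold,
                                      std::vector<Object>& objects)
{
    for (int anchor_idx = 0; anchor_idx < num_anchors; ++anchor_idx) {
        const float* row = feat_blob + anchor_idx * (4 + num_class);

        // Take the best class score directly; there is no objectness to multiply in.
        int best_class = -1;
        float best_score = -1.0f;
        for (int class_idx = 0; class_idx < num_class; ++class_idx) {
            const float score = row[4 + class_idx];
            if (score > best_score) {
                best_score = score;
                best_class = class_idx;
            }
        }

        if (best_score > prob_threshold) {
            Object obj;
            obj.cx = row[0];
            obj.cy = row[1];
            obj.w = row[2];
            obj.h = row[3];
            obj.label = best_class;
            obj.prob = best_score;
            objects.push_back(obj);
        }
    }
}

Coordinate rescaling back to the original image and NMS would still follow afterwards, as in the existing norm post-processing.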