TensorRT-For-YOLO-Series

Garbled bounding boxes when running YOLOv8 with TensorRT

Open SHOUshou0426 opened this issue 1 year ago • 33 comments

Hello author, my environment is Ubuntu 18.04, CUDA 10.2, cuDNN 8.1.1, TensorRT 7.2.3.4. When I run inference with norm/yolo.cpp using the official yolov8s model converted to a TensorRT engine, the output image shows garbled boxes. Could you give me some advice? (screenshot attached)

SHOUshou0426 avatar Dec 12 '23 05:12 SHOUshou0426

@Linaom1214

SHOUshou0426 avatar Dec 12 '23 05:12 SHOUshou0426

Please share the export command you used.

Linaom1214 avatar Dec 12 '23 06:12 Linaom1214

Hello, I wrote the ONNX-to-TensorRT conversion code myself; please take a look:

// Build a TensorRT engine from a YOLOv8 ONNX model
void Yolo::onnxYoloEngine(const nvinfer1::DataType dataType) {
  if (fileExists(m_EnginePath)) return;

  // Check that the ONNX file can be opened
  std::ifstream onnxFile(m_WtsFilePath.c_str(), std::ios::binary | std::ios::in);
  if (!onnxFile.is_open()) {
    std::cerr << "Error: Failed to open ONNX file." << std::endl;
    return;
  }

  // Create the builder, an explicit-batch network, and the ONNX parser
  auto builder = nvinfer1::createInferBuilder(gLogger);
  const auto explicitBatch =
      1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
  auto network = builder->createNetworkV2(explicitBatch);

  nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);

  parser->parseFromFile(m_WtsFilePath.c_str(), static_cast<int>(Logger::Severity::kINFO));
  for (int i = 0; i < parser->getNbErrors(); ++i) {
    std::cout << parser->getError(i)->desc() << std::endl;
  }
  std::cout << "Successfully loaded the ONNX model" << std::endl;

  // Configure and build the engine
  builder->setMaxBatchSize(batchSize);
  nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();

  config->setMaxWorkspaceSize(1 << 24); // 16 MB
  // precision: int8 / fp16 / fp32
  //if (dataType == nvinfer1::DataType::kINT8)
  //{
  //  config->setFlag(nvinfer1::BuilderFlag::kINT8);
  //  config->setInt8Calibrator(calibrator);
  //}
  //else if (dataType == nvinfer1::DataType::kHALF)
  //{
  //  config->setFlag(nvinfer1::BuilderFlag::kFP16);
  //}

  if (dataType == nvinfer1::DataType::kHALF) {
    config->setFlag(nvinfer1::BuilderFlag::kFP16);
  }

  std::cout << "Building the TensorRT Engine..." << std::endl;
  engine = builder->buildEngineWithConfig(*network, *config);

  // Serialize the engine to disk
  BuildOnnxEngine();

  // Clean up builder-side objects
  // engine->destroy();
  network->destroy();
  builder->destroy();
  parser->destroy();
}

void Yolo::BuildOnnxEngine() {
  std::cout << "Serializing the TensorRT Engine..." << std::endl;
  assert(engine && "Invalid TensorRT Engine");
  trtModelStream = engine->serialize();
  assert(trtModelStream && "Unable to serialize engine");
  assert(!m_EnginePath.empty() && "Engine path is empty");

  // Write the serialized plan to the output file
  std::ofstream outFile(m_EnginePath, std::ios::binary);
  outFile.write(static_cast<const char*>(trtModelStream->data()), trtModelStream->size());
  outFile.close();

  std::cout << "Serialized plan file cached at location: " << m_EnginePath << std::endl;
}

SHOUshou0426 avatar Dec 12 '23 06:12 SHOUshou0426

First check whether inference directly in Python gives correct results; the model itself may be the problem.

Linaom1214 avatar Dec 12 '23 06:12 Linaom1214

On your side, using the official model converted to TensorRT, you don't get garbled boxes, right?

SHOUshou0426 avatar Dec 12 '23 06:12 SHOUshou0426

That's right, but make sure the exported ONNX model is full precision.

Linaom1214 avatar Dec 12 '23 07:12 Linaom1214

FP32 precision, you mean?

SHOUshou0426 avatar Dec 12 '23 07:12 SHOUshou0426

Yes.

Linaom1214 avatar Dec 12 '23 07:12 Linaom1214

OK, thank you for your reply. I will check whether it is a problem with the model; if I run into further issues, please advise.

SHOUshou0426 avatar Dec 12 '23 07:12 SHOUshou0426

Hello, I tried converting with trtexec and running inference, and the same issue still occurs: trtexec --onnx=/workspace/robot/ultralytics/yolov8s.onnx --saveEngine=yolov8s.engine --explicitBatch

SHOUshou0426 avatar Dec 12 '23 08:12 SHOUshou0426

Could you share the conversion command you use? I'd like to try your conversion method and verify it.

SHOUshou0426 avatar Dec 12 '23 08:12 SHOUshou0426

Just follow what this repo provides, or use trtexec directly.

Linaom1214 avatar Dec 12 '23 08:12 Linaom1214

What I'm using is trtexec: trtexec --onnx=/workspace/robot/ultralytics/yolov8s.onnx --saveEngine=yolov8s.engine --explicitBatch

SHOUshou0426 avatar Dec 12 '23 08:12 SHOUshou0426

I re-converted the model and the same garbled-box issue still appears.

SHOUshou0426 avatar Dec 12 '23 08:12 SHOUshou0426

Converting with this repo's method also produces garbled boxes.

SHOUshou0426 avatar Dec 12 '23 09:12 SHOUshou0426

@SHOUshou0426 For v8, this repo only supports end2end.

Linaom1214 avatar Dec 12 '23 10:12 Linaom1214

I'm using the version that includes NMS, norm/yolo.cpp; that one is not end2end.

SHOUshou0426 avatar Dec 12 '23 10:12 SHOUshou0426

I looked at the two folders in the cpp directory: end2end has no NMS step, norm does NMS in post-processing, and I'm using the code in norm.

SHOUshou0426 avatar Dec 12 '23 10:12 SHOUshou0426

OK, I'll find time to test it later.

Linaom1214 avatar Dec 12 '23 10:12 Linaom1214

I'll look at the post-processing tonight; if I fix it, I'll let you know.

SHOUshou0426 avatar Dec 12 '23 10:12 SHOUshou0426

OK.

Linaom1214 avatar Dec 12 '23 10:12 Linaom1214

Hello, has this problem been solved? I also get garbled boxes when running inference with the norm code. How should I handle it?

Leoyed avatar Mar 31 '24 12:03 Leoyed

I'll send you the modified version.

SHOUshou0426 avatar Apr 02 '24 01:04 SHOUshou0426

@Leoyed Give me an email address and I'll send it to you.

SHOUshou0426 avatar Apr 02 '24 04:04 SHOUshou0426

Thank you! Here is my email: [email protected]

Leoyed avatar Apr 02 '24 06:04 Leoyed

Could you tell me which part has the problem? I've been looking at the code for the past couple of days and the post-processing seems a bit off, but I'm not sure.

Leoyed avatar Apr 02 '24 06:04 Leoyed

You could open a PR and I'll fix it here.

Linaom1214 avatar Apr 03 '24 01:04 Linaom1214

@Linaom1214 Let @Leoyed open the PR; I have some work of my own to handle.

SHOUshou0426 avatar Apr 08 '24 06:04 SHOUshou0426

Hi, is this a post-processing problem?

wyq-aki avatar Apr 24 '24 03:04 wyq-aki

Hello, a question: in the v8 post-processing in norm, the generate_yolo_proposals function contains a piece of code like this:

float box_objectness = feat_blob[basic_pos + 4];
// std::cout << feat_blob << std::endl;
for (int class_idx = 0; class_idx < num_class; class_idx++) {
    float box_cls_score = feat_blob[basic_pos + 5 + class_idx];
    float box_prob = box_objectness * box_cls_score;
    if (box_prob > prob_threshold) { ... }
}

Here feat_blob[basic_pos + 4] is read as the objectness confidence, but YOLOv8 has no objectness (box regression confidence) output, so what is actually read is the first class's score. Filtering low-confidence boxes by box_objectness * box_cls_score is then unreliable, and this is probably what causes the garbled boxes with v8 in norm.
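For reference, a minimal sketch (not the repository's actual code) of how the decode loop could look for YOLOv8's objectness-free output, assuming the raw output has already been transposed so that each anchor is one contiguous row of (4 + num_class) floats (cx, cy, w, h, then per-class scores); feat_blob, num_class and prob_threshold follow the snippet above, and Proposal is an illustrative struct:

#include <vector>

struct Proposal {            // illustrative container, not the repo's Object type
  float cx, cy, w, h;
  int   label;
  float prob;
};

static void generateYolov8Proposals(const float* feat_blob,
                                    int num_anchors,
                                    int num_class,
                                    float prob_threshold,
                                    std::vector<Proposal>& proposals) {
  for (int anchor = 0; anchor < num_anchors; ++anchor) {
    const int basic_pos = anchor * (4 + num_class);

    // Take the best class score directly; there is no objectness term at index 4.
    int   best_label = -1;
    float best_score = 0.f;
    for (int class_idx = 0; class_idx < num_class; ++class_idx) {
      const float score = feat_blob[basic_pos + 4 + class_idx];
      if (score > best_score) {
        best_score = score;
        best_label = class_idx;
      }
    }

    if (best_score > prob_threshold) {
      Proposal p;
      p.cx    = feat_blob[basic_pos + 0];
      p.cy    = feat_blob[basic_pos + 1];
      p.w     = feat_blob[basic_pos + 2];
      p.h     = feat_blob[basic_pos + 3];
      p.label = best_label;
      p.prob  = best_score;   // the class score alone is the final confidence
      proposals.push_back(p);
    }
  }
}

Note that for a 640x640 input the official yolov8s ONNX output is (1, 84, 8400), i.e. channels first, so if the buffer is consumed in its native layout instead of being transposed, the indexing has to stride by num_anchors rather than reading contiguous rows; NMS still has to be applied to the surviving proposals afterwards.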

leayz-888 avatar Apr 26 '24 07:04 leayz-888