
pipeline server init error

Open wenjin11 opened this issue 2 years ago • 12 comments

Thank you for reporting a PaddleNLP issue and for your contribution to PaddleNLP! When filing an issue, please also provide the following information:

  • Version and environment info: 1) PaddleNLP and PaddlePaddle versions: PaddleNLP 2.2.6, PaddlePaddle 2.2.2. 2) System environment: Windows, Python 3.9.

  • Repro info: if reporting an error, please provide the environment and steps to reproduce. The error occurs at the final Paddle Serving deployment step: pipeline server init error

Traceback (most recent call last):
  File "C:\Users\666\anaconda3\lib\site-packages\paddle_serving_server\pipeline\error_catch.py", line 97, in wrapper
    res = func(*args, **kw)
  File "C:\Users\666\anaconda3\lib\site-packages\paddle_serving_server\pipeline\error_catch.py", line 163, in wrapper
    result = function(*args, **kwargs)
  File "C:\Users\666\anaconda3\lib\site-packages\paddle_serving_server\pipeline\pipeline_server.py", line 51, in init_helper
    self._dag_executor = dag.DAGExecutor(response_op, dag_conf, worker_idx)
  File "C:\Users\666\anaconda3\lib\site-packages\paddle_serving_server\pipeline\dag.py", line 85, in __init__
    self._tracer = PerformanceTracer(
  File "C:\Users\666\anaconda3\lib\site-packages\paddle_serving_server\pipeline\profiler.py", line 45, in __init__
    self._data_buffer = multiprocessing.Manager().Queue()
  File "C:\Users\666\anaconda3\lib\multiprocessing\context.py", line 57, in Manager
    m.start()
  File "C:\Users\666\anaconda3\lib\multiprocessing\managers.py", line 554, in start
    self._process.start()
  File "C:\Users\666\anaconda3\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\666\anaconda3\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\666\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\666\anaconda3\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\666\anaconda3\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

Classname: PipelineServicer.__init__.<locals>.init_helper  FunctionName: init_helper

wenjin11 avatar May 16 '22 07:05 wenjin11
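For context, this RuntimeError is CPython's standard spawn-mode bootstrapping error: on Windows (and on macOS since Python 3.8), multiprocessing starts children by re-importing the main module, so any code that launches processes must sit under the __main__ guard. Below is a minimal sketch of that guard applied to a pipeline web service; the class name, service name, and config path are illustrative assumptions, not the actual faq_system code.

    # Minimal sketch of the __main__ guard the error message asks for.
    # Class name, service name, and config path are illustrative, not the
    # actual faq_system example code.
    from paddle_serving_server.web_service import WebService

    class FAQService(WebService):
        pass  # op definitions from the example would go here

    if __name__ == '__main__':
        # Everything that starts processes (run_service launches worker
        # processes and a multiprocessing.Manager) must run under this
        # guard, so a child's re-import of this module does not recurse.
        service = FAQService(name='faq')
        service.prepare_pipeline_config('config.yml')
        service.run_service()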

Hello, which program were you running when this error occurred? @wenjin11

w5688414 avatar May 16 '22 08:05 w5688414

Hello, which program were you running when this error occurred? @wenjin11

Running web_service.py to deploy the server side.

wenjin11 avatar May 16 '22 08:05 wenjin11

Hello, which program were you running when this error occurred? @wenjin11

The one in the faq_system project.

wenjin11 avatar May 16 '22 08:05 wenjin11

Which group chat are you in? Could you @ me there so we can talk it through?

w5688414 avatar May 16 '22 09:05 w5688414

Discussion group 3. I don't know your group nickname or QQ name; mine is 2541352748.

wenjin11 avatar May 16 '22 09:05 wenjin11

Is this running on native Windows?

TeslaZhao avatar May 16 '22 10:05 TeslaZhao

Serving has not been adapted for Windows yet; please use a Linux or Docker environment.

w5688414 avatar May 16 '22 10:05 w5688414
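For anyone landing here: one way to follow this advice without a separate Linux machine is to run the server inside a Paddle Serving Docker image. A sketch of the workflow; the image tag and port mapping are assumptions, so check the Serving docs for the tag matching your installed version:

    # Image tag and port mapping are assumptions; consult the Paddle
    # Serving docs for the tag that matches your installed version.
    docker pull paddlepaddle/serving:0.8.3-devel
    docker run -it --name faq_serving -p 8080:8080 \
        -v "$PWD:/workspace" paddlepaddle/serving:0.8.3-devel bash
    # Then, inside the container:
    #   cd /workspace && python web_service.py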

@w5688414 The same project also fails on macOS; the error message is below. Could you advise how to fix it? Thanks.

❯ python web_service.py
config use_encryption model here None
config use_encryption model here None
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/luxu/Documents/faq_system/deploy/python/web_service.py", line 82, in <module>
    ernie_service.run_service()
  File "/Users/luxu/.pyenv/versions/paddle/lib/python3.8/site-packages/paddle_serving_server/web_service.py", line 69, in run_service
    self._server.run_server()
  File "/Users/luxu/.pyenv/versions/paddle/lib/python3.8/site-packages/paddle_serving_server/pipeline/pipeline_server.py", line 314, in run_server
    PipelineServicer(self._name, self._response_op, self._conf),
  File "/Users/luxu/.pyenv/versions/paddle/lib/python3.8/site-packages/paddle_serving_server/pipeline/pipeline_server.py", line 57, in __init__
    raise CustomException(CustomExceptionCode.INIT_ERROR, "pipeline server init error")
paddle_serving_server.pipeline.error_catch.CustomException: 
        exception_code: 3003
        exception_type: INIT_ERROR
        error_msg: pipeline server init error
        is_send_to_user: False
Traceback (most recent call last):
  File "web_service.py", line 82, in <module>
    ernie_service.run_service()
  File "/Users/luxu/.pyenv/versions/paddle/lib/python3.8/site-packages/paddle_serving_server/web_service.py", line 69, in run_service
    self._server.run_server()
  File "/Users/luxu/.pyenv/versions/paddle/lib/python3.8/site-packages/paddle_serving_server/pipeline/pipeline_server.py", line 314, in run_server
    PipelineServicer(self._name, self._response_op, self._conf),
  File "/Users/luxu/.pyenv/versions/paddle/lib/python3.8/site-packages/paddle_serving_server/pipeline/pipeline_server.py", line 57, in __init__
    raise CustomException(CustomExceptionCode.INIT_ERROR, "pipeline server init error")
paddle_serving_server.pipeline.error_catch.CustomException: 
        exception_code: 3003
        exception_type: INIT_ERROR
        error_msg: pipeline server init error
        is_send_to_user: False

luxu1220 avatar Jun 29 '22 02:06 luxu1220

After changing is_thread_op to true,

# Op resource type: True = thread model; False = process model
  is_thread_op: True

the error becomes:

❯ python web_service.py
config use_encryption model here None
[DAG] Succ init
E0629 11:19:20.553887 302106112 analysis_config.cc:95] Please compile with gpu to EnableGpu()
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [layer_norm_fuse_pass]
---    Fused 0 subgraphs into layer_norm op.
--- Running IR pass [attention_lstm_fuse_pass]
--- Running IR pass [seqconv_eltadd_relu_fuse_pass]
--- Running IR pass [seqpool_cvm_concat_fuse_pass]
--- Running IR pass [mul_lstm_fuse_pass]
--- Running IR pass [fc_gru_fuse_pass]
---    fused 0 pairs of fc gru patterns
--- Running IR pass [mul_gru_fuse_pass]
--- Running IR pass [seq_concat_fc_fuse_pass]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [matmul_v2_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
I0629 11:19:21.893158 302106112 fuse_pass_base.cc:57] ---  detected 74 subgraphs
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
I0629 11:19:21.896107 302106112 fuse_pass_base.cc:57] ---  detected 12 subgraphs
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I0629 11:19:22.389168 302106112 fuse_pass_base.cc:57] ---  detected 74 subgraphs
--- Running IR pass [repeated_fc_relu_fuse_pass]
--- Running IR pass [squared_mat_sub_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [conv_transpose_bn_fuse_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0629 11:19:22.469272 302106112 memory_optimize_pass.cc:216] Cluster name : token_type_ids  size: 8
I0629 11:19:22.469298 302106112 memory_optimize_pass.cc:216] Cluster name : matmul_33.tmp_0  size: 48
I0629 11:19:22.469305 302106112 memory_optimize_pass.cc:216] Cluster name : relu_28.tmp_0  size: 12288
I0629 11:19:22.469309 302106112 memory_optimize_pass.cc:216] Cluster name : layer_norm_93.tmp_2  size: 3072
I0629 11:19:22.469314 302106112 memory_optimize_pass.cc:216] Cluster name : transpose_140.tmp_0  size: 3072
I0629 11:19:22.469318 302106112 memory_optimize_pass.cc:216] Cluster name : layer_norm_89.tmp_2  size: 3072
I0629 11:19:22.469322 302106112 memory_optimize_pass.cc:216] Cluster name : unsqueeze2_2.tmp_0  size: 4
I0629 11:19:22.469326 302106112 memory_optimize_pass.cc:216] Cluster name : layer_norm_78.tmp_2  size: 3072
--- Running analysis [ir_graph_to_program_pass]
I0629 11:19:22.614665 302106112 analysis_predictor.cc:1007] ======= optimize end =======
I0629 11:19:22.615101 302106112 naive_executor.cc:102] ---  skip [feed], feed -> token_type_ids
I0629 11:19:22.616220 302106112 naive_executor.cc:102] ---  skip [feed], feed -> input_ids
I0629 11:19:22.621551 302106112 naive_executor.cc:102] ---  skip [elementwise_div_2], fetch -> fetch
[PipelineServicer] succ init
Traceback (most recent call last):
  File "web_service.py", line 82, in <module>
    ernie_service.run_service()
  File "/Users/luxu/.pyenv/versions/paddle/lib/python3.8/site-packages/paddle_serving_server/web_service.py", line 69, in run_service
    self._server.run_server()
  File "/Users/luxu/.pyenv/versions/paddle/lib/python3.8/site-packages/paddle_serving_server/pipeline/pipeline_server.py", line 318, in run_server
    self._run_grpc_gateway(
  File "/Users/luxu/.pyenv/versions/paddle/lib/python3.8/site-packages/paddle_serving_server/pipeline/pipeline_server.py", line 148, in _run_grpc_gateway
    self._proxy_server.start()
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/Users/luxu/.pyenv/versions/3.8.12/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread.lock' object

luxu1220 avatar Jun 29 '22 03:06 luxu1220
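For reference, the TypeError: cannot pickle '_thread.lock' object above is the same start-method issue surfacing elsewhere: since Python 3.8, the default multiprocessing start method on macOS is spawn, which pickles the process object, and the pipeline proxy server holds an unpicklable lock. One possible workaround, offered as an untested sketch (fork on macOS has known caveats with some native libraries), is to force the fork start method before the server starts:

    # Untested workaround sketch: force the 'fork' start method so the
    # proxy-server process is created without pickling its state. This
    # must run before any process is started.
    import multiprocessing

    if __name__ == '__main__':
        multiprocessing.set_start_method('fork', force=True)
        # ...then build the service and call run_service() as in web_service.py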

I hit the same problem; it went away after I corrected the model directory name in config.yml.

zhaobinchen avatar Aug 02 '22 09:08 zhaobinchen
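To make that fix concrete: in the pipeline config.yml, each op's model_config must point at the directory that actually contains the exported serving model. A sketch of the relevant fragment, with the op name and path as illustrative assumptions rather than the faq_system values:

    op:
        ernie:
            local_service_conf:
                # This path must match the real model directory on disk; a
                # mismatch surfaces at startup as "pipeline server init error".
                model_config: ./serving_server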

I'm running into this too. How did you eventually solve it? Thanks.

JaylonXiwei avatar Oct 27 '22 05:10 JaylonXiwei

(Quoting luxu1220's Jun 29 comment above in full: after changing is_thread_op to true, the error becomes TypeError: cannot pickle '_thread.lock' object.)

Same problem on a Mac M1.

dougzhi avatar Oct 28 '22 07:10 dougzhi

This issue is stale because it has been open for 60 days with no activity.

github-actions[bot] avatar Jan 06 '23 05:01 github-actions[bot]

This issue was closed because it was inactive for 14 days after being marked as stale.

github-actions[bot] avatar Jan 21 '23 00:01 github-actions[bot]