
paddlex_restful cannot be accessed

Open mr-abccc opened this issue 4 years ago • 18 comments

Issue type: PaddleX visual client (GUI)
Problem description
Windows 10, firewall turned off, paddlex 1.3.4

    PS C:\WINDOWS\system32> D:\Code\Paddle\PaddleX\venv\Scripts\paddlex_restful.exe --start_restful --port 8840 --workspace_dir 'D:\Code\Paddlex_Workspace'
    Traceback (most recent call last):
      File "D:\Program Files\Python\lib\runpy.py", line 194, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "D:\Program Files\Python\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "D:\Code\Paddle\PaddleX\venv\Scripts\paddlex_restful.exe\__main__.py", line 7, in <module>
      File "d:\code\paddle\paddlex\venv\lib\site-packages\paddlex_restful\command.py", line 59, in main
        pdxr.restful.app.run(port, workspace_dir)
      File "d:\code\paddle\paddlex\venv\lib\site-packages\paddlex_restful\restful\app.py", line 958, in run
        init(dirname, logger)
      File "d:\code\paddle\paddlex\venv\lib\site-packages\paddlex_restful\restful\app.py", line 43, in init
        get_system_info(SD.machine_info)
      File "d:\code\paddle\paddlex\venv\lib\site-packages\paddlex_restful\restful\system.py", line 35, in get_system_info
        gpu_info, message = get_gpu_info()
      File "d:\code\paddle\paddlex\venv\lib\site-packages\paddlex_restful\restful\utils.py", line 509, in get_gpu_info
        gpu_info = queue.get(timeout=2)
      File "D:\Program Files\Python\lib\multiprocessing\queues.py", line 108, in get
        raise Empty

Please describe here the problem you encountered while using the GUI.

The web page cannot be accessed.

    C:\WINDOWS\system32>telnet 8840
    Connecting To 8840...Could not open connection to the host, on port 23: Connect failed
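
As an aside, telnet treats its first argument as the host, which is why the output mentions port 23; probing the RESTful port itself would look something like this (assuming the service runs on the local machine):

    telnet 127.0.0.1 8840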

mr-abccc avatar Jan 22 '21 19:01 mr-abccc

@Cllan-z Hi, have you installed pycuda correctly? The GPU version of the restful service requires pycuda: pip install pycuda
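
As a quick way to confirm pycuda itself works, a minimal sketch (hypothetical, not part of PaddleX) is:

    # minimal pycuda sanity check -- if this fails or prints 0 devices,
    # the restful service will not be able to fetch GPU info either
    import pycuda.driver as drv

    drv.init()
    print('driver version:', drv.get_driver_version())
    print('gpu count:', drv.Device.count())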

syyxsxx avatar Jan 23 '21 08:01 syyxsxx

It is installed, version 2020.1. After pulling the latest code, a variable at line 256 of paddlex_restful/restful/utils.py is reported as an error.

mr-abccc avatar Jan 23 '21 10:01 mr-abccc

@Cllan-z Hi, I pulled the latest PaddleX code on my side and line 256 does not raise any error when running the restful service; what exactly is the error you are seeing there? As for the error in your original post, it occurs because the pycuda module failed to obtain the GPU information. I have prepared a test script:

test.py.zip

Run it and check whether the output looks something like this:

    GPU detected
    {'mem_free': [15843, 15843, 15843, 15843, 15843, 15843, 15843, 15843], 'mem_used': [317, 317, 317, 317, 317, 317, 317, 317], 'mem_total': [16160, 16160, 16160, 16160, 16160, 16160, 16160, 16160], 'driver_version': 10020, 'gpu_num': 8}
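
For context, the traceback suggests the service queries the GPU in a child process and reads the result from a multiprocessing queue with a 2-second timeout, so a crash in the child only surfaces as an empty queue. A rough sketch of that pattern (an assumption based on the traceback, not the exact PaddleX code; query_gpu is a made-up name):

    import multiprocessing as mp

    def query_gpu(queue):
        # runs in a child process; if pycuda crashes here,
        # nothing is ever put on the queue
        import pycuda.driver as drv
        drv.init()
        queue.put({'gpu_num': drv.Device.count()})

    if __name__ == '__main__':
        queue = mp.Queue()
        p = mp.Process(target=query_gpu, args=(queue,))
        p.start()
        # raises queue.Empty if no result arrives within 2 seconds
        gpu_info = queue.get(timeout=2)
        p.join()
        print(gpu_info)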

syyxsxx avatar Jan 23 '21 13:01 syyxsxx

Hi, here is the result of running the test script:

    PS D:\Code> .\Paddle\PaddleX\venv\Scripts\python.exe .\test.py
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "D:\Program Files\Python\lib\multiprocessing\spawn.py", line 116, in spawn_main
        exitcode = _main(fd, parent_sentinel)
      File "D:\Program Files\Python\lib\multiprocessing\spawn.py", line 125, in _main
        prepare(preparation_data)
      File "D:\Program Files\Python\lib\multiprocessing\spawn.py", line 236, in prepare
        _fixup_main_from_path(data['init_main_from_path'])
      File "D:\Program Files\Python\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
        main_content = runpy.run_path(main_path,
      File "D:\Program Files\Python\lib\runpy.py", line 265, in run_path
        return _run_module_code(code, init_globals, run_name,
      File "D:\Program Files\Python\lib\runpy.py", line 97, in _run_module_code
        _run_code(code, mod_globals, init_globals,
      File "D:\Program Files\Python\lib\runpy.py", line 87, in _run_code
        exec(code, run_globals)
      File "D:\Code\test.py", line 46, in <module>
        p.start()
      File "D:\Program Files\Python\lib\multiprocessing\process.py", line 121, in start
        self._popen = self._Popen(self)
      File "D:\Program Files\Python\lib\multiprocessing\context.py", line 224, in _Popen
        return _default_context.get_context().Process._Popen(process_obj)
      File "D:\Program Files\Python\lib\multiprocessing\context.py", line 327, in _Popen
        return Popen(process_obj)
      File "D:\Program Files\Python\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
        prep_data = spawn.get_preparation_data(process_obj._name)
      File "D:\Program Files\Python\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
        _check_not_importing_main()
      File "D:\Program Files\Python\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
        raise RuntimeError('''
    RuntimeError: An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

    Traceback (most recent call last):
      File ".\test.py", line 48, in <module>
        gpu_info = queue.get(timeout=2)
      File "D:\Program Files\Python\lib\multiprocessing\queues.py", line 108, in get
        raise Empty
    _queue.Empty

The GPU is a GTX 1650; CUDA 11; paddlepaddle-gpu 2.0.

The Windows client can detect CUDA: (screenshot)

But what I installed is CUDA 11:

    nvcc: NVIDIA (R) Cuda compiler driver
    Copyright (c) 2005-2020 NVIDIA Corporation
    Built on Tue_Sep_15_19:12:04_Pacific_Daylight_Time_2020
    Cuda compilation tools, release 11.1, V11.1.74
    Build cuda_11.1.relgpu_drvr455TC455_06.29069683_0

The failing line reports an unresolved variable.

mr-abccc avatar Jan 23 '21 14:01 mr-abccc

test.zip @Cllan-z I have revised the script; please run it again on your side.

syyxsxx avatar Jan 26 '21 08:01 syyxsxx

Hi, the output is as follows:

    PS D:\Code\Paddle\PaddleX\venv\Scripts> .\python.exe D:\Code\test.py
    Traceback (most recent call last):
      File "D:\Code\test.py", line 59, in <module>
        main()
      File "D:\Code\test.py", line 50, in main
        gpu_info = queue.get(timeout=2)
      File "D:\Program Files\Python\lib\multiprocessing\queues.py", line 108, in get
        raise Empty
    _queue.Empty

mr-abccc avatar Jan 30 '21 08:01 mr-abccc

@Cllan-z It looks like pycuda is not working properly. Could you run the _get_gpu_info function from the Python script on its own?

    def _get_gpu_info():
        gpu_info = dict()
        mem_free = list()
        mem_used = list()
        mem_total = list()
        # pycuda is imported inside the function, as in the original snippet
        import pycuda.driver as drv
        from pycuda.tools import clear_context_caches
        clear_context_caches()
        drv.init()
        driver_version = drv.get_driver_version()
        gpu_num = drv.Device.count()
        for gpu_id in range(gpu_num):
            dev = drv.Device(gpu_id)
            try:
                # create a context on this device to query its memory
                context = dev.make_context()
                free, total = drv.mem_get_info()
                context.pop()
                free = free // 1024 // 1024
                total = total // 1024 // 1024
                used = total - free
            except:
                free = 0
                total = 0
                used = 0
            mem_free.append(free)
            mem_used.append(used)
            mem_total.append(total)
        gpu_info['mem_free'] = mem_free
        gpu_info['mem_used'] = mem_used
        gpu_info['mem_total'] = mem_total
        gpu_info['driver_version'] = driver_version
        gpu_info['gpu_num'] = gpu_num
        print(gpu_info)
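
To run it on its own in the main process, as asked above, something like this should be enough (a minimal sketch, assuming the function above is saved in the same file):

    if __name__ == '__main__':
        # call the function directly, without multiprocessing, so any
        # pycuda error is printed instead of silently killing a child process
        _get_gpu_info()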

syyxsxx avatar Feb 02 '21 02:02 syyxsxx

Same problem here: Windows 10, paddlex_restful fails to start.

The PaddleX client application itself works fine; training and prediction run normally. I wanted to use PaddleX RESTful for web-based operation, but the service fails to start with the same error.

In the test.py script, these lines

    gpu_num = drv.Device.count()
    print(gpu_num)
    dev = drv.Device(gpu_id)
    print(dev)

do print information, but nothing after them is printed and the script simply exits. (screenshot)

AN-ZE avatar Mar 26 '21 07:03 AN-ZE


@AN-ZE Hi, there are two test scripts above; which one did you run? Please run the second script and paste its output.

syyxsxx avatar Mar 26 '21 08:03 syyxsxx

@syyxsxx
(screenshot)

Nothing is printed; it stops inside the for loop.

AN-ZE avatar Mar 26 '21 09:03 AN-ZE

@AN-ZE Hi, the GPU version of the PaddleX restful service depends on pycuda working correctly. Could you run the _get_gpu_info function from the Python script directly in the main process (sample code is in the reply above)? That will confirm whether pycuda can obtain GPU information and help locate the problem.

syyxsxx avatar Mar 29 '21 07:03 syyxsxx

(screenshots)

@syyxsxx Nothing gets printed inside the for loop.
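
Since drv.Device.count() and drv.Device(gpu_id) succeed but nothing inside the loop is printed, the first suspect is dev.make_context(). A minimal standalone check of just that call (a hypothetical snippet, not from the repo):

    import pycuda.driver as drv

    if __name__ == '__main__':
        drv.init()
        dev = drv.Device(0)            # assumes the GPU in use is gpu_id 0
        context = dev.make_context()   # the first pycuda call inside the loop
        free, total = drv.mem_get_info()
        context.pop()
        print('free MiB :', free // 1024 // 1024)
        print('total MiB:', total // 1024 // 1024)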

AN-ZE avatar Mar 29 '21 07:03 AN-ZE

@AN-ZE You installed pycuda 2020.1, right? Also, run nvidia-smi and check whether the GPU you are currently using is gpu_id 0.

syyxsxx avatar Mar 29 '21 11:03 syyxsxx

@syyxsxx
(screenshot)

Yes, it was installed directly with pip install; the version is pycuda 2020.1.

The paddle version is 2.0.1.

AN-ZE avatar Mar 30 '21 02:03 AN-ZE

Hello, has this problem been solved? How did you fix it? Could you share the solution?

Enn29 avatar Jun 07 '21 00:06 Enn29

Hi, I have run into the same problem. Is there any progress on this?

Dmcz avatar Jun 30 '22 10:06 Dmcz


Go to https://www.lfd.uci.edu/~gohlke/pythonlibs/#pycuda, download the whl that matches your environment, and install pycuda from that wheel; with that build I no longer hit these problems.
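
For example, after downloading the wheel that matches your Python and CUDA versions, install it with pip; the filename below is only a placeholder:

    # placeholder filename -- use the wheel you actually downloaded
    pip install pycuda-2020.1+cuda111-cp38-cp38-win_amd64.whl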

Enn29 avatar Jul 01 '22 01:07 Enn29


It is running now, thanks!

Dmcz avatar Jul 01 '22 01:07 Dmcz