CHINESE-OCR
I have spent four or five days trying to get this configured and it still does not run; I'm at my wits' end. The error message is roughly "Blas GEMM launch failed".
PS C:\Users\Pondsi\Downloads\temp\CHINESE-OCR-master> & C:/builds/anaconda/envs/tensorflow/python.exe c:/Users/Pondsi/Downloads/temp/CHINESE-OCR-master/demo.py
Using TensorFlow backend.
C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:521: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
Tensor("Placeholder:0", shape=(?, ?, ?, 3), dtype=float32)
Tensor("conv5_3/conv5_3:0", shape=(?, ?, ?, 512), dtype=float32)
Tensor("rpn_conv/3x3/rpn_conv/3x3:0", shape=(?, ?, ?, 512), dtype=float32)
WARNING:tensorflow:From C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
Tensor("lstm_o/Reshape_2:0", shape=(?, ?, ?, 512), dtype=float32) Tensor("lstm_o/Reshape_2:0", shape=(?, ?, ?, 512), dtype=float32) Tensor("rpn_cls_score/Reshape_1:0", shape=(?, ?, ?, 20), dtype=float32) Tensor("rpn_cls_prob:0", shape=(?, ?, ?, ?), dtype=float32) Tensor("Reshape_2:0", shape=(?, ?, ?, 20), dtype=float32) Tensor("rpn_bbox_pred/Reshape_1:0", shape=(?, ?, ?, 40), dtype=float32) Tensor("Placeholder_1:0", shape=(?, 3), dtype=float32) 2023-10-04 11:17:17.456323: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 2023-10-04 11:17:17.611372: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1344] Found device 0 with properties: name: NVIDIA GeForce RTX 3060 Laptop GPU major: 8 minor: 6 memoryClockRate(GHz): 1.702 pciBusID: 0000:01:00.0 totalMemory: 6.00GiB freeMemory: 5.01GiB 2023-10-04 11:17:17.611968: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1423] Adding visible gpu devices: 0 2023-10-04 11:17:18.040100: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix: 2023-10-04 11:17:18.040344: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:917] 0 2023-10-04 11:17:18.040582: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:930] 0: N 2023-10-04 11:17:18.040833: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4914 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 3060 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6) Tensor_name is : lstm_o/bidirectional_rnn/bw/lstm_cell/kernel Tensor_name is : conv1_1/biases Tensor_name is : conv3_1/biases Tensor_name is : conv1_1/weights Tensor_name is : conv1_2/biases Tensor_name is : conv1_2/weights Tensor_name is : conv2_1/weights Tensor_name is : conv4_3/weights Tensor_name is : conv2_1/biases Tensor_name is : conv2_2/biases Tensor_name is : conv2_2/weights Tensor_name is : conv3_1/weights Tensor_name is : conv3_2/biases Tensor_name is : conv3_2/weights Tensor_name is : conv3_3/biases Tensor_name is : conv3_3/weights Tensor_name is : conv4_1/biases Tensor_name is : rpn_conv/3x3/biases Tensor_name is : conv4_1/weights Tensor_name is : conv4_2/biases Tensor_name is : conv4_2/weights Tensor_name is : conv4_3/biases Tensor_name is : conv5_1/biases Tensor_name is : conv5_1/weights Tensor_name is : conv5_2/biases Tensor_name is : conv5_2/weights Tensor_name is : conv5_3/biases Tensor_name is : conv5_3/weights Tensor_name is : lstm_o/biases Tensor_name is : lstm_o/bidirectional_rnn/bw/lstm_cell/bias Tensor_name is : lstm_o/bidirectional_rnn/fw/lstm_cell/bias Tensor_name is : lstm_o/bidirectional_rnn/fw/lstm_cell/kernel Tensor_name is : lstm_o/weights Tensor_name is : rpn_bbox_pred/weights Tensor_name is : rpn_bbox_pred/biases Tensor_name is : rpn_cls_score/biases Tensor_name is : rpn_cls_score/weights Tensor_name is : rpn_conv/3x3/weights load vggnet done 2023-10-04 11:17:19.301524: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1423] Adding visible gpu devices: 0 2023-10-04 11:17:19.301815: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix: 2023-10-04 11:17:19.302066: I 
T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:917] 0 2023-10-04 11:17:19.302267: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:930] 0: N 2023-10-04 11:17:19.302464: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 4914 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 3060 Laptop GPU, pci bus id: 0000:01:00.0, compute capability: 8.6) The angel of this character is: 0 Rotate the array of this img! 2023-10-04 11:17:22.792416: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.83GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2023-10-04 11:17:22.792834: W T:\src\github\tensorflow\tensorflow\core\common_runtime\bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.83GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 2023-10-04 11:17:23.319220: E T:\src\github\tensorflow\tensorflow\stream_executor\cuda\cuda_blas.cc:654] failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED Traceback (most recent call last): File "C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1327, in _do_call return fn(*args) File "C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1312, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\client\session.py", line 1420, in _call_tf_sessionrun status, run_metadata) File "C:\builds\anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in exit c_api.TF_GetCode(self.status.status)) tensorflow.python.framework.errors_impl.InternalError: Blas GEMM launch failed : a.shape=(5115, 256), b.shape=(256, 512), m=5115, n=512, k=256 [[Node: lstm_o/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](lstm_o/Reshape_1, lstm_o/weights/read/_153)]] [[Node: rpn_bbox_pred/Reshape_1/_165 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_490_rpn_bbox_pred/Reshape_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:/Users/Pondsi/Downloads/temp/CHINESE-OCR-master/demo.py", line 21, in
Caused by op 'lstm_o/MatMul', defined at:
File "c:/Users/Pondsi/Downloads/temp/CHINESE-OCR-master/demo.py", line 8, in
InternalError (see above for traceback): Blas GEMM launch failed : a.shape=(5115, 256), b.shape=(256, 512), m=5115, n=512, k=256
  [[Node: lstm_o/MatMul = MatMul[T=DT_FLOAT, transpose_a=false, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](lstm_o/Reshape_1, lstm_o/weights/read/_153)]]
  [[Node: rpn_bbox_pred/Reshape_1/_165 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_490_rpn_bbox_pred/Reshape_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
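
The two allocator warnings just before the failure show the BFC allocator taking nearly all of the card's 6 GiB, after which cublasSgemm_v2 fails to launch. The usual first step for "Blas GEMM launch failed" on TensorFlow 1.x is to stop the session from pre-allocating the whole GPU. Below is a minimal sketch of that mitigation, assuming demo.py constructs its own tf.Session (where exactly it does so in this repo is an assumption):

# Sketch only: let TF 1.x allocate GPU memory on demand instead of grabbing
# almost all of it up front, so cuBLAS keeps enough workspace to launch GEMM.
# Apply this config wherever demo.py creates its Session (assumed location).
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow allocations as needed
# Alternative: hard-cap TensorFlow's share of GPU memory instead
# config.gpu_options.per_process_gpu_memory_fraction = 0.7

sess = tf.Session(config=config)

If freeing memory does not help, note that the log pairs an RTX 3060 Laptop GPU (compute capability 8.6, an Ampere part first supported by CUDA 11-era builds such as TensorFlow 2.4+) with a TensorFlow 1.x binary; the cuBLAS bundled with that old stack has no sm_86 kernels, which can produce exactly this CUBLAS_STATUS_EXECUTION_FAILED. In that case upgrading the TensorFlow/CUDA stack, or forcing CPU execution with CUDA_VISIBLE_DEVICES=-1, may be the real fix.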