
[Bug] Check failed: learner_model_param_.num_feature >= p_fmat->Info().num_col_ (835 vs. 919) : Number of columns does not match number of features in booster

Open hcms1994 opened this issue 2 years ago • 2 comments

Hi all, I am trying to use the autotuning module to search for the optimal configurations, but an error was reported. The model file picodet_new.onnx.zip is attached.

pool is not optimized for arm cpu.
Extract tasks...
get network success...
Tuning...
[Task  1/52]  Current/Best:    3.84/  47.36 GFLOPS | Progress: (576/576) | 942.71 s Done.
[Task  2/52]  Current/Best:   11.06/  28.70 GFLOPS | Progress: (776/1000) | 1541.06 s Done.
[Task  3/52]  Current/Best:   13.94/  45.63 GFLOPS | Progress: (704/1000) | 1198.90 s Done.
[Task  4/52]  Current/Best:   14.51/  27.14 GFLOPS | Progress: (696/1000) | 1375.30 s Done.
[Task  5/52]  Current/Best:    9.40/  65.11 GFLOPS | Progress: (768/768) | 1282.82 s Done.
[Task  6/52]  Current/Best:   27.17/  35.97 GFLOPS | Progress: (1000/1000) | 2061.47 s Done.
[Task  7/52]  Current/Best:   36.12/ 145.86 GFLOPS | Progress: (616/1000) | 1041.53 s Done.
[Task  8/52]  Current/Best:   37.84/  71.57 GFLOPS | Progress: (1000/1000) | 2304.04 s Done.
[Task  9/52]  Current/Best:   30.33/  91.75 GFLOPS | Progress: (1000/1000) | 1706.85 s Done.
[Task 10/52]  Current/Best:   15.53/  54.76 GFLOPS | Progress: (824/1000) | 1632.02 s Done.
[Task 11/52]  Current/Best:    7.98/  66.98 GFLOPS | Progress: (864/960) | 1513.51 s Done.
[Task 12/52]  Current/Best:   26.76/  67.60 GFLOPS | Progress: (1000/1000) | 2103.65 s Done.
[Task 13/52]  Current/Best:   62.60/ 153.52 GFLOPS | Progress: (648/1000) | 1203.46 s Done.
[Task 14/52]  Current/Best:   14.19/  98.03 GFLOPS | Progress: (960/1000) | 3060.50 s Done.
[Task 15/52]  Current/Best:   69.88/ 143.46 GFLOPS | Progress: (808/1000) | 1494.49 s Done.
[Task 16/52]  Current/Best:   22.57/  89.38 GFLOPS | Progress: (776/1000) | 1591.37 s Done.
[Task 17/52]  Current/Best:   35.75/ 133.76 GFLOPS | Progress: (776/1000) | 1370.02 s Done.
[Task 18/52]  Current/Best:   55.20/  81.16 GFLOPS | Progress: (1000/1000) | 2558.72 s Done.
[Task 19/52]  Current/Best:   62.47/ 130.66 GFLOPS | Progress: (608/1000) | 1026.17 s Done.
[Task 20/52]  Current/Best:   31.73/  69.75 GFLOPS | Progress: (864/1000) | 2060.20 s Done.
[Task 21/52]  Current/Best:   18.59/ 110.98 GFLOPS | Progress: (816/1000) | 1324.28 s Done.
[Task 22/52]  Current/Best:   11.29/  51.83 GFLOPS | Progress: (1000/1000) | 2105.17 s Done.
[Task 23/52]  Current/Best:    2.37/  11.02 GFLOPS | Progress: (800/800) | 1250.26 s Done.
[Task 24/52]  Current/Best:    4.44/   6.52 GFLOPS | Progress: (912/1000) | 1622.29 s Done.
[Task 25/52]  Current/Best:   21.41/ 106.14 GFLOPS | Progress: (616/1000) | 1040.57 s Done.
[Task 26/52]  Current/Best:   16.15/  46.35 GFLOPS | Progress: (616/1000) | 1420.74 s Done.
[Task 27/52]  Current/Best:   23.23/  64.34 GFLOPS | Progress: (776/1000) | 1216.78 s Done.
[Task 28/52]  Current/Best:   16.34/  27.72 GFLOPS | Progress: (1000/1000) | 2019.74 s Done.
[Task 29/52]  Current/Best:    5.26/  17.20 GFLOPS | Progress: (800/800) | 1294.08 s Done.
[Task 30/52]  Current/Best:    4.21/  11.27 GFLOPS | Progress: (768/1000) | 1394.74 s Done.
[Task 31/52]  Current/Best:    3.37/  16.32 GFLOPS | Progress: (752/768) | 1200.67 s Done.
[Task 32/52]  Current/Best:    4.32/  12.32 GFLOPS | Progress: (1000/1000) | 1838.50 s Done.
[Task 33/52]  Current/Best:   16.47/  70.21 GFLOPS | Progress: (624/1000) | 1024.04 s Done.
[Task 34/52]  Current/Best:   20.03/  36.60 GFLOPS | Progress: (1000/1000) | 2361.25 s Done.
[Task 35/52]  Current/Best:   14.07/  54.69 GFLOPS | Progress: (608/1000) | 951.49 s Done.
[Task 36/52]  Current/Best:   14.23/  24.36 GFLOPS | Progress: (1000/1000) | 2282.57 s Done.
[Task 37/52]  Current/Best:    9.40/  37.86 GFLOPS | Progress: (768/768) | 1238.55 s Done.
[Task 38/52]  Current/Best:   10.34/  20.86 GFLOPS | Progress: (976/1000) | 1790.84 s Done.
[Task 39/52]  Current/Best:    7.46/  29.37 GFLOPS | Progress: (576/576) | 915.50 s Done.
[Task 40/52]  Current/Best:    5.07/  19.89 GFLOPS | Progress: (1000/1000) | 1882.90 s Done.
[Task 41/52]  Current/Best:    8.10/  41.97 GFLOPS | Progress: (616/720) | 991.51 s Done.
[Task 42/52]  Current/Best:    7.30/  22.45 GFLOPS | Progress: (1000/1000) | 2165.44 s Done.
[Task 43/52]  Current/Best:   18.56/  45.00 GFLOPS | Progress: (576/576) | 941.53 s Done.
[Task 44/52]  Current/Best:    9.23/  28.91 GFLOPS | Progress: (888/1000) | 1649.56 s Done.
[Task 45/52]  Current/Best:   16.35/  76.66 GFLOPS | Progress: (648/1000) | 1101.68 s Done.
[Task 46/52]  Current/Best:   19.67/  42.18 GFLOPS | Progress: (944/1000) | 2283.94 s Done.
[Task 47/52]  Current/Best:    9.82/  58.84 GFLOPS | Progress: (632/960) | 1039.73 s Done.
[Task 48/52]  Current/Best:   14.56/  21.36 GFLOPS | Progress: (1000/1000) | 2293.03 s Done.
[Task 49/52]  Current/Best:    5.48/  72.21 GFLOPS | Progress: (1000/1000) | 1728.76 s Done.
[Task 50/52]  Current/Best:   25.33/  45.19 GFLOPS | Progress: (944/1000) | 1801.64 s
/usr/local/lib/python3.6/dist-packages/setuptools-58.5.3-py3.6.egg/pkg_resources/__init__.py:119: PkgResourcesDeprecationWarning: 0.18ubuntu0.18.04.1 is an invalid version and will not be supported in a future release
  PkgResourcesDeprecationWarning,
/usr/local/lib/python3.6/dist-packages/xgboost/training.py:17: UserWarning: Old style callback is deprecated. See: https://xgboost.readthedocs.io/en/latest/python/callbacks.html
  warnings.warn(f'Old style callback is deprecated. See: {link}', UserWarning)
Done.
Traceback (most recent call last):
  File "autoTVM_tune_relay_cuda_agx_1000.py", line 282, in <module>
    tune_and_evaluate(tuning_option)
  File "autoTVM_tune_relay_cuda_agx_1000.py", line 250, in tune_and_evaluate
    tune_tasks(tasks, **tuning_opt)
  File "autoTVM_tune_relay_cuda_agx_1000.py", line 216, in tune_tasks
    tuner_obj.load_history(autotvm.record.load_from_file(tmp_log_file))
  File "/home/caros/vis_work/code/apache-tvm-src-v0.10.0/python/tvm/autotvm/tuner/model_based_tuner.py", line 314, in load_history
    maximums = self.model_optimizer.find_maximums(base_model, self.plan_size, self.visited)
  File "/home/caros/vis_work/code/apache-tvm-src-v0.10.0/python/tvm/autotvm/tuner/sa_model_optimizer.py", line 89, in find_maximums
    scores = model.predict(points)
  File "/home/caros/vis_work/code/apache-tvm-src-v0.10.0/python/tvm/autotvm/tuner/xgboost_cost_model.py", line 311, in predict
    return self.bst.predict(dtest, output_margin=output_margin)
  File "/usr/local/lib/python3.6/dist-packages/xgboost/core.py", line 1920, in predict
    ctypes.byref(preds)
  File "/usr/local/lib/python3.6/dist-packages/xgboost/core.py", line 218, in _check_call
    raise XGBoostError(py_str(_LIB.XGBGetLastError()))
xgboost.core.XGBoostError: [02:04:57] /workspace/src/learner.cc:1257: Check failed: learner_model_param_.num_feature >= p_fmat->Info().num_col_ (835 vs. 919) : Number of columns does not match number of features in booster.
Stack trace:
  [bt] (0) /usr/local/lib/python3.6/dist-packages/xgboost/lib/libxgboost.so(+0x1658c0) [0x7fa07368c0]
  [bt] (1) /usr/local/lib/python3.6/dist-packages/xgboost/lib/libxgboost.so(+0x165d90) [0x7fa0736d90]
  [bt] (2) /usr/local/lib/python3.6/dist-packages/xgboost/lib/libxgboost.so(+0x16f6d0) [0x7fa07406d0]
  [bt] (3) /usr/local/lib/python3.6/dist-packages/xgboost/lib/libxgboost.so(+0x16f840) [0x7fa0740840]
  [bt] (4) /usr/local/lib/python3.6/dist-packages/xgboost/lib/libxgboost.so(XGBoosterPredictFromDMatrix+0x2dc) [0x7fa062a18c]
  [bt] (5) /usr/lib/aarch64-linux-gnu/libffi.so.6(ffi_call_SYSV+0x64) [0x7facfccd28]
  [bt] (6) /usr/lib/aarch64-linux-gnu/libffi.so.6(ffi_call+0xc8) [0x7facfcd698]
  [bt] (7) /usr/lib/python3.6/lib-dynload/_ctypes.cpython-36m-aarch64-linux-gnu.so(_ctypes_callproc+0x420) [0x7f99950198]
  [bt] (8) /usr/lib/python3.6/lib-dynload/_ctypes.cpython-36m-aarch64-linux-gnu.so(+0x104a8) [0x7f999504a8]

hcms1994 avatar Dec 07 '22 08:12 hcms1994

I got the same issue, though not with all models. I fixed it by picking the best records from the log and re-running the tuning:

from tvm import autotvm
import os
import sys

filename = sys.argv[1]         # original tuning log
output_filename = sys.argv[2]  # filtered log

# Keep only the best record for each task.
autotvm.record.pick_best(filename, output_filename)

# Replace the original log with the filtered one.
os.remove(filename)
os.rename(output_filename, filename)
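For readers unfamiliar with what pick_best does: an AutoTVM log is a file of JSON records, and pick_best keeps, for each workload, only the record with the best measured result. The idea can be sketched in plain Python (note: this uses a simplified record format for illustration, not TVM's actual log schema):

```python
import json

def pick_best_records(lines):
    """Keep, per workload, the record with the lowest measured cost.

    `lines` are JSON strings of the simplified form
    {"workload": ..., "cost": ...}; this mimics the idea behind
    autotvm.record.pick_best, not its exact file format.
    """
    best = {}
    for line in lines:
        rec = json.loads(line)
        key = rec["workload"]
        if key not in best or rec["cost"] < best[key]["cost"]:
            best[key] = rec
    return list(best.values())

log = [
    '{"workload": "conv2d_1", "cost": 0.9}',
    '{"workload": "conv2d_1", "cost": 0.4}',
    '{"workload": "dense_1", "cost": 0.7}',
]
print(pick_best_records(log))
# -> one record per workload, each with that workload's lowest cost
```

Shrinking the log this way also means load_history replays far fewer records when tuning is resumed.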

lhelontra avatar Jan 22 '23 04:01 lhelontra

Hello, I also encountered the same problem. Did the original poster ever solve it?

Millie-Xu avatar Jan 26 '24 08:01 Millie-Xu