SENet-for-Weakly-Supervised-Relation-Extraction

TypeError: unhashable type: 'list'

Open SeekPoint opened this issue 6 years ago • 4 comments

gpuws@gpuws32g:~/ub16_prj/SENet-for-Weakly-Supervised-Relation-Extraction$ python3.5 train.py
2018-12-03 20:38:09.902655: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-12-03 20:38:10.010289: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-12-03 20:38:10.010727: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.6575 pciBusID: 0000:01:00.0 totalMemory: 10.91GiB freeMemory: 8.59GiB
2018-12-03 20:38:10.010769: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)

Parameters:
ALLOW_SOFT_PLACEMENT=True
BATCH_SIZE=64
DROPOUT_KEEP_PROB=0.5
EMBEDDING_DIM=50
FILTER_SIZES=3
L2_REG_LAMBDA=0.0001
LOG_DEVICE_PLACEMENT=False
NUM_EPOCHS=300
NUM_FILTERS=128
SEQUENCE_LENGTH=100

WordTotal= 114043
Word dimension= 50
RelationTotal: 53
Start loading training data.

Start loading testing data.

train set and test set size are: 570088 96678
Finish randomize data
Start Training
2018-12-03 20:38:34.521647: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
Initialize variables.
Batch data
num_epoch 1 epoch_step 1 / 8908 loss 3.94999, acc 0.03125
num_epoch 1 epoch_step 2 / 8908 loss 2.20825, acc 0.734375
num_epoch 1 epoch_step 3 / 8908 loss 2.50987, acc 0.65625
num_epoch 1 epoch_step 4 / 8908 loss 2.26494, acc 0.625
num_epoch 1 epoch_step 5 / 8908 loss 1.84646, acc 0.578125
num_epoch 1 epoch_step 6 / 8908 loss 1.58758, acc 0.6875
num_epoch 1 epoch_step 7 / 8908 loss 1.43806, acc 0.71875
num_epoch 1 epoch_step 8 / 8908 loss 1.4291, acc 0.765625
num_epoch 1 epoch_step 9 / 8908 loss 1.9987, acc 0.65625
num_epoch 1 epoch_step 10 / 8908 loss 1.43472, acc 0.703125
num_epoch 1 epoch_step 11 / 8908 loss 1.66532, acc 0.6875
num_epoch 1 epoch_step 12 / 8908 loss 1.65045, acc 0.640625
num_epoch 1 epoch_step 13 / 8908 loss 1.86547, acc 0.59375
num_epoch 1 epoch_step 14 / 8908 loss 1.5699, acc 0.640625
num_epoch 1 epoch_step 15 / 8908 loss 1.97835, acc 0.625
num_epoch 1 epoch_step 16 / 8908 loss 1.57536, acc 0.71875
num_epoch 1 epoch_step 17 / 8908 loss 1.79578, acc 0.59375
num_epoch 1 epoch_step 18 / 8908 loss 1.44287, acc 0.734375
num_epoch 1 epoch_step 19 / 8908 loss 1.21793, acc 0.703125
num_epoch 1 epoch_step 20 / 8908 loss 1.39004, acc 0.765625
num_epoch 1 epoch_step 21 / 8908 loss 1.14212, acc 0.796875
num_epoch 1 epoch_step 22 / 8908 loss 0.91511, acc 0.828125
num_epoch 1 epoch_step 23 / 8908 loss 2.20525, acc 0.625
num_epoch 1 epoch_step 24 / 8908 loss 1.63561, acc 0.71875
num_epoch 1 epoch_step 25 / 8908 loss 1.36389, acc 0.734375
num_epoch 1 epoch_step 26 / 8908 loss 1.01296, acc 0.84375
num_epoch 1 epoch_step 27 / 8908 loss 1.24955, acc 0.78125
num_epoch 1 epoch_step 28 / 8908 loss 1.52108, acc 0.6875
num_epoch 1 epoch_step 29 / 8908 loss 1.21105, acc 0.6875
num_epoch 1 epoch_step 30 / 8908 loss 1.24232, acc 0.765625
num_epoch 1 epoch_step 31 / 8908 loss 1.69791, acc 0.65625
num_epoch 1 epoch_step 32 / 8908 loss 1.56406, acc 0.6875
num_epoch 1 epoch_step 33 / 8908 loss 1.4502, acc 0.6875
num_epoch 1 epoch_step 34 / 8908 loss 1.43388, acc 0.75
num_epoch 1 epoch_step 35 / 8908 loss 1.20509, acc 0.703125
Traceback (most recent call last):
  File "train.py", line 160, in <module>
    batch = data_aug(batch)
  File "train.py", line 91, in data_aug
    data_item.words = aug(data_item)
  File "train.py", line 78, in aug
    pkl_dict, random_lin_adv_prob)
  File "/home/gpuws/ub16_prj/SENet-for-Weakly-Supervised-Relation-Extraction/sentence_aug.py", line 36, in random_lin_adv_noise
    aug_sentences = pkl_dict[entity_pair][sentence]
TypeError: unhashable type: 'list'
gpuws@gpuws32g:~/ub16_prj/SENet-for-Weakly-Supervised-Relation-Extraction$
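The traceback points at pkl_dict[entity_pair][sentence], where sentence is a list of tokens. Lists are mutable and therefore unhashable in Python, so they cannot be used as dictionary keys. A minimal sketch of a likely fix, assuming pkl_dict maps entity pairs to dicts keyed by token sequences (the real data layout is not shown in this issue), is to convert the list to a tuple before the lookup:

```python
# The function name and dict layout below mirror the traceback but are
# otherwise assumptions; only the tuple conversion is the point.
def random_lin_adv_noise(sentence, entity_pair, pkl_dict):
    # Tuples of hashable items are hashable, so they work as dict keys.
    key = tuple(sentence) if isinstance(sentence, list) else sentence
    return pkl_dict.get(entity_pair, {}).get(key, [])

# The underlying error, reproduced in isolation:
d = {}
try:
    d[["a", "b"]] = 1          # lists cannot be dict keys
except TypeError as e:
    print(e)                   # unhashable type: 'list'
d[("a", "b")] = 1              # a tuple key works fine

lookup = {"pair": {("a", "b"): ["aug sentence"]}}
print(random_lin_adv_noise(["a", "b"], "pair", lookup))  # ['aug sentence']
```

If pkl_dict was built with list keys elsewhere, the same tuple conversion would need to be applied when the pickle is created.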

SeekPoint avatar Dec 03 '18 12:12 SeekPoint

what?

SeekPoint avatar Dec 12 '18 14:12 SeekPoint

do not call data_aug()
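A minimal sketch of that workaround, gating the augmentation call behind a flag so training runs without it until the list-key bug is fixed. The names below are hypothetical stand-ins for the code in train.py, which is not shown in this issue:

```python
USE_DATA_AUG = False  # set True once data_aug handles unhashable keys

def data_aug(batch):
    # Stand-in for the real augmentation, which currently raises the error.
    raise TypeError("unhashable type: 'list'")

def prepare_batch(batch):
    # With USE_DATA_AUG off, the batch passes through untouched.
    if USE_DATA_AUG:
        batch = data_aug(batch)
    return batch

print(prepare_batch([["w1", "w2"]]))  # [['w1', 'w2']]
```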

Theodoric008 avatar Dec 13 '18 07:12 Theodoric008

That works, but how should data_aug be done? Maybe it can be fixed.

SeekPoint avatar Dec 13 '18 11:12 SeekPoint

data_aug is too slow and I am working on it, thanks.

Theodoric008 avatar Dec 13 '18 15:12 Theodoric008