Marshall Case

7 comments by Marshall Case

I'm still getting the same issue too, even after pinning the modules as close as possible to the versions Wengong was using in issue #20: ``` rdkit 2019.03.4.0 py37hc20afe1_1 python...
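(Side note, just as a sanity check and not something from the repo: a quick way to confirm which RDKit and Python builds are actually active in the environment is to print them from Python.)

```
# Sanity check: print the RDKit and Python versions active in the current environment,
# to compare against the versions pinned above / in issue #20.
import sys
from rdkit import rdBase

print("Python:", sys.version.split()[0])
print("RDKit :", rdBase.rdkitVersion)
```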

Getting the same issue - here's the exact error message for others' reference: ``` python preprocess.py --train data/chembl/all.txt --vocab data/chembl/vocab.txt --ncpu 16 --mode single multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call...

Found a super easy solution to this problem - just generate a fresh vocab from the dataset rather than using the one provided. I think an rdkit update changed a...
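For context, here is a rough, illustrative check (not part of the repo, and it assumes each line of `data/chembl/vocab.txt` holds whitespace-separated SMILES tokens): if many of the shipped vocab entries no longer canonicalize to themselves under your installed RDKit, that would explain why the provided vocab doesn't match one freshly generated from the dataset.

```
# Illustrative check (not repo code): do the SMILES tokens in the shipped vocab still
# canonicalize to themselves under the locally installed RDKit? Many mismatches would
# explain why the provided vocab diverges from a freshly generated one.
from rdkit import Chem

changed = 0
with open("data/chembl/vocab.txt") as f:
    for line in f:
        for smi in line.split():
            mol = Chem.MolFromSmiles(smi)
            if mol is None or Chem.MolToSmiles(mol) != smi:
                changed += 1
print(f"{changed} vocab tokens do not round-trip under this RDKit version")
```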

Getting a very similar issue when running `train_generator.py`: ```(hgraph-rdkit) C:\Users\Marshall\hgraph2graph-master>python train_generator.py --train train_processed/ --vocab data/chembl/vocab.txt --save_dir ckpt/cyclic_truncated_pretrained --hidden_size=125 --batch_size=20 Namespace(anneal_iter=25000, anneal_rate=0.9, atom_vocab=, batch_size=20, clip_norm=5.0, depthG=15, depthT=15, diterG=3, diterT=1, dropout=0.0, embed_size=250,...

Actually, I think I figured it out. There's a parameter defined in `mol_graph.py`, `MAX_POS = 20`, which limits the size of the E_apos and E_pos matrices, and subsequently, in the encoder,...
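To illustrate (this is a simplified sketch, not the repo's actual code): a position within a motif gets one-hot encoded into a vector of length MAX_POS, so any motif with more than MAX_POS atoms indexes past the end. Raising MAX_POS in `mol_graph.py` (and re-running preprocessing) avoids the error.

```
# Simplified sketch (not the repo's exact code) of why a hard MAX_POS cap breaks:
# a position inside a motif is one-hot encoded into a length-MAX_POS vector, so any
# position >= MAX_POS indexes out of bounds.
import torch

MAX_POS = 20  # default value in mol_graph.py

def onehot_position(pos, max_pos=MAX_POS):
    vec = torch.zeros(max_pos)
    vec[pos] = 1.0  # raises IndexError once pos >= max_pos
    return vec

onehot_position(5)                # fine
try:
    onehot_position(25)           # fails with the default cap of 20
except IndexError as err:
    print("position 25 exceeds MAX_POS:", err)
onehot_position(25, max_pos=40)   # works after raising the cap
```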

Just had the same issue as @purplerainf, and @Nokimann's suggestion seemed to fix it. Thanks!

Can you post a picture of your molecule / encoded graph? In my experience you get n-m > 1 when there are atoms that aren't connected to the rest of the...
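(In case it helps, here's a quick, generic RDKit check, not something from the repo, for spotting SMILES whose atoms aren't all connected: count the connected fragments.)

```
# Generic RDKit check (not repo code): count disconnected fragments in a SMILES.
# More than one fragment means some atoms are not bonded to the rest of the molecule.
from rdkit import Chem

smi = "CCO.[Na+]"  # hypothetical example with a disconnected counter-ion
mol = Chem.MolFromSmiles(smi)
frags = Chem.GetMolFrags(mol)  # tuples of atom indices, one per connected component
print(f"{len(frags)} fragment(s):", frags)
```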