Tuomo Hiippala
Hey Jacob, thanks for your input! My training data contains tanks at various scales, i.e. shot from medium to long distances as well, but you're absolutely right: opting for convolutional instead...
Hi @jacobgil, I've been exploring this issue, but I'm having trouble wrapping my head around converting the dense layer to a convolutional one using previously trained weights,...
I've been experimenting with converting the final layers of the model from dense to convolutional, adding another network architecture, MiniVGGNetFC, to [nets.py](https://github.com/thiippal/tankbuster/blob/master/tankbuster/cnn/nets.py). The initial work can be found in...
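For illustration, a minimal sketch of the dense-to-convolutional conversion, assuming a channels-last Keras setup; the helper function and the feature-map shape are hypothetical, not part of tankbuster:

```python
# Hedged sketch: turn a Dense layer that was trained on flattened feature
# maps into a Conv2D layer producing the same outputs on a 1x1 feature map.
from keras.layers import Conv2D

def dense_to_conv(dense_layer, feature_map_shape):
    h, w, c = feature_map_shape                 # shape before the Flatten layer
    kernel, bias = dense_layer.get_weights()    # kernel: (h*w*c, units)
    units = kernel.shape[1]
    conv = Conv2D(filters=units, kernel_size=(h, w),
                  activation=dense_layer.activation)
    conv.build((None, h, w, c))
    # Keras flattens feature maps in (h, w, c) order, so the dense kernel
    # reshapes directly into a conv kernel of shape (h, w, c, units).
    conv.set_weights([kernel.reshape(h, w, c, units), bias])
    return conv
```

With the dense layers replaced like this, the network becomes fully convolutional and can slide over inputs larger than the training resolution, which matters for objects shot at varying distances.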
Hi @ogarciasierra! Just to make sure: I haven't really looked at doing this directly using HuggingFace Transformers, so I assume that you would like to extract contextual word embeddings for...
Okay @ogarciasierra, one way to do this is to follow the process [here](https://applied-language-technology.readthedocs.io/en/latest/notebooks/part_iii/04_embeddings_continued.html#contextual-word-embeddings-from-transformers).

1. Create the custom component for assigning Transformer features to the `vector` attribute of spaCy Token/Span/Doc elements....
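For illustration, a condensed sketch of such a component, assuming spaCy v3 with spacy-transformers (e.g. `en_core_web_trf`); the `tensor2attr` name follows the linked materials, and this version assumes the text fits into a single Transformer window:

```python
import spacy
from spacy.language import Language

@Language.factory("tensor2attr")
class Tensor2Attr:
    """Redirect the .vector attribute of Docs, Spans and Tokens to the
    contextual embeddings stored by the 'transformer' component."""
    def __init__(self, nlp, name):
        pass

    def __call__(self, doc):
        doc.user_hooks["vector"] = self.doc_tensor
        doc.user_span_hooks["vector"] = self.span_tensor
        doc.user_token_hooks["vector"] = self.token_tensor
        return doc

    def doc_tensor(self, doc):
        # Mean-pool the Transformer output over the whole Doc.
        return doc._.trf_data.tensors[-1].mean(axis=0)

    def span_tensor(self, span):
        # Pool the wordpiece vectors aligned to the Span's tokens.
        rows = span.doc._.trf_data.align[span.start:span.end].data.flatten()
        return span.doc._.trf_data.tensors[0][0][rows].mean(axis=0)

    def token_tensor(self, token):
        # Pool the wordpiece vectors aligned to this single token.
        rows = token.doc._.trf_data.align[token.i].data.flatten()
        return token.doc._.trf_data.tensors[0][0][rows].mean(axis=0)

nlp = spacy.load("en_core_web_trf")
nlp.add_pipe("tensor2attr")
doc = nlp("The bank of the river was muddy.")
doc[1].vector  # contextual embedding for 'bank'
```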
Hi! Thanks for the kind words; I'm really happy that you've found the materials useful. I will look into this and get back to you!
Hi @Jupiter79, can you provide me with an example of the error message raised when dealing with Docs in multiple batches? Thanks!
Hi @Jupiter79! I've now updated the materials – I ran some experiments using a longer text and updated the code to deal with batched outputs from the Transformer. Essentially, I...
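The gist of the update, as a hedged sketch building on the component above: with a long text, spaCy splits the input into several Transformer windows, so `tensors[0]` has the shape `(n_windows, seq_len, width)` and the alignment indices refer to positions in the flattened sequence of wordpieces across windows. Reshaping before indexing handles both the single- and multi-window cases:

```python
def token_tensor(token):
    tensors = token.doc._.trf_data.tensors[0]
    flat = tensors.reshape(-1, tensors.shape[-1])      # (n_windows * seq_len, width)
    rows = token.doc._.trf_data.align[token.i].data.flatten()
    return flat[rows].mean(axis=0)                     # pool wordpieces per token
```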
Hey @Jupiter79, good to hear it works! In your case, I would perhaps go for "traditional" word embeddings, since they seek to learn representations for particular words, such as the...
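For contrast, a minimal illustration of what "traditional" static embeddings look like in spaCy; the model name is just an example of a pipeline that ships with word vectors:

```python
import spacy

nlp = spacy.load("en_core_web_md")   # pipeline with static word vectors
doc1 = nlp("river bank")
doc2 = nlp("savings bank")

# The vector for 'bank' is identical in both contexts, unlike the
# contextual embeddings produced by a Transformer.
print(doc1[1].similarity(doc2[1]))   # 1.0 for static vectors
```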
Hi @mehmetilker! Okay, a couple of questions: 1. Are you trying to compare the cosine similarity of a large batch of Doc objects? 2. Which model / architecture are you...
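For context, a hedged sketch of what comparing a large batch of Docs might look like with static vectors and numpy (the model name and texts are just examples); stacking the vectors and computing the full similarity matrix in one shot is usually much faster than looping over `doc1.similarity(doc2)`:

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")
docs = list(nlp.pipe(["first text", "second text", "third text"]))

vectors = np.stack([doc.vector for doc in docs])    # (n_docs, width)
normalized = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
similarities = normalized @ normalized.T            # pairwise cosine, (n_docs, n_docs)
```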