David Bourgin

21 comments by David Bourgin

Don't be sorry - this question is _very_ justified -- there is almost no usage documentation right now! First, the bad news: at the moment, implementing word2vec will require a...

Finally, one last caveat - if you're interested in training a non-toy word embedding model, I'd highly recommend using a library like keras, since it will make use of performance-optimized...

> So I was thinking of using your library and altering it so that it saves the unused weights to disk, and only loads them when they are trained or...
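For context, one common way to keep a large embedding matrix on disk and touch only the rows being trained is NumPy's memory-mapped arrays. This is a hedged sketch of that idea, not code from numpy-ml; the file name and sizes are made up for illustration:

```python
import numpy as np

# Hypothetical sizes, chosen only for this example
vocab_size, embed_dim = 10_000, 50

# Disk-backed weight matrix; only the pages actually accessed are loaded into RAM
W = np.memmap("embeddings.dat", dtype="float32", mode="w+",
              shape=(vocab_size, embed_dim))
W[:] = (np.random.randn(vocab_size, embed_dim) * 0.01).astype("float32")

# Update only the rows for the word IDs in the current batch
batch_ids = np.array([3, 17, 42])
grad = np.random.randn(len(batch_ids), embed_dim).astype("float32")
W[batch_ids] -= 0.1 * grad  # plain SGD step on just these rows
W.flush()  # write dirty pages back to disk
```

Whether this is fast enough in practice depends heavily on access patterns; random row access on a spinning disk can easily dominate training time.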

Heya @Santosh-Gupta - I've just pushed a preliminary version of an `NCELoss` and word2vec model [here](https://github.com/ddbourgin/numpy-ml/blob/master/numpy_ml/neural_nets/models/w2v.py) and [here](https://github.com/ddbourgin/numpy-ml/blob/3bf1893083d3133f0fcd5fd4ce7624eeb4be03b3/numpy_ml/neural_nets/losses/losses.py#L464). Unfortunately, I suspect that if you're going to use the models for...
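For readers unfamiliar with the loss being linked, the core computation can be sketched in plain NumPy. Strictly, this is the negative-sampling simplification of NCE (it omits the noise-distribution correction term), and it is an illustration, not the repo's actual `NCELoss` implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nce_loss(context_vec, target_vec, noise_vecs):
    """Simplified NCE / negative-sampling loss for one (context, target) pair.

    context_vec : (d,)   hidden-layer vector for the context
    target_vec  : (d,)   output embedding for the true target word
    noise_vecs  : (k, d) output embeddings for k sampled noise words
    """
    pos_score = context_vec @ target_vec   # scalar score for the true pair
    neg_scores = noise_vecs @ context_vec  # (k,) scores for the noise pairs
    # Push the true pair's probability up and the noise pairs' down
    return -np.log(sigmoid(pos_score)) - np.sum(np.log(sigmoid(-neg_scores)))

rng = np.random.default_rng(0)
d, k = 8, 5
loss = nce_loss(rng.normal(size=d), rng.normal(size=d), rng.normal(size=(k, d)))
```

The appeal over a full softmax is that each update touches only the target and the k sampled noise embeddings rather than the whole vocabulary.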

Thanks for raising this + the comprehensive Colab notebook - I appreciate it! Yup, you're right, it looks like the "updated" version in the repo is returning the `beta` transpose...

Hi Rishabh, thanks for the PR, and sorry for my delayed response. Also apologies for not closing issue #67 after @sfsf9797's PR. I suspect this may have created some...

Unfortunately there really is no good high-level documentation at this point. This is on my TODO list, but is likely to take some time as there's a lot to document...

In general, if you want to implement a model, you'll probably want the following methods as a bare minimum:

```python
def _build_network(self, ...):
    # initialize the network layers and store them within...
```
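Fleshed out, that bare-minimum interface might look something like the skeleton below. Everything beyond the `_build_network` name is an illustrative guess at a sensible structure, not an API the library requires:

```python
import numpy as np

class MinimalModel:
    """Hypothetical minimal model: one fully-connected layer, plain SGD."""

    def __init__(self, in_dim, out_dim, lr=0.01):
        self.lr = lr
        self._build_network(in_dim, out_dim)

    def _build_network(self, in_dim, out_dim):
        # initialize the network layers and store them on the instance
        self.W = np.random.randn(in_dim, out_dim) * 0.01
        self.b = np.zeros(out_dim)

    def forward(self, X):
        # run the input through the layers, caching what backward will need
        self._X = X
        return X @ self.W + self.b

    def backward(self, dLdY):
        # propagate the loss gradient back through the layers
        self.dW = self._X.T @ dLdY
        self.db = dLdY.sum(axis=0)
        return dLdY @ self.W.T

    def update(self):
        # apply the accumulated gradients (plain SGD here)
        self.W -= self.lr * self.dW
        self.b -= self.lr * self.db
```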

Yeah, more or less. The major difference is that this code won't have a built-in `backward` method -- you have to implement it yourself for each model.
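When writing `backward` by hand, it's worth checking the derived gradients against finite differences before trusting them. A generic sketch of that check (none of this is numpy-ml API):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-6):
    """Central-difference gradient of a scalar-valued f at x."""
    g = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps; fp = f(x)   # f at x + eps in this coordinate
        x[idx] = orig - eps; fm = f(x)   # f at x - eps in this coordinate
        x[idx] = orig                    # restore before moving on
        g[idx] = (fp - fm) / (2 * eps)
    return g

# Example: f(x) = sum(x**2) has the analytic gradient 2*x
x = np.array([1.0, -2.0, 0.5])
analytic = 2 * x
numeric = numerical_grad(lambda z: np.sum(z ** 2), x)
```

If the two disagree beyond roughly `1e-4` relative error, the hand-derived `backward` almost certainly has a bug.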

Sorry to let this sit - I need to think about the best way to include regularization on a per-layer basis. We'll need the loss objects to be able to...
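One possible shape for per-layer regularization is to let each layer expose its own penalty and gradient contribution, with the loss summing them onto the data term. This is a hypothetical sketch of that design, not a committed plan for the library:

```python
import numpy as np

class DenseLayer:
    """Illustrative layer that owns its regularization settings."""

    def __init__(self, in_dim, out_dim, l2=0.0):
        self.l2 = l2
        self.W = np.random.randn(in_dim, out_dim) * 0.01

    def reg_penalty(self):
        # each layer reports its own regularization term
        return 0.5 * self.l2 * np.sum(self.W ** 2)

    def reg_grad(self):
        # ...and the matching gradient contribution for its weights
        return self.l2 * self.W

def total_loss(data_loss, layers):
    # the loss object sums the per-layer penalties onto the data term
    return data_loss + sum(layer.reg_penalty() for layer in layers)

layers = [DenseLayer(4, 8, l2=0.1), DenseLayer(8, 2, l2=0.0)]
loss = total_loss(1.25, layers)
```

The upside is that layers with different penalties (or none) compose cleanly; the cost is that the loss object needs a handle on the layer list.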