States in Embedding Layer
Hi ... I'm trying to implement Jason Brownlee's Python example, *How to Use Word Embedding Layers for Deep Learning with Keras*, which uses the following `Embedding` layer with an `embedding_matrix` of GloVe weights:
```python
model = Sequential()
e = Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=4, trainable=False)
model.add(e)
```
Could you please help me understand adding weights to a kerasR layer? In https://github.com/statsmaths/kerasR/issues/5 you suggest applying `add_weight` after the embedding layer. How should the arguments be specified to apply the GloVe weights in this kerasR code?
```r
mod <- Sequential()
mod$add(Embedding(vocab_size, 100, input_length = 4, input_shape = c(4)))
mod$add_weight(???, trainable = FALSE)
```
For example, `name` and `initializer`.

Thanks!
The TensorFlow Keras documentation provides these arguments for `add_weight` (a syntactic sketch follows the list):
- `name`: String, the name for the weight variable.
- `dtype`: The dtype of the weight.
- `initializer`: An `Initializer` instance (callable).
- `regularizer`: An optional `Regularizer` instance.
- `trainable`: A boolean, whether the weight should be trained via backprop or not (assuming that the layer itself is also trainable).
- `constraint`: An optional `Constraint` instance.
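For concreteness, my best guess at what such a call would look like through the reticulate handle that kerasR exposes is below; the method also takes a `shape` argument not shown in the list above, and the name `"glove_weights"` is illustrative only. One caveat I'd flag: as far as I can tell, `add_weight` creates a *new* weight variable on the layer rather than replacing the embedding matrix used in the forward pass, so I am not sure it alone can inject the GloVe weights:

```r
# Hypothetical sketch only; not confirmed to inject the GloVe weights.
lay <- mod$layers[[1]]
w <- lay$add_weight(
  name = "glove_weights",        # illustrative name
  shape = c(vocab_size, 100L),   # matches the embedding matrix
  initializer = "uniform",
  trainable = FALSE
)
```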
I cannot see how to use `add_weight` as you suggest ...
However, from SiWorgan's comment here there seems to be a workaround:
```r
mod$layers[[1]]$kernel_initializer = embedding_matrix
mod$layers[[1]]$trainable = FALSE
```
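One thing I am unsure about (just my assumption): Keras initializers only run when a layer is built, and the Embedding layer's attribute is `embeddings_initializer` rather than `kernel_initializer`, so the assignment above may silently have no effect. A quick way to check whether it actually changed anything is `get_weights()`:

```r
# Check whether the workaround took effect: get_weights() should come
# back through reticulate as a list of R matrices, one per weight variable.
w <- mod$layers[[1]]$get_weights()
all.equal(w[[1]], embedding_matrix)  # TRUE only if the GloVe matrix is in place
```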
I could not get `weights` to work, as in the following:
```r
mod$layers[[1]]$weights = embedding_matrix
```
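The approach that seems most promising to me is the standard Keras accessor pair `get_weights()`/`set_weights()`, reached through the same reticulate handle. A sketch, assuming `vocab_size` and a `vocab_size` x 100 `embedding_matrix` of GloVe vectors are already defined (the layer should be built as soon as it is added, since `input_shape` is given):

```r
library(kerasR)

mod <- Sequential()
mod$add(Embedding(vocab_size, 100, input_length = 4, input_shape = c(4)))

# set_weights() takes a list of arrays, one per weight variable;
# the Embedding layer has a single weight, its embedding matrix.
mod$layers[[1]]$set_weights(list(embedding_matrix))

# Freeze the layer before compiling so the GloVe vectors stay fixed.
mod$layers[[1]]$trainable <- FALSE
```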
Your thoughts on using `add_weight` and the above workarounds would be much appreciated.