
Modular Natural Language Processing workflows with Keras

Results: 360 keras-nlp issues

**Is your feature request related to a problem? Please describe.** **BLIP: Bootstrapping Language-Image Pre-training (2022)** is a model able to perform various multi-modal tasks, including Image...

type:feature

**Is your feature request related to a problem? Please describe.** I am not sure if it is possible with the API. I'd like to follow this **[example](https://keras.io/examples/nlp/question_answering)** exactly in keras-nlp,...
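A minimal sketch of what an extractive QA head on a keras-nlp backbone might look like; the preset name and the span head are assumptions for illustration, not the keras.io example's actual code:

```python
import keras_nlp
import tensorflow as tf
from tensorflow import keras

# Load a pretrained encoder; the preset name is an assumption.
backbone = keras_nlp.models.BertBackbone.from_preset("bert_base_en_uncased")

# SQuAD-style head: predict start/end logits for every token.
sequence = backbone.output["sequence_output"]
logits = keras.layers.Dense(2)(sequence)
start_logits, end_logits = tf.unstack(logits, axis=-1)

model = keras.Model(backbone.input, [start_logits, end_logits])
```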

Just a draft for now, to show how to save models in our new Kaggle preset format.

**Describe the bug** BertTokenizer can't tokenize the [MASK] token: it should return 103, but it returns 1031, 7308, 1033. **Proof** **keras_nlp library:** keras_nlp.models.BertTokenizer.from_preset('bert_tiny_en_uncased', sequence_length=12)(['i am going to [MASK] to study...

type:feature
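A minimal sketch reproducing the reported behavior; the ids are taken from the issue, and the exact output shape is an assumption:

```python
import keras_nlp

tokenizer = keras_nlp.models.BertTokenizer.from_preset("bert_tiny_en_uncased")

# "[MASK]" exists in the vocabulary as a single special token...
print(tokenizer.token_to_id("[MASK]"))  # 103

# ...but tokenization splits the literal string into "[", "mask", "]",
# so the output contains 1031, 7308, 1033 instead of 103.
print(tokenizer(["i am going to [MASK] to study"]))
```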

We have been slowly working on factoring out common test routines for our testing code. The last piece of this work will be to port all `keras_nlp/tokenizers/*_test.py` and `keras_nlp/layers/preprocessing/*_test.py` layers...

type:feature

https://twitter.com/ylecun/status/1681336284453781505 I'm up for contributing this.

type:feature

T5 backbone preset tests are failing for Keras 2, but pass for Keras 3 (all backends). Log: https://btx-internal.corp.google.com/invocations/5135a970-cae4-4635-a2be-c9e3e8a73215/targets/keras_nlp%2Fgithub%2Fubuntu%2Fgpu%2Fkeras2%2Fpresubmit/log This test is now disabled for Keras 2 so the builds are...

type:Bug

**Describe the bug** Trying to save a model containing a `TransformerEncoder` layer results in an error. This is due to some of its weights having the exact same name, and h5...

type:Bug
stat:awaiting response from contributor
stale
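A minimal sketch that should trigger the reported error when saving to the legacy HDF5 format; the layer sizes are arbitrary assumptions:

```python
import keras_nlp
from tensorflow import keras

inputs = keras.Input(shape=(None, 64))
outputs = keras_nlp.layers.TransformerEncoder(
    intermediate_dim=128, num_heads=4
)(inputs)
model = keras.Model(inputs, outputs)

# Per the report, h5 saving fails because some weights inside the
# encoder share the exact same name.
model.save("encoder.h5")
```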

**Is your feature request related to a problem? Please describe.** This is a request forwarded from @martin-gorner based on user feedback. Currently, when creating a model from the preset API with...

type:feature

**Describe the bug** Unable to generate sequences of `SEQ_LEN` words from a custom `Tokenizer` + `CausalLMPreprocessor`. How can I train a custom GPT model from scratch? With custom...

type:Bug
stat:awaiting response from contributor
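A minimal sketch of one way to wire up a from-scratch GPT-2-style model with a custom vocabulary in keras-nlp; the vocabulary/merges files and all sizes are assumptions, not the reporter's setup:

```python
import keras_nlp
from tensorflow import keras

SEQ_LEN = 128

# Custom BPE vocabulary and merges files (assumed paths).
tokenizer = keras_nlp.models.GPT2Tokenizer(
    vocabulary="vocab.json", merges="merges.txt"
)
preprocessor = keras_nlp.models.GPT2CausalLMPreprocessor(
    tokenizer, sequence_length=SEQ_LEN
)

# A small, randomly initialized backbone rather than a preset.
backbone = keras_nlp.models.GPT2Backbone(
    vocabulary_size=tokenizer.vocabulary_size(),
    num_layers=4,
    num_heads=4,
    hidden_dim=256,
    intermediate_dim=512,
    max_sequence_length=SEQ_LEN,
)
lm = keras_nlp.models.GPT2CausalLM(backbone, preprocessor=preprocessor)
lm.compile(loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# lm.fit(raw_text_dataset)  # then: lm.generate("prompt", max_length=SEQ_LEN)
```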