keras-nlp

Modular Natural Language Processing workflows with Keras

Results: 360 keras-nlp issues, sorted by recently updated

Is there a definitive guide for uploading the weights of pretrained models to Kaggle? As per `CONTRIBUTING_MODELS.md`, I have uploaded my weights to GCP as mentioned in #1384.

type:feature
stat:awaiting response from contributor
stale
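For context, a minimal sketch of what such an upload might look like with the `kagglehub` client; the handle, local directory, and license below are placeholders and not the project's documented flow:

```python
# Hypothetical sketch: uploading a locally saved preset (config, tokenizer
# assets, weights) to Kaggle Models via kagglehub. Handle and paths are
# placeholders, not real resources.
import kagglehub

kagglehub.login()  # prompts for Kaggle credentials

local_preset_dir = "./my_model_preset"  # placeholder path to saved files

kagglehub.model_upload(
    handle="my-username/my-model/keras/my-variation",  # placeholder handle
    local_model_dir=local_preset_dir,
    license_name="Apache 2.0",
    version_notes="Initial upload of pretrained weights",
)
```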

Hello folks, I'm new to *keras_nlp*. From what I've seen so far, all the models provided by the keras_nlp community are only those that have...

type:support

type:Bug
stat:awaiting response from contributor

Hello, I'm having some problems with the keras_nlp codebase. If anyone understands the codebase and how the models interact within it, feel free to chime in. Suppose I do...

stat:awaiting keras-eng
type:support

**Describe the bug** Recently [unsloth.ai](https://unsloth.ai/blog/gemma-bugs) pointed out some discrepancies in **Gemma** model implementations. I think it would be nice to have them verified and addressed. Specifically, they mentioned **Keras...

Gemma

Hi there, I encountered a strange bug after trying to load the gemma-2b model using KerasNLP. My fine-tuning code is the following: `def fine_tune(self, X, y): data = generate_training_prompts(X,...

type:Bug
Gemma
stat:awaiting response from contributor
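For reference, a minimal sketch of loading and fine-tuning `gemma_2b_en` with KerasNLP; the backend choice, training strings, and LoRA rank are illustrative assumptions, not taken from the issue:

```python
# Minimal sketch (assumed setup): load the Gemma 2B preset and fine-tune it
# on a small list of prompt/response strings with LoRA adapters enabled.
import os
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow" / "torch"

import keras
import keras_nlp

gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")
gemma_lm.backbone.enable_lora(rank=4)  # train only low-rank adapter weights

# Placeholder training data; real code would build these from X and y.
train_texts = [
    "Instruction: say hello.\nResponse: Hello!",
    "Instruction: say goodbye.\nResponse: Goodbye!",
]

gemma_lm.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.Adam(learning_rate=5e-5),
    weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
gemma_lm.fit(train_texts, epochs=1, batch_size=1)
```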

type:feature
stat:awaiting response from contributor
stale
team-created

The `` token is not actually used by the `BytePairTokenizer`; instead, OOV tokens are mapped to -1, which causes an index error in the embedding layer. This will only occur...

type:feature
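A small sketch of the behavior described above, assuming the -1 OOV mapping the issue reports; the toy vocabulary and merges are illustrative:

```python
# Toy sketch (assumed behavior): a BytePairTokenizer with a tiny vocabulary,
# plus a guard that catches the -1 ids the issue says OOV pieces map to,
# before they reach an embedding layer.
import tensorflow as tf
import keras_nlp

vocab = {"butter": 1, "fly": 2}
merges = ["b u", "t t", "e r", "bu tt", "butt er", "f l", "fl y"]
tokenizer = keras_nlp.tokenizers.BytePairTokenizer(vocabulary=vocab, merges=merges)

token_ids = tokenizer("butterfly zap")  # "zap" pieces are out of vocabulary

# Guard against negative ids that would crash (or silently wrap) an Embedding lookup.
if bool(tf.reduce_any(token_ids < 0)):
    raise ValueError(f"Found OOV ids mapped to -1: {token_ids}")
```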

**Is your feature request related to a problem? Please describe.** When testing Keras 3 in the MLX branch, I got a TensorFlow import error. I explicitly set the environment variable...

stat:awaiting response from contributor
type:support
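For context, the Keras 3 backend is selected through an environment variable that must be set before `keras` is imported; a minimal sketch (the `"mlx"` value assumes the experimental MLX branch):

```python
# Sketch: choose the backend via KERAS_BACKEND *before* importing keras.
# "mlx" assumes the experimental MLX branch; released Keras 3 supports
# "jax", "tensorflow", and "torch".
import os
os.environ["KERAS_BACKEND"] = "mlx"

import keras
import keras_nlp  # note: keras_nlp still imports TensorFlow for tf.data / tokenizer ops

print(keras.backend.backend())
```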

This PR enables tokenization of special tokens for `SentencePieceTokenizer`, as suggested in keras-team/keras-nlp#1395; it is a follow-up to keras-team/keras-nlp#1397.