PRATIK C

25 issues by PRATIK C

```
%load_ext autoreload
%autoreload 2
import pandas as pd
import time
from pandarallel import pandarallel
import math
import numpy as np
```
```
pandarallel.initialize(nb_workers=60, progress_bar=True)
df_size = int(5e7)
df = pd.DataFrame(dict(a=np.random.rand(df_size)...
```
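
The snippet is cut off right after the DataFrame is built. Below is a minimal sketch of how such a pandarallel benchmark typically continues; the `work` function and the timing comparison are illustrative assumptions, not the issue's original code:

```
import math
import time

import numpy as np
import pandas as pd
from pandarallel import pandarallel

pandarallel.initialize(nb_workers=60, progress_bar=True)

df_size = int(5e7)
df = pd.DataFrame(dict(a=np.random.rand(df_size)))

def work(x):
    # Some CPU-bound transformation applied to every value of column "a".
    return math.sin(x) ** 2 + math.cos(x) ** 2

start = time.time()
_ = df.a.apply(work)           # single-core pandas baseline
print("apply:", time.time() - start)

start = time.time()
_ = df.a.parallel_apply(work)  # same work spread over the 60 workers
print("parallel_apply:", time.time() - start)
```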

I am using your model to fine-tune on a binary classification task (**number of classes = 2** instead of 16). **My class labels are just 0 and 1.** https://huggingface.co/unitary/unbiased-toxic-roberta/tree/main I am...
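
The issue text is truncated. A hedged sketch (not the author's actual code) of reusing the 16-label toxicity checkpoint for a 2-class task by re-initialising the classification head; the `problem_type` choice is an assumption, since the original checkpoint is multi-label:

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "unitary/unbiased-toxic-roberta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=2,                   # binary task: labels are just 0 and 1
    problem_type="single_label_classification",
    ignore_mismatched_sizes=True,   # discard the original 16-way classifier head
)
```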

I am using Google Colab to run a simple experiment. The idea is to visualize attention weights and predictions on text data. Here is the code:
```
!pip install witwidget...
```
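
The code after the install line is cut off. A minimal sketch of the kind of experiment described, extracting attention weights together with a prediction from a Hugging Face model in Colab; the `bert-base-uncased` checkpoint and the example sentence are placeholders, not taken from the issue:

```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", output_attentions=True
)

text = "This movie was surprisingly good."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

prediction = outputs.logits.argmax(dim=-1).item()  # predicted class id
attentions = outputs.attentions                    # one tensor per layer
print(prediction, attentions[-1].shape)            # (batch, heads, seq_len, seq_len)
```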

I am using this model to run inference on 1 million data points with 4 `A100` GPUs. I am launching an `inference.py` script using Google's Vertex AI container...
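
The rest of the issue is truncated. One hedged way to shard the 1 million texts across the 4 GPUs inside a single `inference.py` is sketched below; the checkpoint name, batch size, and file paths are assumptions:

```
import numpy as np
import pandas as pd
import torch
import torch.multiprocessing as mp
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder for the actual checkpoint

def run_shard(rank, shards):
    texts = shards[rank]
    device = torch.device(f"cuda:{rank}")
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).to(device).eval()
    preds = []
    for i in range(0, len(texts), 256):  # simple micro-batching
        batch = tokenizer(texts[i:i + 256], padding=True, truncation=True,
                          return_tensors="pt").to(device)
        with torch.no_grad():
            preds.extend(model(**batch).logits.argmax(-1).cpu().tolist())
    pd.DataFrame({"text": texts, "pred": preds}).to_csv(f"preds_gpu{rank}.csv", index=False)

if __name__ == "__main__":
    all_texts = pd.read_csv("data.csv")["text"].tolist()  # ~1M rows
    shards = [list(s) for s in np.array_split(all_texts, torch.cuda.device_count())]
    mp.spawn(run_shard, args=(shards,), nprocs=len(shards))
```

Each GPU process writes its own shard of predictions, which can be concatenated afterwards.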

Do we have the flexibility to create dashboards for NLP-related tasks from `Hugging Face` libraries (https://huggingface.co/)? **Example 1**: comparing two classification models built with NLP models like `BERT` and `RoBERTa`. Compare...
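
Whether such a dashboard exists is the question being asked; the sketch below only illustrates the kind of side-by-side comparison the request describes, running two classification checkpoints (the model names are assumptions) over the same texts:

```
import pandas as pd
from transformers import pipeline

bert = pipeline("text-classification", model="textattack/bert-base-uncased-SST-2")
roberta = pipeline("text-classification", model="textattack/roberta-base-SST-2")

texts = [
    "The plot was thin but the acting was great.",
    "I would not recommend this to anyone.",
]

comparison = pd.DataFrame({
    "text": texts,
    "bert": [r["label"] for r in bert(texts)],
    "roberta": [r["label"] for r in roberta(texts)],
})
print(comparison)
```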

enhancement

It is a multi-class classification model with sklearn: I am using a `OneVsOneClassifier` model to train on and predict `150 intents`. **Data:**

| text | intents |
| --- | --- |
| text1 | int1 |
| text2 | ... |
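
A minimal sketch of the setup described, with made-up toy data standing in for the 150-intent dataset: TF-IDF features feeding a `OneVsOneClassifier`:

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsOneClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for the real (text, intent) pairs; the real data has 150 intents.
texts = ["book a flight to paris", "what is my account balance", "cancel my order"]
intents = ["book_flight", "check_balance", "cancel_order"]

clf = make_pipeline(TfidfVectorizer(), OneVsOneClassifier(LinearSVC()))
clf.fit(texts, intents)
print(clf.predict(["please cancel the order i placed"]))
```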

pending user response
new converter
done

Here is the code:
```
from lit_nlp.api import types as lit_types
from lit_nlp.examples.datasets import glue
import tensorflow_datasets as tfds
# https://github.com/PAIR-code/lit/wiki/api.md#adding-models-and-data
import sys
from absl import app
from absl import...
```
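
The snippet breaks off in the import block. A hedged sketch of the kind of custom LIT dataset those imports are usually followed by; the class and field names here are illustrative, not the issue's actual code:

```
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import types as lit_types

class MyTextDataset(lit_dataset.Dataset):
    """Wraps (sentence, label) pairs so the LIT UI can display them."""

    LABELS = ["0", "1"]

    def __init__(self, records):
        # records: iterable of (sentence, label) tuples.
        self._examples = [
            {"sentence": sentence, "label": str(label)}
            for sentence, label in records
        ]

    def spec(self):
        # Declares the type of each example field for the LIT frontend.
        return {
            "sentence": lit_types.TextSegment(),
            "label": lit_types.CategoryLabel(vocab=self.LABELS),
        }
```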

Here is the code:
```
import sys
from absl import app
from absl import flags
from absl import logging
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import...
```
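
This snippet is also cut off in the imports. A hedged sketch of how such a LIT launcher script usually ends; the model and dataset objects are placeholders for whatever the full issue defines:

```
from absl import app
from lit_nlp import dev_server
from lit_nlp import server_flags

def main(_):
    # Placeholders: a lit_nlp.api.model.Model subclass and a dataset such as
    # the MyTextDataset sketch above would be constructed here.
    models = {"my_model": ...}
    datasets = {"my_data": ...}
    lit_demo = dev_server.Server(models, datasets, **server_flags.get_flags())
    return lit_demo.serve()

if __name__ == "__main__":
    app.run(main)
```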

```
# Install LIT and transformers packages. The transformers package is needed by the model and dataset we are using.
# Replace tensorflow-datasets with the nightly package to get up-to-date dataset...
```
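
The commands those comments introduce are truncated; in the public LIT Colab quickstart they look roughly like this (exact package pins are an assumption):

```
!pip install lit_nlp transformers
# Swap tensorflow-datasets for the nightly build to get up-to-date dataset metadata.
!pip uninstall -y tensorflow-datasets
!pip install tfds-nightly
```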

I am using this code to load the model and the tokenizer:
```
tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier", use_fast=False)
model = tr.BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_DATEBERT_tokenclassifier")
```
I have a list of text: `examples=['Texas DRIVER LICENSE...
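
The example list is truncated; below is a hedged sketch of how inference could continue from the `tokenizer`, `model`, and `examples` defined above (the batching and label decoding are assumptions, not the issue's original code):

```
import torch

inputs = tokenizer(examples, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)
for i, text in enumerate(examples):
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][i].tolist())
    labels = [model.config.id2label[int(p)] for p in pred_ids[i]]
    print(text)
    print(list(zip(tokens, labels)))
```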