gpt-tokenizer
Avoid expensive initialization
Hello,
I'm using the following:
```ts
import { encode, isWithinTokenLimit } from 'gpt-tokenizer/model/text-davinci-003';
```
which seems to slow down initialization enough that I can't deploy this library to Cloudflare Workers. Is there a way to lazily initialize things?
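For context, here's a rough sketch of the kind of lazy initialization I'm hoping for, written as a Cloudflare Worker in module syntax. This is just an illustration, not working code: it assumes the bundler and Workers runtime can handle a dynamic `import()` of this module, and the `getTokenizer` helper, handler shape, and token limit are all made up for the example.

```ts
// Sketch only: defer loading the tokenizer to first use via dynamic import(),
// so top-level module evaluation at worker startup stays cheap.
let tokenizerPromise:
  | Promise<typeof import('gpt-tokenizer/model/text-davinci-003')>
  | null = null;

function getTokenizer() {
  // Kick off the import once, on first call, and reuse the promise afterwards.
  if (!tokenizerPromise) {
    tokenizerPromise = import('gpt-tokenizer/model/text-davinci-003');
  }
  return tokenizerPromise;
}

export default {
  async fetch(request: Request): Promise<Response> {
    // Pay the initialization cost inside the request handler, not at startup.
    const { isWithinTokenLimit } = await getTokenizer();
    const text = await request.text();
    // isWithinTokenLimit returns the token count when within the limit, else false.
    const withinLimit = isWithinTokenLimit(text, 4096);
    return new Response(withinLimit === false ? 'too long' : 'within limit');
  },
};
```

The point is just to move the expensive part out of the module's top level, so it isn't paid during the worker's startup phase before the first request arrives.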