
How to calculate the number of tokens

Open gitcyh opened this issue 2 years ago • 15 comments

This model's maximum context length is 4097 tokens. However, you requested 4497 tokens (497 in the messages, 4000 in the completion). Please reduce the length of the messages or completion.

gitcyh avatar Jun 26 '23 09:06 gitcyh

https://platform.openai.com/tokenizer

cryptoapebot avatar Jun 26 '23 14:06 cryptoapebot

I am facing a similar issue: for some reason I count 10 tokens, but the API says I am sending 18.

If I send an empty payload (payload = ""), it says I'm sending 8 tokens:

ChatCompletionRequest
    .builder()
    .model("gpt-3.5-turbo")
    .messages(listOf(ChatMessage(ChatMessageRole.USER.value(), payload)))
    .maxTokens(4097) // requests the full context window for the completion, leaving no room for the prompt
    .n(1)
    .build()

error message:

com.theokanning.openai.OpenAiHttpException: This model's maximum context length is 4097 tokens. However, you requested 4105 tokens (8 in the messages, 4097 in the completion). Please reduce the length of the messages or completion.

sombriks avatar Jun 26 '23 20:06 sombriks

Just to be clear, they split words into sub-word pieces, so you might just try counting syllables and you'll usually be near enough, but under: https://pypi.org/project/syllables/

Of course, don't forget this recommended local package: https://github.com/openai/tiktoken
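
For a quick local count from Java, something like this minimal sketch works with a Java implementation of the same BPE encoding (assuming the com.didalgo gpt3-tokenizer artifact is on the classpath):

import com.didalgo.gpt3.Encoding;
import com.didalgo.gpt3.GPT3Tokenizer;

public class LocalTokenCount {
    public static void main(String[] args) {
        // encode the raw text with the model's encoding and count the tokens
        GPT3Tokenizer tokenizer = new GPT3Tokenizer(Encoding.forModel("gpt-3.5-turbo"));
        int count = tokenizer.encode("Hello there!").size();
        System.out.println(count); // 3 -- raw text tokens only; the chat API adds per-message overhead
    }
}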

Also be aware, the max for GPT-3.5-turbo is 4096 tokens, and you set 4097 for some reason.

cryptoapebot avatar Jun 26 '23 22:06 cryptoapebot

P.S. Post the actual text you are sending here, including every ChatMessageRole: USER, SYSTEM, and ASSISTANT.

cryptoapebot avatar Jun 26 '23 22:06 cryptoapebot

But the token count I calculated is 232, which is still far below 4096, so why do I get this error? I also switched to the gpt-3.5-turbo-16k-0613 model and it still reports the same error.

gitcyh avatar Jun 27 '23 07:06 gitcyh

Again, post the whole text you are sending here including any USER, SYSTEM, or ASSISTANT text and stop words so it can be tested.

cryptoapebot avatar Jun 27 '23 12:06 cryptoapebot

@gitcyh @cryptoapebot this small project has one test case showing how I am using the prompt, and the difference between the local token count and the reported token count: https://github.com/sombriks/aitokens

I can add more scenarios (or PR me more scenarios!) to either clarify my usage or correct it.

Please take a look when possible. I expected to be able to count the tokens I create properly, since the tokens the model produces are mostly nondeterministic.

sombriks avatar Jun 27 '23 14:06 sombriks

Okay, here's my guess at where that number is coming from.

In the OpenAI documentation they say prompt tokens + result tokens cannot be more than max tokens.

I stuck "Hello there!" into the tokenizer and got 3 tokens (just like you). I took the default response from chat.openai.com and plugged the prompt and response into the tokenizer and got 13 tokens. I am using GPT-4 and cant change my default chat to GPT3.5-turbo, so I am assuming that is the difference between your result, 11 tokens, and mine, 13.

I'm guessing it's a definition problem: their definition of what max_tokens means is different from ours. I tried the same thing with a different command, asking it to count down backwards from 10.
Tokens: 56 Characters: 162

Countdown backwards from 10. 
Sure, here's the countdown starting from 10:

10
9
8
7
6
5
4
3
2
1

I hope that helps! Is there anything else I can assist you with?

[screenshot: tokenizer output from the default chat platform]

cryptoapebot avatar Jun 27 '23 14:06 cryptoapebot

@cryptoapebot so you mean there is nothing wrong with our approach to counting tokens, but we should always take those extra tokens into account?

I could call them "margin tokens" or something.

One minor question: should openai-java offer a call to count tokens, or is using other libraries good enough?

If this token-counting difference exists and can't be helped, at least one docs section should be present to clarify it.

sombriks avatar Jun 27 '23 15:06 sombriks

It looks like tokens.size() only counts the raw prompt text tokens, while result.getUsage().getPromptTokens() reports what the API actually counted for the prompt, including the chat-format wrapper tokens.

There is no way to predict how many tokens will come back other than capping them with max_tokens <= 4096 (or less).
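
For what it's worth, the gap has a consistent explanation: OpenAI's cookbook documents that every chat message is wrapped in a few extra tokens, and every reply is primed with a few more. A minimal sketch of that accounting, reusing the GPT3Tokenizer API from the sketch above (the constants are the cookbook's values for gpt-3.5-turbo-0301 and may differ for other snapshots):

import com.didalgo.gpt3.Encoding;
import com.didalgo.gpt3.GPT3Tokenizer;
import java.util.List;
import java.util.Map;

public class ChatTokenCount {
    // cookbook values for gpt-3.5-turbo-0301; other snapshots may use different overheads
    static final int TOKENS_PER_MESSAGE = 4; // <|start|>{role}\n{content}<|end|>\n
    static final int REPLY_PRIMING = 3;      // every reply is primed with <|start|>assistant<|message|>

    static int countPromptTokens(GPT3Tokenizer tokenizer, List<Map<String, String>> messages) {
        int total = REPLY_PRIMING;
        for (Map<String, String> message : messages) {
            total += TOKENS_PER_MESSAGE;
            for (String value : message.values()) { // both the role name and the content are encoded
                total += tokenizer.encode(value).size();
            }
        }
        return total;
    }

    public static void main(String[] args) {
        GPT3Tokenizer tokenizer = new GPT3Tokenizer(Encoding.forModel("gpt-3.5-turbo"));
        var messages = List.of(Map.of("role", "user", "content", "Hello there!"));
        // "Hello there!" is 3 content tokens; with the overhead this yields 11,
        // matching the API's "11 in the messages" (and an empty message yields 8)
        System.out.println(countPromptTokens(tokenizer, messages));
    }
}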

cryptoapebot avatar Jun 27 '23 16:06 cryptoapebot

@cryptoapebot that's the issue: by setting max_tokens we get the original error reported by @gitcyh.

I hope to be able to count prompt tokens BEFORE submitting them to the OpenAI API.

However, the API responds with a different value for prompt tokens than the one we calculated before the submit.

My question is whether it's a bug or something else I am missing.

package sample.issue.tokens;

import com.didalgo.gpt3.Encoding;
import com.didalgo.gpt3.GPT3Tokenizer;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

public class CountAndAskTest {

    private final String OPENAI_MODEL = System.getenv("OPENAI_MODEL");
    private CountAndAsk service = new CountAndAsk();

    @Test
    public void shouldReturnSameTokenCount() {

        String hello = "Hello there!";

        // count the prompt tokens locally before the call
        GPT3Tokenizer tokenizer = new GPT3Tokenizer(Encoding.forModel(OPENAI_MODEL));
        var tokens = tokenizer.encode(hello);

        // ask the API and compare its reported prompt token count with the local count
        var result = service.ask(hello, tokens.size());
        Assertions.assertNotNull(result);
        Assertions.assertEquals(tokens.size(), result.getUsage().getPromptTokens());
    }
}

I've updated the sample project to set maxTokens; this second test fails with the stack trace below:


com.theokanning.openai.OpenAiHttpException: This model's maximum context length is 4097 tokens. However, you requested 4105 tokens (11 in the messages, 4094 in the completion). Please reduce the length of the messages or completion.

	at com.theokanning.openai.service.OpenAiService.execute(OpenAiService.java:326)
	at com.theokanning.openai.service.OpenAiService.createChatCompletion(OpenAiService.java:131)
	at sample.issue.tokens.CountAndAsk.ask(CountAndAsk.java:26)
	at sample.issue.tokens.CountAndAskTest.throwsErrorDueToCountTokens(CountAndAskTest.java:34)
	at java.base/jdk.internal.reflect.DirectMethodHandleAccessor.invoke(DirectMethodHandleAccessor.java:104)
	at java.base/java.lang.reflect.Method.invoke(Method.java:578)
	at org.junit.platform.commons.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:727)
	at org.junit.jupiter.engine.execution.MethodInvocation.proceed(MethodInvocation.java:60)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$ValidatingInvocation.proceed(InvocationInterceptorChain.java:131)
	at org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:156)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:147)
	at org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:86)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(InterceptingExecutableInvoker.java:103)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.lambda$invoke$0(InterceptingExecutableInvoker.java:93)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
	at org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:92)
	at org.junit.jupiter.engine.execution.InterceptingExecutableInvoker.invoke(InterceptingExecutableInvoker.java:86)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:217)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:213)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:138)
	at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:68)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:147)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:127)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:90)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:55)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:102)
	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:54)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
	at org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
	at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
	at com.intellij.junit5.JUnit5IdeaTestRunner.startRunnerWithArgs(JUnit5IdeaTestRunner.java:57)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater$1.execute(IdeaTestRunner.java:38)
	at com.intellij.rt.execution.junit.TestsRepeater.repeat(TestsRepeater.java:11)
	at com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:35)
	at com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:232)
	at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:55)
Caused by: retrofit2.adapter.rxjava2.HttpException: HTTP 400 
	at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:57)
	at retrofit2.adapter.rxjava2.BodyObservable$BodyObserver.onNext(BodyObservable.java:38)
	at retrofit2.adapter.rxjava2.CallExecuteObservable.subscribeActual(CallExecuteObservable.java:48)
	at io.reactivex.Observable.subscribe(Observable.java:10151)
	at retrofit2.adapter.rxjava2.BodyObservable.subscribeActual(BodyObservable.java:35)
	at io.reactivex.Observable.subscribe(Observable.java:10151)
	at io.reactivex.internal.operators.observable.ObservableSingleSingle.subscribeActual(ObservableSingleSingle.java:35)
	at io.reactivex.Single.subscribe(Single.java:2517)
	at io.reactivex.Single.blockingGet(Single.java:2001)
	at com.theokanning.openai.service.OpenAiService.execute(OpenAiService.java:317)
	... 71 more


sombriks avatar Jun 27 '23 16:06 sombriks

Yeah, maxTokens must be <= 4096 (4097 is not included).

Try just setting .maxTokens(4096), or else make it dynamic: 4096 - prompt_tokens.
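
Something like this sketch, building on the request from earlier in the thread (the class and method names are mine, CONTEXT_LIMIT comes from the error message, and promptTokens must be the count as the API sees it, i.e. including the chat-format overhead):

import com.theokanning.openai.completion.chat.ChatCompletionRequest;
import com.theokanning.openai.completion.chat.ChatMessage;
import com.theokanning.openai.completion.chat.ChatMessageRole;
import java.util.List;

public class DynamicMaxTokens {
    // 4097 is the context limit reported by the error message for gpt-3.5-turbo
    static final int CONTEXT_LIMIT = 4097;

    static ChatCompletionRequest buildRequest(String payload, int promptTokens) {
        return ChatCompletionRequest.builder()
                .model("gpt-3.5-turbo")
                .messages(List.of(new ChatMessage(ChatMessageRole.USER.value(), payload)))
                // leave exactly the remaining room in the context window for the completion
                .maxTokens(CONTEXT_LIMIT - promptTokens)
                .n(1)
                .build();
    }
}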

cryptoapebot avatar Jun 27 '23 17:06 cryptoapebot

Already tried, similar error.

sombriks avatar Jun 27 '23 17:06 sombriks

Again, post the whole text you are sending here, including any USER, SYSTEM, or ASSISTANT text and stop words, so it can be tested.

public Boolean isPutAds(HhAdxPlanInfoVo planInfoVo) {
    String logBy = this.getClass().getName() + ".isPutAds:{}:{}:{}:{}";
    if (null == planInfoVo) {
        return false;
    }
    String planId = String.valueOf(planInfoVo.getId());
    String planName = planInfoVo.getPlanName();
    if (planInfoVo.getStatus() != AdxSwitch.PLAN_RELEASE_STATUS_OPEN) {
        log.info(logBy, AdxSourceInfo.PUT_LOGO, LogTips.ADS_PLAN_STATUS_NOT_OPEN, planId, planName);
        return false;
    }
    // is the plan inside its scheduled delivery time window?
    if (!isPutTimePeriod(planInfoVo.getDateTimeMangagerInfoVos())) {
        log.info(logBy, AdxSourceInfo.PUT_LOGO, LogTips.ADS_NOT_IN_TIME_PERIOD, planId, planName);
        return false;
    }

    // is the plan over budget?
    if (isOverBudget(planInfoVo)) {
        return false;
    }

    return true;
}

Please help me optimize the above code.

gitcyh avatar Jun 28 '23 01:06 gitcyh

The issue seems to be in the API itself: https://community.openai.com/t/tokenizer-and-playground-calculated-a-mismatch-between-the-number-of-tokens-and-the-bill-for-text-davinc-003/70624

sombriks avatar Jun 29 '23 13:06 sombriks