
[Feature Request]: Token compression using GPT-3.5-turbo

Open · ohdearquant opened this issue 1 year ago · 3 comments

Is your feature request related to a problem? Please describe.

Local models are too slow at compressing tokens and cannot keep up with larger datasets.
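For reference, the local-model path being described is roughly the following sketch, assuming the `PromptCompressor` interface as documented in the LLMLingua README (model choice and parameters here are illustrative):

```python
from llmlingua import PromptCompressor

# Loads a local causal LM from Hugging Face; on CPU or a small GPU this
# step and each compress_prompt call can be slow for large datasets.
llm_lingua = PromptCompressor()

compressed = llm_lingua.compress_prompt(
    prompt,              # the long prompt/context to compress
    instruction="",
    question="",
    target_token=200,    # rough target length after compression
)
```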

Describe the solution you'd like

Instead of relying only on local Hugging Face models, provide API-backed models (e.g., GPT-3.5-turbo) for compression.
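A minimal sketch of what an API-backed path could look like, assuming the OpenAI chat completions client and a simple instruction-based compression prompt; `compress_via_api` is a hypothetical helper, and this is not LLMLingua's perplexity-based method:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def compress_via_api(prompt: str, target_ratio: float = 0.5) -> str:
    """Ask gpt-3.5-turbo to shorten a prompt to roughly target_ratio of its length."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    f"Compress the user's text to about {int(target_ratio * 100)}% "
                    "of its length while keeping all facts, entities, and "
                    "instructions intact."
                ),
            },
            {"role": "user", "content": prompt},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

# Example: compressed = compress_via_api(long_prompt, target_ratio=0.3)
```

Note that LLMLingua's token-level compression relies on per-token log-probabilities from a local model, so a faithful API-backed compressor would need more than a plain chat call; the sketch above only illustrates the requested interface.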

Additional context

No response

ohdearquant · Mar 06 '24 16:03

Hi @ohdearquant, thank you for your suggestion. We plan to support API-based models as compressors.

Related issue: #44.

iofu728 · Mar 07 '24 08:03

@iofu728, is there any timeline for when this will land?

younes-io · Mar 10 '24 11:03

Hi @younes-io, there are still some blocking issues that need to be resolved. Once they are addressed, we will promptly support the corresponding feature.

iofu728 · Mar 11 '24 12:03