MagicSource

Results: 945 issues by MagicSource

Hi, any plan to support a Python API?

Hello, for llama: when decoding Chinese or Japanese text, a single character might need 2 or more tokens to decode, so when streaming, each chunk returns the decode result of a single token...
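A common client-side workaround for this issue is to buffer the raw token bytes until they form valid UTF-8 before emitting them. The sketch below is illustrative only (the chunk source and function name are assumptions, not any specific library's API): it holds back incomplete multi-byte characters instead of printing replacement glyphs mid-stream.

```python
def stream_utf8(byte_chunks):
    """Yield complete UTF-8 strings, holding back partial characters.

    byte_chunks: an iterable of bytes objects, e.g. per-token byte
    output from a detokenizer that may split multi-byte characters.
    """
    buf = b""
    for chunk in byte_chunks:
        buf += chunk
        try:
            text = buf.decode("utf-8")
        except UnicodeDecodeError:
            # Incomplete multi-byte sequence: wait for the next chunk.
            continue
        buf = b""
        yield text

# "你" is 3 bytes in UTF-8; split across two chunks it still decodes cleanly.
chunks = [b"\xe4\xbd", b"\xa0\xe5\xa5\xbd"]
print("".join(stream_utf8(chunks)))  # 你好
```

A more robust variant would use `codecs.getincrementaldecoder("utf-8")`, which distinguishes truly invalid bytes from merely incomplete sequences.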

``` Sure, why not? What would you like to know? ``` Why can it not stop at the stop token? It prints the stop token out.

Hello, I get a weird issue when serving with ctransformers. Code:

```python
model = AutoModelForCausalLM.from_pretrained(
    args.base_model, **config
)
iterator: Generator = model.generate(gen_kwargs["inputs"])
for chat_chunk in iterator:
    new_text = model.detokenize(chat_chunk)
    print(new_text, ...
```

I send a chat prompt to the LLM (query), but the generated result does not stop. I'm using CodeLlama; is there any chat example to reference?
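When a backend streams past the stop sequence, one option is to cut the output client-side. This is a minimal sketch under assumed inputs (the piece iterator and stop strings are illustrative, not part of any specific API); it holds back a small tail so a stop string split across streamed pieces is still caught before being printed.

```python
def stream_until_stop(pieces, stop_strings):
    """Yield streamed text, truncating at the first stop string.

    Holds back max(len(stop)) - 1 characters so a stop string that
    arrives split across pieces is never partially emitted.
    """
    hold = max(len(s) for s in stop_strings) - 1
    text = ""
    yielded = 0
    for piece in pieces:
        text += piece
        hits = [i for s in stop_strings if (i := text.find(s)) != -1]
        if hits:
            cut = min(hits)
            if cut > yielded:
                yield text[yielded:cut]
            return
        safe = max(yielded, len(text) - hold)
        if safe > yielded:
            yield text[yielded:safe]
            yielded = safe
    if len(text) > yielded:
        yield text[yielded:]

pieces = ["Hello ", "wor", "ld##", "# extra"]
print("".join(stream_until_stop(pieces, ["###"])))  # Hello world
```

For chat models like CodeLlama, the stop strings would typically be the template's turn delimiters (e.g. the instruction tags of the prompt format in use).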

How does the DownSampleBlock's performance compare with CAbstractor's?

Audio has information redundancy compared with images.

MHFormer doesn't seem to provide root translation. BTW, will there be a MetaHuman version?

Eigen is good, but it's hard to maintain (and probably not the fastest option either). Any interest in supporting onnxruntime as an alternative?