MagicSource

Results: 1296 comments by MagicSource

error: `-Csplit-debuginfo=unpacked` is unstable on this platform. How can I disable this?
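For reference, a minimal sketch of one way to pin the setting, assuming the flag is being injected through build config rather than hard-coded: Cargo exposes a stable `split-debuginfo` profile option, and choosing a mode supported on the platform avoids the unstable `unpacked` value.

```toml
# Cargo.toml (sketch): pin a supported debuginfo mode for dev builds
# so rustc is not passed the unstable unpacked variant on this platform
[profile.dev]
split-debuginfo = "packed"
```

Alternatively, clearing a stray `RUSTFLAGS="-Csplit-debuginfo=unpacked"` from the environment or `.cargo/config.toml` has the same effect.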

@YugiFanGX Thank you. Would it be possible to add a passcode on YouTube so this file can be viewed?

@randaller Hello, your code actually doesn't work; I tried it on 7B and it didn't work as expected.

@randaller It is indeed your repo, but it does **NOT** work. Please try it yourself; your code doesn't even have a print statement, so how could it work? https://github.com/randaller/llama-chat/blob/8178f70fc21790bfe3ef2837b5a973e2c93e5b89/example-chat.py#L111

@ggerganov It looks very nice: it runs on CPU but still gives reasonable speed. I can run 13B even on my PC. What do you think the inference speed on a...

@DongqiShen That could be a possible reason. But why does the tokenizer vary so much? LLaMA also includes Chinese tokens, and even more multilingual tokens.

@DongqiShen The comparison is unfair: LLaMA is not trained specifically for chat, so Alpaca might be a better fit for chat. My point is not about the performance, though. The 6b is...

The loss of 0 does not come from int8 but from the Hugging Face default of fp16 being set to True. However, if you use int8 training on a V100, it is extremely slow, both caused by hf...
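For context, half precision flushes sufficiently small values to zero, which is one plausible mechanism for the loss-0 symptom when fp16 is enabled. A stdlib-only sketch (the `to_fp16` helper is hypothetical, purely for illustration):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a Python float through IEEE 754 half precision."""
    return struct.unpack('e', struct.pack('e', x))[0]

# values below half precision's smallest subnormal (~6e-8) underflow to 0
print(to_fp16(1e-8))  # → 0.0
print(to_fp16(0.1))   # → 0.0999755859375 (representable, slightly rounded)
```

The same underflow (plus overflow-driven NaN/inf handling in mixed-precision loss scaling) can make a reported training loss collapse to 0 even though the int8 weights themselves are fine.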