
Random seed possible problems.

Open 0wwafa opened this issue 1 year ago • 17 comments

I ran llama.cpp (latest version) with these parameters:

prompt="""
Tell me a long story.
"""

llama-cli --seed 1721414715 -c 4096 -m /content/$m -t $(nproc) -ngl 999 -p "User: Hi\nBot:Hi\nUser: {prompt}\nBot:"

and in the log I read the seed was: 1721414715

so at the next run I used --seed 1721414715 but the story was a different one.

why?

0wwafa avatar Jul 19 '24 19:07 0wwafa

The second time I ran llama.cpp with the same seed, it told me the same story.

So I don't understand why, when I did not specify the seed, the log showed main: seed = 1721414715,

yet when I entered that seed manually, it told me a different story,

and then, running again with the same seed manually, it told the same story.

I see 2 possibilities:

  1. when not specified, the seed is shown "wrong"
  2. when entered manually the seed is interpreted differently.

0wwafa avatar Jul 19 '24 19:07 0wwafa

The CUDA version introduces some randomness even with the same seed.

Rotatingxenomorph avatar Jul 19 '24 20:07 Rotatingxenomorph

The CUDA version introduces some randomness even with the same seed.

I am using CPU ONLY.

0wwafa avatar Jul 19 '24 21:07 0wwafa

The CUDA version introduces some randomness even with the same seed.

I am using CPU ONLY.

Why the -ngl 999 then?

Rotatingxenomorph avatar Jul 19 '24 21:07 Rotatingxenomorph

I see 2 possibilities:

  1. when not specified, the seed is shown "wrong"
  2. when entered manually the seed is interpreted differently.

This is weird because neither of these possibilities seems to be what's happening, which means it might be hard to debug.

https://github.com/ggerganov/llama.cpp/blob/87e397d00bdcedd5cbf6dfda06a7b0f302462728/examples/main/main.cpp#L188-L194

then run again with the same seed manually, it told the same story.

This rules out non-determinism of the backend.

EDIT: I can also reproduce this problem on my machine (with CPU-only inference). It's a very weird behavior.

compilade avatar Jul 19 '24 21:07 compilade

AHA! The sampling seed in params.sparams.seed is set by --seed, but not when choosing a default seed in main.cpp.

This seems to fix it:

diff --git a/examples/main/main.cpp b/examples/main/main.cpp
index a0d817b1..ceed4ce5 100644
--- a/examples/main/main.cpp
+++ b/examples/main/main.cpp
@@ -187,6 +187,7 @@ int main(int argc, char ** argv) {
 
     if (params.seed == LLAMA_DEFAULT_SEED) {
         params.seed = time(NULL);
+        sparams.seed = params.seed;
     }
 
     LOG_TEE("%s: seed  = %u\n", __func__, params.seed);

I see 2 possibilities:

  1. when not specified, the seed is shown "wrong"
  2. when entered manually the seed is interpreted differently.

It seems like BOTH of these guesses were true after all.

compilade avatar Jul 19 '24 21:07 compilade

The CUDA version introduces some randomness even with the same seed.

The CUDA backend is deterministic in the sense that the same input parameters will produce the same output logits. However, if you use >1 slots or prompt caching on the server, then the input parameters can vary and thus the outputs will vary too.

JohannesGaessler avatar Jul 20 '24 06:07 JohannesGaessler

The CUDA backend is deterministic in the sense that the same input parameters will produce the same output logits. However, if you use >1 slots or prompt caching on the server, then the input parameters can vary and thus the outputs will vary too.

That's good to learn! Thank you.

Rotatingxenomorph avatar Jul 20 '24 08:07 Rotatingxenomorph

@compilade

It seems like BOTH of these guesses were true after all.

:D so what was the seed when not specified? 0?

0wwafa avatar Jul 20 '24 17:07 0wwafa

so what was the seed when not specified? 0?

When not specified, the sampling seed is random.

https://github.com/ggerganov/llama.cpp/blob/22f281aa16f44d8f6ec2c180a0685ff27e04e714/common/sampling.cpp#L82
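
As an illustration, here is a minimal sketch (assumed behavior, not the actual llama.cpp code) of what "random when not specified" means: if the seed is still at its sentinel value, a fresh one is drawn from a non-deterministic source, unrelated to whatever seed was logged elsewhere.

#include <cstdint>
#include <cstdio>
#include <random>

// Hypothetical sentinel meaning "no seed was given" (value assumed for illustration).
constexpr uint32_t DEFAULT_SEED_SENTINEL = 0xFFFFFFFF;

uint32_t resolve_sampling_seed(uint32_t requested) {
    if (requested == DEFAULT_SEED_SENTINEL) {
        // No explicit --seed: draw a fresh random seed.
        return std::random_device{}();
    }
    // An explicit --seed is used as-is.
    return requested;
}

int main() {
    std::printf("sampling seed: %u\n", resolve_sampling_seed(DEFAULT_SEED_SENTINEL));
}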

compilade avatar Jul 21 '24 07:07 compilade

so what was the seed when not specified? 0?

When not specified, the sampling seed is random.

https://github.com/ggerganov/llama.cpp/blob/22f281aa16f44d8f6ec2c180a0685ff27e04e714/common/sampling.cpp#L82

@compilade so... I don't understand: what was happening before? Why didn't the randomly chosen seed that was printed work when I reused it?

AHA! The sampling seed in params.sparams.seed is set by --seed, but not when choosing a default seed in main.cpp.

so why did it work the second time? luck?

0wwafa avatar Jul 21 '24 07:07 0wwafa

@JohannesGaessler

The CUDA backend is deterministic in the sense that the same input parameters will produce the same output logits. However, if you use >1 slots or prompt caching on the server, then the input parameters can vary and thus the outputs will vary too.

I tried to figure out why using >1 slot does not produce deterministic results when doing parallel requests. Do you know why it is not possible to get deterministic output when making parallel requests?

SharifIsmail avatar Jul 24 '24 16:07 SharifIsmail

Because floating point arithmetic is not associative. You only get bit-for-bit identical results if you do the exact same operations in the exact same order. But the whole reason why >1 slots is faster is that you do not do that: the kernels change depending on how many slots are currently in use. Also, the positions of individual sequences within the unified KV cache will be different.
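
For concreteness, a tiny standalone C++ demo (generic, not llama.cpp code) of that order dependence:

#include <cstdio>

int main() {
    float a = 1e8f, b = -1e8f, c = 1.0f;
    float left  = (a + b) + c; // cancellation happens first, then c is added: 1
    float right = a + (b + c); // c is absorbed into b before the cancellation: 0
    std::printf("(a+b)+c = %g, a+(b+c) = %g\n", left, right);
    return 0;
}

The same three values summed in a different order give different results, so any change in reduction order (different kernels, different batch layout, different positions in the KV cache) can shift the logits slightly, and the sampler can then pick a different token.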

JohannesGaessler avatar Jul 24 '24 16:07 JohannesGaessler

I tried to figure out why using >1 slot does not produce deterministic results when doing parallel requests. Do you know why it is not possible to get deterministic output when making parallel requests?

See also https://github.com/ggerganov/whisper.cpp/issues/1941#issuecomment-1986923227.

But when the order is exactly the same, the output between runs can still be exactly the same, even with parallel sequences, as I've seen in https://github.com/ggerganov/llama.cpp/pull/6122#discussion_r1531405574.

compilade avatar Jul 24 '24 16:07 compilade

I see. Thanks @compilade @JohannesGaessler

So, running higher-precision models with a higher-precision KV cache would alleviate this effect, right?

SharifIsmail avatar Jul 24 '24 16:07 SharifIsmail

No, even with 16 bit precision you will still run into this issue because the condition numbers of the weight matrices can be arbitrarily large.
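
To illustrate with a generic example (not tied to any model): higher precision only moves the point where rounding kicks in; the order dependence itself remains.

#include <cstdio>

int main() {
    // Same cancellation pattern as with float, just at a larger magnitude.
    // Even at double precision, the summation order still changes the result.
    double a = 1e16, b = -1e16, c = 1.0;
    std::printf("(a+b)+c = %g, a+(b+c) = %g\n", (a + b) + c, a + (b + c));
    return 0;
}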

JohannesGaessler avatar Jul 24 '24 16:07 JohannesGaessler

I did some quick tests for the sake of curiosity with "Phi-3-mini-4k-instruct-fp16.gguf" vs "Phi-3-mini-4k-instruct-q4.gguf".

Bottom Line: As you stated, JohannesGaessler, both are nondeterministic for the vast majority of cases. Even with cherry-picked settings attempting to minimize non-determinism (i.e., "-b 1 -ub 1 -nocb" with cache_prompt=false), I only managed to get a few prompts on the fp16 model to return deterministic output. I used "-np 10", i.e. 10 slots and 10 parallel requests.

SharifIsmail avatar Jul 24 '24 18:07 SharifIsmail

The CUDA version introduces some randomness even with the same seed.

The CUDA backend is deterministic in the sense that the same input parameters will produce the same output logits. However, if you use >1 slots or prompt caching on the server, then the input parameters can vary and thus the outputs will vary too.

Hey Johannes, are there any test cases for CUDA bit-exact determinism in the project?

yaleeyang avatar Sep 04 '24 05:09 yaleeyang

There are multiple in the server tests. But they're commented out since they're failing on master.

JohannesGaessler avatar Sep 04 '24 08:09 JohannesGaessler

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Oct 19 '24 01:10 github-actions[bot]