Robert Sinclair
The second time I ran llama.cpp with the same seed, it told me the same story. So I don't understand why, when I did not specify the seed, the log...
> The CUDA version introduces some randomness even with the same seed. I am using CPU ONLY.
@compilade

> It seems like BOTH of these guesses were true after all. :D

So what was the seed when not specified? 0?
> > so what was the seed when not specified? 0?
>
> When not specified, the sampling seed is random.
>
> https://github.com/ggerganov/llama.cpp/blob/22f281aa16f44d8f6ec2c180a0685ff27e04e714/common/sampling.cpp#L82

@compilade so... I don't understand: what...
@ggerganov that would be very useful.
> The `llama_model` interface does not allow modifying tensors. It's a read-only representation of the loaded model.
>
> If you want to modify tensors, either use the `gguf_*` functions provided by...
> What feels notable about this model is that not only are the weights open, but also the [dataset is included](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0).

Yep. But anything else is uninteresting compared to PHI-3...
> but true open source including all aspects is appreciated 👍

I agree.
There is also another minor problem; perhaps I should open a new issue. If you scroll up and then type something, the scroll remains where it is... in conhost the...
> I'm grateful that you're reporting issues, but if you continue filing issues without filling out the bug issue form, I'll start closing them and lock the conversation immediately. >...