llama-journey
"ptrace: Operation not permitted." when I attempt to follow the first steps of playing
I'm following these steps from the README.md:
Once you get the game running, try ordering an ale from the bartender:
- Up to move close to the bartender
- 1 to equip pence
- g to give the pence
Depending on the reaction, ask for an ale:
- t to talk
- type "one ale please" Enter
However, on step 3, when I press g, the game pauses for a moment then outputs the following:
-------------------------------------------------------------------------------------------------------------------------------------
You equipped pence
[ =========== ]
ptrace: Operation not permitted.
No stack.
The program is not being run.
I don't have more information at the moment. Is there a way I can debug this further?
I've updated llama.cpp to 5f6e0c0d and re-run make.
Not immediately obvious what's going on. How are you invoking game.py? By any chance are you using a debugger? The error messages make it seem like gdb or similar is involved.
In general, for issues originating from llama.cpp, error output might show up in the curses interface or after quitting the game. If not, it might require modifying InferenceProcess to send stderr to a file (stderr=open('main.log', 'w') perhaps?) and/or enabling llama.cpp's logging feature (may want that anyway).
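To illustrate the stderr suggestion above: if InferenceProcess wraps the llama.cpp binary with subprocess.Popen (an assumption; I haven't checked the exact implementation), redirecting stderr to a file keeps error output out of the curses screen and makes it inspectable afterwards. The command below is a stand-in that just writes to stderr; the real invocation would be llama.cpp's main binary.

```python
# Hedged sketch: capture a subprocess's stderr in main.log instead of
# letting it fight with the curses interface. The child command here is
# a placeholder that simulates llama.cpp writing an error to stderr.
import subprocess

with open("main.log", "w") as log:
    proc = subprocess.Popen(
        ["python3", "-c",
         "import sys; sys.stderr.write('simulated llama.cpp error\\n')"],
        stdout=subprocess.PIPE,
        stderr=log,  # stderr lands in main.log for later inspection
    )
    proc.wait()

# After the run, main.log holds whatever the child wrote to stderr.
print(open("main.log").read().strip())
```

After quitting the game you could then read main.log to see what llama.cpp actually reported.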
There is also a wackier possibility, which is that the model can dream up errors (internally, it thinks it's running a Python REPL). This has happened to me, although the presence of the loading bar in your output makes that somewhat less likely.
Hey, I was looking into #4 and I discovered #4376, which makes me think this might be a recent regression in llama grammars. And in fact it appears to have been introduced in 5f6e0c0d. I've more or less reproduced the issue on master^ and it appears to be resolved on master. Do you mind updating llama.cpp and trying again?
I can give this a try again soon. I had updated my NVIDIA drivers and llama.cpp and started getting kernel panics due to a segfault in libc, related to what nvcc was compiling with cuBLAS. I was hoping that letting some time pass and pulling llama.cpp again would fix it.