
"ptrace: Operation not permitted." when I attempt to follow the first steps of playing

InconsolableCellist opened this issue 1 year ago · 3 comments

I'm following these steps from the README.md:


Once you get the game running, try ordering an ale from the bartender:

  1. Up to move close to the bartender
  2. 1 to equip pence
  3. g to give the pence

Depending on the reaction, ask for an ale:

  1. t to talk
  2. type "one ale please" Enter

However, on step 3, when I press g, the game pauses for a moment then outputs the following:

You equipped pence
[    ===========     ]                  
ptrace: Operation not permitted.
No stack.
The program is not being run.

I don't have more information at the moment. Is there a way I can debug this further?

I've updated llama.cpp to 5f6e0c0d and run make again.
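For reference, "ptrace: Operation not permitted." followed by "No stack." and "The program is not being run." is what gdb prints when it fails to attach to a process. On Linux, one common cause is the Yama `ptrace_scope` hardening setting, which can be checked from Python (a minimal sketch; the game itself is not assumed to do this):

```python
from pathlib import Path

def ptrace_scope():
    """Read the Yama ptrace restriction level, or None if Yama is absent.

    0 = unrestricted, 1 = parent-only attach (the common distro default),
    2 = admin-only, 3 = attaching disabled entirely.
    """
    p = Path("/proc/sys/kernel/yama/ptrace_scope")
    return int(p.read_text()) if p.exists() else None
```

A value of 1 or higher would explain gdb failing to attach to an already-running process it did not start.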

InconsolableCellist avatar Dec 05 '23 22:12 InconsolableCellist

Not immediately obvious what's going on. How are you invoking game.py? By any chance are you using a debugger? The error messages make it seem like gdb or similar is involved.

In general, for issues originating from llama.cpp, error output might show up in the curses interface or after quitting the game. If not, it might require modifying InferenceProcess to send stderr to a file (stderr=open('main.log', 'w') perhaps?) and/or enabling llama.cpp's logging feature (may want that anyway).

There is also a wackier possibility, which is that the model can dream up errors (internally, it thinks it's running a Python REPL). This has happened to me, although the presence of the loading bar in your output makes that somewhat less likely.

ejones avatar Dec 06 '23 02:12 ejones

Hey, I was looking into #4 and I discovered #4376, which makes me think this might be a recent regression in llama grammars. And in fact it appears to have been introduced in 5f6e0c0d. I've more or less reproduced the issue on master^ and it appears to be resolved on master. Do you mind updating llama.cpp and trying again?

ejones avatar Dec 10 '23 03:12 ejones

I can give this a try again soon. I had updated my NVIDIA drivers and llama.cpp and started getting kernel panics due to a segfault in libc, related to code nvcc was compiling with cuBLAS. I was hoping that letting some time pass and pulling llama.cpp again would fix it.

InconsolableCellist avatar Dec 12 '23 00:12 InconsolableCellist