Send 1024 limit
Hello, overall it's a very nice program and I'm grateful for it. However, why is it artificially limited to 1024 bytes for the send command? This is inconvenient for the user to work around, and the -limit option does not work with -file since it takes on a different meaning in that case. In addition, according to the manual, -limit can only be set in the 1-1024 range, which is not very reasonable. A simple vector data type could be used...
The 1024 send limit is a design choice.
The main purpose of Expect-like tools is to simulate a user's manual interaction with CLI programs. I don't want users to assume it's safe to dump a large amount of data into the spawned program in one go, which may actually cause problems in some cases. And it's quite easy to split a big chunk of data into small pieces. For example, in Bash:
$ s=0123456789abcdef
$ batsize=5
$ loop=$(( (${#s} + batsize - 1) / batsize ))   # number of batches (unused below)
$ for ((i = 0; i < ${#s}; i += batsize)); do echo ${s:i:batsize}; done
01234
56789
abcde
f
$
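The same splitting idea can drive sexpect directly. A rough sketch (the actual `sexpect send` call is left commented out so the snippet runs without a live session; instead it just prints each chunk's size):

```shell
#!/usr/bin/env bash
# Split a large payload into pieces no bigger than the send limit and
# send each piece separately.
payload=$(printf 'x%.0s' {1..2500})   # example 2500-byte payload
limit=1024
for ((i = 0; i < ${#payload}; i += limit)); do
    chunk=${payload:i:limit}
    # sexpect send -- "$chunk"   # the real call; needs a spawned session
    echo "${#chunk}"             # placeholder so the sketch is runnable
done
```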
The -fd FD and -file FILE options are intended for reading sensitive data (like passwords) from files rather than putting it directly on the command line, where it would be visible to, for example, the ps command.
Anyway, what's your use case for sending more than 1024 chars in one go?
My specific use case is to pump input into llama-cli, which is used for LLM inference. The issue with llama-cli is that it doesn't support loading data other than the initial system prompt. Using the system prompt is not viable since that would cause the prompt cache to be invalidated on every run. Although I believe the caching is a lot smarter now, I think it's still inconvenient. And there could be an issue of the system prompt getting ejected from the model's context once it is maxed out, despite the flag to keep it.
I believe there can be many other use cases where an essentially unlimited size is required, not just llama-cli.
Also, if you're not interested in implementing dynamic memory management, another solution would be to just write in 1024-byte chunks instead of giving up, but doing that in the C code, not in Bash. That way it is transparent to the user. A vector-based approach would still yield the best performance, though, since it would take just one system call to pty_write or whatever that function is.
I'll take a look at it.
Findings/notes:
- On macOS, `write(pty_fd, buf, len)` in non-blocking mode always returns 1022 if `len > 1022`. It seems the internal buffer is limited to 1024 bytes. This needs to be handled with a retry, plus a timeout, or the server side would block here.
  - This indicates a defect in the current implementation when sending > 1022 chars.
  - Created #46 for this.
- How to tell the client how much data was successfully written and how much is left?
  - Introduce a new errcode for partial writes? Or just output the leftover data to stdout so users can write something like this?
        data=...
        while [[ -n $data ]]; do
            # WARNING: you may lose the trailing \n chars here.
            data=$( sexpect s -- "$data" )
        done
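Whichever reporting mechanism is chosen, the server-side fix boils down to retrying the leftover data until everything is accepted or a deadline passes. A rough sketch in shell (partial_write is a hypothetical stand-in simulating the observed macOS behavior of at most 1022 bytes accepted per non-blocking write; in sexpect this would be the real write on the pty fd):

```shell
#!/usr/bin/env bash
# Sketch: retry partial writes until all data is accepted or a deadline
# passes. partial_write simulates a write that accepts at most 1022
# bytes per call and prints how many bytes were "accepted".
partial_write() {
    local data=$1 cap=1022
    if (( ${#data} > cap )); then echo "$cap"; else echo "${#data}"; fi
}
write_all() {
    local data=$1 timeout=${2:-5}
    local deadline=$(( SECONDS + timeout )) n
    while [[ -n $data ]]; do
        (( SECONDS >= deadline )) && return 1   # give up: timed out
        n=$(partial_write "$data")              # bytes accepted this round
        data=${data:n}                          # retry with the leftover
    done
}
write_all "$(printf 'y%.0s' {1..3000})" && echo all-sent
```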