feat(`cast keccak`): add `--file` arg to read data from file
Component
Cast
Have you ensured that all of these are up to date?
- [x] Foundry
- [x] Foundryup
What version of Foundry are you on?
forge Version: 1.2.1-nightly
What version of Foundryup are you on?
foundryup: 1.0.1
What command(s) is the bug in?
cast keccak
Operating System
Linux
Describe the bug
Clone my safe-tx-hashes-util repository, then run:
./safe_hashes.sh --network zksync --address 0x9fb5F754f5222449F98b904a34494cB21AADFdf8 --nonce 12
It will print:
./safe_hashes.sh: line 517: /home/pcaversaccio/.foundry/bin/cast: Argument list too long
Error: Could not ABI encode the function and arguments. Did you pass in the right types?
Error
parser error:
^
expected hex digits or the `0x` prefix for an empty hex string
The line that throws is here:
local data_hashed=$(cast keccak "$data")
The data that is hashed is very long, so I assume there is some `cast keccak` argument size limit issue here; the data is attached as data.txt.
Hmm, I was unable to reproduce this on the latest nightly by running the script with:
bash ./safe_hashes.sh --network zksync --address 0x9fb5F754f5222449F98b904a34494cB21AADFdf8 --nonce 12
I also downloaded data.txt and tested it in isolation:
cast keccak $(cat data.txt)
cast keccak 0x..
The terminal does lag a bit, but it seems to work.
Moreover, the error message displayed is not emitted by `cast keccak`; it is emitted by the `cast abi-encode` command here: https://github.com/pcaversaccio/safe-tx-hashes-util/blob/36109aaeb0c376a9ca8ac9464f136fff5371c6ee/safe_hashes.sh#L527
Are you saying it works properly for you? The `cast abi-encode` error only triggers later because the `cast keccak` call is not working as intended (Argument list too long).
> Are you saying it works for you properly?

Yes, it does.
@pcaversaccio It's related to kernel space. See: https://unix.stackexchange.com/questions/45583/argument-list-too-long-how-do-i-deal-with-it-without-changing-my-command
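For reference, the relevant kernel limit can be inspected directly (a quick sketch; the exact values are system-dependent):

```shell
# ARG_MAX is the kernel's fixed upper bound on the combined size of the argv
# strings plus the environment passed to execve(); it is not governed by the
# shell's ulimit resource limits.
getconf ARG_MAX

# On Linux there is additionally a per-argument cap (MAX_ARG_STRLEN,
# 32 * page size = 131072 bytes on 4 KiB-page systems), which is what a
# single huge hex string like "$data" runs into.
getconf PAGESIZE
```

Note how the 131072-byte figure matches the "Size of command buffer we are actually using" line in the `gxargs --show-limits` output below.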
Can you run ulimit? https://ss64.com/bash/ulimit.html
Mine is:
ulimit
unlimited
Maybe try increasing it
Note: I'm running macOS.
gxargs --show-limits
Your environment variables take up 1784 bytes
POSIX upper limit on argument length (this system): 1044744
POSIX smallest allowable upper limit on argument length (all systems): 4096
Maximum length of command we could actually use: 1042960
Size of command buffer we are actually using: 131072
Maximum parallelism (--max-procs must be no greater): 2147483647
Hmm, this doesn't fix the issue. Based on my understanding after digging into the details, the error is caused by the kernel's fixed limit (ARG_MAX) on the total size of command-line arguments and environment variables. This limit is independent of the shell resource limits that `ulimit` controls, such as stack size or open files (correct me if I am wrong here). The only way to avoid exceeding ARG_MAX is to reduce the argument length, i.e. to pass the large input via stdin instead of as a CLI argument. This is exactly what I did now, and it patches the issue.
Use stdin, as done in https://github.com/pcaversaccio/safe-tx-hashes-util/pull/35
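The ARG_MAX-safe pattern from that PR can be sketched like this. Here `wc -c` is just a stand-in for a tool consuming its input from stdin; whether a given `cast` version accepts stdin input should be verified against its `--help`:

```shell
# Build a payload well above the typical 128 KiB per-argument limit on Linux.
data=$(head -c 300000 /dev/zero | tr '\0' 'a')

# Argument form: the payload travels through execve() and can fail with
# "Argument list too long" once it exceeds the kernel's per-argument cap:
#   some_tool "$data"

# Stdin form: the payload travels through a pipe, so no execve() argument
# limit applies, regardless of size.
printf '%s' "$data" | wc -c
```

The key point is that pipes move data after the process has been spawned, so the kernel's `execve()` limits never see the payload.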