
--execution-timeout fails

[Open] Danc2050 opened this issue 5 years ago · 11 comments

Description

Using the --execution-timeout flag fails under large loads and/or on an unknown set of addresses. I am running Mythril on a large set of addresses with an --execution-timeout of 900. However, under top, there are two running instances of Mythril that have been running for 861:55.86+ and 1580:41+ (minutes:seconds.hundredths).

How to Reproduce

I'm unsure how to reproduce this, as I'm unable to determine which addresses the processes failed on, or the true root cause. After I kill the processes there may be a stack trace, but I don't think it will reveal which addresses they failed on.

Expected behavior

Mythril should stop trying to symbolically execute after 900 seconds have elapsed.

Screenshots

execution-timeout_issue

EDIT: I am actually recording what output each address produces (e.g., success, ERROR, CRITICAL, etc.), so I can determine whether this bug is due to a specific contract, the volume of contracts I am scanning, or the method (i.e., multiprocessing with docker run commands) by checking my original file of addresses and seeing which contracts never completed (i.e., never got written to any of the files/"bins" above).

Danc2050 avatar Oct 10 '19 21:10 Danc2050

Is the 900 seconds CPU time? Mythril mostly uses pc time (wall-clock time), but I think z3-solver might be using CPU time, which would make the timeout behave this way. We should look into it. Thanks for the report.

norhh avatar Oct 12 '19 02:10 norhh

@norhh It is CPU time. No problem. I'll reattempt the failing addresses and see if the problem is specific to certain addresses.

Danc2050 avatar Oct 13 '19 01:10 Danc2050

I believe this is actually an issue with Docker, not mythril.

Below is optional reading. I was originally cleaning up old containers with the command docker system prune in a crontab. What I believe happened is that the crontab would run while a Docker container was starting: the container was marked as inactive and scheduled for deletion, so when it did start, it could not run. I still haven't proven this, but I get an error message that seems to hint at this being the issue. Since using the --rm flag to remove each container instead of the crontab, I haven't had any issues with a myth process running past the specified execution-timeout. I'll bring this to the Docker people instead of here.

Danc2050 avatar Nov 05 '19 01:11 Danc2050

Hey, it's me again.

Sorry, I did not know enough about Docker to make such a claim (though finding the --rm flag was useful to me).

The problem still persists. I used the docker inspect command and also looked at the logs for a Docker container that had issues. Here they are, in case they are helpful in triaging this bug. I couldn't find anything that would cause the containers to run without ceasing once the --execution-timeout {s} condition is reached.

edc74e782dc8813f18e0e12062129b2c8e54d132fc4b2ae146075ca22f04a9fd-json.log docker_inspect.log

Danc2050 avatar Nov 09 '19 03:11 Danc2050

@norhh I have a question, if you have the time; I didn't think it deserved opening a new issue.

You said "Mythril mostly uses pc time but I think z3-solver might be using CPU time." I'm still a little confused as to how this works.

So, when I start a Docker container and a myth process starts inside of it, does --execution-timeout {s} count time for the Docker container or for the myth process? And is this time pc time (by which I assume you mean wall-clock seconds) or CPU time (the time the process actually spends on the CPU)?

The reason I ask is that I am running multiple threads, so I am likely not truly getting the {s} seconds of CPU I want my process to have if a) the running Docker container is what is being counted, or b) this time is pc time rather than CPU time.

Danc2050 avatar Nov 21 '19 17:11 Danc2050
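
[Editor's note: a minimal sketch of the distinction being discussed, using only the Python standard library. time.monotonic() measures wall-clock ("pc") time; time.process_time() measures CPU time consumed by this process alone. A timeout checked against the former fires after 900 real seconds regardless of load, while one checked against the latter can take far longer in real time when many processes compete for the CPU. The sleep-based workload below is purely illustrative.]

import time

wall_start = time.monotonic()      # wall-clock ("pc") time
cpu_start = time.process_time()    # CPU time of this process only

# A workload that waits rather than computes: wall time advances,
# CPU time barely moves, much like a starved process on a heavily
# loaded machine.
time.sleep(2)

print(f"wall elapsed: {time.monotonic() - wall_start:.2f}s")    # ~2.00
print(f"cpu elapsed:  {time.process_time() - cpu_start:.2f}s")  # ~0.00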

What happens is that Mythril can't kill the Z3 solver library mid-execution, so we had to set a separate timeout for it via solver.set_timeout(time_left). Mythril's own timing should be accurate inside a Docker container because of the way it's handled, but I am unsure how Z3 handles its timeout (plus its timeout is sometimes inaccurate).

norhh avatar Nov 21 '19 19:11 norhh
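
[Editor's note: for readers unfamiliar with the mechanism described above, here is a minimal sketch using the raw z3-solver Python bindings rather than Mythril's internal wrapper. Z3's timeout parameter is given in milliseconds and is best-effort: the solver only checks it at internal checkpoints, so a query can overrun it, which matches the remark that its timeout is sometimes inaccurate.]

from z3 import Int, Solver, unknown

s = Solver()
s.set("timeout", 5000)  # milliseconds; best-effort, not a hard kill

x, y = Int("x"), Int("y")
# This particular query is decided quickly; the point is the API shape.
s.add(x * x * x + y * y * y == 1729, x > 0, y > 0)

result = s.check()
if result == unknown:
    # The solver gave up (timeout or incompleteness) without answering;
    # callers must handle this case rather than assuming sat/unsat.
    print("gave up:", s.reason_unknown())
else:
    print(result)  # sat: 1729 = 1^3 + 12^3 = 9^3 + 10^3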

Okay, thank you for explaining that.

More on this issue:

It seems to coincide with contracts whose bytecode or recursion is huge. For example, I went from 11GB of used swap space holding all the contracts I was operating on (plus this one) down to 3.7GB when I forcibly stopped the container (note: this has happened more than once). Perhaps Mythril can never page the bytecode into RAM quickly enough to actually work on it while my threads compete, and perhaps that is preventing the timer from counting down.

This is the output of forcibly stopping the Docker container (notice the weird 35072 at the beginning of the first line instead of the canonical 0, which is probably a result of me forcibly stopping it):

35072mythril.mythril.mythril_config [INFO]: Creating mythril data directory
mythril.mythril.mythril_config [INFO]: No config file found. Creating default: /root/.mythril/config.ini
mythril.mythril.mythril_config [INFO]: Using RPC settings: ('mainnet.infura.io', 443, True)
mythril.support.signatures [INFO]: Using signature database at /root/.mythril/signatures.db
mythril.analysis.security [INFO]: Found 0 detection modules
mythril.laser.ethereum.svm [INFO]: LASER EVM initialized with dynamic loader: <mythril.support.loader.DynLoader object at 0x7f20ce20e8d0>
mythril.laser.ethereum.strategy.extensions.bounded_loops [INFO]: Loaded search strategy extension: Loop bounds (limit = 3)
mythril.laser.ethereum.plugins.plugin_loader [INFO]: Loading plugin: <mythril.laser.ethereum.plugins.implementations.mutation_pruner.MutationPruner object at 0x7f20ce2460b8>
mythril.laser.ethereum.plugins.plugin_loader [INFO]: Loading plugin: <mythril.laser.ethereum.plugins.implementations.coverage.coverage_plugin.InstructionCoveragePlugin object at 0x7f20ce2463c8>
mythril.laser.ethereum.plugins.plugin_loader [INFO]: Loading plugin: <mythril.laser.ethereum.plugins.implementations.dependency_pruner.DependencyPruner object at 0x7f20ce246358>
mythril.analysis.security [INFO]: Found 14 detection modules
mythril.analysis.security [INFO]: Found 14 detection modules
mythril.laser.ethereum.svm [INFO]: Starting message call transaction to 1227051932991841386785447176515203176073149126974
mythril.laser.ethereum.svm [INFO]: Starting message call transaction, iteration: 0, 1 initial states

Danc2050 avatar Dec 22 '19 02:12 Danc2050
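
[Editor's note: the swap numbers above point at memory pressure as much as at the timer. If that hypothesis holds, one blunt mitigation is to cap the analysis process's address space so a runaway run fails fast with MemoryError instead of dragging the host into swap. A Linux-only sketch; the 4 GiB figure is an arbitrary illustration, and docker run's --memory flag achieves the same thing from outside the container.]

import resource

# Linux-only: cap this process's virtual address space at 4 GiB
# (an arbitrary illustrative limit). Allocations beyond the cap
# raise MemoryError instead of pushing the machine into swap.
limit_bytes = 4 * 1024 ** 3
_, hard = resource.getrlimit(resource.RLIMIT_AS)
resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, hard))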

😭😢 This happened when I made a withdrawal request from Binance and they said the coins left the Binance wallet, but I didn't get the USDT. Please help me if you have information. Thank you. Tell me how to solve it 😢😭

chetansaini1234 avatar Feb 14 '21 17:02 chetansaini1234

@chetansaini1234 can you elaborate on the issue?

norhh avatar Feb 15 '21 06:02 norhh

Is there any solution to this? I'm using Mythril to scan a large number of contracts using ThreadPoolExecutor, and even after specifying --execution-timeout 600, some processes just keep running for hours. I wonder if I did something wrong, or are there other options I can use to make the analysis stop early? Thx

ghost avatar Feb 24 '23 12:02 ghost
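
[Editor's note: while waiting on a fix inside Mythril, a hard wall-clock cap can be enforced entirely from the outside. Below is a sketch assuming the scans are launched as docker run commands, as earlier in the thread; subprocess.run's timeout counts wall-clock seconds and kills the docker client when it expires, after which the container itself is force-removed. The image name, flag spellings, timeout values, and address list are assumptions to adjust for your setup.]

import subprocess
from concurrent.futures import ThreadPoolExecutor

ADDRESSES = ["0x..."]  # placeholder: your list of contract addresses

def scan(address: str) -> None:
    name = f"myth-{address[:10]}"  # stable name so we can remove it later
    cmd = [
        "docker", "run", "--rm", "--name", name,
        "mythril/myth",  # assumed image; adjust to your environment
        "analyze", "-a", address, "--execution-timeout", "600",
    ]
    try:
        # timeout= is wall-clock: it fires after 700 real seconds no
        # matter how little CPU time the starved container received.
        subprocess.run(cmd, timeout=700, check=False)
    except subprocess.TimeoutExpired:
        # Killing the docker client does not stop the container, so
        # remove it explicitly.
        subprocess.run(["docker", "rm", "-f", name], check=False)

with ThreadPoolExecutor(max_workers=4) as pool:
    pool.map(scan, ADDRESSES)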

@yuweb3 can you check now? It should likely work with v0.23.20. If it doesn't, I'll SIGKILL the solver thread.

norhh avatar Apr 20 '23 01:04 norhh