[Feature Request] Add token usage budget limit
First of all, thanks for open-sourcing Strix – it’s a very cool project. However, I’m seeing extremely high token usage when running it, especially when using the x.ai API and scanning similar web apps.
What happened
I configured Strix with an LLM provider via STRIX_LLM and LLM_API_KEY (x-ai, pay-as-you-go).
I ran Strix with mostly default settings against a target on X and a few other URLs.
A single run ended up consuming around 120 USD worth of LLM tokens according to my provider dashboard.
I wasn’t expecting this level of cost from a single run, and the CLI gave no clear indication of how many tokens were being spent, nor was any budget or limit enforced.
Hey @bubble666-ai!
Thanks a lot for using Strix and for your feedback!
Strix uses a multi-agent architecture that does a lot of parallel reasoning, planning, and verification to get better coverage, which naturally comes with higher token usage, especially on complex apps and when you scan multiple URLs in a single run. On simpler targets it’s usually much cheaper, but as the attack surface and depth of exploration grow, costs can ramp up quickly, as you saw.

For the next release, we’re planning better cost controls: a hard token/price budget flag (e.g. --max-tokens / --max-cost), warnings as you approach the limit, and real-time reporting of token usage, so spend is more transparent and predictable.
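To make the intended behavior concrete, here is a minimal sketch of how such a budget guard could work, assuming a per-call cost estimate is available from the provider. All names here (`CostBudget`, `BudgetExceeded`, the pricing parameter) are illustrative, not Strix’s actual API:

```python
class BudgetExceeded(RuntimeError):
    """Raised once the configured spend cap is reached (hypothetical)."""


class CostBudget:
    """Tracks cumulative LLM spend against a hard cap, with an early warning."""

    def __init__(self, max_cost_usd: float, warn_ratio: float = 0.8):
        self.max_cost_usd = max_cost_usd
        self.warn_ratio = warn_ratio  # warn once this fraction of the cap is used
        self.spent_usd = 0.0

    def record(self, tokens: int, usd_per_1k_tokens: float) -> None:
        # Accumulate the estimated cost of one LLM call.
        self.spent_usd += tokens / 1000 * usd_per_1k_tokens
        if self.spent_usd >= self.max_cost_usd:
            raise BudgetExceeded(
                f"spent ${self.spent_usd:.2f} >= cap ${self.max_cost_usd:.2f}"
            )
        if self.spent_usd >= self.warn_ratio * self.max_cost_usd:
            print(
                f"warning: ${self.spent_usd:.2f} of "
                f"${self.max_cost_usd:.2f} budget used"
            )
```

The agent loop would call `record()` after every LLM request and abort the run cleanly when `BudgetExceeded` is raised, which is roughly what a `--max-cost` flag would wire up.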