aligned_layer

bug: lack of minimum max_fee can lead to DoS for the batcher

Oppen opened this issue 11 months ago • 0 comments

Reported by cantina#66.

Transcript from the report:

Description

When submitting a proof to the batcher, users specify the max_fee they are willing to pay for the proof to be included in the next batch. The issue is that an attacker can specify 0 as max_fee and submit an arbitrary number of large VerificationData messages (limited only by the tungstenite-rs default configuration of 64 MiB per message and 16 MiB per frame), which occupy memory in the batcher's queue indefinitely. The attacker only needs to lock 1 wei in the BatcherPaymentService contract and risks almost nothing, because proofs with 0 max_fee will always be discarded in try_build_batch() and will never be submitted.
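
For reference, the 64 MiB per-message and 16 MiB per-frame figures are tungstenite's WebSocketConfig defaults. The following is a minimal sketch of how a server could tighten them when accepting connections, assuming a tokio-tungstenite version where WebSocketConfig's fields are public; the 8 MiB / 2 MiB caps are illustrative values, not anything the batcher actually configures:

    use tokio::net::TcpStream;
    use tokio_tungstenite::tungstenite::protocol::WebSocketConfig;
    use tokio_tungstenite::{accept_async_with_config, WebSocketStream};

    // Accept a websocket connection with tighter size limits than the
    // tungstenite defaults (Some(64 MiB) per message, Some(16 MiB) per frame;
    // `None` disables the cap entirely). The 8 MiB / 2 MiB values below are
    // illustrative, not what the batcher uses.
    async fn accept_with_limits(
        stream: TcpStream,
    ) -> Result<WebSocketStream<TcpStream>, tokio_tungstenite::tungstenite::Error> {
        let mut config = WebSocketConfig::default();
        config.max_message_size = Some(8 * 1024 * 1024);
        config.max_frame_size = Some(2 * 1024 * 1024);
        accept_async_with_config(stream, Some(config)).await
    }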

The root cause is that when a proof is submitted, the user's balance is checked in verify_user_has_enough_balance() by comparing the current user balance in the BatcherPaymentService contract against the max_fee specified in the websocket message plus the accumulated fee, i.e. the sum of the max_fee values of previously submitted proofs (all of which can be 0), instead of comparing it against the minimum fee required for a proof to be submitted on-chain as calculated in calculate_fee_per_proof:

    fn verify_user_has_enough_balance(
        &self,
        user_balance: U256,
        user_accumulated_fee: U256,
        new_msg_max_fee: U256,
    ) -> bool {
        let required_balance: U256 = user_accumulated_fee + new_msg_max_fee;
        user_balance >= required_balance
    }
    fn calculate_fee_per_proof(batch_len: usize, gas_price: U256) -> U256 {
        let gas_per_proof = (crate::CONSTANT_GAS_COST
            + crate::ADDITIONAL_SUBMISSION_GAS_COST_PER_PROOF * batch_len as u128)
            / batch_len as u128;

        U256::from(gas_per_proof) * gas_price
    }
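
To make the mismatch concrete, here is a self-contained sketch that reuses the two functions above with illustrative constant and gas-price values (the real CONSTANT_GAS_COST and ADDITIONAL_SUBMISSION_GAS_COST_PER_PROOF live in the batcher crate): a message with max_fee = 0 passes the balance check against a 1 wei balance no matter how many such messages are already queued, while the fee a proof must actually cover to be batched is strictly positive.

    use ethers::types::U256;

    // Illustrative constants; the real values are defined in the batcher crate.
    const CONSTANT_GAS_COST: u128 = 100_000;
    const ADDITIONAL_SUBMISSION_GAS_COST_PER_PROOF: u128 = 10_000;

    fn verify_user_has_enough_balance(
        user_balance: U256,
        user_accumulated_fee: U256,
        new_msg_max_fee: U256,
    ) -> bool {
        user_balance >= user_accumulated_fee + new_msg_max_fee
    }

    fn calculate_fee_per_proof(batch_len: usize, gas_price: U256) -> U256 {
        let gas_per_proof = (CONSTANT_GAS_COST
            + ADDITIONAL_SUBMISSION_GAS_COST_PER_PROOF * batch_len as u128)
            / batch_len as u128;
        U256::from(gas_per_proof) * gas_price
    }

    fn main() {
        let balance = U256::from(1u64); // attacker locked only 1 wei
        let gas_price = U256::from(1_000_000_000u64); // 1 gwei, illustrative

        // Every 0-fee message passes the balance check, regardless of how many
        // are already queued, because the accumulated fee stays at 0.
        assert!(verify_user_has_enough_balance(balance, U256::zero(), U256::zero()));

        // ...yet the minimum fee a proof must cover to ever be batched is far
        // above 0, so these messages are never submitted and never leave the queue.
        let min_fee = calculate_fee_per_proof(100, gas_price);
        println!("fee per proof for a 100-proof batch: {min_fee} wei");
    }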

This allows a malicious user to submit an unbounded number of large proofs (up to 64 MiB each) to the batcher while depositing only 1 wei in the contract. Those proofs will never be submitted on-chain, but they are never discarded from the batcher queue and will eventually exhaust the batcher's memory. The attacker only has to keep sending a max-sized proof with 0 max_fee in an infinite loop over the websocket connection.

Proof of Concept

To demonstrate the issue we reuse the same 16 MiB payload used in #59, built by inserting junk data into vm_program. We also need to generate a new keystore, because the tests in the Makefile use a non-paying address, and for non-paying addresses max_fee is hardcoded to 0.0013 ETH in the batcher; that hardcoding is only meant for testing purposes, as mentioned in this comment.

Make sure to deposit 1 wei in the batcher contract for the newly created address, as described in 0_submitting_proofs.md, to pass the locking check.

Add the following to the Makefile, making sure to replace $(KEYSTORE_PATH) with the path to the keystore created in the previous step, or supply it as an environment variable:

batcher_dos_fee: batcher/target/release/aligned
	@echo "[+] starting poc"
	@cd batcher/aligned/ && python3 -c 'print("A" * 8387764, end="")' > junk_data && cargo run --release -- submit \
		--proving_system Groth16Bn254 \
		--proof ../../scripts/test_files/gnark_groth16_bn254_infinite_script/infinite_proofs/ineq_1_groth16.proof \
		--vm_program ./junk_data \
		--public_input ../../scripts/test_files/gnark_groth16_bn254_infinite_script/infinite_proofs/ineq_1_groth16.pub \
		--vk ../../scripts/test_files/gnark_groth16_bn254_infinite_script/infinite_proofs/ineq_1_groth16.vk \
		--proof_generator_addr 0x66f9664f97F2b50F62D13eA064982f936dE76657 \
		--rpc_url $(RPC_URL) \
		--network $(NETWORK) \
		--repetitions 99 \
		--max_fee 0ether \
		--keystore_path $(KEYSTORE_PATH)

This will send 99 proofs of ~16 MiB each; to avoid an OOM in the aligned CLI itself, the target should be run repeatedly rather than with a higher repetition count. Run make batcher_dos_fee multiple times while observing the aligned-batcher memory usage increase with top (ideally htop).

End of transcript

The takeaway is that we need to define a reasonable minimum for max_fee (and for the other fee limits used elsewhere) and enforce it across the system.
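
A minimal sketch of what such a check could look like at message-ingestion time follows. The names, the gas-based estimate, and the rejection path are hypothetical, not the actual batcher API; the idea is simply to reject a submission whose max_fee cannot cover the estimated on-chain cost of one proof before its VerificationData is ever queued.

    use ethers::types::U256;

    /// Hypothetical minimum-fee policy (names are sketches, not the actual
    /// batcher API): a submission is only queued if its max_fee covers at
    /// least the estimated on-chain cost of one proof at the current gas
    /// price, mirroring calculate_fee_per_proof for a batch of
    /// `expected_batch_len` proofs.
    fn meets_minimum_fee(
        new_msg_max_fee: U256,
        gas_price: U256,
        expected_batch_len: usize,
        constant_gas_cost: u128,
        gas_cost_per_proof: u128,
    ) -> bool {
        // Guard against an empty batch estimate to avoid division by zero.
        let batch_len = (expected_batch_len as u128).max(1);
        let gas_per_proof =
            (constant_gas_cost + gas_cost_per_proof * batch_len) / batch_len;
        let min_fee = U256::from(gas_per_proof) * gas_price;
        new_msg_max_fee >= min_fee
    }

    // At ingestion time, a 0 max_fee (or anything below the estimated fee per
    // proof) would then be rejected before the VerificationData is queued,
    // e.g. (hypothetical error variant):
    //
    //     if !meets_minimum_fee(msg.max_fee, gas_price, expected_batch_len,
    //                           CONSTANT_GAS_COST,
    //                           ADDITIONAL_SUBMISSION_GAS_COST_PER_PROOF) {
    //         return Err(SubmitError::MaxFeeTooLow);
    //     }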

Oppen • Jan 20 '25 20:01