
Implement dynamic safe_interval for TransportQueue to reduce wait times

Open · Copilot opened this issue 3 months ago · 1 comment

Quick calculations (e.g., a 10s runtime) suffered 120s of overhead because a fixed 30s safe_interval wait preceded each transport operation (upload, submit, retrieve, final retrieve), i.e. 4×30s = 120s of unnecessary waiting.

Changes

  • Track last close time per authinfo in TransportQueue._last_close_times dict
  • Calculate dynamic wait intervals:
    • First request: immediate (wait = 0)
    • Subsequent requests: wait = max(0, safe_interval - time_since_close)
  • Record close time when transport is closed to enable next calculation

Implementation

# In TransportQueue.request_transport() (time is imported at module level):
last_close_time = self._last_close_times.get(authinfo.pk)
current_time = time.time()

if last_close_time is None:
    wait_interval = 0  # First request for this authinfo: open immediately
else:
    # Wait only for whatever remains of the safe interval;
    # zero if it has already elapsed.
    wait_interval = max(0, safe_open_interval - (current_time - last_close_time))
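The branching above collapses into a single expression. A minimal standalone sketch (the function name compute_wait_interval and the now parameter are illustrative, not part of the aiida-core API):

```python
import time

def compute_wait_interval(last_close_time, safe_open_interval, now=None):
    """Return the seconds to wait before opening the transport.

    last_close_time: epoch seconds when the transport for this authinfo
    was last closed, or None if no transport has been closed yet.
    """
    if last_close_time is None:
        return 0.0  # first request: open immediately
    now = time.time() if now is None else now
    # Wait only for the remainder of the safe interval, never negative.
    return max(0.0, safe_open_interval - (now - last_close_time))
```

For example, with safe_open_interval=30, a transport closed 20 seconds ago yields a 10-second wait, and one closed 40 seconds ago yields no wait at all.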

Impact

For a 10s calculation with default safe_interval=30:

  • Before: 130s total (10s execution + 4×30s waits)
  • After: ~90s total (10s execution + reduced waits)
  • Improvement: ~30% faster (~40s saved)

State is maintained per daemon worker (not in database) to avoid multi-worker synchronization overhead. Backward compatible: existing behavior preserved when requests occur within safe_interval.
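The per-worker bookkeeping can be as simple as a dict keyed by the authinfo primary key, updated whenever a transport closes. This is a sketch under stated assumptions: TransportQueueSketch, record_close, and wait_for are illustrative names, not the actual aiida-core interface.

```python
import time

class TransportQueueSketch:
    """Illustrative per-worker state; not the real aiida TransportQueue."""

    def __init__(self):
        # authinfo.pk -> epoch seconds of the last transport close.
        # Lives in worker memory, not the database, so workers never
        # contend on a shared DB row.
        self._last_close_times = {}

    def record_close(self, authinfo_pk):
        """Called when a transport for this authinfo is closed."""
        self._last_close_times[authinfo_pk] = time.time()

    def wait_for(self, authinfo_pk, safe_open_interval):
        """Seconds to wait before the next open for this authinfo."""
        last = self._last_close_times.get(authinfo_pk)
        if last is None:
            return 0.0  # nothing closed yet: open immediately
        return max(0.0, safe_open_interval - (time.time() - last))
```

Because each worker keeps its own dict, a freshly started worker always opens its first transport immediately, which is exactly the behavior the issue asks for.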

Original prompt

This section details the original issue to resolve

<issue_title>Make safe_interval more dynamic for quick transport tasks</issue_title> <issue_description>As realized together with @giovannipizzi while debugging things for our new cluster at PSI: When submitting a simple calculation (execution takes about 10s) for testing purposes, with the default safe_interval=30 in the Computer configuration, one has to wait an additional 90s until the job is done (30s for the upload, submit, and retrieve tasks, each). This is to be expected, of course, and one could just reduce the safe_interval (albeit increasing the risk of SSH overloads).

However, the upload task in that case is truly the first Transport task that is being executed by the daemon worker, so it could, in principle, enter immediately (the same if jobs were run previously, but longer ago than the safe_interval). I locally implemented a first version (thanks to @giovannipizzi's input) that does this, by adding a last_close_time attribute (currently added to the authinfo metadata for a first PoC). In the request_transport method of the TransportQueue, the time difference between the current time and the last_close_time is then checked, and if it is larger than safe_interval, the Transport is opened immediately via:

open_callback_handle = self._loop.call_later(0, do_open, context=contextvars.Context())  # or use 1 for safety?

bypassing the safe_interval (or safe_open_interval as it is called in transports.py).

In addition, the waiting times for the submit and retrieve tasks could also be reduced. It seems like currently, the safe_interval is imposed on all of them, even if they finish very quickly (I assume as all open a transport connection via SSH). So we were thinking if it's possible to make this a bit more sophisticated, e.g. by adding special transport requests, that could make use of the open transport, and keep a transport of which the task has finished open for a short time longer (also quickly discussed with @mbercx). Of course, one would still need to make sure SSH doesn't get overloaded, the implementation works with heavy loads (not just individual testing calculations), and one would also have to consider how this all works with multiple daemon workers. Again with @giovannipizzi, I had a quick look, but it seems like the implementation would be a bit more involved. So wondering what the others think, if this is feasible and worth investigating more time into. Pinging @khsrali who has looked a bit more into transports.</issue_description>

Comments on the Issue (you are @copilot in this section)

@giovannipizzi Thanks for the nice write-up @GeigerJ2 ! Just some minor additional comments/clarifications
  • actually even for a millisecond run, the time to wait is 120s (or generally 4 times the safe interval), rather than 3x (90s):

    • [30s] initial wait to upload
    • [30s] time to wait to submit
    • [30s] time for the first check of the queue (which, for jobs that are not queued but run immediately and finish in < 30s, would already report that the job finished)
    • [30s] to retrieve the calculations and set the calculation as finished

    (and I guess one adds another 30s if there is also stashing involved)

  • The initial implementation keeps the time in the metadata of the authinfo, but already while discussing with Julian, we realized it's better not to put it there: it is shared by all daemon workers, and could lead to wrong results, collisions, and exceptions when multiple workers write the same DB row. Better to just keep it in another local attribute self.last_close_time, parallel to self._transport_requests. On the other hand, I just realized that if you are running from a local interpreter, maybe submitting run() from a bash "for" loop (e.g. of verdi run commands), this might bypass the limit, as each process will think that nothing was submitted before. But this is probably OK with the current implementation? Fixing it properly would require making the whole concept of a safe_interval not specific to a worker, but global to an AiiDA profile.

  • In the implementation discussed above, in addition to setting the first parameter of call_later to zero if more than safe_interval seconds have passed since the last call, I would also set the waiting time to safe_interval - (current_time - last_close_time), so e.g. you only wait 10 seconds if you closed the transport 20 seconds ago (with safe_interval=30).

  • The points above solve the waiting of the first 30 seconds. For the other 3x30 seconds, the idea is that probably in this case the connection was just closed less than a second before, i.e. the time for AiiDA to change state. If we could keep the connection open for a configurable time after the last command (say with a default of 5 or 10 seconds), a full single submission could go down to probably just < ...

  • Fixes aiidateam/aiida-core#6544
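The call_later approach from the discussion above can be sketched with a plain asyncio loop. Here, schedule_open and the do_open callback are illustrative stand-ins for the corresponding machinery in transports.py:

```python
import asyncio
import time

def schedule_open(loop, do_open, last_close_time, safe_open_interval):
    """Schedule do_open after whatever remains of the safe interval.

    A first-ever request (last_close_time is None) fires immediately.
    """
    if last_close_time is None:
        delay = 0.0
    else:
        delay = max(0.0, safe_open_interval - (time.time() - last_close_time))
    return loop.call_later(delay, do_open)

results = []
loop = asyncio.new_event_loop()
# First-ever request: no prior close recorded, so the delay is zero
# and do_open runs as soon as the loop spins up.
schedule_open(loop, lambda: results.append('opened'), None, 30)
loop.call_later(0.05, loop.stop)  # stop the demo loop shortly after
loop.run_forever()
loop.close()
```

loop.call_later returns a TimerHandle, so a pending open could also be cancelled if the request is withdrawn before the delay expires.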


Copilot · Dec 09 '25 15:12

Codecov Report

:x: Patch coverage is 16.66667%, with 10 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 29.18%. Comparing base (a770d7a) to head (c2e43e1).

Files with missing lines Patch % Lines
src/aiida/engine/transports.py 16.67% 10 Missing :warning:
Additional details and impacted files
@@             Coverage Diff             @@
##             main    #7145       +/-   ##
===========================================
- Coverage   79.61%   29.18%   -50.42%     
===========================================
  Files         566      566               
  Lines       43572    43537       -35     
===========================================
- Hits        34684    12703    -21981     
- Misses       8888    30834    +21946     


codecov[bot] · Dec 09 '25 16:12