feat(main): ability to specify concurrency in run_experiment and evaluate_experiment (Arize-AI#4186)
This PR might be misguided, as I'm not actually observing any concurrency in the current implementation 🤔 ... tasks seem to always be evaluated in sequence? Curious to get your guidance.
Hi @anton164, this PR does correctly wire up configuring the concurrency of our experiment runners, provided the tasks being run are async. We generate a sequence of tasks to run (concurrently, when possible) and then submit them to our executor.
Would it be possible to rewrite your task as a coroutine function and report back whether the default level of concurrency works for you?
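To illustrate the distinction, here is a minimal sketch. The single-`example` task signature is an assumption for illustration, and the sleeps just simulate I/O:

```python
import asyncio
import time

# Sync task: the call blocks the worker, so examples end up being
# processed one at a time regardless of the configured concurrency.
def sync_task(example):
    time.sleep(1)  # stand-in for a blocking LLM/API call
    return "answer"

# Coroutine task: awaiting yields control back to the event loop,
# so the executor can keep many examples in flight at once.
async def async_task(example):
    await asyncio.sleep(1)  # stand-in for a non-blocking LLM/API call
    return "answer"
```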
Thanks @anticorrelator -- concurrency works in my experiments with a coroutine task, but the default level of concurrency isn't sufficient. I'd like to run my batch experiments with a concurrency of ~20.
I've updated the PR to fix a typo and add docs noting that the task needs to be a coroutine function.
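For concreteness, a minimal sketch of the intended usage; the phoenix.experiments import path and the concurrency keyword follow this PR, and `dataset` is a placeholder for an actual Phoenix dataset:

```python
import asyncio

from phoenix.experiments import run_experiment

# The task must be a coroutine function for concurrency to take effect.
async def task(example):
    await asyncio.sleep(1)  # stand-in for the real per-example work
    return "answer"

# With an async task, concurrency=20 allows up to 20 examples in flight.
experiment = run_experiment(dataset, task, concurrency=20)
```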
To fully wire this up we need one more thing: on line 416 we should pass the concurrency parameter through to evaluate_experiment.
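Roughly, the forwarding being asked for looks like this; a sketch only, where `_run_tasks` and the overall function shape are hypothetical rather than the actual Phoenix source:

```python
# Hypothetical shape of run_experiment; only the concurrency forwarding
# on the evaluate_experiment call reflects the requested change.
def run_experiment(dataset, task, evaluators=None, concurrency=3):
    experiment = _run_tasks(dataset, task, concurrency=concurrency)
    if evaluators:
        # Forward the caller's concurrency instead of letting
        # evaluate_experiment fall back to its own default.
        experiment = evaluate_experiment(experiment, evaluators, concurrency=concurrency)
    return experiment
```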
Good catch @anticorrelator, updated! Thanks for the quick reviews
Thanks for your contribution @anton164 ! 💯
@anton164 Looks like there's a line-length linting issue in the docstring that needs fixing before we can merge.
@anticorrelator argh, sorry about that. I hadn't set up linting in my dev environment. I've set it up now and confirmed that linting for this file no longer fails after fixing the line length.