Parallelize classical value in nonlocal games
Description
Adds parallelism using the Python `multiprocessing` library. A new function, `process_iteration`, was added to make the code reusable. Parallelization is invoked when `num_alice_outputs**num_bob_inputs` exceeds 1000.
Resolves #12
Changes
- [x] Added a helper method to invoke multiprocessing when needed.
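As a rough illustration of the threshold-based dispatch described above, here is a minimal sketch. The worker body, the `best_value` wrapper, and the toy scoring are assumptions for illustration only, not the PR's actual code; only the `process_iteration` name, the `multiprocessing` library, and the 1000-strategy threshold come from the PR description.

```python
from itertools import product
from multiprocessing import Pool

def process_iteration(strategy):
    # Hypothetical worker: score one deterministic strategy.
    # The real implementation evaluates the game value for this strategy;
    # a toy sum stands in here so the sketch is self-contained.
    return sum(strategy)

def best_value(num_outputs, num_inputs):
    # Enumerate all deterministic strategies: num_outputs**num_inputs of them.
    strategies = list(product(range(num_outputs), repeat=num_inputs))
    if len(strategies) > 1_000:  # threshold from the PR description
        with Pool() as pool:
            scores = pool.map(process_iteration, strategies)
    else:
        scores = map(process_iteration, strategies)
    return max(scores)
```

Note that `Pool` requires the worker to be defined at module top level, and scripts using it should guard the entry point with `if __name__ == "__main__":` on platforms that spawn rather than fork.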
Checklist
Before marking your PR ready for review, make sure you checked the following locally. If this is your first PR, you might be notified of some workflow failures after a maintainer has approved the workflow jobs to be run on your PR.
Additional information is available in the documentation.
- [x] Use `ruff` for errors related to code style and formatting.
- [x] Verify all previous and newly added unit tests pass in `pytest`.
- [x] Check the documentation build does not lead to any failures. The `Sphinx` build can be checked locally for any failures related to your PR.
- [x] Use `linkcheck` to check for broken links in the documentation.
- [x] Use `doctest` to verify the examples in the function docstrings work as expected.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 97.9%. Comparing base (`c32edae`) to head (`c66cc8f`). Report is 233 commits behind head on master.
Additional details and impacted files
@@ Coverage Diff @@
## master #876 +/- ##
======================================
Coverage 97.9% 97.9%
======================================
Files 165 165
Lines 3253 3262 +9
Branches 713 714 +1
======================================
+ Hits 3185 3194 +9
Misses 44 44
Partials 24 24
Out of curiosity, is there any indication as to how much faster the parallel version is compared to the single-core version?
For this function, my answer would be no. But I used the `multiprocessing` library to run this Python notebook on my system.
For the single core version, the time taken was about 25 seconds, while the parallel version took about 21 seconds.
Interesting. I would have expected the total time to be divided by the number of cores on the machine, but perhaps 80 percent or so of the time is spent in the non-parallelized parts of the process.
Still, since it's faster and not much boilerplate for that boost, I'll still consider that a win!
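The intuition above can be checked against Amdahl's law. Treating the observed 25 s → 21 s change as a ~1.19x speedup and assuming, for the sake of the estimate, a 4-core machine (the core count is an assumption, not stated in the thread), we can solve for the fraction of the runtime that was actually parallelized:

```python
def amdahl_speedup(p, n):
    # Amdahl's law: overall speedup when a fraction p of the work
    # runs perfectly in parallel on n cores.
    return 1.0 / ((1.0 - p) + p / n)

observed = 25.0 / 21.0  # ~1.19x, from the timings quoted above
n = 4                   # assumed core count for this back-of-envelope estimate

# Invert Amdahl's law: observed = 1 / ((1 - p) + p / n)  =>
p = (1.0 - 1.0 / observed) / (1.0 - 1.0 / n)
# p comes out to roughly 0.21, i.e. only about a fifth of the runtime
# was parallelized, consistent with the ~80% serial guess above.
```

So the measurements and the "80 percent serial" guess are mutually consistent under these assumptions.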
Thanks for your recent adaptations to the code, @Shivansh20128 !
This LGTM, but I'll defer to @purva-thakre for the final word. Thank you again for your contributions!
Thanks again, @Shivansh20128 !