pythonocc-core
Parallel BRepAlgoAPI_Cut hangs on more threads
I'm running a script that uses BRepAlgoAPI_Cut with the SetRunParallel(True) option; the tool is a list of shapes. On my personal PC, with 24 threads, it runs fine. However, on the work server, with 192 threads, it hangs inside the Build() call: in the task manager the CPU activity rises and then falls, but the execution never advances. The only workaround I have found is to reduce the number of tools, which presumably reduces the number of active threads. It seems odd to have to do that, given that the server has many more threads. What could be the cause, and what is the solution? Could the problem be related to NUMA nodes? If so, how could I limit the number of threads?
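For reference, this is roughly the workaround I'm using: splitting the tool list into smaller chunks and running one Build() per chunk. The chunk_tools helper is plain Python; the pythonocc part is shown as comments because it needs a shape set to run against (the OCC calls are the standard BRepAlgoAPI_Cut / TopTools_ListOfShape API, but treat the loop as a sketch, not a tested fix for the hang).

```python
def chunk_tools(tools, chunk_size):
    """Split a list of tool shapes into sublists of at most chunk_size items."""
    return [tools[i:i + chunk_size] for i in range(0, len(tools), chunk_size)]

# Hypothetical application with pythonocc (not executed here):
#
# from OCC.Core.BRepAlgoAPI import BRepAlgoAPI_Cut
# from OCC.Core.TopTools import TopTools_ListOfShape
#
# result = workpiece
# for chunk in chunk_tools(all_tools, 16):  # 16 tools per pass, chosen ad hoc
#     cut = BRepAlgoAPI_Cut()
#     args = TopTools_ListOfShape()
#     args.Append(result)
#     tool_list = TopTools_ListOfShape()
#     for t in chunk:
#         tool_list.Append(t)
#     cut.SetArguments(args)
#     cut.SetTools(tool_list)
#     cut.SetRunParallel(True)
#     cut.Build()
#     result = cut.Shape()
```

Reducing the chunk size makes the server get through Build(), but it also serializes work that the 192 threads should in principle handle in one pass.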
Did you compile occt with Intel TBB support?
> Did you compile occt with Intel TBB support?
I used the default installation from conda-forge with Anaconda. How can I install it with TBB?
If you make intensive use of parallel computations, you should try TBB. Unfortunately, the conda-forge occt version is not compiled against TBB, so you will have to compile occt by yourself on your server. Maybe it would be worth changing the occt conda-forge recipe to add TBB; ping @looooo
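A build-configuration sketch of what "compile occt yourself with TBB" looks like. The CMake option names (USE_TBB, 3RDPARTY_TBB_DIR) come from the OCCT CMake build; the paths are placeholders, and on Windows Server you would generate a Visual Studio solution instead of using make.

```shell
# Sketch only: fetch OCCT sources and configure with TBB enabled.
git clone https://github.com/Open-Cascade-SAS/OCCT.git
cd OCCT
mkdir build && cd build
cmake .. \
  -DUSE_TBB=ON \
  -D3RDPARTY_TBB_DIR=/path/to/tbb   # placeholder: your TBB install prefix
make -j
make install
```

After that you would still need to rebuild pythonocc-core against this OCCT, which is the harder part on Windows.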
> so you will have to compile occt by yourself on your server
Sorry, but I'm not an expert. Do you have a guide on how to do that?
P.S. On my computer I'm using Ubuntu under WSL2, while the server runs Anaconda on Windows Server 2019.
> If you make intensive use of parallel computations, you should try TBB. Unfortunately, the conda-forge occt version is not compiled against TBB, so you will have to compile occt by yourself on your server. Maybe it would be worth changing the occt conda-forge recipe to add TBB; ping @looooo
IIRC we removed TBB because the build should be faster without it. There is an internal alternative which should be faster.
I'm not sure that such a use case (up to 192 threads) has ever been benchmarked; that would be interesting to test.
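A minimal sketch of what such a scaling benchmark could look like, using only the standard library. The dummy task is a pure-Python stand-in (so the GIL prevents real speedup here); a real benchmark would time BRepAlgoAPI_Cut.Build() with SetRunParallel(True) at different thread counts instead. The point is just the measurement harness: run the same workload at increasing worker counts and record wall-clock time, looking for the count where throughput stops improving or the run stalls.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def dummy_task(n):
    # CPU-bound placeholder for one boolean-operation work item.
    return sum(i * i for i in range(n))

def scaling_times(worker_counts, tasks=32, n=50_000):
    """Return {worker_count: wall-clock seconds} for running `tasks` items."""
    results = {}
    for workers in worker_counts:
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # Consume the iterator so all tasks actually complete.
            list(pool.map(dummy_task, [n] * tasks))
        results[workers] = time.perf_counter() - start
    return results

times = scaling_times([1, 2, 4, 8])
```

On a 192-thread NUMA box the interesting range would be roughly 1 to 192 in powers of two; a hang that appears only above some worker count would narrow the problem down considerably.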