
Ways to speed up rdfind runs: Parallelization

PoC-dev opened this issue · 3 comments

Besides other issues expressing the wish for caching of checksum data, I see a possibility to speed up rdfind checks by either introducing threads or (probably easier to implement) forking a child process for each non-option argument given on the command line.

With that, modern multi-core CPUs and fast SSDs could be put to better use by rdfind. The trade-off is increased complexity: sub-tasks must be spawned, and their result data collected and merged in memory or via temporary files.

This won't help for one-tree runs of rdfind, though. A possible alternative approach:

  • generate a simple list of objects (files, links, whatever) from all non-option command line arguments; just path names
  • do the less time-consuming exclusions in the main process (as is done currently)
  • spawn at most n threads/child processes, each given an nth chunk of the prepared object list, to run each of the more time-consuming checks in parallel
  • for the previous step to be most effective, checks are run "grouped": no child/thread may start a new class of check before the prior class has completed on every child. That way the to-consider list can be trimmed as much as possible (within the main process) before the next check class runs. The classes are first bytes, last bytes, and checksumming
  • simply ending the threads/children when their chunk of work for a given class is done, and starting/forking them anew for the next class with an adjusted chunk of the remaining object list, might prove easier to implement

n might be a command line parameter -numcpu *n*, defaulting to the number of CPU cores detected. On OS flavors where detection isn't possible (or it's unknown how to do it), a sensible default would be 1, keeping the current behavior and preventing unnecessary slowdown from excessive CPU cache thrashing when there is just a single CPU.
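The suggested default could be derived like this (a sketch, again with a made-up function name; rdfind itself would do the equivalent in C++ via sysconf or std::thread::hardware_concurrency):

```python
import os

def default_worker_count():
    """Default for the hypothetical -numcpu option: detected core
    count where the platform reports one, otherwise 1, which keeps
    the current single-threaded behavior."""
    detected = os.cpu_count()  # returns None when detection fails
    return detected if detected else 1
```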

Opinions?

PoC-dev avatar Aug 22 '22 23:08 PoC-dev

Personally, for storage I'm still using rotating disks. In that case, parallel access would increase seeks, which is not good. But there's another way to use parallelism: separate the disk activity from the computation. Read from one thread and compute hashes in another.

Even if you are using an SSD, are you sure the bottleneck is the CPU and not disk speed? Maybe a different way of reading files could help in some cases. It seems absurd, but in some cases (like large directories and files) using O_DIRECT (so basically no cache) could help.

But without multiple use cases and a lot of benchmarking, I think these optimizations are mainly speculation.

fziglio avatar Aug 23 '22 17:08 fziglio

About rotating disks: I suggest(ed) having a command line parameter to (re)set parallelism to 1 as a single-disk "optimization".

I'm not so sure about the possibility of parallelizing disk I/O and computation. Computation needs the results of disk I/O. The easiest approach is to wait until each and every class of checks has finished and then apply the results to the remaining files in the big list. There's a trade-off between ease of implementation with clear code and the rather complex algorithms needed to squeeze out even more parallelism.

I think that implementing any means of parallelism would make rdfind finish its duty faster than it currently does. Further optimizations can be considered once real-world data shows how a parallelized rdfind behaves in different environments. You have a single-disk environment; I often run it on big server environments with many CPU cores, a lot of RAM, and disk arrays allowing many I/Os per second. Some spinning rust, some SSD. All in all, serialized programs like rm -r or rdfind simply can't generate I/Os fast enough to exploit the performance of modern storage in a server environment.

Rdfind has multiple stages of work: some are clearly I/O bound, some are more CPU bound. My recommendations aren't only about I/O; they are also about running more threads/child tasks to exploit parallelism for CPU-bound work, such as full checksum calculation. The bottleneck shifts depending on the class of check being done.

Thus I don't understand how you come to the conclusion that my suggestions are moot. You are the one assuming a single disk as the sole use case, apparently. If parallelism were moot, why is it so ubiquitous in so many applications? Think of the Unix pipe. Think of Apache. And many more.

PoC-dev avatar Aug 23 '22 18:08 PoC-dev

I think you took my comments the wrong way.

I don't get how you understood me to conclude that parallelism is moot; I even suggested a different way to implement it in rdfind. I work at a company developing network programs that deal with hundreds of connections, so parallelism is a must for me. But that does not mean it's the solution to every problem.

In the initial message you spoke about "multicore CPUs and fast SSDs", but that also applies to any recent laptop; your use case with plenty of disks and cores is far from a laptop and obviously needs a different approach. Personally, I would retain a default similar to the current behaviour and add an option to increase resource usage, rather than change the default and add an option to get the old behaviour back. Not everyone has dozens of cores and disks. There's also a -sleep option in rdfind to slow the scan down, which shows that other people have different use cases. I suppose not everyone's first goal is to run as fast as possible; maybe they want to keep working while rdfind is running.

If, in your use cases, one thread/core can't reach full speed and you want rdfind to finish as fast as possible, then yes, parallelism is the way to go. And on a very powerful machine, almost any parallelism will make the program faster. But that does not mean it's the best solution. What I tried to say is: try to understand where the bottleneck is, even in your case. Parallelism increases performance only as long as the tasks do not starve each other of resources too much; past that point, performance starts to decrease. So understanding whether it's more a CPU issue or a disk issue would help in deciding how to divide the tasks and resource usage without starving a specific resource.

About separating disk and CPU activities, I don't think it's particularly complicated: it's mostly a consumer/producer with a buffer pool, a pretty standard pattern with plenty of examples around.

fziglio avatar Aug 24 '22 11:08 fziglio

Apparently the project has no interest in introducing parallelization in rdfind to provide faster results. No comments from the maintainers, just one person constantly questioning my arguments. Closing.

PoC-dev avatar Aug 05 '23 20:08 PoC-dev