
Add PCA Dictionary Indexing

Open • ZacharyVarley opened this pull request • 5 comments

Description of the change

Add fast PCA dictionary indexing

Progress of the PR

For reviewers

  • [ ] The PR title is short, concise, and will make sense 1 year later.
  • [ ] New functions are imported in corresponding __init__.py.
  • [ ] New features, API changes, and deprecations are mentioned in the unreleased section in CHANGELOG.rst.
  • [ ] New contributors are added to release.py, .zenodo.json and .all-contributorsrc with the table regenerated.

ZacharyVarley • May 13, 2023

Hi @ZacharyVarley,

thank you for opening this PR! I modified the tutorial you added to compare standard DI with PCA DI on my six-year-old laptop. PCA DI is about 2x faster than standard DI, although it uses much more memory, which I hope we can look into.

Please let me know when you'd like feedback on the PR!

hakonanes • May 21, 2023

Please also let me know whether you have any questions regarding tests, docs, dependency control, etc. You might already be aware of our contributing guide.

hakonanes • May 21, 2023

@hakonanes

The memory footprint is expected to be 4x larger than holding the uint8 dictionary patterns, as the patterns must be cast to 32-bit floats to compute the covariance matrix with LAPACK for the subsequent eigendecomposition. I realize now that this should be avoided by chunking the dictionary covariance matrix calculation. Further, this approach is most beneficial when the dictionary and/or pattern dimensions are large (roughly > 100,000 patterns and > 60 x 60 pixels), which might be why only a 2x speedup was observed. I am hoping to have time to patch everything up in the PR this week.
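
As a rough NumPy sketch of the chunked covariance idea (illustrative only, not the PR's actual implementation; `dictionary` is assumed to be a (n_patterns, n_pixels) uint8 array and the chunk size is arbitrary):

```python
import numpy as np

def chunked_covariance(dictionary, chunk_size=10_000):
    """Covariance of uint8 dictionary patterns, accumulated chunk by chunk
    so the full dictionary is never cast to float at once."""
    n_patterns, n_pixels = dictionary.shape
    # First pass: mean pattern (float64 accumulator for accuracy)
    mean = np.zeros(n_pixels)
    for i in range(0, n_patterns, chunk_size):
        mean += dictionary[i:i + chunk_size].astype(np.float32).sum(axis=0)
    mean /= n_patterns
    # Second pass: accumulate X^T X over mean-centered float32 chunks
    cov = np.zeros((n_pixels, n_pixels))
    for i in range(0, n_patterns, chunk_size):
        chunk = dictionary[i:i + chunk_size].astype(np.float32) - mean
        cov += chunk.T @ chunk
    return cov / (n_patterns - 1)

# The symmetric eigendecomposition (LAPACK) of cov then gives the principal
# components used to project patterns before matching:
# eigvals, eigvecs = np.linalg.eigh(cov)
```

This keeps the peak float memory bounded by the chunk size plus the (n_pixels x n_pixels) covariance matrix, instead of the full float32 dictionary.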

ZacharyVarley • May 21, 2023

I realize now that this should be avoided by chunking the dictionary covariance matrix calculation.

I haven't looked at the implementation in detail, but just assumed that the dictionary had to be in memory for PCA DI. If not, that's great! You should be able to map the calculation across chunks of the dictionary using dask.array.map_blocks(). Dask gives the user the option to control the number of workers and the amount of memory to use for this parallelized computation outside of kikuchipy, which is one of its greatest benefits (kikuchipy doesn't have to consider these things).
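
Something along these lines might work (a minimal sketch, assuming the PCA mean and components have already been computed; `dictionary`, `mean`, and `components` are placeholder names, with `mean` and `components` as float32 arrays):

```python
import dask.array as da
import numpy as np

def project_chunk(chunk, mean, components):
    # Cast one chunk of uint8 patterns to float32, center it, and project
    # it onto the principal components.
    return (chunk.astype(np.float32) - mean) @ components

# `dictionary` is a lazy (n_patterns, n_pixels) Dask array chunked along the
# first axis; Dask schedules the per-chunk work within whatever worker and
# memory limits the user has configured.
projected = da.map_blocks(
    project_chunk,
    dictionary,
    mean=mean,
    components=components,
    dtype=np.float32,
    chunks=(dictionary.chunks[0], (components.shape[1],)),
)
scores = projected.compute()
```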

From my experience, it is impossible to find a single optimum between speed and memory use for DI across different platforms and sizes of dataset and dictionary. We therefore give the user the option to control how many experimental patterns (n_per_iteration) and dictionary patterns (the chunk size of the dictionary if it is a Dask array, controlled by chunk_shape=n in calls to get_patterns()) are compared in each iteration. It would be good if similar options were available for PCA DI as well. Larger chunks are faster, but use more memory.
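
For reference, this is roughly how those knobs are used in a standard DI run (a sketch following the DI tutorial; `s`, `rotations`, and `detector` are assumed to be set up already, and keyword details may differ between kikuchipy versions):

```python
import kikuchipy as kp

# `s` (experimental EBSD signal), `rotations` (orientation sampling) and
# `detector` (EBSDDetector) are assumed to be set up as in the DI tutorial.
mp = kp.data.nickel_ebsd_master_pattern_small(projection="lambert", energy=20)

# chunk_shape sets how many dictionary patterns go into each Dask chunk,
# i.e. how many are compared against the experimental patterns at a time.
dictionary = mp.get_patterns(
    rotations, detector, energy=20, compute=False, chunk_shape=5000
)

# n_per_iteration sets how many experimental patterns are matched per
# iteration; together the two knobs trade speed against memory use.
xmap = s.dictionary_indexing(dictionary, n_per_iteration=500)
```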

hakonanes • May 21, 2023
