
Eigenvalue derivatives and status of the project

Open giacomo-b opened this issue 3 years ago • 11 comments

First of all, keep up the great work! I think this library is awesome and I'd like to contribute one day.

I have a matrix M whose entries depend on one or more input parameters, and I am computing the eigenvalues of M using Spectra. What I would like to achieve with autodiff is computing the derivatives of these eigenvalues w.r.t. the input parameters (basically, their sensitivities). Since Spectra supports custom types, I thought it might be possible to have it work seamlessly with autodiff.

The problem I am facing is that I don't know:

  1. if this is possible using autodiff
  2. assuming it can be done, which types I should use to achieve this.

I went through the tutorials but didn't find anything similar.

The steps are:

  1. Choose a parameter p
  2. Compute a matrix M as a function of p
  3. Compute the eigenvalues eigs of M (done using Spectra)
  4. Compute the derivatives of eigs w.r.t. p (which of course go through the computation of M and the eigensolver)

Simple example for one input parameter and a 2x2 matrix:

// Given a parameter p, compute M(p)
// Should probably return an autodiff type (MatrixXreal?) and take a const ref as argument
Eigen::MatrixXd M(double p)
{
    Eigen::MatrixXd output(2, 2); // dynamic matrices must be sized before using <<
    output << 2 * p,  p * p,
              4 * p,    p;
    return output;
}

// Use Spectra to compute the eigenvalues
Eigen::VectorXd eigs = ... // The eigenvalues should probably be saved in a VectorXreal

// Compute sensitivities using autodiff (d(eigs) / dp)
// ?

Do you think this is possible? If so, could you please show me how or point me to some related examples?

Also, quick question: what is the current status of the project? I noticed that the release version is not 1.0 yet, so I was wondering if there are any specific features in the pipeline that are currently not supported but you think are worth mentioning.

Thanks!

giacomo-b avatar Oct 29 '21 03:10 giacomo-b

Hi @giacomo-b ,

This should be possible. I've started creating an example here, but noticed that some things will need to be implemented (e.g., Eigen uses std::isfinite on the number type of the matrix when evaluating eigenvalues, and this is causing compilation errors).

Please try the idea below (it does not compile, but I think you should be able to implement the missing parts to get it working):

// C++ includes
#include <iostream>
#include <complex>
using namespace std;

// Eigen includes
#include <Eigen/Eigenvalues>

// autodiff include
#include <autodiff/forward/dual.hpp>
#include <autodiff/forward/dual/eigen.hpp>
using namespace autodiff;

using cxdual = complex<dual>;

MatrixXdual assembleMatrix(dual p, dual q)
{
    return MatrixXdual({
        { 2.0*p, p*q },
        { 4.0*q, p*q }
    });
}

int main()
{
    dual p = 1.0;
    dual q = 1.0;

    MatrixXdual M;

    seed(p);
    M = assembleMatrix(p, q);
    unseed(p);

    auto lambdas_p = M.eigenvalues(); // for each complex a + ib in lambdas_p, a.grad and b.grad are ∂a/∂p and ∂b/∂p

    seed(q);
    M = assembleMatrix(p, q);
    unseed(q);

    auto lambdas_q = M.eigenvalues(); // for each complex a + ib in lambdas_q, a.grad and b.grad are ∂a/∂q and ∂b/∂q
}

As to v1.0, this should be happening soon!

allanleal avatar Oct 29 '21 10:10 allanleal

Hi @allanleal,

Thanks a lot for the detailed answer, I will try to implement this asap and will get back to you!

giacomo-b avatar Oct 29 '21 16:10 giacomo-b

Great, keep me posted!


allanleal avatar Oct 29 '21 19:10 allanleal

Hi @allanleal,

thumbs up for autodiff, great piece of software!

This should be possible. I've started creating an example here, but noticed that some things will need to be implemented (e.g., Eigen uses std::isfinite on the number type of the matrix when evaluating eigenvalues, and this is causing compilation errors).

I added a T cast operator (typically acting as a cast to double) to the Real class (see https://github.com/mattarroz/autodiff/commit/3a711222cb727d961fc9696a67d5a39ff88dc1a5). With that operator, std::isfinite works on Real, and as a consequence eigenvalue/eigenvector (forward) derivative computation using Eigen also works. If you think it is a good idea, I can create a PR.

Edit: sorry, forgot to push the latest version

mattarroz avatar Nov 17 '21 14:11 mattarroz

Hi @mattarroz , a PR for this would be great! Thanks a lot for this.

Does the same change you made for Real also fix eigenvalue/eigenvector computations with Dual?

allanleal avatar Nov 17 '21 14:11 allanleal

For the Dual version, I made this change. For the program you posted to work, I needed to change assembleMatrix to

MatrixXdual assembleMatrix(dual p, dual q)
{
  MatrixXdual ret(2, 2);
  ret << 2.0*p, p*q,
         4.0*q, p*q;
  return ret;
}

It compiles and runs; however, I haven't checked whether the numbers are correct. For the Real version, I'm already using the derivatives in a Quasi-Newton algorithm, so they should be fine. Created https://github.com/autodiff/autodiff/pull/193.

Edit: Pushed the commit instead of just writing about it.

mattarroz avatar Nov 17 '21 14:11 mattarroz

Hey @mattarroz, I cloned your fork but this is still not working for me, even using Real.

The following doesn't compile:

#include <Eigen/Eigenvalues>
#include <autodiff/forward/real.hpp>
#include <autodiff/forward/real/eigen.hpp>

using namespace autodiff;

MatrixXreal assembleMatrix(real p) {
    return MatrixXreal({{2.0 * p, p * p},
                        {4.0 * p, p}});
}

int main() {
    real p = 1.0;

    MatrixXreal M;

    seed(p);
    M = assembleMatrix(p);
    unseed(p);

    auto lambdas_p = M.eigenvalues();
}

The error is due to the eigensolver: error: no type named ‘__type’ in ‘struct __gnu_cxx::__enable_if<false, bool>’.

Am I missing something? Thanks!

giacomo-b avatar Nov 18 '21 20:11 giacomo-b

The error is due to the eigensolver: error: no type named ‘__type’ in ‘struct __gnu_cxx::__enable_if<false, bool>’.

Am I missing something? Thanks!

Upon adding the cast operator and changing the assembleMatrix method as described in the previous post, it compiles for me for both real and dual. I am using Eigen 3.3.9 and have tried clang 11.0.1-2 and gcc 10.2.1, both with the C++17 standard. Could you post the full compiler output? For your convenience, I pushed the changes necessary to make it compile and run with dual too to a new branch: https://github.com/mattarroz/autodiff/tree/feature/RealAndDualEigenvalues. Remember this is a hack; as said before, I am not sure whether it is correct for dual.

Here is the full program I compiled for real:

// C++ includes
#include <iostream>

#include <Eigen/Eigenvalues>
#include <autodiff/forward/real.hpp>
#include <autodiff/forward/real/eigen.hpp>

using namespace autodiff;

MatrixXreal assembleMatrix(real p) {
  MatrixXreal ret(2,2);
  ret << 2.0 * p, p * p, 4.0 * p, p;
  return ret;
}

MatrixXreal assembleMatrixDiagonal(real p) {
  MatrixXreal ret(2,2);
  ret << p, 0, 0, p;
  return ret;
}

int main() {
  real p = 1.0;

  MatrixXreal M;

  seed(p);
  M = assembleMatrix(p);
  unseed(p);

  auto lambdas_p = M.eigenvalues();
  std::cout << "Lambdas_p:\n" << lambdas_p << std::endl;


  real x = 3.141;
  real u;
  auto grad = gradient([](const real& p_x) {
    const MatrixXreal M = assembleMatrix(p_x);
    return M.eigenvalues()(0, 0).real();
  }, wrt(x), at(x), u);

  std::cout << "Lambdas_p(0,0):\n" << u << std::endl;
  std::cout << "dLambdas_p(0,0)/dp: \n" << grad << std::endl;

  grad = gradient([](const real& p_x) {
    const MatrixXreal M = assembleMatrixDiagonal(p_x);
    return M.eigenvalues()(0, 0).real();
  }, wrt(x), at(x), u);

  std::cout << "Lambdas_p(0,0):\n" << u << std::endl;
  std::cout << "dLambdas_p(0,0)/dp: \n" << grad << std::endl;
}

Output:

Lambdas_p:
  (3.56155,0)
(-0.561553,0)
Lambdas_p(0,0):
15.9552
dLambdas_p(0,0)/dp: 
6.83458
Lambdas_p(0,0):
3.141
dLambdas_p(0,0)/dp: 
1

Edit: Cleaned up the lambda expression a little

mattarroz avatar Nov 19 '21 00:11 mattarroz

@mattarroz thank you for taking the time, this is very helpful! It is now working; I am not sure what was going on before (did you change anything on the new branch?)

You mentioned that you haven't validated the results, right? If so, I can proceed and do that.

giacomo-b avatar Nov 19 '21 01:11 giacomo-b

@mattarroz thank you for taking the time, this is very helpful! It is now working; I am not sure what was going on before (did you change anything on the new branch?)

No big deal, you're welcome! I didn't change anything except for removing the explicit keyword from the cast operator in Dual.

You mentioned that you haven't validated the results, right? If so, I can proceed and do that.

That would be great!

mattarroz avatar Nov 19 '21 14:11 mattarroz

Awesome, I'll post an update here as soon as I get that done. Thanks again!

giacomo-b avatar Nov 21 '21 00:11 giacomo-b