
Question about your convexLS solver

Open pramanc opened this issue 3 years ago • 7 comments

Hi, it's me again!

I ran into a problem when calculating the gradient (the solveP function).

When calculating z by solving the linear system A * z = b, A is sometimes a zero matrix, which causes the calculated z to be NaN.
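
To make the failure concrete, here is a minimal standalone Eigen sketch, not the Dem-Bones code (safeSolve and the 1e-12 threshold are just illustrative), of guarding a solve against a degenerate A:

    #include <Eigen/Dense>
    #include <iostream>

    // Minimal sketch (not the Dem-Bones code): solve A*z = b only when the
    // system is usable; otherwise fall back to a zero vector and report it.
    Eigen::VectorXd safeSolve(const Eigen::MatrixXd& A, const Eigen::VectorXd& b)
    {
        // If A contains non-finite entries or is numerically zero, the solve
        // is meaningless and can propagate NaN downstream.
        if (!A.allFinite() || A.norm() < 1e-12) {
            std::cerr << "Degenerate system, returning zeros\n";
            return Eigen::VectorXd::Zero(A.cols());
        }
        return A.colPivHouseholderQr().solve(b);
    }

    int main()
    {
        Eigen::MatrixXd A = Eigen::MatrixXd::Zero(3, 3);   // the degenerate case described above
        Eigen::VectorXd b = Eigen::VectorXd::Ones(3);
        std::cout << safeSolve(A, b).transpose() << std::endl;   // prints 0 0 0
        return 0;
    }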

I am reading the code of the convexLS solver, but I can't find any reference for it.

Could you point me to a reference for the convexLS solver?

Thanks!

pramanc avatar Apr 15 '21 11:04 pramanc

Hi @pramanc,

A good reference that I would recommend is the book "Numerical Methods for Least Squares Problems" by Ake Bjorck. You can check two chapters:

  • Constrained Least Squares Problems --> Linear Equality Constraints
  • Constrained Least Squares Problems --> Linear Inequality Constraints

I wrote a little bit (not very complete) about my implementation here: https://uh-ir.tdl.org/handle/10657/1824 (chapter 3.7, Skinning Weights Update).
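
Roughly speaking, the per-vertex weights update that convexLS solves has the following form (a sketch of the general constrained least-squares formulation, not an exact transcription from the thesis):

    \min_{w} \; \| A w - b \|_2^2
    \quad \text{subject to} \quad \sum_j w_j = 1, \qquad w_j \ge 0 \;\; \forall j

The equality constraint (affinity) is what the "Linear Equality Constraints" section covers; the non-negativity constraints are the "Linear Inequality Constraints" part.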

binhhuyle avatar Apr 20 '21 08:04 binhhuyle

Re: the "A is zero matrix" problem

A in the solveP function should not be zero. It is positive definite by construction: it is actually A^T * A (A transposed times A).
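
In code terms, the inputs look roughly like this (a minimal sketch with a made-up helper name, not the library's API):

    #include <Eigen/Dense>

    // Sketch only: the solver works on the normal-equation quantities
    // aTa = A^T * A and aTb = A^T * b. aTa is symmetric and at least positive
    // semi-definite by construction; it can only be the zero matrix if every
    // column of A is zero.
    void buildNormalEquations(const Eigen::MatrixXd& A, const Eigen::VectorXd& b,
                              Eigen::MatrixXd& aTa, Eigen::VectorXd& aTb)
    {
        aTa = A.transpose() * A;
        aTb = A.transpose() * b;
    }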

binhhuyle avatar Apr 20 '21 08:04 binhhuyle

@binhhuyle Thank you for your reply!

The zero matrix problem actually happens in the solveP function:

    z = q2[np-2]*(                                                                                 //Re-project
            (q2[np-2].transpose()*indexing_row_col(aTa, idx.head(np), idx.head(np))*q2[np-2])      //A
                .colPivHouseholderQr().solve(
            q2[np-2].transpose()*(indexing_vector(aTb, idx.head(np))-indexing_row(aTa, idx.head(np))*x)   //b
        ));

The matrix (q2[np-2].transpose()*indexing_row_col(aTa, idx.head(np), idx.head(np))*q2[np-2]) sometimes turns out to be a zero matrix.
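
One way to spot this degenerate case before the solve would be something like the following (just a sketch against plain Eigen, not a patch for the actual ConvexLS code; solveReduced and the epsilon tolerance are made up):

    #include <Eigen/Dense>
    #include <limits>

    // Sketch only: solve the re-projected system (Q^T M Q) y = Q^T r and map
    // back with z = Q y, guarding against a numerically zero reduced matrix.
    // Q plays the role of q2[np-2], M the submatrix of aTa on the active
    // indices, and r the right-hand side (the aTb terms minus the aTa terms times x).
    Eigen::VectorXd solveReduced(const Eigen::MatrixXd& Q,
                                 const Eigen::MatrixXd& M,
                                 const Eigen::VectorXd& r)
    {
        Eigen::MatrixXd reduced = Q.transpose() * M * Q;
        if (reduced.norm() < std::numeric_limits<double>::epsilon()) {
            // Degenerate: the projected system carries no information,
            // so return zeros instead of letting NaN through.
            return Eigen::VectorXd::Zero(Q.rows());
        }
        return Q * reduced.colPivHouseholderQr().solve(Q.transpose() * r);
    }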

I will read the reference to see why it happens.

Thank you again! Really amazing work!

pramanc avatar Apr 23 '21 07:04 pramanc

Hello, I hope both of you are doing well. I think I'm running into the same problem and wanted to double-check whether either of you has found a solution or the cause of the problem.

After the solve, p contains NaN values and zeros, which results in iMin being -1 in the line _Scalar eps=std::abs(x(idx[iMin]));
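
(As far as I understand, any comparison involving NaN evaluates to false, so an index search that only updates inside an if-statement never moves off its initial -1. A tiny standalone illustration, not the actual solver code:)

    #include <cmath>
    #include <iostream>
    #include <vector>

    int main()
    {
        // With NaN candidates, 'p[i] < best' is always false,
        // so iMin keeps its initial value of -1.
        std::vector<double> p = {std::nan(""), 0.0, 0.0, std::nan(""), 0.0};
        int iMin = -1;
        double best = 0.0;
        for (int i = 0; i < static_cast<int>(p.size()); ++i) {
            if (p[i] < best) { best = p[i]; iMin = i; }
        }
        std::cout << "iMin = " << iMin << std::endl;   // prints iMin = -1
        return 0;
    }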

I just debugged the values after the solve, and I am clearly doing something terribly wrong:

    aTa: -nan(ind) (all 25 entries)
    aTb: -nan(ind) (all 5 entries)
    aTa: -nan(ind) (all 25 entries)
    aTb: -nan(ind) (all 5 entries)
    x: 0.876707 0 0 0.123293 0
    np: 2
    affine: 1
    p: -nan(ind) 0 0 -nan(ind) 0

robertjoosten avatar Sep 02 '21 16:09 robertjoosten

@robertjoosten Hi, can you supply the data (aTa, aTb, x, idx, np, affine, p) passed into solveP?

pramanc avatar Sep 03 '21 05:09 pramanc

Hello @pramanc

I've narrowed it down a bit further. I think it goes wrong because of NaN values introduced into m during the computeTransformations step.

m looks fine when it comes in (identity matrix). When computing vuT, NaN values make their way into vuT, which then propagate to m (I am assuming; I'm in over my head with this level of math).

Here are some debug values from the SparseMatrix::InnerIterator in compute_vuT. The huge it value coming from the weights makes me think that might be where the problem lies.

    it row:   5
    it value: 6.15434e+255
    v: 0.987688 -0.156434 0
    u: 0.987688 -0.156434 0 1
    temp: 0.975528  -0.154508  0  0.987688
          -0.154508  0.0244717 -0 -0.156434
          0         -0          0  0
          0.987688  -0.156434   0  1
    vuT:  6.00373e+255  -9.50897e+254  0  6.07857e+255
          -9.50897e+254  1.50607e+254 -0 -9.6275e+254
          0             -0             0  0
          6.07857e+255  -9.6275e+254   0  6.15434e+255
    vutP: inf        -inf       -nan(ind)  inf
          -inf        inf       -nan(ind) -inf
          -nan(ind)  -nan(ind)  -nan(ind) -nan(ind)
          inf        -inf       -nan(ind)  inf
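
For reference, the accumulation has roughly this shape (a simplified sketch of the pattern, not the exact Dem-Bones compute_vuT; accumulateVuT and the matrix layouts are assumptions), which shows why a single garbage weight value is enough to blow up the whole vuT block:

    #include <Eigen/Dense>
    #include <Eigen/Sparse>

    // Simplified sketch: accumulate the weighted outer products of the
    // homogeneous deformed positions v_i and rest positions u_i for one bone.
    // Each term corresponds to "temp" in the debug output above; a weight of
    // ~6e+255 makes the sum explode immediately.
    Eigen::Matrix4d accumulateVuT(const Eigen::SparseVector<double>& w,  // one bone's weights over all vertices
                                  const Eigen::Matrix3Xd& v,             // deformed vertex positions
                                  const Eigen::Matrix3Xd& u)             // rest-pose vertex positions
    {
        Eigen::Matrix4d vuT = Eigen::Matrix4d::Zero();
        for (Eigen::SparseVector<double>::InnerIterator it(w); it; ++it) {
            const int i = static_cast<int>(it.index());
            Eigen::Vector4d vh(v(0, i), v(1, i), v(2, i), 1.0);
            Eigen::Vector4d uh(u(0, i), u(1, i), u(2, i), 1.0);
            vuT += it.value() * vh * uh.transpose();
        }
        return vuT;
    }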

I really doubt this is related to the original issue anymore; happy to take the discussion away from this thread. Edit: this was me populating the weights incorrectly, causing the matrices to blow up... sorry for the inconvenience.

robertjoosten avatar Sep 03 '21 10:09 robertjoosten

6.15434e+255

Well, your sparse matrix w is definitely wrong if it produces an it value of 6.15434e+255. You can check the fbxreader part to see whether you are getting the right weight matrix w.
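
For example, a quick sanity check along these lines could catch this early (just a sketch; checkWeights is made up, and it assumes w is stored bones x vertices with one column per vertex whose weights should lie in [0, 1] and sum to 1):

    #include <Eigen/Sparse>
    #include <cmath>
    #include <iostream>

    // Sketch only: verify that every vertex's skinning weights are finite,
    // lie in [0, 1], and sum to roughly one before running the solver.
    bool checkWeights(const Eigen::SparseMatrix<double>& w)
    {
        bool ok = true;
        for (int v = 0; v < w.outerSize(); ++v) {   // columns = vertices (column-major default)
            double sum = 0.0;
            for (Eigen::SparseMatrix<double>::InnerIterator it(w, v); it; ++it) {
                if (!std::isfinite(it.value()) || it.value() < 0.0 || it.value() > 1.0) {
                    std::cerr << "Bad weight " << it.value() << " at bone " << it.row()
                              << ", vertex " << v << "\n";
                    ok = false;
                }
                sum += it.value();
            }
            if (std::abs(sum - 1.0) > 1e-4) {
                std::cerr << "Weights of vertex " << v << " sum to " << sum << "\n";
                ok = false;
            }
        }
        return ok;
    }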

pramanc avatar Sep 18 '21 07:09 pramanc