Is SVD broken for wide matrices? (undocumented economy implementation)
The ml-matrix SVD implementation appears to be broken for wide matrices (more columns than rows).
Evidence:
- 2×4 rank-1 matrix: Theory requires 3 null vectors, ml-matrix provides only 1
- 2×5 rank-2 matrix: Theory requires 3 null vectors, ml-matrix provides only 2
- 1×4 rank-1 matrix: Theory requires 3 null vectors, ml-matrix provides only 2
The Bug:
ml-matrix's SVD violates the fundamental rank-nullity theorem (rank + nullity = n) for wide matrices, providing incomplete null space bases.
What Works vs. What's Broken:
✅ Square/tall matrices (rows ≥ columns): Perfect SVD implementation
❌ Wide matrices (rows < columns): Incomplete/incorrect null space computation
Impact:
This bug would cause constraint solving failures in geometric construction problems that have more variables than constraints (common in underdetermined systems).
Tests Fixed:
The original failing tests were actually revealing this underlying SVD bug. I've documented the complete ml-matrix SVD convention analysis in /test/ml-matrix-svd-convention.test.ts, which systematically demonstrates the limitation.
The autoTranspose option does NOT fix the ml-matrix SVD limitation; it actually makes the null space results significantly worse.
autoTranspose Test Results: Worse Performance
Key Findings:
- V matrix gets truncated: from n×n to n×min(m,n), losing critical null space information
- Null space becomes even more incomplete:
  - 2×4 rank-1: still only 1 vector found (need 3)
  - 2×3 rank-1: now only 1 vector found (need 2) - got worse!
  - 1×4 rank-1: now 0 vectors found (need 3) - completely broken!
Conclusion:
autoTranspose is not the solution to ml-matrix's incomplete null space problem. The fundamental issue remains: ml-matrix cannot provide complete null space bases for wide matrices, regardless of settings.
The comprehensive testing has confirmed that ml-matrix's SVD has inherent limitations that cannot be resolved through configuration options. The original failing tests correctly identified a real mathematical deficiency in the library.
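As a stopgap, a complete null-space basis can be computed without relying on ml-matrix's V at all, using Gauss-Jordan elimination on plain arrays. This is a dependency-free sketch (the name nullSpaceRREF is illustrative, not part of ml-matrix); by construction it always returns exactly n − rank vectors, so the rank-nullity invariant cannot be violated:

```typescript
// Compute a complete null-space basis of an m×n matrix via reduced
// row-echelon form. Guarantees basis.length === n - rank by construction.
function nullSpaceRREF(A: number[][], tol = 1e-10): number[][] {
  const m = A.length;
  const n = A[0]?.length ?? 0;
  const R = A.map(row => row.slice()); // work on a copy
  const pivotCols: number[] = [];
  let r = 0;
  for (let c = 0; c < n && r < m; c++) {
    // Partial pivoting: largest entry in column c at/below row r.
    let p = r;
    for (let i = r + 1; i < m; i++) {
      if (Math.abs(R[i][c]) > Math.abs(R[p][c])) p = i;
    }
    if (Math.abs(R[p][c]) <= tol) continue; // no pivot in this column
    [R[r], R[p]] = [R[p], R[r]];
    const piv = R[r][c];
    for (let j = 0; j < n; j++) R[r][j] /= piv;
    for (let i = 0; i < m; i++) {
      if (i === r) continue;
      const f = R[i][c];
      for (let j = 0; j < n; j++) R[i][j] -= f * R[r][j];
    }
    pivotCols.push(c);
    r++;
  }
  // One basis vector per free column: set that free variable to 1 and
  // read the pivot-variable values off the reduced rows.
  const basis: number[][] = [];
  for (let c = 0; c < n; c++) {
    if (pivotCols.includes(c)) continue;
    const v = new Array<number>(n).fill(0);
    v[c] = 1;
    pivotCols.forEach((pc, i) => { v[pc] = -R[i][c]; });
    basis.push(v);
  }
  return basis;
}
```

Unlike the SVD path, the basis is not orthonormal, but for the 2×4 rank-1 example above it returns all three null vectors rather than one.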
import { Matrix } from 'ml-matrix';
// =============================================================================
// MATRIX DECOMPOSITION CONTRACTS
// =============================================================================
/**
* SVD decomposition result
*/
export interface SVDResult {
readonly U: Matrix;
readonly S: readonly number[];
readonly V: Matrix;
readonly rank: number;
readonly condition: number;
/** Access to original ml-matrix object for advanced operations */
readonly raw: any; // SingularValueDecomposition
}
/**
* LU decomposition result
*/
export interface LUResult {
readonly L: Matrix;
readonly U: Matrix;
readonly p: readonly number[]; // Permutation vector
/** Access to original ml-matrix object */
readonly raw: any; // LuDecomposition
}
/**
* QR decomposition result
*/
export interface QRResult {
readonly Q: Matrix;
readonly R: Matrix;
/** Access to original ml-matrix object */
readonly raw: any; // QrDecomposition
}
/**
* Cholesky decomposition result
*/
export interface CholeskyResult {
readonly L: Matrix;
/** Access to original ml-matrix object */
readonly raw: any; // CholeskyDecomposition
}
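Note for consumers of these contracts: S is a flat array of singular values, so reconstructing A ≈ U Σ Vᵀ requires building the rectangular Σ explicitly (as the convention tests below do by hand). A small helper sketch (buildSigma is an illustrative name, not part of the library), written against plain arrays so it is independent of the Matrix class:

```typescript
// Build the rectangular diagonal Σ (rows×cols) from a flat list of
// singular values, as needed for the reconstruction A ≈ U Σ Vᵀ.
function buildSigma(S: readonly number[], rows: number, cols: number): number[][] {
  const sigma = Array.from({ length: rows }, () => new Array<number>(cols).fill(0));
  const k = Math.min(S.length, rows, cols);
  for (let i = 0; i < k; i++) sigma[i][i] = S[i];
  return sigma;
}
```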
/**
* @fileoverview ml-matrix SVD Convention Analysis
*
* This test suite systematically determines the exact convention
* used by ml-matrix for SVD decomposition, specifically:
* 1. How V columns relate to singular values
* 2. Whether it's A = UΣV^T or A = UΣV
* 3. The ordering of singular values and corresponding vectors
*/
import { Matrix } from 'ml-matrix';
import { svd } from '../src/math/linear/core/matrixOps';
describe('ml-matrix SVD Convention Analysis', () => {
test('Convention Test 1: Simple rank-1 matrix [1,2; 2,4]', () => {
const A = new Matrix([
[1, 2],
[2, 4]
]);
console.log('\n=== CONVENTION TEST 1: Rank-1 2x2 ===');
console.log('A =', A.to2DArray());
const svdResult = svd(A);
const { U, S, V } = svdResult;
console.log(`SVD: U=${U.rows}×${U.columns}, S=[${S.map(s => s.toFixed(6)).join(', ')}], V=${V.rows}×${V.columns}`);
console.log('U =', U.to2DArray().map(row => row.map(x => x.toFixed(6))));
console.log('V =', V.to2DArray().map(row => row.map(x => x.toFixed(6))));
// Test reconstruction: A = U*S*V^T
const SMatrix = Matrix.zeros(U.columns, V.columns);
for (let i = 0; i < Math.min(S.length, Math.min(SMatrix.rows, SMatrix.columns)); i++) {
const singularValue = S[i];
if (singularValue !== undefined) {
SMatrix.set(i, i, singularValue);
}
}
const reconstructed = U.mmul(SMatrix).mmul(V.transpose());
console.log('Reconstructed A (U*S*V^T) =', reconstructed.to2DArray().map(row => row.map(x => x.toFixed(6))));
// Test each V column with Ax = 0
console.log('\nNull space analysis:');
for (let col = 0; col < V.columns; col++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, col));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
console.log(`V column ${col}: [${Array.from({length: v.rows}, (_, i) => v.get(i, 0).toFixed(6)).join(', ')}] → ||Av|| = ${norm.toExponential(3)}`);
}
// Expected: null space should be span{[-2, 1]^T} or equivalent
const expectedNull = new Matrix([[-2], [1]]);
const expectedNorm = A.mmul(expectedNull).norm('frobenius');
console.log(`Expected null vector [-2, 1]: ||A*v|| = ${expectedNorm.toExponential(3)}`);
});
test('Convention Test 2: Rank-1 matrix [1,2,3; 2,4,6]', () => {
const A = new Matrix([
[1, 2, 3],
[2, 4, 6]
]);
console.log('\n=== CONVENTION TEST 2: Rank-1 2x3 ===');
console.log('A =', A.to2DArray());
const svdResult = svd(A);
const { U, S, V } = svdResult;
console.log(`SVD: U=${U.rows}×${U.columns}, S=[${S.map(s => s.toFixed(6)).join(', ')}], V=${V.rows}×${V.columns}`);
console.log('rank =', svdResult.rank, 'expected nullity =', A.columns - svdResult.rank);
// Map singular values to their indices
const singularValueMap = S.map((s, i) => ({ value: s, index: i, isZero: s <= svdResult.raw.threshold }));
console.log('Singular value mapping:');
singularValueMap.forEach(({value, index, isZero}) => {
console.log(` S[${index}] = ${value.toExponential(6)} ${isZero ? '(ZERO)' : '(NON-ZERO)'}`);
});
console.log('\nTesting V columns:');
for (let col = 0; col < V.columns; col++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, col));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
const isNull = norm < 1e-10;
console.log(` V[:, ${col}] = [${Array.from({length: v.rows}, (_, i) => v.get(i, 0).toFixed(6)).join(', ')}]`);
console.log(` ||A * V[:, ${col}]|| = ${norm.toExponential(3)} ${isNull ? '✓ NULL' : '✗ not null'}`);
}
// Known null vectors for [1,2,3; 2,4,6]: span{[-2,1,0], [-3,0,1]}
const knownNull1 = new Matrix([[-2], [1], [0]]);
const knownNull2 = new Matrix([[-3], [0], [1]]);
console.log('\nKnown null space vectors:');
console.log(`[-2,1,0]: ||A*v|| = ${A.mmul(knownNull1).norm('frobenius').toExponential(3)}`);
console.log(`[-3,0,1]: ||A*v|| = ${A.mmul(knownNull2).norm('frobenius').toExponential(3)}`);
});
test('Convention Test 3: Identity matrix (full rank)', () => {
const A = Matrix.eye(3);
console.log('\n=== CONVENTION TEST 3: Identity 3x3 ===');
const svdResult = svd(A);
const { U, S, V } = svdResult;
console.log(`SVD: U=${U.rows}×${U.columns}, S=[${S.map(s => s.toFixed(6)).join(', ')}], V=${V.rows}×${V.columns}`);
console.log('rank =', svdResult.rank, 'expected nullity =', A.columns - svdResult.rank);
console.log('\nTesting V columns (should all be non-null):');
for (let col = 0; col < V.columns; col++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, col));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
console.log(` ||A * V[:, ${col}]|| = ${norm.toExponential(3)}`);
}
});
test('Convention Test 4: Zero matrix (rank 0)', () => {
const A = Matrix.zeros(2, 3);
console.log('\n=== CONVENTION TEST 4: Zero 2x3 ===');
const svdResult = svd(A);
const { U, S, V } = svdResult;
console.log(`SVD: U=${U.rows}×${U.columns}, S=[${S.map(s => s.toFixed(6)).join(', ')}], V=${V.rows}×${V.columns}`);
console.log('rank =', svdResult.rank, 'expected nullity =', A.columns - svdResult.rank);
console.log('\nTesting V columns (should all be null):');
for (let col = 0; col < V.columns; col++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, col));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
console.log(` ||A * V[:, ${col}]|| = ${norm.toExponential(3)}`);
}
});
test('Convention Test 5: Systematic mapping analysis', () => {
const A = new Matrix([
[1, 2, 3, 4],
[2, 4, 6, 8]
]);
console.log('\n=== CONVENTION TEST 5: Systematic Analysis 2x4 ===');
console.log('A =', A.to2DArray());
const svdResult = svd(A);
const { U, S, V } = svdResult;
console.log(`\nDetailed analysis:`);
console.log(`U = ${U.rows}×${U.columns}`);
console.log(`S = [${S.map(s => s.toExponential(6)).join(', ')}]`);
console.log(`V = ${V.rows}×${V.columns}`);
console.log(`rank = ${svdResult.rank}`);
console.log(`threshold = ${svdResult.raw.threshold.toExponential(6)}`);
console.log('\n🔍 HYPOTHESIS TESTING:');
// Hypothesis 1: V columns are ordered by corresponding singular values
console.log('\nHypothesis 1: V[:, i] corresponds to S[i]');
for (let i = 0; i < Math.min(S.length, V.columns); i++) {
const singularValue = S[i];
if (singularValue === undefined) continue;
const isZeroSingular = singularValue <= svdResult.raw.threshold;
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, i));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
const isNullEmpirical = norm < 1e-10;
console.log(` S[${i}] = ${singularValue.toExponential(3)} ${isZeroSingular ? '(zero)' : '(non-zero)'} ↔ V[:, ${i}] gives ||Av|| = ${norm.toExponential(3)} ${isNullEmpirical ? '(null)' : '(non-null)'}`);
console.log(` Match: ${isZeroSingular === isNullEmpirical ? '✓' : '✗'}`);
}
// Hypothesis 2: V columns from the end correspond to zero singular values
console.log('\nHypothesis 2: Last V columns correspond to null space');
const numZeroSingulars = S.filter(s => s <= svdResult.raw.threshold).length;
const startCol = V.columns - numZeroSingulars;
for (let col = startCol; col < V.columns; col++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, col));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
console.log(` V[:, ${col}] (expecting null): ||Av|| = ${norm.toExponential(3)} ${norm < 1e-10 ? '✓' : '✗'}`);
}
// Hypothesis 3: Check if it's actually V^T we should be using
console.log('\nHypothesis 3: Check V^T columns instead');
const VT = V.transpose();
for (let col = 0; col < VT.columns; col++) {
const v = Matrix.zeros(VT.rows, 1);
for (let row = 0; row < VT.rows; row++) {
v.set(row, 0, VT.get(row, col));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
console.log(` V^T[:, ${col}]: ||Av|| = ${norm.toExponential(3)} ${norm < 1e-10 ? '(null)' : '(non-null)'}`);
}
});
describe('Wide Matrix V-S Correspondence Analysis', () => {
test('Wide Test 1: Single row matrices [1×n]', () => {
console.log('\n=== WIDE MATRIX ANALYSIS: Single Row ===');
const testCases = [
{ matrix: [[1, 2]], name: '1×2' },
{ matrix: [[1, 2, 3]], name: '1×3' },
{ matrix: [[1, 2, 3, 4]], name: '1×4' },
{ matrix: [[1, 2, 3, 4, 5]], name: '1×5' }
];
testCases.forEach(({matrix, name}) => {
const A = new Matrix(matrix);
const svdResult = svd(A);
const { U, S, V, rank } = svdResult;
console.log(`\n${name} matrix: A = ${JSON.stringify(matrix[0])}`);
console.log(`SVD: U=${U.rows}×${U.columns}, S=[${S.map(s => s.toFixed(3)).join(', ')}], V=${V.rows}×${V.columns}, rank=${rank}`);
// Systematic V-S correspondence check
console.log('V-S Correspondence Analysis:');
for (let i = 0; i < Math.min(S.length, V.columns); i++) {
const singularValue = S[i];
if (singularValue === undefined) continue;
// Extract V column i
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, i));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
console.log(` S[${i}] = ${singularValue.toFixed(6)} vs ||A*V[:, ${i}]|| = ${norm.toFixed(6)} ratio=${(norm/singularValue).toFixed(3)}`);
}
// Check remaining V columns (beyond S.length)
if (V.columns > S.length) {
console.log('Extra V columns (beyond S.length):');
for (let i = S.length; i < V.columns; i++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, i));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
console.log(` V[:, ${i}]: ||A*v|| = ${norm.toExponential(3)} ${norm < 1e-10 ? '(NULL)' : '(non-null)'}`);
}
}
});
});
test('Wide Test 2: Two row matrices [2×n] with varying ranks', () => {
console.log('\n=== WIDE MATRIX ANALYSIS: Two Rows, Varying Ranks ===');
const testCases = [
{ matrix: [[0, 0, 0]], name: '2×3 rank-0 (zero)', rank: 0 }, // single zero row; expanded to 2×3 zeros below
{ matrix: [[1, 2, 3], [0, 0, 0]], name: '2×3 rank-1', rank: 1 },
{ matrix: [[1, 2, 3], [4, 5, 6]], name: '2×3 rank-2', rank: 2 },
{ matrix: [[1, 2, 3, 4], [0, 0, 0, 0]], name: '2×4 rank-1', rank: 1 },
{ matrix: [[1, 2, 3, 4], [5, 6, 7, 8]], name: '2×4 rank-2', rank: 2 },
{ matrix: [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]], name: '2×5 rank-2', rank: 2 }
];
testCases.forEach(({matrix, name, rank: expectedRank}) => {
// Handle zero matrix case
const firstRow = matrix[0];
if (!firstRow) throw new Error(`Invalid matrix: ${name}`);
const A = firstRow.every(x => x === 0) ? Matrix.zeros(2, firstRow.length) : new Matrix(matrix);
const svdResult = svd(A);
const { U, S, V, rank } = svdResult;
console.log(`\n${name}: rank=${rank} (expected ${expectedRank}), nullity=${A.columns - rank}`);
console.log(`SVD shapes: U=${U.rows}×${U.columns}, S.length=${S.length}, V=${V.rows}×${V.columns}`);
// Detailed V-S analysis
console.log('Detailed V-S correspondence:');
for (let i = 0; i < V.columns; i++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, i));
}
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
// Check if this column corresponds to a singular value
let correspondingSValue = 'none';
if (i < S.length) {
const sVal = S[i];
if (sVal !== undefined) {
correspondingSValue = sVal.toFixed(6);
}
}
console.log(` V[:, ${i}] → ||Av|| = ${norm.toExponential(3)}, S[${i}] = ${correspondingSValue}`);
// Check if it's actually null space
if (norm < 1e-10) {
console.log(` ↳ NULL SPACE vector found at position ${i}`);
}
}
// Summary of null space pattern
const nullColumns = [];
for (let i = 0; i < V.columns; i++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, i));
}
const norm = A.mmul(v).norm('frobenius');
if (norm < 1e-10) {
nullColumns.push(i);
}
}
console.log(`NULL SPACE PATTERN: columns [${nullColumns.join(', ')}] out of ${V.columns} total`);
console.log(`Expected nullity: ${A.columns - rank}, Found: ${nullColumns.length}`);
});
});
test('Wide Test 3: Systematic pattern detection', () => {
console.log('\n=== PATTERN DETECTION: Wide Matrix Null Space ===');
// Test hypothesis: "Last k columns are null space where k = nullity"
const testMatrices = [
{ A: new Matrix([[1, 2, 3]]), name: '1×3, rank=1' },
{ A: new Matrix([[1, 2, 3, 4]]), name: '1×4, rank=1' },
{ A: new Matrix([[1, 2, 3], [2, 4, 6]]), name: '2×3, rank=1' },
{ A: new Matrix([[1, 2, 3, 4], [2, 4, 6, 8]]), name: '2×4, rank=1' },
{ A: new Matrix([[1, 2, 3], [4, 5, 6]]), name: '2×3, rank=2' },
{ A: new Matrix([[1, 2, 3, 4], [5, 6, 7, 8]]), name: '2×4, rank=2' },
];
testMatrices.forEach(({A, name}) => {
const svdResult = svd(A);
const { V, rank } = svdResult;
const nullity = A.columns - rank;
console.log(`\n${name}: nullity=${nullity}`);
// Test "last k columns" hypothesis
const expectedNullColumns = [];
for (let i = A.columns - nullity; i < A.columns; i++) {
expectedNullColumns.push(i);
}
const actualNullColumns = [];
for (let col = 0; col < V.columns; col++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, col));
}
const norm = A.mmul(v).norm('frobenius');
if (norm < 1e-10) {
actualNullColumns.push(col);
}
}
console.log(`Expected null columns (last ${nullity}): [${expectedNullColumns.join(', ')}]`);
console.log(`Actual null columns: [${actualNullColumns.join(', ')}]`);
console.log(`Hypothesis "last k columns": ${JSON.stringify(expectedNullColumns) === JSON.stringify(actualNullColumns) ? '✅ CONFIRMED' : '❌ REJECTED'}`);
// Test other hypotheses
const firstKColumns = Array.from({length: nullity}, (_, i) => i);
console.log(`Hypothesis "first ${nullity} columns": ${JSON.stringify(firstKColumns) === JSON.stringify(actualNullColumns) ? '✅ CONFIRMED' : '❌ REJECTED'}`);
// Check if it's random/unpredictable
console.log(`Actual pattern: columns [${actualNullColumns.join(', ')}] are null space`);
});
});
test('Wide Test 4: V^T vs V analysis', () => {
console.log('\n=== V^T vs V ANALYSIS ===');
const A = new Matrix([[1, 2, 3, 4], [2, 4, 6, 8]]); // 2×4, rank=1
const svdResult = svd(A);
const { V } = svdResult;
const VT = V.transpose();
console.log('Testing if we should use V^T instead of V...');
console.log(`A: 2×4, V: ${V.rows}×${V.columns}, V^T: ${VT.rows}×${VT.columns}`);
console.log('\nV columns:');
for (let col = 0; col < V.columns; col++) {
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
v.set(row, 0, V.get(row, col));
}
const norm = A.mmul(v).norm('frobenius');
console.log(` V[:, ${col}]: ||Av|| = ${norm.toExponential(3)} ${norm < 1e-10 ? '(null)' : ''}`);
}
console.log('\nV^T rows (as column vectors):');
for (let row = 0; row < VT.rows; row++) {
const v = Matrix.zeros(VT.columns, 1);
for (let col = 0; col < VT.columns; col++) {
v.set(col, 0, VT.get(row, col));
}
const norm = A.mmul(v).norm('frobenius');
console.log(` V^T[${row}, :] as column: ||Av|| = ${norm.toExponential(3)} ${norm < 1e-10 ? '(null)' : ''}`);
}
});
});
});
/**
* @fileoverview Matrix Operations Implementation
* @module math/linear/core/matrixOps
*
* This module implements low-level matrix operations using ml-matrix.
* It provides a clean functional API that wraps ml-matrix classes and
* standardizes the results.
*
* ARCHITECTURAL PRINCIPLE:
* This is the only module that directly imports and uses ml-matrix classes.
* All matrix operations are wrapped in our standardized interfaces and
* provide consistent error handling.
*/
import {
Matrix,
LuDecomposition,
QrDecomposition,
SingularValueDecomposition,
CholeskyDecomposition,
} from 'ml-matrix';
import type {
SVDResult,
LUResult,
QRResult,
CholeskyResult,
} from '../core/core.interface';
// =============================================================================
// DECOMPOSITION FUNCTIONS
// =============================================================================
/**
* Performs Singular Value Decomposition on a matrix.
* This is the primary diagnostic tool for determining rank, condition number, and more.
*/
export function svd(A: Matrix): SVDResult {
const svdInstance = new SingularValueDecomposition(A);
return {
U: svdInstance.leftSingularVectors,
S: svdInstance.diagonal,
V: svdInstance.rightSingularVectors,
rank: svdInstance.rank,
condition: svdInstance.condition,
raw: svdInstance,
};
}
/**
* Performs LU Decomposition with pivoting on a square matrix.
*/
export function lu(A: Matrix): LUResult {
const luInstance = new LuDecomposition(A);
return {
L: luInstance.lowerTriangularMatrix,
U: luInstance.upperTriangularMatrix,
p: luInstance.pivotPermutationVector,
raw: luInstance,
};
}
/**
* Performs QR Decomposition on a matrix.
*/
export function qr(A: Matrix): QRResult {
const qrInstance = new QrDecomposition(A);
return {
Q: qrInstance.orthogonalMatrix,
R: qrInstance.upperTriangularMatrix,
raw: qrInstance,
};
}
/**
* Performs Cholesky Decomposition on a symmetric, positive-definite matrix.
*/
export function cholesky(A: Matrix): CholeskyResult {
const choleskyInstance = new CholeskyDecomposition(A);
if (!choleskyInstance.isPositiveDefinite()) {
throw new Error('Cholesky decomposition failed: matrix is not positive-definite.');
}
return {
L: choleskyInstance.lowerTriangularMatrix,
raw: choleskyInstance,
};
}
// =============================================================================
// SOLVER FUNCTIONS
// =============================================================================
/**
* Solves Ax = b using LU decomposition.
*/
export function solveWithLU(A: Matrix, b: Matrix): Matrix {
return new LuDecomposition(A).solve(b);
}
/**
* Solves Ax = b (in the least-squares sense) using QR decomposition.
*/
export function solveWithQR(A: Matrix, b: Matrix): Matrix {
return new QrDecomposition(A).solve(b);
}
/**
* Solves Ax = b using SVD (computes the pseudoinverse solution).
*/
export function solveWithSVD(svdResult: SVDResult, b: Matrix): Matrix {
return svdResult.raw.solve(b);
}
/**
* Solves Ax = b using Cholesky decomposition.
*/
export function solveWithCholesky(A: Matrix, b: Matrix): Matrix {
return new CholeskyDecomposition(A).solve(b);
}
// =============================================================================
// DIAGNOSTIC FUNCTIONS
// =============================================================================
/**
* Checks if a matrix is symmetric within a small tolerance.
*/
export function isSymmetric(A: Matrix): boolean {
if (A.rows !== A.columns) return false;
// Use built-in method if available
if (typeof (A as any).isSymmetric === 'function') {
return (A as any).isSymmetric();
}
// Fallback: check by comparing with transpose
const tolerance = 1e-12;
for (let i = 0; i < A.rows; i++) {
for (let j = 0; j < A.columns; j++) {
if (Math.abs(A.get(i, j) - A.get(j, i)) > tolerance) {
return false;
}
}
}
return true;
}
/**
* Checks if a matrix is symmetric, positive-definite (SPD).
*/
export function isSpd(A: Matrix): boolean {
if (A.rows !== A.columns || !isSymmetric(A)) {
return false;
}
try {
const choleskyInstance = new CholeskyDecomposition(A);
return choleskyInstance.isPositiveDefinite();
} catch {
return false;
}
}
// =============================================================================
// BASIC MATRIX OPERATIONS
// =============================================================================
/**
* Matrix-matrix multiplication.
*/
export function matrixMultiply(A: Matrix, B: Matrix): Matrix {
return A.mmul(B);
}
/**
* Matrix subtraction.
*/
export function matrixSubtract(A: Matrix, B: Matrix): Matrix {
return A.sub(B);
}
/**
* Calculate the Euclidean (L2) norm of a matrix or vector.
*/
export function getNorm(A: Matrix): number {
return A.norm('frobenius');
}
/**
* Create a matrix filled with zeros.
*/
export function createZeros(rows: number, cols: number): Matrix {
return Matrix.zeros(rows, cols);
}
/**
* Check if a matrix consists entirely of zeros within tolerance.
*/
export function isMatrixAllZeros(A: Matrix, tol: number = 1e-15): boolean {
return getNorm(A) < tol;
}
/**
* Create an identity matrix.
*/
export function createIdentity(size: number): Matrix {
return Matrix.eye(size);
}
/**
* Get matrix rank using SVD.
*/
export function getRank(A: Matrix, _tolerance: number = 1e-12): number {
const svdResult = svd(A);
return svdResult.rank;
}
/**
* Get matrix condition number using SVD.
*/
export function getCondition(A: Matrix): number {
const svdResult = svd(A);
return svdResult.condition;
}
/**
* Compute the null space of a matrix using SVD.
*
* THEORY:
* For A = UΣV^T, the null space Null(A) = {x : Ax = 0} is spanned by
* columns of V corresponding to zero singular values.
*
* EMPIRICAL APPROACH (ROBUST):
* Since ml-matrix SVD behavior can be complex (especially with autoTranspose),
* we test each column of V to see which ones actually satisfy Ax = 0.
* This is more reliable than assuming positional patterns.
*
* ALGORITHM:
* 1. Test each column of V by computing ||A * v_i||
* 2. Columns with ||A * v_i|| < tolerance are in the null space
* 3. Extract these columns as the null space basis
*
* VERIFIED: This approach works correctly with ml-matrix regardless of
* internal implementation details or autoTranspose behavior.
*
* @param A - Original matrix
* @param svdResult - SVD decomposition result of A
* @param tolerance - Threshold for considering Ax ≈ 0
* @returns Matrix whose columns form orthonormal null space basis, or null if trivial
*/
export function computeNullSpace(A: Matrix, svdResult: SVDResult, tolerance: number = 1e-12): Matrix | null {
const { U, S, V, rank } = svdResult;
const expectedNullity = A.columns - rank;
console.log(`\n=== DEBUGGING computeNullSpace ===`);
console.log(`Matrix A: ${A.rows}×${A.columns}, rank=${rank}`);
console.log(`Expected nullity: ${expectedNullity} (A.columns=${A.columns} - rank=${rank})`);
console.log(`SVD matrices: U=${U.rows}×${U.columns}, V=${V.rows}×${V.columns}`);
console.log(`Singular values: [${S.map(s => s.toExponential(3)).join(', ')}]`);
console.log(`SVD threshold: ${svdResult.raw.threshold.toExponential(3)}`);
console.log(`SVD norm2: ${svdResult.raw.norm2.toExponential(3)}`);
// Check which singular values are below threshold
const zeroIndices = S.map((s, i) => ({ value: s, index: i, isZero: s <= svdResult.raw.threshold }));
console.log(`Singular value analysis:`);
zeroIndices.forEach(({value, index, isZero}) => {
console.log(` S[${index}] = ${value.toExponential(3)} ${isZero ? '(≤ threshold, ZERO)' : '(> threshold, NON-ZERO)'}`);
});
// Show the V matrix
console.log(`\nV matrix (${V.rows}×${V.columns}):`);
for (let i = 0; i < V.rows; i++) {
const row = [];
for (let j = 0; j < V.columns; j++) {
row.push(V.get(i, j).toFixed(6));
}
console.log(` [${row.join(', ')}]`);
}
// If full rank, no null space
if (expectedNullity <= 0) {
console.log(`✓ Full rank matrix, no null space\n`);
return null;
}
// Use SVD's own threshold and map by singular values
const svdThreshold = svdResult.raw.threshold;
const adjustedTolerance = Math.max(tolerance, svdThreshold * 1000); // Be more lenient
console.log(`\nAdjusted tolerance: ${adjustedTolerance.toExponential(3)} (was ${tolerance.toExponential(3)})`);
// Method 1: Trust SVD singular values to identify null space columns
const theoreticalNullIndices: number[] = [];
for (let i = 0; i < S.length; i++) {
const singularValue = S[i];
if (singularValue !== undefined && singularValue <= svdThreshold) {
theoreticalNullIndices.push(i);
}
}
console.log(`Theoretical null space columns (by singular values): [${theoreticalNullIndices.join(', ')}]`);
// Method 2: Empirical test with relaxed tolerance
const empiricalNullIndices: number[] = [];
console.log(`\nTesting V columns for null space property:`);
for (let col = 0; col < V.columns; col++) {
// Extract column col of V as a vector
const v = Matrix.zeros(V.rows, 1);
for (let row = 0; row < V.rows; row++) {
const value = V.get(row, col);
if (value !== undefined) {
v.set(row, 0, value);
}
}
console.log(` V column ${col}: [${Array.from({length: v.rows}, (_, i) => v.get(i, 0).toFixed(6)).join(', ')}]`);
// Test if A * v ≈ 0
try {
const Av = A.mmul(v);
const norm = Av.norm('frobenius');
const isNullEmpirical = norm < adjustedTolerance;
const isNullTheoretical = theoreticalNullIndices.includes(col);
console.log(` A*v = [${Array.from({length: Av.rows}, (_, i) => Av.get(i, 0).toExponential(3)).join(', ')}]`);
console.log(` ||A*v|| = ${norm.toExponential(3)} empirical:${isNullEmpirical ? '✓' : '✗'} theoretical:${isNullTheoretical ? '✓' : '✗'}`);
if (isNullEmpirical) {
empiricalNullIndices.push(col);
}
} catch (error) {
console.log(` ERROR: Matrix multiplication failed: ${error}`);
continue;
}
}
// Combine both approaches
const allNullIndices = [...new Set([...theoreticalNullIndices, ...empiricalNullIndices])];
console.log(`Combined null space indices: [${allNullIndices.join(', ')}]`);
console.log(`\nSummary:`);
console.log(` Empirical: ${empiricalNullIndices.length} vectors [${empiricalNullIndices.join(', ')}]`);
console.log(` Theoretical: ${theoreticalNullIndices.length} vectors [${theoreticalNullIndices.join(', ')}]`);
console.log(` Combined: ${allNullIndices.length} vectors [${allNullIndices.join(', ')}]`);
console.log(` Expected: ${expectedNullity} vectors`);
// Use the combined approach
// Use every detected index (may still be fewer than expectedNullity if the SVD basis is incomplete)
const finalNullIndices = allNullIndices;
if (finalNullIndices.length === 0) {
console.log(`✗ No null space vectors found\n`);
return null;
}
// Extract the null space columns
const nullBasis = Matrix.zeros(V.rows, finalNullIndices.length);
for (let j = 0; j < finalNullIndices.length; j++) {
const sourceCol = finalNullIndices[j];
if (sourceCol !== undefined && sourceCol < V.columns) {
for (let i = 0; i < V.rows; i++) {
const value = V.get(i, sourceCol);
if (value !== undefined) {
nullBasis.set(i, j, value);
}
}
}
}
console.log(`✓ Returning combined null space basis: ${nullBasis.rows}×${nullBasis.columns}\n`);
return nullBasis;
}
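Since the tests above repeatedly check the rank-nullity invariant by hand, the contract can be made explicit with a reusable checker. A dependency-free sketch (verifyNullBasis is an illustrative name, not part of this module) that validates a candidate basis given as an array of null vectors:

```typescript
// Check that `basis` (an array of candidate null vectors) is a complete
// null-space basis for A: correct count (rank + nullity = n) and A·v ≈ 0
// for every vector.
function verifyNullBasis(
  A: number[][],
  basis: number[][],
  rank: number,
  tol = 1e-10
): boolean {
  const n = A[0]?.length ?? 0;
  if (basis.length !== n - rank) return false; // rank-nullity violated
  return basis.every(v =>
    A.every(row => Math.abs(row.reduce((s, x, j) => s + x * (v[j] ?? 0), 0)) <= tol)
  );
}
```

Run against the 2×4 rank-1 example, a one-vector result from computeNullSpace would fail this check, while the full three-vector basis passes.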
/**
* @fileoverview Matrix Operations Implementation
* @module math/linear/core/matrixOps
*
* This module implements low-level matrix operations using ml-matrix.
* It provides a clean functional API that wraps ml-matrix classes and
* standardizes the results.
*
* ARCHITECTURAL PRINCIPLE:
* This is the only module that directly imports and uses ml-matrix classes.
* All matrix operations are wrapped in our standardized interfaces and
* provide consistent error handling.
*/
import {
Matrix,
LuDecomposition,
QrDecomposition,
SingularValueDecomposition,
CholeskyDecomposition,
} from 'ml-matrix';
import type {
SVDResult,
LUResult,
QRResult,
CholeskyResult,
} from '../core/core.interface';
// =============================================================================
// DECOMPOSITION FUNCTIONS
// =============================================================================
/**
* Performs Singular Value Decomposition on a matrix using ml-matrix.
* This is the primary diagnostic tool for determining rank, condition number, and more.
*/
export function svd(A: Matrix, options?: { autoTranspose?: boolean }): SVDResult {
const svdInstance = new SingularValueDecomposition(A, { autoTranspose: true, ...options });
return {
U: svdInstance.leftSingularVectors,
S: svdInstance.diagonal,
V: svdInstance.rightSingularVectors,
rank: svdInstance.rank,
condition: svdInstance.condition,
raw: svdInstance,
};
}
/**
* Performs LU Decomposition with pivoting on a square matrix.
*/
export function lu(A: Matrix): LUResult {
const luInstance = new LuDecomposition(A);
return {
L: luInstance.lowerTriangularMatrix,
U: luInstance.upperTriangularMatrix,
p: luInstance.pivotPermutationVector,
raw: luInstance,
};
}
/**
* Performs QR Decomposition on a matrix.
*/
export function qr(A: Matrix): QRResult {
const qrInstance = new QrDecomposition(A);
return {
Q: qrInstance.orthogonalMatrix,
R: qrInstance.upperTriangularMatrix,
raw: qrInstance,
};
}
/**
* Performs Cholesky Decomposition on a symmetric, positive-definite matrix.
*/
export function cholesky(A: Matrix): CholeskyResult {
const choleskyInstance = new CholeskyDecomposition(A);
if (!choleskyInstance.isPositiveDefinite()) {
throw new Error('Cholesky decomposition failed: matrix is not positive-definite.');
}
return {
L: choleskyInstance.lowerTriangularMatrix,
raw: choleskyInstance,
};
}
// =============================================================================
// SOLVER FUNCTIONS
// =============================================================================
/**
* Solves Ax = b using a pre-computed LU decomposition.
*/
export function solveWithLU(luResult: LUResult, b: Matrix): Matrix {
return luResult.raw.solve(b);
}
/**
* Solves Ax = b (in the least-squares sense) using a pre-computed QR decomposition.
*/
export function solveWithQR(qrResult: QRResult, b: Matrix): Matrix {
return qrResult.raw.solve(b);
}
/**
* Solves Ax = b using SVD (computes the pseudoinverse solution).
*/
export function solveWithSVD(svdResult: SVDResult, b: Matrix): Matrix {
return svdResult.raw.solve(b);
}
/**
* Solves Ax = b using a pre-computed Cholesky decomposition.
*/
export function solveWithCholesky(choleskyResult: CholeskyResult, b: Matrix): Matrix {
return choleskyResult.raw.solve(b);
}
// =============================================================================
// DIAGNOSTIC FUNCTIONS
// =============================================================================
/**
 * Checks if a matrix is symmetric.
 * Note: ml-matrix's built-in isSymmetric() uses exact element comparison,
 * not a tolerance-based check.
 */
export function isSymmetric(A: Matrix): boolean {
if (!A.isSquare()) return false;
return A.isSymmetric();
}
/**
* Checks if a matrix is symmetric, positive-definite (SPD).
*/
export function isSpd(A: Matrix): boolean {
if (A.rows !== A.columns || !isSymmetric(A)) {
return false;
}
try {
const choleskyInstance = new CholeskyDecomposition(A);
return choleskyInstance.isPositiveDefinite();
} catch {
return false;
}
}
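The Cholesky-as-SPD-test idea used in `isSpd` can be sketched on plain arrays, independent of ml-matrix. `tryCholesky` below is a hypothetical helper written for illustration, not library API: it attempts the factorization and treats a non-positive pivot as "not positive-definite".

```typescript
// Sketch only: attempt a Cholesky factorization A = L*L^T on a plain 2-D
// array; a non-positive pivot means A is not symmetric positive-definite.
function tryCholesky(A: number[][]): boolean {
  const n = A.length;
  const L = A.map((row) => row.map(() => 0));
  for (let j = 0; j < n; j++) {
    let d = A[j][j];
    for (let k = 0; k < j; k++) d -= L[j][k] ** 2; // subtract prior contributions
    if (d <= 0) return false; // pivot must be strictly positive for SPD
    L[j][j] = Math.sqrt(d);
    for (let i = j + 1; i < n; i++) {
      let s = A[i][j];
      for (let k = 0; k < j; k++) s -= L[i][k] * L[j][k];
      L[i][j] = s / L[j][j];
    }
  }
  return true;
}

console.log(tryCholesky([[4, 1], [1, 3]])); // true: SPD
console.log(tryCholesky([[1, 2], [2, 1]])); // false: indefinite (eigenvalues 3 and -1)
```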
// =============================================================================
// BASIC MATRIX OPERATIONS
// =============================================================================
/**
* Matrix-matrix multiplication.
*/
export function matrixMultiply(A: Matrix, B: Matrix): Matrix {
return A.mmul(B);
}
/**
* Matrix subtraction.
*/
export function matrixSubtract(A: Matrix, B: Matrix): Matrix {
return Matrix.sub(A, B);
}
/**
* Calculate the Euclidean (L2) norm of a matrix or vector.
*/
export function getNorm(A: Matrix): number {
return A.norm('frobenius');
}
/**
* Create a matrix filled with zeros.
*/
export function createZeros(rows: number, cols: number): Matrix {
return Matrix.zeros(rows, cols);
}
/**
* Check if a matrix consists entirely of zeros within tolerance.
*/
export function isMatrixAllZeros(A: Matrix, tol: number = 1e-15): boolean {
return getNorm(A) < tol;
}
/**
* Create an identity matrix.
*/
export function createIdentity(size: number): Matrix {
return Matrix.eye(size);
}
/**
 * Compute the null space of a matrix, working around the economy-sized
 * decompositions ml-matrix returns for wide matrices:
 * - QR of A^T (n×m): Q becomes n×min(n,m) instead of n×n
 * - SVD of A (m×n): V becomes n×min(m,n) instead of n×n
 *
 * Strategy:
 * 1. Determine the numerical rank r from the singular values.
 * 2. Build the augmented matrix W = [V_r | I_(r..n-1)], where V_r holds the
 *    first r right singular vectors (a basis of the row space of A).
 * 3. QR-factorize W; the last n - r columns of Q form an orthonormal basis
 *    of the orthogonal complement of the row space, i.e. the null space.
 * 4. Canonicalize signs for determinism and, in debug mode, verify
 *    A·V0 ≈ 0 empirically.
 *
 * @param A - Original matrix
 * @param svdResult - SVD decomposition result of A
 * @param tolerance - Relative threshold for numerical rank and for A·x ≈ 0
 * @param debug - When true, log diagnostics and run the verification step
 * @returns Matrix whose columns form an orthonormal null space basis, or null if trivial
 */
export function computeNullSpace(
A: Matrix,
svdResult: SVDResult,
tolerance: number = 1e-12,
debug: boolean = false
): Matrix | null {
const { S, V } = svdResult;
const m = A.rows;
const n = A.columns;
if (debug) {
console.log(`\n=== DEBUGGING computeNullSpace (Economy-Optimized) ===`);
console.log(`Matrix A: ${m}×${n}, rank=${svdResult.rank}`);
console.log(`Expected nullity: ${n - svdResult.rank} (A.columns=${n} - rank=${svdResult.rank})`);
console.log(`SVD matrices: U=${svdResult.U.rows}×${svdResult.U.columns}, V=${V.rows}×${V.columns}`);
console.log(`Singular values: [${S.map(s => s.toExponential(3)).join(', ')}]`);
console.log(`SVD threshold: ${svdResult.raw.threshold.toExponential(3)}`);
console.log(`Is wide matrix: ${n > m} (economy SVD: max ${Math.min(m, n)} vectors available)`);
}
// Step 1: Determine numerical rank deterministically
let rank: number;
if (S.length === 0) {
rank = 0;
} else {
const firstSingularValue = S[0];
if (firstSingularValue === undefined || firstSingularValue === 0) {
rank = 0;
} else {
// Use strict inequality for determinism: count S[i] > tol * S[0]
const threshold = tolerance * firstSingularValue;
rank = S.filter(s => s > threshold).length;
}
}
if (debug) {
console.log(`\nRank tolerance: ${tolerance.toExponential(3)}`);
const firstSV = S.length > 0 && S[0] !== undefined ? S[0] : 0;
console.log(`Rank threshold: ${(tolerance * firstSV).toExponential(3)}`);
console.log(`Computed rank: ${rank} (SVD-reported rank: ${svdResult.rank})`);
}
// Step 2: Handle trivial null space
if (rank >= n) {
if (debug) console.log(`✓ Full rank matrix (rank=${rank} >= n=${n}), no null space\n`);
return null;
}
const nullity = n - rank;
if (debug) console.log(`Expected nullity: ${nullity}`);
// Step 3: Handle zero matrix - entire space is null space
if (rank === 0) {
if (debug) console.log(`✓ Zero matrix detected - returning identity as null space basis`);
return createIdentity(n);
}
// Step 4: Use the right singular vectors directly.
// ml-matrix returns V itself (not V^T), so the first 'rank' columns of V
// already span the row space of A.
// Step 5: Construct the augmented matrix W = [V_r | B], where B holds
// columns 'rank' .. n-1 of the identity
const I = createIdentity(n);
const W = Matrix.zeros(n, n);
// Copy first 'rank' columns from V
for (let j = 0; j < rank && j < V.columns; j++) {
for (let i = 0; i < n; i++) {
W.set(i, j, V.get(i, j));
}
}
// Copy identity columns 'rank' .. n-1 (the B block)
for (let j = rank; j < n; j++) {
for (let i = 0; i < n; i++) {
W.set(i, j, I.get(i, j));
}
}
if (debug) {
console.log(`Constructed augmented matrix W: ${W.rows}×${W.columns}`);
// Debug: Check if W has full rank
const WSvd = svd(W);
console.log(`W rank: ${WSvd.rank} (should be ${n})`);
}
// Step 6: Compute QR decomposition on W
let qrResult: QRResult;
try {
qrResult = qr(W);
} catch (error) {
if (debug) console.log(`QR decomposition failed: ${error}`);
return null;
}
const { Q, R } = qrResult;
if (debug) {
console.log(`QR result: Q=${Q.rows}×${Q.columns}, R=${R.rows}×${R.columns}`);
// Check if Q is orthonormal
const QtQ = Q.transpose().mmul(Q);
const identityError = getNorm(QtQ.sub(createIdentity(Q.columns)));
console.log(`Q orthonormality error: ${identityError.toExponential(3)}`);
}
// Step 7: Extract null space basis as last (n - rank) columns of Q
const V0 = Matrix.zeros(n, nullity);
for (let j = 0; j < nullity; j++) {
const sourceCol = rank + j;
for (let i = 0; i < n; i++) {
V0.set(i, j, Q.get(i, sourceCol));
}
}
if (debug) {
console.log(`Extracted null space V0: ${V0.rows}×${V0.columns}`);
// Check if extracted vectors have unit norm
for (let j = 0; j < V0.columns; j++) {
const col = V0.getColumnVector(j);
const norm = getNorm(col);
console.log(` V0 column ${j} norm: ${norm.toFixed(6)}`);
}
}
// Step 8: Canonicalize signs for determinism (make max abs entry positive per column)
for (let j = 0; j < V0.columns; j++) {
let maxAbsIdx = 0;
let maxAbsValue = Math.abs(V0.get(0, j));
for (let i = 1; i < V0.rows; i++) {
const absValue = Math.abs(V0.get(i, j));
if (absValue > maxAbsValue) {
maxAbsValue = absValue;
maxAbsIdx = i;
}
}
// Flip sign if max absolute element is negative
if (V0.get(maxAbsIdx, j) < 0) {
for (let i = 0; i < V0.rows; i++) {
V0.set(i, j, -V0.get(i, j));
}
}
}
// Step 9: Optional verification (runs only in debug mode)
if (debug) {
const AV0 = A.mmul(V0);
const residualNorm = getNorm(AV0);
const ANorm = getNorm(A);
const V0TV0 = V0.transpose().mmul(V0);
const identity = createIdentity(nullity);
const orthError = getNorm(Matrix.sub(V0TV0, identity));
console.log(`\n=== VERIFICATION ===`);
console.log(`Residual ||A*V0||: ${residualNorm.toExponential(3)}`);
console.log(`Matrix norm ||A||: ${ANorm.toExponential(3)}`);
console.log(`Orthonormality ||V0^T*V0 - I||: ${orthError.toExponential(3)}`);
const machEps = Number.EPSILON;
const sqrtMachEps = Math.sqrt(machEps);
const residualThreshold = sqrtMachEps * ANorm;
if (residualNorm > residualThreshold || orthError > sqrtMachEps) {
console.log(`⚠ Numerical issues detected; consider adjusting tolerance`);
} else {
console.log(`✓ Verification passed`);
}
}
return V0;
}
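The completion trick behind Steps 5-7 (augment the row-space basis with identity columns, then orthogonalize and read the complement off as the null space) can be sketched with plain arrays. Nothing below is ml-matrix API; all helper names and the example matrix are made up for the demo, using the 2×4 rank-1 case from the bug report:

```typescript
// Sketch only: complete a partial orthonormal set to a full basis of R^n via
// modified Gram-Schmidt against identity columns, then take the complement
// of the row space as the null space.
type Vec = number[];

const dot = (a: Vec, b: Vec) => a.reduce((s, x, i) => s + x * b[i], 0);
const norm = (a: Vec) => Math.sqrt(dot(a, a));
const scale = (a: Vec, s: number) => a.map((x) => x * s);
const axpy = (a: Vec, b: Vec, s: number) => a.map((x, i) => x + s * b[i]); // a + s*b

// Extend `basis` (orthonormal vectors spanning the row space) to a full
// orthonormal basis of R^n, skipping identity columns that are dependent.
function completeBasis(basis: Vec[], n: number, tol = 1e-10): Vec[] {
  const full = basis.map((v) => [...v]);
  for (let j = 0; j < n && full.length < n; j++) {
    let w: Vec = Array.from({ length: n }, (_, i) => (i === j ? 1 : 0)); // e_j
    for (const q of full) w = axpy(w, q, -dot(w, q)); // project out current basis
    const wn = norm(w);
    if (wn > tol) full.push(scale(w, 1 / wn)); // keep only independent directions
  }
  return full;
}

// 2×4 rank-1 example: both rows are multiples of [1, 2, 3, 4].
const A: Vec[] = [
  [1, 2, 3, 4],
  [2, 4, 6, 8],
];
const v1 = scale([1, 2, 3, 4], 1 / norm([1, 2, 3, 4])); // spans the row space
const full = completeBasis([v1], 4);
const nullBasis = full.slice(1); // complement of the row space = null space

console.log(nullBasis.length); // 3, matching rank-nullity: n - rank = 4 - 1
for (const x of nullBasis) {
  const residual = Math.max(...A.map((row) => Math.abs(dot(row, x))));
  console.log(residual < 1e-10); // true: A·x ≈ 0 for every basis vector
}
```

This is the same idea as QR on W = [V_r | I_(r..n-1)] in the implementation above: Gram-Schmidt is what QR does column by column.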
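The Step 1 rank rule (count singular values strictly above tol · S[0]) can also be checked on a toy spectrum. The values and the `numericalRank` name below are made up for the demo:

```typescript
// Sketch of the relative-threshold numerical rank rule used in Step 1:
// count singular values strictly greater than tol * S[0].
function numericalRank(S: number[], tol = 1e-12): number {
  const s0 = S[0] ?? 0;
  if (s0 === 0) return 0; // empty spectrum or zero matrix
  return S.filter((s) => s > tol * s0).length;
}

console.log(numericalRank([5.48, 1e-16])); // 1: 1e-16 is noise relative to 5.48
console.log(numericalRank([3, 2, 1])); // 3: all values well above the threshold
console.log(numericalRank([])); // 0: zero matrix
```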
Here's my computeNullSpace implementation; feel free to review and include it if you'd like :)