
An Approximate, Randomized 'Eigen' Algorithm

In the world of numerical algorithms, finding efficient solutions for matrix decomposition is a cornerstone of many scientific and engineering applications. Eigenvalue decomposition, in particular, plays a crucial role in fields ranging from signal processing to machine learning. One interesting challenge is developing approaches that are both computationally efficient and suitable for environments with constraints, such as embedded systems or real-time applications.

In this post, we explore an approximate, randomized algorithm for eigenvalue decomposition that leverages the "Follow the Perturbed Leader" (FPL) framework. This approach is particularly useful in scenarios where traditional sparse matrix packages may be too cumbersome to implement in constrained environments. By combining randomization techniques with iterative eigenvector computation methods, we aim to simplify the problem without sacrificing accuracy.

Background

In its linear, discrete-time form, a Kalman filter can be described by the following equations:

$$
x_{t+1} = A x_t + w, \qquad y = C x_t + v, \qquad
\begin{pmatrix} w \\ v \end{pmatrix} \sim \mathcal{N}\!\left(0,\ \begin{pmatrix} Q_w & 0 \\ 0 & R_v \end{pmatrix}\right)
$$
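
As a concrete reference point, here is a minimal simulation of this state-space model. The particular matrices $A$, $C$, $Q_w$, $R_v$ and the horizon below are illustrative placeholders, not values from any specific system.

```python
import numpy as np

# Minimal simulation of the state-space model above.
# A, C, Q_w, R_v and the horizon are illustrative placeholders only.
rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])      # state transition
C = np.array([[1.0, 0.0]])      # observation matrix
Q_w = 0.01 * np.eye(2)          # process-noise covariance
R_v = 0.10 * np.eye(1)          # measurement-noise covariance

x = np.zeros(2)
observations = []
for _ in range(100):
    w = rng.multivariate_normal(np.zeros(2), Q_w)   # w ~ N(0, Q_w)
    v = rng.multivariate_normal(np.zeros(1), R_v)   # v ~ N(0, R_v)
    observations.append(C @ x + v)                  # y_t = C x_t + v
    x = A @ x + w                                   # x_{t+1} = A x_t + w
```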

In an ideal situation, the covariance matrices $Q_w$ and $R_v$ are known. However, when the empirical errors are not easily observable, as with sensor calibration challenges, alternative methods such as maximum likelihood estimation must be employed. This leads to a matrix optimization problem:

$$
\begin{aligned}
\min_{Q_w,\,R_v}\ &\ \ln\det P_{R_v,Q_w} + Y^\top P_{R_v,Q_w}^{-1} Y \\
\text{s.t.}\ &\ Q_w,\ R_v \succeq 0, \qquad
P_{R_v,Q_w} = \sum_{i=1}^{N+K-1} O_i Q_w O_i^\top + \sum_{j=1}^{N} I_j R_v I_j^\top
\end{aligned}
$$

One solution is to decompose P, either through Cholesky or other methods, keeping in mind that P is often sparse.
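
To make the role of that factorization concrete, here is a hedged sketch of evaluating the objective $\ln\det P + Y^\top P^{-1} Y$ for a candidate $P$ via a Cholesky factor. The function name is an assumption for illustration, and a dense factorization is used for brevity; a sparse Cholesky would be preferable when $P$ is sparse.

```python
import numpy as np

def neg_log_likelihood(P, Y):
    """Evaluate ln det P + Y^T P^{-1} Y through a Cholesky factorization of P.

    P : (m, m) symmetric positive-definite covariance assembled from Q_w and R_v.
    Y : (m,) stacked observation vector.
    """
    L = np.linalg.cholesky(P)                  # P = L L^T
    logdet = 2.0 * np.sum(np.log(np.diag(L)))  # ln det P = 2 * sum(ln L_ii)
    z = np.linalg.solve(L, Y)                  # Y^T P^{-1} Y = ||L^{-1} Y||^2
    return logdet + z @ z
```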

The Algorithm

We propose a randomized matrix algorithm to approximate eigenvalue decomposition. In many cases, sparse matrix libraries are too large to fit into embedded systems, leading to implementation challenges. Our algorithm offers an alternative, balancing efficiency and accuracy.

This approach leverages the "Follow the Perturbed Leader" (FPL) oracle-based eigenvector computation in an online setting.

We assume we have a distribution $\mathcal{D}$ over $S^n$, the set of symmetric matrices, and we express the matrix $A$, whose eigenvectors we seek, as a sum of (potentially sparse or random) matrices $A_\tau$. Rescaling by a constant does not affect the result.

The algorithm operates as follows:

Algorithm: Follow the Perturbed Leader (FPL)
1: Sample a matrix $N \sim \mathcal{D}$.
2: $x_1 \leftarrow \mathrm{EV}(N)$.
3: for $t = 1, 2, \dots$ do
4: &nbsp;&nbsp;&nbsp;&nbsp;Play $x_t$ and observe $A_t$.
5: &nbsp;&nbsp;&nbsp;&nbsp;$x_{t+1} \leftarrow \mathrm{EV}\!\left(\sum_{\tau=1}^{t} A_\tau + N\right)$.
6: end for
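
A minimal sketch of this loop, assuming the stream of symmetric matrices $A_t$ is available as a Python iterable: a dense symmetric eigensolver stands in for the $\mathrm{EV}$ oracle, and `sample_perturbation` is a placeholder for drawing $N \sim \mathcal{D}$ (one concrete choice is described next).

```python
import numpy as np

def EV(M):
    """Leading-eigenvector oracle: unit eigenvector of the largest eigenvalue of M."""
    _, vecs = np.linalg.eigh(M)
    return vecs[:, -1]

def follow_the_perturbed_leader(matrix_stream, sample_perturbation):
    """Run FPL over a stream of symmetric matrices A_1, A_2, ...

    matrix_stream       : iterable yielding the symmetric matrices A_t
    sample_perturbation : callable returning a single draw N ~ D
    """
    N = sample_perturbation()            # 1: sample N ~ D
    running_sum = np.zeros_like(N)
    x = EV(N)                            # 2: x_1 <- EV(N)
    played = []
    for A_t in matrix_stream:            # 3: for t = 1, 2, ...
        played.append(x)                 # 4: play x_t and observe A_t
        running_sum += A_t
        x = EV(running_sum + N)          # 5: x_{t+1} <- EV(sum A_tau + N)
    return played
```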

An effective way of sampling $N \sim \mathcal{D}$ is to draw a vector $v$ of i.i.d. standard normals and set $N = c\,vv^\top$, with $c$ a constant given by $c = \sqrt{\tfrac{T}{n\,\max(1,\ln(Tn))}}$.
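
Under that choice, the perturbation can be drawn as follows. The horizon $T$ and dimension $n$ are assumed to be known in advance, and the expression for $c$ follows the constant above; this can be plugged into the sketch of the FPL loop, e.g. as `lambda: sample_perturbation(n, T)`.

```python
import numpy as np

def sample_perturbation(n, T, rng=None):
    """Rank-one Gaussian perturbation N = c * v v^T with v ~ N(0, I_n)."""
    rng = rng or np.random.default_rng()
    c = np.sqrt(T / (n * max(1.0, np.log(T * n))))  # scaling constant c from the text
    v = rng.standard_normal(n)
    return c * np.outer(v, v)
```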

Computing Approximate Eigenvectors

Assuming we wish to compute approximate eigenvectors and eigenvalues of $A$, the algorithm proceeds as follows:

1: for $i = 1, \dots, N$ do
2: &nbsp;&nbsp;&nbsp;&nbsp;Guess an approximate leading eigenvector $l_i$ and eigenvalue $\lambda_i$ of $A$ via the FPL algorithm (power method with orthonormalization steps).
3: &nbsp;&nbsp;&nbsp;&nbsp;$A \leftarrow A - \lambda_i l_i l_i^\top$.
4: end for
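
Here is a sketch of this deflation loop, using a plain power method with re-orthonormalization against the eigenvectors found so far as the inner oracle. The names `power_method` and `deflate` are illustrative, and the FPL-based guess from the previous section could be dropped in as the oracle instead.

```python
import numpy as np

def power_method(A, prev_vecs, iters=100, rng=None):
    """Approximate leading eigenvector of A, re-orthonormalized against prev_vecs."""
    rng = rng or np.random.default_rng()
    x = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        x = A @ x
        for u in prev_vecs:              # orthonormalization step
            x -= (u @ x) * u
        x /= np.linalg.norm(x)
    return x

def deflate(A, k):
    """Return k approximate (eigenvalue, eigenvector) pairs by repeated deflation."""
    A = np.array(A, dtype=float)         # work on a copy of A
    vals, vecs = [], []
    for _ in range(k):                   # 1: for i = 1, ..., N
        l = power_method(A, vecs)        # 2: guess leading eigenvector l_i of A
        lam = l @ A @ l                  #    ... and its eigenvalue (Rayleigh quotient)
        A -= lam * np.outer(l, l)        # 3: A <- A - lambda_i l_i l_i^T
        vals.append(lam)
        vecs.append(l)
    return vals, vecs
```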

Conclusion

This randomized approach provides a practical solution for scenarios where traditional eigenvalue algorithms are computationally prohibitive or impractical for embedded applications. By applying FPL techniques, we can iteratively compute eigenvectors in an efficient, online setting.
