Arls.py

A Python Module for Automatic Regularization of Ill-Conditioned Linear Systems

 

What this is:

Arls.py is a complete package for AUTOMATICALLY solving difficult linear algebra systems (or “matrix problems”). The methods here are based on my Ph.D. dissertation of quite some years ago. That work has been revised and improved over several generations as I have found time. These methods have been available in C++ for several years, but we recently decided to migrate them to Python. Please see the Tutorial below for a full presentation of the method. At their core, these methods rely on a detailed analysis of the Picard Condition. Basically, they completely free the user who has a difficult, usually ill-conditioned, problem from worrying about:

·         choosing a rank to use

·         choosing a threshold below which singular values are zeroed when using a Generalized Inverse

·         estimating the error in the matrix coefficients

·         estimating the error in the right-hand side

·         choosing the Tikhonov regularization parameter, lambda

You may download our Python module here:  arls.py.

How to Use arls. We assume you have some equations to solve and can use Python. Put the coefficients in the rows of a matrix, A, and put the right-hand-side values in a column, b. It does not matter whether the number of rows equals the number of unknowns, or is larger or smaller, as arls’s routines handle all cases. Singularity or ill-conditioning is expected and considered normal behavior by these methods.

·         If the number of rows is less than or equal to the number of unknowns and the data is consistent and accurate, then the solution will typically be exact except for round-off error, just as with any good standard linear equation solver.

·         If the number of rows is larger than the number of unknowns, or the data is significantly in error, then a best fit, or “least-squares” solution will be found.

·         If the system of equations has serious problems such as unintended dependencies between rows (“singularities”) or ill-conditioning, then arls will usually be able to address those issues automatically and still return a reasonable solution. This is the interesting case for arls, and the reason for its existence. Arls was developed to automatically solve so-called “ill-conditioned” systems, which otherwise often require special handling or are considered impossible to solve. Of course, when arls’s algorithms produce a solution to such a difficult problem, the residual error (that is, the values in Ax-b, or the single value norm(Ax-b)) will of necessity be larger than the residual would have been if a standard least-squares solver had been able to handle the problem. So always check the result for reasonableness and appropriateness to your technical needs.

Case 1. Once you have created your matrix, A, and the corresponding right-hand-side, b, we have the problem

     A x = b

To solve it, import arls’s solvers and call arls:

from arls import arls, arlseq, arlsgt, arlsnn

x = arls(A,b,True)[0]

Note: When calling arls (but not the other solvers) you can provide a matrix for b rather than a single column. Arls then solves the separate problem represented by each column and returns a matrix with the same number of columns as b. See the Note below.

Note: If you delete the third argument, “True”, then arls() will do a quick test to see whether the problem is very ill-conditioned, and if not, SciPy’s lstsq() will be called to solve the problem. This feature is for users for whom speed is very important.
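For concreteness, here is a minimal, self-contained sketch of Case 1, assuming arls.py is in your working directory; the small system below is invented for illustration and is not taken from the package itself:

import numpy as np
from arls import arls

# A small made-up system: the second row is nearly dependent on the first,
# so the matrix is ill-conditioned.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.001],
              [1.0, 0.0, 1.0]])
b = np.array([6.0, 12.001, 2.0])

# The third argument, True, requests the full regularizing analysis.
x = arls(A, b, True)[0]
print("solution:", x)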

Case 2. If you have a special situation, such as needing the solution values to sum to, say, 100.0 percent, or the first value to be exactly 50., or any such exact requirement (not just more regular “least-squares” equations), then put those requirements as rows in a separate set of equations:

                E= matrix of equality constraint coefficients

                f = right-hand-side values

Then you can solve the dual system,

                A x = b  (regular equations)

                E x = f  (exact equality constraints)

by calling our “equality constraint” solver:

from arls import arls, arlseq, arlsgt, arlsnn

x = arlseq(A,b,E,f)[0]
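As a hedged sketch of Case 2, suppose a three-component mixture must sum to exactly 100.0 percent; all the numbers below are invented for illustration:

import numpy as np
from arls import arlseq

# Regular (least-squares) equations.
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.4]])
b = np.array([40.0, 50.0])

# One exact constraint: x[0] + x[1] + x[2] = 100.0
E = np.array([[1.0, 1.0, 1.0]])
f = np.array([100.0])

x = arlseq(A, b, E, f)[0]
print(x, "sum =", x.sum())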

Case 3. If you have some special need for certain unknowns to satisfy given bounds, then put those requirements in a separate set of equations, Gx>=h. Be sure to express them all in “greater than” form, such as x[3]>0.0. If you need to use a “less than” equation, such as x[0] < 2.0, then multiply the equation on both sides by -1 and you will get a “greater than” constraint instead of a “less than” constraint: -x[0]>-2.0. The effect is the same, as the code sketch below shows.

So we then have

                G = matrix of “greater than” inequality constraint coefficients

                h = right-hand-side values for the constraints

and you can solve the dual system,

                A x = b (regular equations)

                G x >= h (“greater than” constraints)

by calling

from arls import arls, arlseq, arlsgt, arlsnn

x = arlsgt(A,b,G,h)[0]
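Here is how the sign-flip trick described above looks in code, as a hedged sketch with invented numbers:

import numpy as np
from arls import arlsgt

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([2.0, 3.0, 4.5])

# We want x[0] >= 1.0 and x[1] < 2.0.
# The "less than" row is multiplied by -1 to become -x[1] >= -2.0.
G = np.array([[1.0, 0.0],
              [0.0, -1.0]])
h = np.array([1.0, -2.0])

x = arlsgt(A, b, G, h)[0]
print(x)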

Case 4. Lastly, if your system of equations produces negative values in the solution, and that is not acceptable, then call our non-negative solver:

from arls import arls, arlseq, arlsgt, arlsnn

x = arlsnn(A,b)[0]
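A minimal sketch of Case 4, again with invented numbers (here the unconstrained solution would be x = [1.5, -0.5]):

import numpy as np
from arls import arlsnn

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, 2.0])

x = arlsnn(A, b)[0]  # every element of x will be >= 0
print(x)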

 

Extra Information: In each case above you can ask for extra information to be returned, like this:

x, nr, ur, msigma, mlambda = arls(A,b)

In this case the extra parameters have these meanings when b contains only one column:

  nr : (int) is the numerical rank of the matrix A. This measure of rank ignores “singular values” that are near zero.

  ur : (int) is the computed “usable rank”, which is a function of both A and b, and tells the solver at what point the errors in A and b combine to ruin the use of more than ur rows of the orthogonalized version of the system of equations. In other words, ur is the breakpoint in the orthogonalized version of the system of equations between rows in which the error in b is acceptably small and rows in which the error in b is uselessly large.

  msigma : (float) is the estimated right-hand-side root-mean-square error. This means that our analysis reveals the actual error in b – a unique feature!

  mlambda : (float) is the Tikhonov regularization parameter that goes with the estimated right-hand-side error, msigma.

 

Note: When calling arls, but not the other solvers, you can provide a matrix for b rather than a single column. Arls then solves the separate problem represented by each column and returns a matrix with the same number of columns as b. The meanings of the extra parameters are therefore a little different from the single-column case:

  nr : is the LEAST numerical rank found among the set of problems

  ur : is the LEAST usable rank found among the set of problems

  msigma : is the LARGEST estimated right-hand-side error found among the set of problems.

  mlambda : is the LARGEST Tikhonov parameter found among the set of problems.
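As a hedged sketch of this multiple-column usage, with invented data:

import numpy as np
from arls import arls

A = np.array([[1.0, 2.0],
              [2.0, 4.001],
              [3.0, 6.0]])

# Two right-hand sides, stacked as columns; each is solved as a separate problem.
b = np.column_stack(([3.0, 6.0, 9.0],
                     [1.0, 2.0, 3.1]))

x, nr, ur, msigma, mlambda = arls(A, b)
print(x.shape)  # x has the same number of columns as b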

 

Resources:

 

You may download our Python module here:  arls.py. Then copy it to your Python work area.

This file is updated with algorithmic improvements regularly. Last update: October 28, 2020.

 

To run a short demo, download demo1.py and run it in Python. In this demo we set up a classic difficult matrix problem (with a Hilbert matrix) and solve it with:

1.       SciPy’s standard least-squares solver, lstsq

2.       SciPy’s regularizing solver, lsmr

3.       Arls

This is just one example. Your problem may behave differently from the one we have chosen as an illustration. But we have found that arls usually provides a more usable solution for ill-conditioned problems than lsmr does.
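If you would rather not download the demo, here is a hedged sketch of that kind of comparison, assuming standard SciPy; the details differ from demo1.py:

import numpy as np
from scipy.linalg import hilbert, lstsq
from scipy.sparse.linalg import lsmr
from arls import arls

n = 12
A = hilbert(n)                               # a classically ill-conditioned matrix
x_true = np.ones(n)
b = A @ x_true + 1e-8 * np.random.randn(n)   # slightly noisy right-hand side

x_lstsq = lstsq(A, b)[0]
x_lsmr = lsmr(A, b)[0]
x_arls = arls(A, b)[0]

for name, x in (("lstsq", x_lstsq), ("lsmr", x_lsmr), ("arls", x_arls)):
    print(name, np.linalg.norm(x - x_true))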

 

These fully worked small examples do not require you to run anything:

Demo F: Garden Fertilizer Calculation  

Demo L: A Problem from Oil-Well Logging

Demo S: A Problem with Stucco color

Demo M: Solving Equations to Play MineSweeper

 

Here is a tutorial for beginners on why linear equations can be so hard to solve.

 

Lastly: if you have a special need, such as a solver that incorporates two or more of the above four cases, or a need to minimize some measure other than the sum of the squares of the errors of the solution, or whatever, please send us a note. We may be able to provide a special solver.

 

Contact us:

Rondall E. Jones,

rejones7@msn.com

Sandia Laboratories, Albuquerque, 1967-2011

Publications: Please Google “Rondall Jones Sandia” and ignore results not containing the exact spelling of “Rondall”.

Ph.D., University of New Mexico, 1985.  Dissertation: Solving Linear Algebraic Systems Arising in the Solution of Integral Equations of the First Kind; Advisor: Cleve B. Moler, creator of MATLAB and co-founder of MathWorks. My thanks to Richard Hanson (deceased), co-author of Solving Least Squares Problems, for additional guidance for this work.