**What is Hard about Solving Linear Algebraic Equations?**

A variety of difficulties arise in solving some systems of linear algebraic equations (also known as “matrix problems”). We will present several such difficulties (not all of them) here in the simplest possible form: two equations in two unknowns.

1. A typical system with no difficulties.

1*x + 1*y = 2

1*x - 1*y = 0

This is the kind of system one sees in high-school algebra classes. Easy to solve.
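As a minimal sketch of how such a system might be solved programmatically (NumPy is our assumption here; the text itself names no library), a direct dense solve works because the matrix is square and nonsingular:

```python
import numpy as np

# Case 1: a well-behaved 2x2 system.
A = np.array([[1.0,  1.0],
              [1.0, -1.0]])
b = np.array([2.0, 0.0])

x = np.linalg.solve(A, b)  # direct solve; valid since A is square and nonsingular
print(x)  # [1. 1.], i.e. x = 1, y = 1
```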

2. A *variable may not actually be involved* in the solution. For example,

1*x + 0*y = 1

2*x + 0*y = 2

In this case, y simply isn’t involved, and can’t be solved for. Solving for x is easy with either equation (x=1), but when you substitute x=1 into the other equation you just get 0 = 0, which is true (!) but not helpful. Of course, you can see that this could happen with a larger system of equations also. So, can this problem be solved? Yes, x=1, y=anything is a solution. But there is no *unique* solution unless you specify some rule for picking one.
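One common “rule for picking one” is the minimum-norm solution, which NumPy’s least-squares routine provides (again, NumPy is an assumption of this sketch, not something the text prescribes):

```python
import numpy as np

# Case 2: the y column is all zeros, so A is singular.
A = np.array([[1.0, 0.0],
              [2.0, 0.0]])
b = np.array([1.0, 2.0])

# np.linalg.solve(A, b) would raise LinAlgError here.
# lstsq instead returns the minimum-norm solution: among all
# solutions x = 1, y = anything, it picks the smallest x*x + y*y.
sol, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(sol)   # [1. 0.]
print(rank)  # 1 -- only one independent column
```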

3. *One equation may not really be present*. For example,

1*x + 1*y = 2

0*x + 0*y = 0

In this case the second equation is completely useless. On the other hand, any x and y we pick would satisfy this equation! So we can come up with solutions for the set of equations easily: x=1 and y=1, for example. Of course, such equations present (minor) problems to solvers such as the ones we offer here. More on that later.
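To see how a solver copes with the useless all-zero equation, here is a sketch (using NumPy as an assumed tool) where the least-squares routine quietly picks out x=1, y=1, the minimum-norm choice among the many solutions:

```python
import numpy as np

# Case 3: the second row is the "empty" equation 0*x + 0*y = 0.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
b = np.array([2.0, 0.0])

sol, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(sol)   # [1. 1.] -- the minimum-norm solution of x + y = 2
print(rank)  # 1 -- the zero row contributes nothing
```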

4.
The equations may be *dependent*. That is, one or
more of the equations may be combinations of the other equations. For example,

1*x + 1*y = 2

2*x + 2*y = 4

In this case the second equation is just 2 times the first. When you attempt to eliminate the x term in the second equation, by subtracting twice the first equation from it, look what you get:

1*x + 1*y = 2

0*x + 0*y = 0

This last equation is, again, true, but
useless. You can’t solve it for x or for
y. (But as in the above case, any pair
of values will satisfy it.) So, can this
problem be solved? Yes, in fact, there
are many solutions: x=1 and y=1; or x=2 and y=0; etc. But there is no *unique* solution unless you specify some rule for picking one.
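A short sketch (assuming NumPy) makes the dependence visible via the matrix rank, and shows that the minimum-norm rule picks x=1, y=1 while the other solutions from the text remain exact as well:

```python
import numpy as np

# Case 4: the second equation is 2 times the first.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 4.0])

print(np.linalg.matrix_rank(A))  # 1: the rows are dependent

# lstsq's "rule for picking one" is minimum norm: here x = 1, y = 1.
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print(sol)  # [1. 1.]

# The other solutions mentioned in the text also satisfy the system exactly.
for x, y in [(1.0, 1.0), (2.0, 0.0)]:
    assert np.allclose(A @ np.array([x, y]), b)
```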

5.
The equations may not be *consistent*. In the case just
above, the equations were *dependent*,
but at least *agreed*. What if the equations are *dependent* again, but also *disagreed*? For example,

1*x + 1*y = 2

2*x + 2*y = 6

When we subtract the first equation times 2 from the second, we get

1*x + 1*y = 2

0*x + 0*y = 2

and the second equation cannot be satisfied.
This system cannot be solved exactly.
However, an approximate solution can be computed.
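One way such an approximate solution can be computed is by least squares, which minimizes the total squared error; for this particular inconsistent system that best compromise works out to x = y = 1.4 (this sketch assumes NumPy):

```python
import numpy as np

# Case 5: dependent rows that disagree -- no exact solution exists.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([2.0, 6.0])

# lstsq minimizes ||A @ x - b||; the residual cannot reach zero here.
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print(sol)                           # [1.4 1.4]
print(np.linalg.norm(A @ sol - b))   # nonzero: the equations still disagree
```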

6.
The equations may be independent, and
consistent, but result in an unreasonable answer. Often this is due to what we call an *ill-conditioned system of equations*. For example, consider these two:

1*x + 1*y = 2

1*x + 1.01*y = 3

When we solve this by the usual elimination
and substitution method, we get x= -98 and y=100. This is a surprising answer since by looking
at each equation alone we would have guessed that the solution values would be
near 1. And a negative answer such as
-98 may be completely unacceptable. For
example, if x and y represent amounts of physical entities such as light, then
a negative number is plain wrong. An
approximate solution can be computed by “*regularization*” of the problem.

7.
Sometimes a system of equations is simply *under-determined.* That is, there is not enough information
present to compute a particular solution.
For example:

1*x + 2*y = 2

Yes, that is just one equation, with two unknowns. Sometimes it really happens that we need to solve say 10 equations in 20 unknowns. So what do we do? We have to add some rule for picking an appropriate solution among all the possible solutions. The usual way to do this is to pick the solution with the smallest sum of squares. That is, the smallest possible value of x*x + y*y. In this case that turns out to be x=0.4, y = 0.8.
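The smallest-sum-of-squares rule described above is exactly what NumPy’s least-squares routine applies to an underdetermined system (NumPy being our assumed tool for this sketch):

```python
import numpy as np

# Case 7: one equation, two unknowns: x + 2y = 2.
A = np.array([[1.0, 2.0]])
b = np.array([2.0])

# Among all solutions, lstsq returns the one with the smallest x*x + y*y.
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print(sol)  # [0.4 0.8]
```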

8.
Sometimes a system of equations is simply *over-determined*. That is, there are more equations than are needed
to determine a solution, and we have no reason to eliminate any particular
equation. We would like a solution that
somehow “comes closest” to solving all of the equations. For example,

1*x + 2*y = 15

2*x + 2*y = 16

-1*x + 1*y = 6

The solution to this system is easy: x=1, y=7. And, we are in luck: that solution satisfies all three equations exactly! But that is not the typical case. More typically, we would have something like

1*x + 2*y = 15.1

2*x + 2*y = 15.9

-1*x + 1*y = 6.5

And if we try x=1, y=7 in this set of equations, we will not quite solve them. Instead we will get 0.1 too little for the first equation, 0.1 too much for the second equation, and 0.5 too little for the third equation. We can do better than that. In particular, we can compute a solution which minimizes the sum of the squares of these errors. Such solutions are called “least squares” solutions.
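A sketch of computing that least-squares solution (assuming NumPy), which lands near, but not exactly at, (1, 7), and checks that it really beats the guess x=1, y=7:

```python
import numpy as np

# Case 8: three equations, two unknowns, with noisy right-hand sides.
A = np.array([[ 1.0, 2.0],
              [ 2.0, 2.0],
              [-1.0, 1.0]])
b = np.array([15.1, 15.9, 6.5])

sol, *_ = np.linalg.lstsq(A, b, rcond=None)
err_ls = np.linalg.norm(A @ sol - b)                     # least-squares error
err_guess = np.linalg.norm(A @ np.array([1.0, 7.0]) - b)  # error of the exact solution to the clean system
print(sol)                # close to, but not exactly, (1, 7)
print(err_ls, err_guess)  # the least-squares error is the smaller of the two
```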

When we have larger systems of equations we may have two or
more of the above difficulties in the one set of equations. For example, it
would not be surprising for a set of equations to be *underdetermined* (case 7), not independent (case 4), and
*ill-conditioned* (case 6). That’s
what makes developing solvers for these interesting.