Even though computers are becoming faster and more powerful, each computation (an addition or a multiplication, for example) that a computer performs takes a finite amount of time. If the solution of a particular problem requires too many of these individual computations, the computer time needed to solve it may very well exceed the time the user has available to work on the problem -- and in some cases the time required may exceed the user's lifetime.

The solution of a linear system is one of the most frequently performed calculations in computational mathematics. Some problems can be modeled directly as a linear system. Others have a different form, but require the solution of linear systems as a part of their solution. Solving some of these problems requires the solution of thousands of linear systems, and each linear system may contain thousands of equations.
We will determine the cost of Gaussian Elimination by counting the number of addition and multiplication operations that must be performed. Forward elimination and back substitution will be considered separately.

Let's consider first a 3 x 3 example with coefficient matrix entries a_{ij}. First, we will count the number of arithmetic operations required to introduce zeros below the first element in the first column of the coefficient matrix. In order to introduce a zero in the first position of the second row, we had to compute the constant m_{21} = -a_{21}/a_{11} by which we would multiply the first row. We did this by dividing, but we will refer to the calculation as a multiplication when doing the operation count. Next we multiplied the second, third, and right-hand-side elements of row one by m_{21} and added them to the second, third, and right-hand-side elements of row two. This required 3 multiplications and 3 additions. Note that we do not multiply the first element in row one by m_{21} and add it to the first element of row two: since we chose m_{21} precisely in order to make the first element in row two zero after the calculations, we do not actually perform the operations that result in the zero. The operation count so far is 4 multiplications and 3 additions.

To complete the first step of Gaussian Elimination, we calculated a new constant (m_{31} = -a_{31}/a_{11}), multiplied the second, third, and right-hand-side elements of row one by the constant, and added the results to the second, third, and right-hand-side elements of row three. This required an additional 4 multiplications and 3 additions. Hence, the first step required 4(2) = 8 multiplications and 3(2) = 6 additions.

In the second step, we used the second row to eliminate the second element in the third row. We multiplied the third and right-hand-side elements of row 2 by m_{32} and added the result to the third and right-hand-side elements of row 3.
This required 2 multiplications and 2 additions, and we must also include an additional multiplication for the calculation of m_{32}. Notice that in performing the second step, we did not involve the first row at all, and we did not perform any operations using the zeros that we introduced in step one. This completes the forward elimination phase. The operation count so far is 8 + 3 = 11 multiplications and 6 + 2 = 8 additions.
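The count above can be checked with a short Python sketch. This is our own illustration, not part of the original text; the function name and the particular matrix are arbitrary choices, and any nonsingular 3 x 3 system gives the same count:

```python
def forward_eliminate(A, b):
    """Forward elimination on the augmented system [A | b], counting
    multiplications (divisions included) and additions."""
    n = len(A)
    mults = adds = 0
    for col in range(n - 1):
        for row in range(col + 1, n):
            m = -A[row][col] / A[col][col]  # the multiplier: counted as 1 mult
            mults += 1
            A[row][col] = 0.0               # the introduced zero is not computed
            for j in range(col + 1, n):
                A[row][j] += m * A[col][j]  # 1 mult + 1 add per element
                mults += 1
                adds += 1
            b[row] += m * b[col]            # right-hand side: 1 mult + 1 add
            mults += 1
            adds += 1
    return mults, adds

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
b = [1.0, 2.0, 3.0]
print(forward_eliminate(A, b))  # (11, 8): 11 multiplications, 8 additions
```

As the text notes, the count does not depend on the matrix entries, only on the size of the system.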
Now let's consider the back substitution phase. Starting from the bottom of the final augmented matrix, we compute the value of x_3 with a single division, which we again count as a multiplication. Substituting x_3 into the second equation and solving for x_2 requires 2 multiplications and 1 addition. Finally, substituting both x_3 and x_2 into the first equation and solving for x_1 requires 3 multiplications and 2 additions. In summary, the steps in back substitution for our 3 x 3 system required
1 + 2 + 3 = 6 multiplications and
0 + 1 + 2 = 3 additions.

The complete count for Gaussian Elimination on this system is thus 28 operations: 17 multiplications and 11 additions. Nothing in this count depended on the particular coefficient matrix or right-hand side appearing in the problem: Gaussian Elimination for any 3 x 3 linear system will require the same number of operations.

Now we would like to extend this process to linear systems of other sizes. Since we don't want to have to handle each possible problem size individually, we will consider a general n x n system. In the first step of Gaussian Elimination, we introduce zeros in all the positions in the first column below the diagonal element, a_{11}. To do this, we first compute the constant m_{21} = -a_{21}/a_{11} by which we need to multiply the first row in order to introduce a zero in the first element of row 2. Then we multiply the last n elements in the first row by this constant and add the result to the second row. This requires n+1 multiplications and n additions. Remember that we do not actually compute the zeros below the diagonal, so we do not have to consider any of the elements in column one when doing the count. Notice that one multiplication and one addition came from operations on the right-hand-side element of the row. Later, we will be interested in modifying this number in order to determine the operation count for problems with multiple right-hand sides.

Introducing a zero in the first position of any of the lower rows will require multiplication by a different constant, but the number of multiplications and additions will be the same as for row two. Hence the first step of Gaussian Elimination requires (n-1)(n+1) multiplications and (n-1)(n) additions. At the end of the first step, the elements of rows below the first will have changed. We indicate this by denoting the changed elements by a_{ij}'. Now that we have zeros in the first column, we wish to introduce zeros below the diagonal element a_{22}'. The second step operates only on the remaining (n-1) x (n-1) subsystem, so by the same argument it requires (n-2)(n) multiplications and (n-2)(n-1) additions. Considering the first two steps together, we get (n-1)(n+1) + (n-2)(n) multiplications and (n-1)(n) + (n-2)(n-1) additions.
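This step-by-step pattern can be verified numerically. The sketch below is our own check, not from the text: it counts the operations that forward elimination actually performs on a random n x n augmented system and compares them with the per-step totals (n-1)(n+1), (n-2)(n), and so on:

```python
import random

def count_forward_elimination(n):
    """Count multiplications (divisions included) and additions performed
    by forward elimination on a random n x n augmented system."""
    # Large diagonal entries keep the pivots safely nonzero.
    A = [[random.random() + (2.0 if i == j else 0.0) for j in range(n + 1)]
         for i in range(n)]
    mults = adds = 0
    for col in range(n - 1):
        for row in range(col + 1, n):
            m = -A[row][col] / A[col][col]   # computing the multiplier: 1 mult
            mults += 1
            for j in range(col + 1, n + 1):  # remaining entries plus right-hand side
                A[row][j] += m * A[col][j]   # 1 mult + 1 add each
                mults += 1
                adds += 1
    return mults, adds

# Step k eliminates n-k rows at a cost of n-k+2 mults and n-k+1 adds per row:
for n in (3, 5, 8):
    predicted_mults = sum((n - k) * (n - k + 2) for k in range(1, n))
    predicted_adds = sum((n - k) * (n - k + 1) for k in range(1, n))
    print(n, count_forward_elimination(n) == (predicted_mults, predicted_adds))
```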
We continue eliminating the elements below the diagonal for columns three through n-1. After the last, or (n-1)st, step, the coefficient matrix is upper triangular and forward elimination is complete. The total operation count for the forward elimination phase of Gaussian Elimination is

(n-1)(n+1) + (n-2)(n) + ... + (2)(4) + (1)(3) multiplications and

(n-1)(n) + (n-2)(n-1) + ... + (2)(3) + (1)(2) additions.
Since there is a pattern in these expressions, we suspect that there might be a more compact way to write them. First, let's consider the number of multiplications. Each term consists of two factors multiplied together, and the second factor is two larger than the first. That is, each term has the form k(k+2) = k^2 + 2k, with k running from 1 to n-1. Using some of the properties of sums, we can therefore split the multiplication count into a sum of squares plus twice a sum of integers. Expressions for the sum of the first n integers and the sum of the squares of the first n integers are known:

1 + 2 + ... + n = n(n+1)/2 and 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6.

So we will rewrite our expressions to take advantage of this information.
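Both identities are easy to confirm numerically; a quick sketch (our own check, with hypothetical helper names):

```python
def sum_first(n):
    """Closed form for 1 + 2 + ... + n."""
    return n * (n + 1) // 2

def sum_squares(n):
    """Closed form for 1^2 + 2^2 + ... + n^2."""
    return n * (n + 1) * (2 * n + 1) // 6

# Compare the closed forms against direct summation for many values of n.
assert all(sum_first(n) == sum(range(1, n + 1)) for n in range(1, 100))
assert all(sum_squares(n) == sum(k * k for k in range(1, n + 1)) for n in range(1, 100))
print("both identities hold for n = 1, ..., 99")
```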
1·3 + 2·4 + ... + (n-1)(n+1) = (1^2 + 2^2 + ... + (n-1)^2) + 2(1 + 2 + ... + (n-1)) = (n-1)n(2n-1)/6 + (n-1)n

and

1·2 + 2·3 + ... + (n-1)n = (1^2 + 2^2 + ... + (n-1)^2) + (1 + 2 + ... + (n-1)) = (n-1)n(2n-1)/6 + (n-1)n/2.
Then our expression for the number of multiplications required for forward elimination is

(n-1)n(2n-1)/6 + (n-1)n = (n-1)n(2n+5)/6 = n^3/3 + n^2/2 - 5n/6.

Using similar arguments, we can write the number of additions required for forward elimination as

(n-1)n(2n-1)/6 + (n-1)n/2 = (n-1)n(n+1)/3 = n^3/3 - n/3.

Now let's consider back substitution. The first step (row n) requires a single division, which we count as 1 multiplication, and no additions. The step for row n-1 requires 2 multiplications and 1 addition, and in general the step for row k requires n-k+1 multiplications and n-k additions. The phase as a whole therefore requires 1 + 2 + ... + n multiplications and 0 + 1 + ... + (n-1) additions.
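The back substitution pattern can be checked the same way as forward elimination. The sketch below is our own verification code; the test system is an arbitrary upper-triangular matrix of ones, chosen only so the pivots are nonzero:

```python
def count_back_substitution(n):
    """Count multiplications (divisions included) and additions performed
    by back substitution on an n x n upper-triangular system U x = c."""
    U = [[1.0 if j >= i else 0.0 for j in range(n)] for i in range(n)]
    c = [1.0] * n
    x = [0.0] * n
    mults = adds = 0
    for i in range(n - 1, -1, -1):
        s = c[i]
        for j in range(i + 1, n):
            s -= U[i][j] * x[j]  # 1 mult + 1 add for each already-known unknown
            mults += 1
            adds += 1
        x[i] = s / U[i][i]       # the division counts as 1 multiplication
        mults += 1
    return mults, adds

# Compare with 1 + 2 + ... + n mults and 0 + 1 + ... + (n-1) adds:
for n in (3, 5, 8):
    print(n, count_back_substitution(n) == (n * (n + 1) // 2, (n - 1) * n // 2))
```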
Using the expression for the sum of the first n integers, back substitution requires n(n+1)/2 multiplications and (n-1)n/2 additions. Adding the counts for the two phases, Gaussian Elimination for an n x n system requires

n^3/3 + n^2 - n/3 multiplications and n^3/3 + n^2/2 - 5n/6 additions.

For n = 3, these formulas give 17 multiplications and 11 additions, in agreement with the count for our 3 x 3 example. For large n, each count is roughly n^3/3, so the total work grows like 2n^3/3 operations: doubling the size of the system multiplies the cost by about eight.
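The totals above can be put into integer form and checked against the 3 x 3 example. This is our own sketch; the factored forms n(n^2 + 3n - 1)/3 and (n-1)n(2n+5)/6 are algebraic rearrangements of the formulas above, written so that integer division is exact:

```python
def ge_mults(n):
    """Total multiplications for Gaussian Elimination on an n x n system:
    n^3/3 + n^2 - n/3, in integer form."""
    return n * (n * n + 3 * n - 1) // 3

def ge_adds(n):
    """Total additions: n^3/3 + n^2/2 - 5n/6, in integer form."""
    return n * (n - 1) * (2 * n + 5) // 6

print(ge_mults(3), ge_adds(3))    # 17 11: the 28 operations of the 3 x 3 example
print(ge_mults(200) / ge_mults(100))  # close to 8: doubling n roughly octuples the work
```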