Lesson 6

3. Solution of Linear Algebraic Equations

3.1. Introduction

A set of linear algebraic equations looks like this:

  a_{11} x_1 + a_{12} x_2 + … + a_{1n} x_n = b_1
  a_{21} x_1 + a_{22} x_2 + … + a_{2n} x_n = b_2
  …
  a_{m1} x_1 + a_{m2} x_2 + … + a_{mn} x_n = b_m        (3.1)

Here the n unknowns x_j, j = 1, 2, …, n are related by m equations. The coefficients a_{ij} with i = 1, 2, …, m and j = 1, 2, …, n are known numbers, as are the right-hand side quantities b_i, i = 1, 2, …, m.

If n = m then there are as many equations as unknowns, and there is a good chance of solving for a unique solution set of xj’s. Analytically, there can fail to be a unique solution if one or more of the m equations is a linear combination of the others, a condition called row degeneracy, or if all equations contain certain variables only in exactly the same linear combination, called column degeneracy. (For square matrices, row degeneracy implies column degeneracy, and vice versa.)
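As an illustration (a made-up two-equation system, not one from the lecture), the square system

  x_1 + x_2 = 1
  2 x_1 + 2 x_2 = 2

is row degenerate: the second equation is simply twice the first, so the two equations carry the same information, and any pair with x_1 + x_2 = 1 solves the system; no unique solution exists. For a square matrix, row degeneracy corresponds to det A = 0.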

Equations (3.1) can be written in matrix form as

  A · x = b        (3.2)

Here the raised dot denotes matrix multiplication, A is the matrix of coefficients, and b is the right-hand side written as a column vector,

  A = [ a_{11}  a_{12}  …  a_{1n} ]        b = [ b_1 ]
      [ a_{21}  a_{22}  …  a_{2n} ]            [ b_2 ]
      [   …       …     …    …    ]            [  …  ]
      [ a_{m1}  a_{m2}  …  a_{mn} ]            [ b_m ]        (3.3)

By convention, the first index on an element aij denotes its row, the second index its column.

If in matrix equation (3.2)

  det A ≠ 0,        (3.4)

then we have:

  x = A^{-1} · b.        (3.5)
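To make formula (3.5) concrete, here is a minimal NumPy sketch (my own illustration with an arbitrary 2×2 system; the numbers of Example 1 below are not reproduced in this text). In practice one prefers np.linalg.solve(A, b) to forming the inverse explicitly, but the code follows (3.5) literally:

import numpy as np

# Illustrative system (not from the lecture): 2*x1 + x2 = 3,  x1 + 3*x2 = 5.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.inv(A) @ b      # formula (3.5): x = A^(-1) . b
print(x)                      # [0.8 1.4]; check that A @ x reproduces b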

Example 1

Consider the linear system:

Solve this system by formula (3.5):

This system may also be solved by another direct method of linear algebra, Cramer's Rule:

  x_i = Δ_i / Δ,   i = 1, 2, …, n.        (3.6)

Here Δ = det A (Δ ≠ 0) and Δ_i is the determinant of the square matrix A in which the i-th column has been replaced by the column vector b.
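A direct transcription of Cramer's Rule (3.6) into Python might look like the sketch below (my own illustration; the function name and the test system are assumptions, not part of the lecture). It is practical only for small systems, since it evaluates n + 1 determinants:

import numpy as np

def cramer(A, b):
    """Solve A.x = b by Cramer's Rule (3.6): x_i = det(A_i) / det(A)."""
    delta = np.linalg.det(A)                  # determinant of A
    if abs(delta) < 1e-12:
        raise ValueError("det(A) is numerically zero; Cramer's Rule does not apply")
    n = len(b)
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b                         # replace the i-th column by b
        x[i] = np.linalg.det(A_i) / delta     # formula (3.6)
    return x

# Same illustrative 2x2 system as above; the result matches x = A^(-1) . b.
print(cramer(np.array([[2.0, 1.0], [1.0, 3.0]]), np.array([3.0, 5.0])))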

Example 2

This is Cramer's Rule applied to the above linear system:

We will consider the following tasks of computational linear algebra:

Solution of the matrix equation A · x = b for an unknown vector x, where A is a square matrix of coefficients, the raised dot denotes matrix multiplication, and b is a known right-hand side vector.

Calculation of the matrix A^{-1}, which is the matrix inverse of a square matrix A, i.e., A · A^{-1} = A^{-1} · A = E, where E is the identity matrix (all zeros except for ones on the diagonal). This task is equivalent, for an N × N matrix A, to the previous task with N different b_j's (j = 1, 2, …, N), namely the unit vectors (each b_j has all zero elements except for a 1 in the j-th component). The corresponding x's are then the columns of the matrix inverse of A, as the sketch after this list illustrates.

Calculation of the determinant of a square matrix A.
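The following sketch illustrates the second task above (my own Python, with np.linalg.solve standing in for whichever direct solver is used): the j-th column of A^{-1} is obtained by solving A · x = e_j, where e_j is the j-th unit vector.

import numpy as np

def inverse_by_columns(A):
    """Build A^(-1) column by column: column j solves A . x = e_j."""
    n = A.shape[0]
    inv = np.empty((n, n))
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0                          # unit vector: 1 in the j-th component
        inv[:, j] = np.linalg.solve(A, e_j)   # this solution is the j-th column of A^(-1)
    return inv

# Quick check on an arbitrary matrix: A^(-1) . A should be (close to) the identity E.
A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
print(inverse_by_columns(A) @ A)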

There is also a great watershed dividing routines that are direct (i.e., execute in a predictable number of operations) from routines that are iterative (i.e., attempt to converge to the desired answer in however many steps are necessary). Iterative methods become preferable when the battle against loss of significance is in danger of being lost, either due to large N or because the problem is close to singular. We will treat iterative methods only incompletely in this course. These methods are important, but mostly beyond our scope. We will, however, discuss in detail a technique that is on the borderline between direct and iterative methods, namely the iterative improvement of a solution that has been obtained by direct methods.
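As a preview of that borderline technique, here is a rough sketch of iterative improvement (my own illustration; in a real implementation the factorization of A from the direct solve would be reused rather than calling the solver again):

import numpy as np

def refine(A, b, x0, iterations=3):
    """Improve a direct-method solution x0 of A.x = b by iterative refinement."""
    x = x0.copy()
    for _ in range(iterations):
        r = b - A @ x                  # residual of the current approximate solution
        dx = np.linalg.solve(A, r)     # solve for the correction with the direct method
        x = x + dx                     # corrected solution
    return x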

3.2. Gaussian Elimination with Backsubstitution

The usefulness of Gaussian elimination with backsubstitution is primarily pedagogical. It stands between full elimination schemes such as Gauss-Jordan and triangular decomposition schemes. Gaussian elimination reduces a matrix not all the way to the identity matrix, but only halfway, to a matrix whose components on the diagonal and above (say) remain nontrivial. Let us now see what advantages accrue.

In Gaussian elimination, zeros are introduced below the diagonal elements. We now solve a system of linear equations using Gaussian elimination. For clarity, we write out the equations only for the case of four equations and four unknowns:

  a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + a_{14} x_4 = a_{15}
  a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + a_{24} x_4 = a_{25}
  a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + a_{34} x_4 = a_{35}
  a_{41} x_1 + a_{42} x_2 + a_{43} x_3 + a_{44} x_4 = a_{45}        (3.7)

Here the right-hand sides b_i are written as a_{i5}, i.e., as the fifth column of the augmented matrix.

The pivot element is a_{11} (a_{11} ≠ 0). The first equation is divided by a_{11}; this is a trivial linear combination of the first equation with itself:

  x_1 + c_{12} x_2 + c_{13} x_3 + c_{14} x_4 = c_{15},        (3.8)

where

  c_{1j} = a_{1j} / a_{11}        (j = 2, 3, 4, 5).

We now multiply the first equation (3.8) by a_{21}, a_{31} and a_{41} and subtract it from the 2nd, 3rd and 4th equations, so that zeros are introduced in the first position everywhere except in the first equation. The modified system of linear equations is:

  a'_{22} x_2 + a'_{23} x_3 + a'_{24} x_4 = a'_{25}
  a'_{32} x_2 + a'_{33} x_3 + a'_{34} x_4 = a'_{35}
  a'_{42} x_2 + a'_{43} x_3 + a'_{44} x_4 = a'_{45}        (3.9)

In this system (3.9) the coefficients a'_{ij} do not have their original numerical values, but have been modified by all the equation operations in the elimination up to this point:

  a'_{ij} = a_{ij} − a_{i1} c_{1j}        (i = 2, 3, 4;  j = 2, 3, 4, 5).        (3.10)

Now the pivot element is a'_{22} (a'_{22} ≠ 0), and the first equation in (3.9) is divided by this pivot element:

  x_2 + c_{23} x_3 + c_{24} x_4 = c_{25},        (3.11)

where

  c_{2j} = a'_{2j} / a'_{22}        (j = 3, 4, 5).

Now we multiply equation (3.11) by a'_{32} and a'_{42} and subtract it from the 2nd and 3rd equations of (3.9), so that zeros are introduced in the second position. The new linear system consists of two equations:

  a''_{33} x_3 + a''_{34} x_4 = a''_{35}
  a''_{43} x_3 + a''_{44} x_4 = a''_{45},        (3.12)

where a''_{ij} = a'_{ij} − a'_{i2} c_{2j} (i = 3, 4; j = 3, 4, 5).

The pivot element of the new system is a''_{33}. If a''_{33} ≠ 0, the first equation of (3.12) is divided by a''_{33}:

  x_3 + c_{34} x_4 = c_{35},        (3.13)

where

  c_{3j} = a''_{3j} / a''_{33}        (j = 4, 5).

Now multiply equation (3.13) by a''_{43} and subtract it from the 2nd equation of system (3.12):

  a'''_{44} x_4 = a'''_{45},        (3.14)

where

  a'''_{4j} = a''_{4j} − a''_{43} c_{3j}        (j = 4, 5).

Then, when we have done this for all the pivots, we will be left with a reduced system of equations that looks like this:

  x_1 + c_{12} x_2 + c_{13} x_3 + c_{14} x_4 = c_{15}
        x_2 + c_{23} x_3 + c_{24} x_4 = c_{25}
              x_3 + c_{34} x_4 = c_{35}
                    a'''_{44} x_4 = a'''_{45}        (3.15)

The procedure up to this point is termed Gaussian elimination.
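The forward-elimination steps (3.7)-(3.15) can be summarized in a short sketch (my own Python illustration, not code from the lecture). The right-hand side is stored as the last column of the augmented matrix, matching the a_{i5} notation; every pivot is assumed nonzero and no row interchanges are performed. Unlike (3.15), the sketch also normalizes the final pivot row, which is exactly the "divide the 4th row" step of the worked example below:

import numpy as np

def gaussian_eliminate(A, b):
    """Reduce [A | b] to upper-triangular form with ones on the diagonal."""
    n = len(b)
    aug = np.hstack([np.asarray(A, dtype=float),
                     np.asarray(b, dtype=float).reshape(-1, 1)])   # augmented matrix
    for k in range(n):
        aug[k, :] /= aug[k, k]                   # divide the pivot row by its pivot, cf. (3.8)
        for i in range(k + 1, n):
            aug[i, :] -= aug[i, k] * aug[k, :]   # introduce zeros below the pivot, cf. (3.10)
    return aug                                   # rows now correspond to system (3.15)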

But how do we solve for the x's? The last x (x_4 in this example) is already isolated, namely

  x_4 = a'''_{45} / a'''_{44}.        (3.16)

With the last x known we can move to the penultimate x,

  x_3 = c_{35} − c_{34} x_4,        (3.17)

and then proceed with the x before that one. The typical step is

  x_i = c_{i5} − ∑_{j=i+1}^{4} c_{ij} x_j        (i = 3, 2, 1).        (3.18)

The procedure defined by equation (3.18) is called backsubstitution. The combination of Gaussian elimination and backsubstitution yields a solution to the set of equations.
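Backsubstitution runs through the reduced rows from the bottom up. A matching sketch (again my own illustration, operating on the array returned by gaussian_eliminate above):

import numpy as np

def back_substitute(aug):
    """Apply (3.18) to the reduced augmented matrix returned by gaussian_eliminate."""
    n = aug.shape[0]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                         # last unknown first, cf. (3.16)
        x[i] = aug[i, n] - aug[i, i + 1:n] @ x[i + 1:n]    # formula (3.18)
    return x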

Example

Solve the following system of linear equations using Gaussian elimination and backsubstitution:

The pivot element is a_{11} = 2.0. Now divide the first equation by 2.0 to introduce 1 in the pivot position. We then multiply the first equation by 0.4, 0.3 and 1.0 and subtract it from the 2nd, 3rd and 4th equations, so that zeros are introduced in the first column everywhere except in the first row. The augmented matrix becomes:


The pivot element is now a'_{22} = 0.3. Hence we can use this element to introduce zeros in the 2nd column below the 2nd row. Divide the second row by 0.3 to introduce 1 in the pivot position. After that we multiply the second row by –1.15 and –0.3 and subtract it from the 3rd and 4th rows, so that zeros are introduced in the second column everywhere except in the first and second rows:

At the 3rd step the pivot element is a''_{33}. After the calculations the augmented matrix becomes:

Now divide the 4th row by 1.1199786 and begin backsubstitution:
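As an end-to-end illustration (an arbitrary, well-conditioned 4×4 system of my own, not the one used in this example), the two sketches above give:

import numpy as np

# The exact solution of this made-up system is x = (1, 1, 1, 1).
A = np.array([[2.0, 1.0, 0.0, 1.0],
              [0.4, 1.0, 1.0, 0.0],
              [0.3, 0.0, 2.0, 1.0],
              [1.0, 1.0, 1.0, 3.0]])
b = np.array([4.0, 2.4, 3.3, 6.0])

aug = gaussian_eliminate(A, b)     # forward elimination, system (3.15)
x = back_substitute(aug)           # backsubstitution, formula (3.18)
print(x)                           # approximately [1. 1. 1. 1.]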
