How to find the inverse matrix. The inverse matrix. Finding the inverse matrix using elementary transformations

The matrix A⁻¹ is called the inverse matrix with respect to the matrix A if A·A⁻¹ = E, where E is the identity matrix of order n. An inverse matrix can exist only for square matrices.

Purpose of the service. Using this service you can find, online, the algebraic complements, the transposed matrix Aᵀ, the allied matrix and the inverse matrix. The solution is carried out directly on the website (online) and is free. The calculation results are presented in a report in Word and Excel format (i.e., it is possible to check the solution). See the design example.

Instructions. To obtain a solution, it is necessary to specify the dimension of the matrix. Next, fill out matrix A in the new dialog box.

See also: Inverse matrix using the Gauss-Jordan method

Algorithm for finding the inverse matrix

  1. Finding the transposed matrix Aᵀ.
  2. Determining the algebraic complements: replace each element of the matrix with its algebraic complement.
  3. Composing the inverse matrix from the algebraic complements: each element of the resulting matrix is divided by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
The next algorithm for finding the inverse matrix is similar to the previous one, except for the order of some steps: first the algebraic complements are calculated, and then the allied matrix C is determined.
  1. Determine whether the matrix is square. If it is not, there is no inverse matrix for it.
  2. Calculate the determinant of the matrix A. If it is not equal to zero, we continue the solution; otherwise the inverse matrix does not exist.
  3. Determine the algebraic complements.
  4. Fill in the allied (mutual, adjoint) matrix C.
  5. Compose the inverse matrix from the algebraic complements: each element of the adjoint matrix C is divided by the determinant of the original matrix. The resulting matrix is the inverse of the original matrix.
  6. Perform a check: multiply the original matrix by the resulting one. The result should be the identity matrix.
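As an illustration of the steps above, here is a minimal sketch in Python with NumPy (the matrix `A` below is an arbitrary example, not one from this page):

```python
import numpy as np

def inverse_via_cofactors(A):
    """Invert a square matrix by the algebraic-complement (adjugate) method."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    det = np.linalg.det(A)
    if np.isclose(det, 0.0):
        raise ValueError("Determinant is zero: the inverse matrix does not exist.")
    C = np.empty((n, n))  # matrix of algebraic complements (cofactors)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / det      # transposed cofactor matrix divided by the determinant

A = [[2, -1, 0], [0, 1, 2], [1, 1, 1]]
A_inv = inverse_via_cofactors(A)
print(np.allclose(np.asarray(A) @ A_inv, np.eye(3)))  # check: A * A^-1 = E
```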

Example No. 1. Let's write the matrix in the form:

Algebraic complements:
∆1,2 = -(2·4 - (-2)·(-2)) = -4
∆2,1 = -(2·4 - 5·3) = 7
∆2,3 = -(-1·5 - (-2)·2) = 1
∆3,2 = -(-1·(-2) - 2·3) = 4
A⁻¹ =
0.6 -0.4 0.8
0.7 0.2 0.1
-0.1 0.4 -0.3

Another algorithm for finding the inverse matrix

Let us present another scheme for finding the inverse matrix.
  1. Find the determinant of the given square matrix A.
  2. Find the algebraic complements of all elements of the matrix A.
  3. Write the algebraic complements of the row elements into columns (transposition).
  4. Divide each element of the resulting matrix by the determinant of the matrix A.
As we can see, the transposition operation can be applied either at the beginning, to the original matrix, or at the end, to the resulting matrix of algebraic complements.

A special case: The inverse of the identity matrix E is the identity matrix E.

Let there be a square matrix of nth order

The matrix A⁻¹ is called the inverse matrix in relation to the matrix A if A·A⁻¹ = E, where E is the identity matrix of order n.

An identity matrix is a square matrix in which all the elements on the main diagonal (running from the upper left corner to the lower right corner) are ones and all the other elements are zeros, for example:

An inverse matrix may exist only for square matrices, i.e. for matrices in which the number of rows and the number of columns coincide.

Theorem for the existence condition of an inverse matrix

In order for a matrix to have an inverse matrix, it is necessary and sufficient that it be non-singular.

The matrix A = (A1, A2, ..., An) is called non-singular (non-degenerate) if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, we can say that for an inverse matrix to exist, it is necessary and sufficient that the rank of the matrix equal its dimension, i.e. r = n.

Algorithm for finding the inverse matrix

  1. Write matrix A into the table used for solving systems of equations by the Gaussian method and append the matrix E to it on the right (in place of the right-hand sides of the equations).
  2. Using Jordan transformations, reduce matrix A to a matrix consisting of unit columns; at the same time, apply the same transformations to the matrix E.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E appears under the matrix A of the original table.
  4. Write down the inverse matrix A⁻¹, which is located in the last table under the matrix E of the original table.
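A quick way to reproduce these steps with exact arithmetic is SymPy's reduced row echelon form (`rref`) applied to A with E attached on the right; the matrix below is only a placeholder, not the one from the example:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1], [5, 3]])        # placeholder matrix
n = A.shape[0]
augmented = A.row_join(eye(n))      # the table [A | E]
reduced, _ = augmented.rref()       # Jordan (Gauss-Jordan) transformations
A_inv = reduced[:, n:]              # the right half is A^-1
print(A_inv)                        # Matrix([[3, -1], [-5, 2]])
print(A * A_inv == eye(n))          # check: A * A^-1 = E -> True
```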
Example 1

For matrix A, find the inverse matrix A⁻¹

Solution: We write matrix A and assign the identity matrix E to the right. Using Jordan transformations, we reduce matrix A to the identity matrix E. The calculations are given in Table 31.1.

Let's check the correctness of the calculations by multiplying the original matrix A by the inverse matrix A⁻¹.

As a result of matrix multiplication, the identity matrix was obtained. Therefore, the calculations were made correctly.

Answer:

Solving matrix equations

Matrix equations can look like:

AX = B, XA = B, AXB = C,

where A, B, C are the specified matrices, X is the desired matrix.

Matrix equations are solved by multiplying the equation by inverse matrices.

For example, to find the matrix X from the equation AX = B, you need to multiply this equation by A⁻¹ on the left.

Therefore, to find the solution of the equation AX = B, you need to find the inverse matrix A⁻¹ and multiply it by the matrix B on the right-hand side of the equation: X = A⁻¹B.

Other equations are solved similarly.
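A minimal sketch in Python with NumPy (the matrices A and B are made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])
B = np.array([[5.0, 6.0], [13.0, 16.0]])

X = np.linalg.inv(A) @ B   # AX = B  =>  X = A^-1 * B (multiply by the inverse on the left)
Y = B @ np.linalg.inv(A)   # XA = B  =>  X = B * A^-1 (multiply by the inverse on the right)

print(np.allclose(A @ X, B))  # True
print(np.allclose(Y @ A, B))  # True
```

In numerical practice `np.linalg.solve(A, B)` is usually preferred to forming the inverse explicitly, but the algebra is the same.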

Example 2

Solve the equation AX = B if

Solution: Since the inverse matrix is equal to the one found in Example 1, the solution is X = A⁻¹B.

Matrix method in economic analysis

Along with other methods of economic analysis, matrix methods are also used. These methods are based on linear and vector-matrix algebra and are applied to the analysis of complex and multidimensional economic phenomena. Most often, these methods are used when a comparative assessment of the functioning of organizations and their structural divisions is needed.

In the process of applying matrix analysis methods, several stages can be distinguished.

At the first stage, a system of economic indicators is formed and, on its basis, a matrix of initial data is compiled - a table whose rows show the numbers of the systems being compared (i = 1, 2, ..., n) and whose columns show the numbers of the indicators (j = 1, 2, ..., m).

At the second stage, for each column the largest of the available indicator values is identified and taken as one.

After this, all the values in this column are divided by this largest value, and a matrix of standardized coefficients is formed.

At the third stage, all components of the matrix are squared. If the indicators have different significance, each matrix indicator is assigned a certain weight coefficient k, the value of which is determined by expert opinion.

At the last, fourth stage, the resulting rating values R j are ranked in increasing or decreasing order.

The matrix methods outlined should be used, for example, for comparative analysis of various investment projects, as well as for assessing other economic indicators of organizations.
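A small sketch of the four stages with made-up indicator data (equal weight coefficients are assumed; real weights would come from expert opinion):

```python
import numpy as np

# rows: organizations (i = 1..n), columns: indicators (j = 1..m); made-up data
data = np.array([[3.0, 40.0, 0.8],
                 [5.0, 25.0, 0.9],
                 [4.0, 50.0, 0.7]])
k = np.ones(data.shape[1])                 # weight coefficients (assumed equal here)

standardized = data / data.max(axis=0)     # stage 2: divide each column by its maximum
R = (k * standardized ** 2).sum(axis=1)    # stage 3: square the components and weight them
ranking = np.argsort(-R)                   # stage 4: order the rating values R_j, best first
print(R, ranking)
```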

The matrix $A^{-1}$ is called the inverse of the square matrix $A$ if the condition $A^{-1}\cdot A=A\cdot A^{-1}=E$ is satisfied, where $E$ is the identity matrix, the order of which is equal to the order of the matrix $A$.

A non-singular matrix is a matrix whose determinant is not equal to zero. Accordingly, a singular matrix is one whose determinant is equal to zero.

The inverse matrix $A^{-1}$ exists if and only if the matrix $A$ is non-singular. If the inverse matrix $A^{-1}$ exists, then it is unique.

There are several ways to find the inverse of a matrix, and we will look at two of them. This page discusses the adjoint matrix method, which is considered standard in most higher-mathematics courses. The second method of finding the inverse matrix (the method of elementary transformations), which involves using the Gauss method or the Gauss-Jordan method, is discussed in the second part.

Adjoint matrix method

Let the matrix $A_{n\times n}$ be given. In order to find the inverse matrix $A^{-1}$, three steps are required:

  1. Find the determinant of the matrix $A$ and make sure that $\Delta A\neq 0$, i.e. that the matrix $A$ is non-singular.
  2. Compose the algebraic complements $A_{ij}$ of each element of the matrix $A$ and write down the matrix $A_{n\times n}^{*}=\left(A_{ij} \right)$ of the found algebraic complements.
  3. Write the inverse matrix using the formula $A^{-1}=\frac{1}{\Delta A}\cdot (A^{*})^T$.

The matrix $(A^{*})^T$ is often called the adjoint (reciprocal, allied) matrix of $A$.

If the solution is done manually, the first method is good only for matrices of relatively small orders: second, third, fourth. To find the inverse of a higher-order matrix, other methods are used, for example the Gaussian method, which is discussed in the second part.
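SymPy implements exactly this formula with exact fractions, which is convenient for checking hand calculations (`adjugate()` returns the transposed matrix of cofactors):

```python
from sympy import Matrix

A = Matrix([[-5, 7], [9, 8]])      # the matrix from Example No. 2 below
A_inv = A.adjugate() / A.det()     # A^-1 = (1 / det A) * (A*)^T
print(A_inv)                       # Matrix([[-8/103, 7/103], [9/103, 5/103]])
print(A.inv() == A_inv)            # True
```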

Example No. 1

Find the inverse of matrix $A=\left(\begin{array}{cccc} 5 & -4 & 1 & 0 \\ 12 & -11 & 4 & 0 \\ -5 & 58 & 4 & 0 \\ 3 & -1 & -9 & 0 \end{array} \right)$.

Since all elements of the fourth column are equal to zero, then $\Delta A=0$ (i.e. the matrix $A$ is singular). Since $\Delta A=0$, there is no inverse matrix to matrix $A$.

Answer: the matrix $A^{-1}$ does not exist.

Example No. 2

Find the inverse of matrix $A=\left(\begin{array}{cc} -5 & 7 \\ 9 & 8 \end{array}\right)$. Perform a check.

We use the adjoint matrix method. First, let's find the determinant of the given matrix $A$:

$$ \Delta A=\left| \begin{array}{cc} -5 & 7\\ 9 & 8 \end{array}\right|=-5\cdot 8-7\cdot 9=-103. $$

Since $\Delta A \neq 0$, the inverse matrix exists, so we continue the solution. We find the algebraic complements:

\begin{aligned} & A_{11}=(-1)^2\cdot 8=8; \; A_{12}=(-1)^3\cdot 9=-9;\\ & A_{21}=(-1)^3\cdot 7=-7; \; A_{22}=(-1)^4\cdot (-5)=-5. \end{aligned}

We compose the matrix of algebraic complements: $A^{*}=\left(\begin{array}{cc} 8 & -9\\ -7 & -5 \end{array}\right)$.

We transpose the resulting matrix: $(A^{*})^T=\left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right)$ (the resulting matrix is often called the adjoint, or allied, matrix to the matrix $A$). Using the formula $A^{-1}=\frac{1}{\Delta A}\cdot (A^{*})^T$, we have:

$$ A^{-1}=\frac{1}{-103}\cdot \left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right) =\left(\begin{array}{cc} -8/103 & 7/103\\ 9/103 & 5/103 \end{array}\right) $$

So, the inverse matrix is found: $A^{-1}=\left(\begin{array}{cc} -8/103 & 7/103\\ 9/103 & 5/103 \end{array}\right)$. To check that the result is correct, it is enough to verify one of the equalities: $A^{-1}\cdot A=E$ or $A\cdot A^{-1}=E$. Let's check the equality $A^{-1}\cdot A=E$. In order to work less with fractions, we will substitute the matrix $A^{-1}$ not in the form $\left(\begin{array}{cc} -8/103 & 7/103\\ 9/103 & 5/103 \end{array}\right)$, but in the form $-\frac{1}{103}\cdot \left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right)$:

$$ A^{-1}\cdot A =-\frac{1}{103}\cdot \left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right)\cdot\left(\begin{array}{cc} -5 & 7 \\ 9 & 8 \end{array}\right) =-\frac{1}{103}\cdot\left(\begin{array}{cc} -103 & 0 \\ 0 & -103 \end{array}\right) =\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) =E $$

Answer: $A^{-1}=\left(\begin{array}{cc} -8/103 & 7/103\\ 9/103 & 5/103 \end{array}\right)$.

Example No. 3

Find the inverse matrix for the matrix $A=\left(\begin{array}{ccc} 1 & 7 & 3 \\ -4 & 9 & 4 \\ 0 & 3 & 2\end{array} \right)$. Perform a check.

Let's start by calculating the determinant of the matrix $A$. So, the determinant of the matrix $A$ is:

$$ \Delta A=\left| \begin{array}{ccc} 1 & 7 & 3 \\ -4 & 9 & 4 \\ 0 & 3 & 2\end{array} \right| = 18-36+56-12=26. $$

Since $\Delta A\neq 0$, the inverse matrix exists, so we continue the solution. We find the algebraic complements of each element of the given matrix:

$$ \begin{aligned} & A_{11}=(-1)^{2}\cdot\left|\begin{array}{cc} 9 & 4\\ 3 & 2\end{array}\right|=6;\; A_{12}=(-1)^{3}\cdot\left|\begin{array}{cc} -4 & 4 \\ 0 & 2\end{array}\right|=8;\; A_{13}=(-1)^{4}\cdot\left|\begin{array}{cc} -4 & 9\\ 0 & 3\end{array}\right|=-12;\\ & A_{21}=(-1)^{3}\cdot\left|\begin{array}{cc} 7 & 3\\ 3 & 2\end{array}\right|=-5;\; A_{22}=(-1)^{4}\cdot\left|\begin{array}{cc} 1 & 3\\ 0 & 2\end{array}\right|=2;\; A_{23}=(-1)^{5}\cdot\left|\begin{array}{cc} 1 & 7\\ 0 & 3\end{array}\right|=-3;\\ & A_{31}=(-1)^{4}\cdot\left|\begin{array}{cc} 7 & 3\\ 9 & 4\end{array}\right|=1;\; A_{32}=(-1)^{5}\cdot\left|\begin{array}{cc} 1 & 3\\ -4 & 4\end{array}\right|=-16;\; A_{33}=(-1)^{6}\cdot\left|\begin{array}{cc} 1 & 7\\ -4 & 9\end{array}\right|=37. \end{aligned} $$

We compose the matrix of algebraic complements and transpose it:

$$ A^*=\left(\begin{array}{ccc} 6 & 8 & -12 \\ -5 & 2 & -3 \\ 1 & -16 & 37\end{array} \right); \; (A^*)^T=\left(\begin{array}{ccc} 6 & -5 & 1 \\ 8 & 2 & -16 \\ -12 & -3 & 37\end{array} \right). $$

Using the formula $A^{-1}=\frac{1}{\Delta A}\cdot (A^{*})^T$, we get:

$$ A^{-1}=\frac{1}{26}\cdot \left(\begin{array}{ccc} 6 & -5 & 1 \\ 8 & 2 & -16 \\ -12 & -3 & 37\end{array} \right)= \left(\begin{array}{ccc} 3/13 & -5/26 & 1/26 \\ 4/13 & 1/13 & -8/13 \\ -6/13 & -3/26 & 37/26 \end{array} \right) $$

So $A^{-1}=\left(\begin{array}{ccc} 3/13 & -5/26 & 1/26 \\ 4/13 & 1/13 & -8/13 \\ -6/13 & -3/26 & 37/26 \end{array} \right)$. To check that the result is correct, it is enough to verify one of the equalities: $A^{-1}\cdot A=E$ or $A\cdot A^{-1}=E$. Let's check the equality $A\cdot A^{-1}=E$. In order to work less with fractions, we will substitute the matrix $A^{-1}$ not in the form $\left(\begin{array}{ccc} 3/13 & -5/26 & 1/26 \\ 4/13 & 1/13 & -8/13 \\ -6/13 & -3/26 & 37/26 \end{array} \right)$, but in the form $\frac{1}{26}\cdot \left(\begin{array}{ccc} 6 & -5 & 1 \\ 8 & 2 & -16 \\ -12 & -3 & 37\end{array} \right)$:

$$ A\cdot A^{-1} =\left(\begin{array}{ccc} 1 & 7 & 3 \\ -4 & 9 & 4\\ 0 & 3 & 2\end{array} \right)\cdot \frac{1}{26}\cdot \left(\begin{array}{ccc} 6 & -5 & 1 \\ 8 & 2 & -16 \\ -12 & -3 & 37\end{array} \right) =\frac{1}{26}\cdot\left(\begin{array}{ccc} 26 & 0 & 0 \\ 0 & 26 & 0 \\ 0 & 0 & 26\end{array} \right) =\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{array} \right) =E $$

The check was successful, the inverse matrix $A^(-1)$ was found correctly.

Answer: $A^{-1}=\left(\begin{array}{ccc} 3/13 & -5/26 & 1/26 \\ 4/13 & 1/13 & -8/13 \\ -6/13 & -3/26 & 37/26 \end{array} \right)$.

Example No. 4

Find the matrix inverse of matrix $A=\left(\begin{array}{cccc} 6 & -5 & 8 & 4\\ 9 & 7 & 5 & 2 \\ 7 & 5 & 3 & 7\\ -4 & 8 & -8 & -3 \end{array} \right)$.

For a fourth-order matrix, finding the inverse matrix using algebraic complements is somewhat laborious. However, such examples do appear in tests.

To find the inverse of a matrix, you first need to calculate the determinant of the matrix $A$. The best way to do this in this situation is to expand the determinant along a row (or column). We select any row or column and find the algebraic complements of each element of the selected row or column.

For example, for the first line we get:

$$ A_{11}=\left|\begin{array}{ccc} 7 & 5 & 2\\ 5 & 3 & 7\\ 8 & -8 & -3 \end{array}\right|=556; \; A_{12}=-\left|\begin{array}{ccc} 9 & 5 & 2\\ 7 & 3 & 7 \\ -4 & -8 & -3 \end{array}\right|=-300; $$ $$ A_{13}=\left|\begin{array}{ccc} 9 & 7 & 2\\ 7 & 5 & 7\\ -4 & 8 & -3 \end{array}\right|=-536;\; A_{14}=-\left|\begin{array}{ccc} 9 & 7 & 5\\ 7 & 5 & 3\\ -4 & 8 & -8 \end{array}\right|=-112. $$

The determinant of the matrix $A$ is calculated using the following formula:

$$ \Delta(A)=a_{11}\cdot A_{11}+a_{12}\cdot A_{12}+a_{13}\cdot A_{13}+a_{14}\cdot A_{14}=6\cdot 556+(-5)\cdot(-300)+8\cdot(-536)+4\cdot(-112)=100. $$

The algebraic complements of the elements of the remaining rows are found in the same way:

$$ \begin{aligned} & A_{21}=-77;\;A_{22}=50;\;A_{23}=87;\;A_{24}=4;\\ & A_{31}=-93;\;A_{32}=50;\;A_{33}=83;\;A_{34}=36;\\ & A_{41}=473;\;A_{42}=-250;\;A_{43}=-463;\;A_{44}=-96. \end{aligned} $$

Matrix of algebraic complements: $A^*=\left(\begin{array}{cccc} 556 & -300 & -536 & -112\\ -77 & 50 & 87 & 4 \\ -93 & 50 & 83 & 36\\ 473 & -250 & -463 & -96\end{array}\right)$.

Adjoint matrix: $(A^*)^T=\left(\begin{array}{cccc} 556 & -77 & -93 & 473\\ -300 & 50 & 50 & -250 \\ -536 & 87 & 83 & -463\\ -112 & 4 & 36 & -96\end{array}\right)$.

Inverse matrix:

$$ A^{-1}=\frac{1}{100}\cdot \left(\begin{array}{cccc} 556 & -77 & -93 & 473\\ -300 & 50 & 50 & -250 \\ -536 & 87 & 83 & -463\\ -112 & 4 & 36 & -96 \end{array} \right)= \left(\begin{array}{cccc} 139/25 & -77/100 & -93/100 & 473/100 \\ -3 & 1/2 & 1/2 & -5/2 \\ -134/25 & 87/100 & 83/100 & -463/100 \\ -28/25 & 1/25 & 9/25 & -24/25 \end{array} \right) $$

The check, if desired, can be done in the same way as in the previous examples.

Answer: $A^{-1}=\left(\begin{array}{cccc} 139/25 & -77/100 & -93/100 & 473/100 \\ -3 & 1/2 & 1/2 & -5/2 \\ -134/25 & 87/100 & 83/100 & -463/100 \\ -28/25 & 1/25 & 9/25 & -24/25 \end{array} \right)$.

In the second part, we will consider another way to find the inverse matrix, which involves the use of transformations of the Gaussian method or the Gauss-Jordan method.

This topic is one of the most hated among students. Worse, probably, are only determinants.

The trick is that the very concept of an inverse element (and I'm not just talking about matrices) refers us to the operation of multiplication. Even in the school curriculum multiplication is considered a complex operation, and matrix multiplication is generally a separate topic, to which I have dedicated a whole paragraph and a video lesson.

Today we will not go into the details of matrix calculations. Let’s just remember: how matrices are designated, how they are multiplied, and what follows from this.

Review: Matrix Multiplication

First of all, let's agree on notation. A matrix $A$ of size $\left[ m\times n \right]$ is simply a table of numbers with exactly $m$ rows and $n$ columns:

\[A=\underbrace{\left[ \begin{matrix} a_{11} & a_{12} & ... & a_{1n} \\ a_{21} & a_{22} & ... & a_{2n} \\ ... & ... & ... & ... \\ a_{m1} & a_{m2} & ... & a_{mn} \\\end{matrix} \right]}_{n}\]

To avoid accidentally mixing up rows and columns (believe me, in an exam you can confuse a one with a two, let alone some rows), just look at the picture:

Determining indices for matrix cells

What's happening? If you place the standard coordinate system $OXY$ in the top left corner and direct the axes so that they cover the entire matrix, then each cell of this matrix can be uniquely associated with coordinates $\left(x;y\right)$ - this will be the row number and the column number.

Why is the coordinate system placed in the upper left corner? Yes, because it is from there that we begin to read any texts. It's very easy to remember.

Why is the $x$ axis directed downwards and not to the right? Again, it's simple: take a standard coordinate system (the $x$ axis goes to the right, the $y$ axis goes up) and rotate it so that it covers the matrix. This is a 90 degree clockwise rotation - we see the result in the picture.

In general, we have figured out how to determine the indices of matrix elements. Now let's look at multiplication.

Definition. Matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$, when the number of columns in the first coincides with the number of rows in the second, are called consistent.

Exactly in that order. To avoid confusion, one can say that the matrices $A$ and $B$ form an ordered pair $\left(A;B \right)$: if they are consistent in this order, it is not at all necessary that $B$ and $A$, i.e. the pair $\left(B;A \right)$, are consistent as well.

Only consistent matrices can be multiplied.

Definition. The product of consistent matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$ is the new matrix $C=\left[ m\times k \right]$, whose elements $c_{ij}$ are calculated according to the formula:

\[c_{ij}=\sum\limits_{s=1}^{n}a_{is}\cdot b_{sj}\]

In other words: to get the element $c_{ij}$ of the matrix $C=A\cdot B$, you need to take the $i$th row of the first matrix and the $j$th column of the second matrix, then multiply the elements of this row and column in pairs, and add up the results.
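As a sketch, the formula translates into code almost literally (in practice NumPy's built-in `A @ B` does the same thing):

```python
import numpy as np

def matmul(A, B):
    """C = A * B with c_ij = sum over s of a_is * b_sj (A and B must be consistent)."""
    m, n = A.shape
    n2, k = B.shape
    assert n == n2, "the number of columns of A must equal the number of rows of B"
    C = np.zeros((m, k))
    for i in range(m):
        for j in range(k):
            C[i, j] = sum(A[i, s] * B[s, j] for s in range(n))
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(np.allclose(matmul(A, B), A @ B))  # True
```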

Yes, that’s such a harsh definition. Several facts immediately follow from it:

  1. Matrix multiplication, generally speaking, is non-commutative: $A\cdot B\ne B\cdot A$;
  2. However, multiplication is associative: $\left(A\cdot B \right)\cdot C=A\cdot \left(B\cdot C \right)$;
  3. And even distributively: $\left(A+B \right)\cdot C=A\cdot C+B\cdot C$;
  4. And once again distributively: $A\cdot \left(B+C \right)=A\cdot B+A\cdot C$.

The distributivity of multiplication had to be described separately for the left and right sum factor precisely because of the non-commutativity of the multiplication operation.

If it turns out that $A\cdot B=B\cdot A$, such matrices are called commutative.

Among all the matrices that are multiplied by something there, there are special ones - those that, when multiplied by any matrix $A$, again give $A$:

Definition. A matrix $E$ is called identity if $A\cdot E=A$ or $E\cdot A=A$. In the case of a square matrix $A$ we can write:

The identity matrix is ​​a frequent guest when solving matrix equations. And in general, a frequent guest in the world of matrices. :)

And because of this $E$, someone came up with all the nonsense that will be written next.

What is an inverse matrix

Since matrix multiplication is a very labor-intensive operation (you have to multiply a bunch of rows and columns), the concept of an inverse matrix also turns out to be not the most trivial one, and it requires some explanation.

Key Definition

Well, it's time to know the truth.

Definition. A matrix $B$ is called the inverse of a matrix $A$ if $A\cdot B=B\cdot A=E$.

The inverse matrix is denoted by $A^{-1}$ (not to be confused with a power!), so the definition can be rewritten as follows: $A\cdot A^{-1}=A^{-1}\cdot A=E$.

It would seem that everything is extremely simple and clear. But when analyzing this definition, several questions immediately arise:

  1. Does an inverse matrix always exist? And if not always, then how to determine: when it exists and when it does not?
  2. And who said that there is exactly one such matrix? What if for some initial matrix $A$ there is a whole crowd of inverses?
  3. What do all these "inverses" look like? And how exactly should we calculate them?

As for calculation algorithms, we will talk about this a little later. But we will answer the remaining questions right now. Let us formulate them in the form of separate statements-lemmas.

Basic properties

Let's start with how the matrix $A$ should, in principle, look in order for $A^{-1}$ to exist for it. Now we will make sure that both of these matrices must be square, and of the same size: $\left[ n\times n \right]$.

Lemma 1. Given a matrix $A$ and its inverse $A^{-1}$. Then both of these matrices are square, and of the same order $n$.

Proof. It's simple. Let the matrix $A=\left[ m\times n \right]$, $A^{-1}=\left[ a\times b \right]$. Since the product $A\cdot A^{-1}=E$ exists by definition, the matrices $A$ and $A^{-1}$ are consistent in the order shown:

\[\begin{align} & \left[ m\times n \right]\cdot \left[ a\times b \right]=\left[ m\times b \right] \\ & n=a \end{align}\]

This is a direct consequence of the matrix multiplication algorithm: the coefficients $n$ and $a$ are "transit" and must be equal.

At the same time, the inverse multiplication is also defined: $A^{-1}\cdot A=E$, therefore the matrices $A^{-1}$ and $A$ are also consistent in the specified order:

\[\begin{align} & \left[ a\times b \right]\cdot \left[ m\times n \right]=\left[ a\times n \right] \\ & b=m \end{align}\]

Thus, without loss of generality, we can assume that $A=\left[ m\times n \right]$, $A^{-1}=\left[ n\times m \right]$. However, by definition $A\cdot A^{-1}=A^{-1}\cdot A$, therefore the sizes of the matrices strictly coincide:

\[\begin{align} & \left[ m\times n \right]=\left[ n\times m \right] \\ & m=n \end{align}\]

So it turns out that all three matrices - $A$, $A^{-1}$ and $E$ - are square matrices of size $\left[ n\times n \right]$. The lemma is proven.

Well, that's already good. We see that only square matrices can be invertible. Now let's make sure that the inverse matrix is always unique.

Lemma 2. Given a matrix $A$ and its inverse $A^{-1}$. Then this inverse matrix is unique.

Proof. Let's argue by contradiction: let the matrix $A$ have at least two inverses - $B$ and $C$. Then, according to the definition, the following equalities are true:

\[\begin{align} & A\cdot B=B\cdot A=E; \\ & A\cdot C=C\cdot A=E. \\ \end{align}\]

From Lemma 1 we conclude that all four matrices - $A$, $B$, $C$ and $E$ - are square of the same order: $\left[ n\times n \right]$. Therefore, the product $B\cdot A\cdot C$ is defined.

Since matrix multiplication is associative (but not commutative!), we can write:

\[\begin{align} & B\cdot A\cdot C=\left(B\cdot A \right)\cdot C=E\cdot C=C; \\ & B\cdot A\cdot C=B\cdot \left(A\cdot C \right)=B\cdot E=B; \\ & B\cdot A\cdot C=C=B\Rightarrow B=C. \\ \end{align}\]

We have obtained the only possible option: the two instances of the inverse matrix are equal. The lemma is proven.

The above arguments repeat almost verbatim the proof of the uniqueness of the inverse element for all real numbers $b\ne 0$. The only significant addition is taking into account the dimension of the matrices.

However, we still do not know anything about whether every square matrix is ​​invertible. Here the determinant comes to our aid - this is a key characteristic for all square matrices.

Lemma 3. Given a matrix $A$. If its inverse matrix $A^{-1}$ exists, then the determinant of the original matrix is nonzero:

\[\left| A \right|\ne 0\]

Proof. We already know that $A$ and $A^{-1}$ are square matrices of size $\left[ n\times n \right]$. Therefore, for each of them we can calculate the determinant: $\left| A \right|$ and $\left| A^{-1} \right|$. However, the determinant of a product is equal to the product of the determinants:

\[\left| A\cdot B \right|=\left| A \right|\cdot \left| B \right|\Rightarrow \left| A\cdot A^{-1} \right|=\left| A \right|\cdot \left| A^{-1} \right|\]

But according to the definition, $A\cdot A^{-1}=E$, and the determinant of $E$ is always equal to 1, so

\[\begin{align} & A\cdot A^{-1}=E; \\ & \left| A\cdot A^{-1} \right|=\left| E \right|; \\ & \left| A \right|\cdot \left| A^{-1} \right|=1. \\ \end{align}\]

The product of two numbers is equal to one only if each of these numbers is non-zero:

\[\left| A \right|\ne 0;\quad \left| A^{-1} \right|\ne 0.\]

So it turns out that $\left| A \right|\ne 0$. The lemma is proven.

In fact, this requirement is quite logical. Now we will analyze the algorithm for finding the inverse matrix - and it will become completely clear why, with a zero determinant, no inverse matrix in principle can exist.

But first, let’s formulate an “auxiliary” definition:

Definition. A singular matrix is ​​a square matrix of size $\left[ n\times n \right]$ whose determinant is zero.

Thus, we can claim that every invertible matrix is ​​non-singular.

How to find the inverse of a matrix

Now we will consider a universal algorithm for finding inverse matrices. In general, there are two generally accepted algorithms, and we will also consider the second one today.

The one that will be discussed now is very effective for matrices of size $\left[ 2\times 2 \right]$ and - partially - size $\left[ 3\times 3 \right]$. But starting from the size $\left[ 4\times 4 \right]$ it is better not to use it. Why - now you will understand everything yourself.

Algebraic complements

Get ready. Now there will be pain. No, don’t worry: a beautiful nurse in a skirt, stockings with lace will not come to you and give you an injection in the buttock. Everything is much more prosaic: algebraic additions and Her Majesty the “Union Matrix” come to you.

Let's start with the main thing. Let there be a square matrix $A=\left[ n\times n \right]$, whose elements are denoted $a_{ij}$. Then for each such element we can define an algebraic complement:

Definition. The algebraic complement $A_{ij}$ to the element $a_{ij}$, located in the $i$th row and $j$th column of the matrix $A=\left[ n\times n \right]$, is a construction of the form

\[A_{ij}=(-1)^{i+j}\cdot M_{ij}^{*}\]

where $M_{ij}^{*}$ is the determinant of the matrix obtained from the original $A$ by deleting that same $i$th row and $j$th column.

Again. The algebraic complement to a matrix element with coordinates $\left(i;j \right)$ is denoted $A_{ij}$ and is calculated according to the scheme:

  1. First, we delete the $i$th row and $j$th column from the original matrix. We obtain a new square matrix, and we denote its determinant as $M_{ij}^{*}$.
  2. Then we multiply this determinant by $(-1)^{i+j}$ - at first this expression may seem mind-blowing, but in essence we are simply figuring out the sign in front of $M_{ij}^{*}$.
  3. We count and get a specific number. I.e. the algebraic complement is precisely a number, and not some new matrix, etc.
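The three-step scheme fits into a few lines of NumPy (a sketch; the indices i, j here are zero-based, and the example matrix is the one from the first task further below):

```python
import numpy as np

def algebraic_complement(A, i, j):
    """A_ij = (-1)^(i+j) times the determinant of A with row i and column j deleted."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

A = np.array([[3.0, 1.0], [5.0, 2.0]])
print(algebraic_complement(A, 0, 1))   # -5.0, i.e. A_12 in one-based notation
```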

The matrix $M_{ij}^{*}$ itself is called the additional minor to the element $a_{ij}$. And in this sense, the above definition of an algebraic complement is a special case of a more complex definition - the one we looked at in the lesson about the determinant.

Important note. Actually, in “adult” mathematics, algebraic additions are defined as follows:

  1. We take $k$ rows and $k$ columns in a square matrix. At their intersection we get a matrix of size $\left[ k\times k \right]$ - its determinant is called a minor of order $k$ and is denoted $M_{k}$.
  2. Then we cross out these "selected" $k$ rows and $k$ columns. Once again we get a square matrix - its determinant is called an additional minor and is denoted $M_{k}^{*}$.
  3. Multiply $M_{k}^{*}$ by $(-1)^{t}$, where $t$ is (attention now!) the sum of the numbers of all the selected rows and columns. This will be the algebraic complement.

Look at the third step: there is actually a sum of $2k$ terms! Another thing is that for $k=1$ we will get only 2 terms - these will be the same $i+j$ - the "coordinates" of the element $a_{ij}$ for which we are looking for the algebraic complement.

So today we're using a slightly simplified definition. But as we will see later, it will be more than enough. The following thing is much more important:

Definition. The allied matrix $S$ to the square matrix $A=\left[ n\times n \right]$ is a new matrix of size $\left[ n\times n \right]$, which is obtained from $A$ by replacing $a_{ij}$ with the algebraic complements $A_{ij}$:

\[S=\left[ \begin{matrix} A_{11} & A_{12} & ... & A_{1n} \\ A_{21} & A_{22} & ... & A_{2n} \\ ... & ... & ... & ... \\ A_{n1} & A_{n2} & ... & A_{nn} \\\end{matrix} \right]\]

The first thought that arises at the moment of realizing this definition is “how much will have to be counted!” Relax: you will have to count, but not that much. :)

Well, all this is very nice, but why is it necessary? But why.

Main theorem

Let's go back a little. Remember, in Lemma 3 it was stated that the invertible matrix $A$ is always non-singular (that is, its determinant is non-zero: $\left| A \right|\ne 0$).

So, the converse is also true: if the matrix $A$ is non-singular, then it is always invertible. And there is even a scheme for finding $A^{-1}$. Check it out:

Inverse matrix theorem. Let a square matrix $A=\left[ n\times n \right]$ be given, and let its determinant be nonzero: $\left| A \right|\ne 0$. Then the inverse matrix $A^{-1}$ exists and is calculated by the formula:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}\]

And now - everything is the same, but in legible handwriting. To find the inverse matrix, you need:

  1. Calculate the determinant $\left| A \right|$ and make sure it is nonzero.
  2. Construct the union matrix $S$, i.e. count 100500 algebraic complements $A_{ij}$ and put them in place of the $a_{ij}$.
  3. Transpose this matrix $S$, and then multiply it by the number $q=1/\left| A \right|$.

That's all! The inverse matrix $A^{-1}$ has been found. Let's look at examples:

\[\left[ \begin{matrix} 3 & 1 \\ 5 & 2 \\\end{matrix} \right]\]

Solution. Let's check the reversibility. Let's calculate the determinant:

\[\left| A \right|=\left| \begin{matrix} 3 & 1 \\ 5 & 2 \\\end{matrix} \right|=3\cdot 2-1\cdot 5=6-5=1\]

The determinant is different from zero. This means the matrix is ​​invertible. Let's create a union matrix:

Let's calculate the algebraic complements:

\[\begin{align} & A_{11}=(-1)^{1+1}\cdot \left| 2 \right|=2; \\ & A_{12}=(-1)^{1+2}\cdot \left| 5 \right|=-5; \\ & A_{21}=(-1)^{2+1}\cdot \left| 1 \right|=-1; \\ & A_{22}=(-1)^{2+2}\cdot \left| 3 \right|=3. \\ \end{align}\]

Please note: the determinants |2|, |5|, |1| and |3| are determinants of matrices of size $\left[ 1\times 1 \right]$, and not absolute values. I.e. if these determinants contained negative numbers, there would be no need to remove the minus sign.

In total, our union matrix looks like this:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}=\frac{1}{1}\cdot \left( \left[ \begin{array}{*{35}{r}} 2 & -5 \\ -1 & 3 \\\end{array} \right] \right)^{T}=\left[ \begin{array}{*{35}{r}} 2 & -1 \\ -5 & 3 \\\end{array} \right]\]

That's all. The problem is solved.

Answer. $\left[ \begin{array}{*{35}{r}} 2 & -1 \\ -5 & 3 \\\end{array} \right]$

Task. Find the inverse matrix:

\[\left[ \begin{array}{*{35}{r}} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\\end{array} \right]\]

Solution. We calculate the determinant again:

\[\begin{align} & \left| \begin{array}{*{35}{r}} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\\end{array} \right|=\left(1\cdot 2\cdot 1+\left(-1 \right)\cdot \left(-1 \right)\cdot 1+2\cdot 0\cdot 0 \right)-\left(2\cdot 2\cdot 1+\left(-1 \right)\cdot 0\cdot 1+1\cdot \left(-1 \right)\cdot 0 \right)= \\ & =\left(2+1+0 \right)-\left(4+0+0 \right)=-1\ne 0. \\ \end{align}\]

The determinant is nonzero - the matrix is invertible. But now it's going to be really tough: we need to compute as many as 9 (nine!) algebraic complements. And each of them will contain a $\left[ 2\times 2 \right]$ determinant. Here we go:

\[\begin{matrix} A_{11}=(-1)^{1+1}\cdot \left| \begin{matrix} 2 & -1 \\ 0 & 1 \\\end{matrix} \right|=2; \\ A_{12}=(-1)^{1+2}\cdot \left| \begin{matrix} 0 & -1 \\ 1 & 1 \\\end{matrix} \right|=-1; \\ A_{13}=(-1)^{1+3}\cdot \left| \begin{matrix} 0 & 2 \\ 1 & 0 \\\end{matrix} \right|=-2; \\ ... \\ A_{33}=(-1)^{3+3}\cdot \left| \begin{matrix} 1 & -1 \\ 0 & 2 \\\end{matrix} \right|=2; \\ \end{matrix}\]

In short, the union matrix will look like this:

Transposing it and dividing by the determinant, we get the inverse matrix:

\[A^{-1}=\frac{1}{-1}\cdot \left[ \begin{matrix} 2 & 1 & -3 \\ -1 & -1 & 1 \\ -2 & -1 & 2 \\\end{matrix} \right]=\left[ \begin{array}{*{35}{r}} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\\end{array} \right]\]

That's it. Here is the answer.

Answer. $\left[ \begin{array}{*{35}{r}} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\\end{array} \right]$

As you can see, at the end of each example we performed a check. In this regard, an important note:

Don't be lazy to check. Multiply the original matrix by the found inverse matrix - you should get $E$.

Performing this check is much easier and faster than looking for an error in further calculations when, for example, you are solving a matrix equation.
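In NumPy such a check is a one-liner, e.g. for the first task above:

```python
import numpy as np

A = np.array([[3.0, 1.0], [5.0, 2.0]])
A_inv = np.array([[2.0, -1.0], [-5.0, 3.0]])
print(np.allclose(A @ A_inv, np.eye(2)))  # True: the inverse was found correctly
```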

Alternative way

As I said, the inverse matrix theorem works great for sizes $\left[ 2\times 2 \right]$ and $\left[ 3\times 3 \right]$ (in the latter case it is not so "great" anymore), but for larger matrices the sadness begins.

But don’t worry: there is an alternative algorithm with which you can calmly find the inverse even for the matrix $\left[ 10\times 10 \right]$. But, as often happens, to consider this algorithm we need a little theoretical background.

Elementary transformations

Among all possible matrix transformations, there are several special ones - they are called elementary. There are exactly three such transformations:

  1. Multiplication. You can take the $i$th row (column) and multiply it by any number $k\ne 0$;
  2. Addition. Add to the $i$th row (column) any other $j$th row (column), multiplied by any number $k\ne 0$ (you can, of course, take $k=0$, but what would be the point? Nothing would change);
  3. Rearrangement. Take the $i$th and $j$th rows (columns) and swap them.

Why these transformations are called elementary (for large matrices they do not look so elementary) and why there are only three of them - these questions are beyond the scope of today's lesson. Therefore, we will not go into details.

Another thing is important: we have to perform all these perversions on the adjoint matrix. Yes, yes: you heard right. Now there will be one more definition - the last one in today's lesson.

Adjoint matrix

Surely at school you solved systems of equations using the addition method. Well, there, subtract another from one line, multiply some line by a number - that’s all.

So: now everything will be the same, but in an “adult” way. Ready?

Definition. Let a matrix $A=\left[ n\times n \right]$ and an identity matrix $E$ of the same size $n$ be given. Then the adjoint matrix $\left[ A\left| E\right. \right]$ is a new matrix of size $\left[ n\times 2n \right]$ that looks like this:

\[\left[ A\left| E\right. \right]=\left[ \begin(array)(rrrr|rrrr)((a)_(11)) & ((a)_(12)) & ... & ((a)_(1n)) & 1 & 0 & ... & 0 \\((a)_(21)) & ((a)_(22)) & ... & ((a)_(2n)) & 0 & 1 & ... & 0 \\... & ... & ... & ... & ... & ... & ... & ... \\((a)_(n1)) & ((a)_(n2)) & ... & ((a)_(nn)) & 0 & 0 & ... & 1 \\\end(array) \right]\]

In short, we take the matrix $A$, on the right we assign to it the identity matrix $E$ of the required size, we separate them with a vertical bar for beauty - here you have the adjoint. :)

What's the catch? Here's what:

Theorem. Let the matrix $A$ be invertible. Consider the adjoint matrix $\left[ A\left| E\right. \right]$. If, using elementary row transformations, we bring it to the form $\left[ E\left| B\right. \right]$, i.e. by multiplying, subtracting and rearranging rows we obtain from $A$ the matrix $E$ on the left, then the matrix $B$ obtained on the right is the inverse of $A$:

\[\left[ A\left| E\right. \right]\to \left[ E\left| B\right. \right]\Rightarrow B=A^{-1}\]

It's that simple! In short, the algorithm for finding the inverse matrix looks like this:

  1. Write the adjoint matrix $\left[ A\left| E\right. \right]$;
  2. Perform elementary row transformations until $E$ appears in place of $A$;
  3. Of course, something will also appear on the right - a certain matrix $B$. This will be the inverse;
  4. PROFIT! :)
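A bare-bones sketch of this algorithm in Python (row swapping with the largest pivot is added so that a zero on the diagonal does not break the elimination; this is an illustration, not production code):

```python
import numpy as np

def inverse_gauss_jordan(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])          # the adjoint matrix [A | E]
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("the matrix is singular, no inverse exists")
        M[[col, pivot]] = M[[pivot, col]]  # rearrangement of rows
        M[col] /= M[col, col]              # multiplication of a row by a number
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]  # addition of rows
    return M[:, n:]                        # [E | B]  =>  B = A^-1

A = [[1, 5, 1], [3, 2, 1], [6, -2, 1]]     # the matrix from the first task below
print(np.round(inverse_gauss_jordan(A)))   # matches the answer obtained by hand
```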

Of course, this is much easier said than done. So let's look at a couple of examples: for sizes $\left[ 3\times 3 \right]$ and $\left[ 4\times 4 \right]$.

Task. Find the inverse matrix:

\[\left[ \begin{array}{*{35}{r}} 1 & 5 & 1 \\ 3 & 2 & 1 \\ 6 & -2 & 1 \\\end{array} \right]\]

Solution. We create the adjoint matrix:

\[\left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\\end{array} \right]\]

Since the last column of the original matrix is ​​filled with ones, subtract the first row from the rest:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\\end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\\end{array} \right] \\ \end{align}\]

There are no more ones in the third column, except in the first row. But we don't touch that row, otherwise the ones we have just removed would begin to "multiply" again in the third column.

But we can subtract the second line twice from the last - we get one in the lower left corner:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\\end{array} \right]\begin{matrix} \ \\ \downarrow \\ -2 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right] \\ \end{align}\]

Now we can subtract the last row from the first and twice from the second - this way we “zero” the first column:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right]\begin{matrix} -1 \\ -2 \\ \uparrow \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right] \\ \end{align}\]

Multiply the second line by −1, and then subtract it 6 times from the first and add 1 time to the last:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right]\begin{matrix} \ \\ \left| \cdot \left(-1 \right) \right. \\ \ \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right]\begin{matrix} -6 \\ \updownarrow \\ +1 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 0 & 1 & -18 & 32 & -13 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & 0 & 0 & 4 & -7 & 3 \\\end{array} \right] \\ \end{align}\]

All that remains is to swap lines 1 and 3:

\[\left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & 4 & -7 & 3 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 0 & 0 & 1 & -18 & 32 & -13 \\\end{array} \right]\]

Ready! On the right is the required inverse matrix.

Answer. $\left[ \begin{array}{*{35}{r}} 4 & -7 & 3 \\ 3 & -5 & 2 \\ -18 & 32 & -13 \\\end{array} \right]$

Task. Find the inverse matrix:

\[\left[ \begin{matrix} 1 & 4 & 2 & 3 \\ 1 & -2 & 1 & -2 \\ 1 & -1 & 1 & 1 \\ 0 & -10 & -2 & -5 \\\end{matrix} \right]\]

Solution. We compose the adjoint again:

\[\left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right]\]

Let's cry a little, be sad about how much we have to count now... and start counting. First, let’s “zero out” the first column by subtracting row 1 from rows 2 and 3:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\ \ \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right] \\ \end{align}\]

We see too many minus signs in rows 2-4. Multiply all three rows by −1, and then "burn out" the third column by subtracting row 3 from the rest:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right]\begin{matrix} \ \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 6 & 1 & 5 & 1 & -1 & 0 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 10 & 2 & 5 & 0 & 0 & 0 & -1 \\\end{array} \right]\begin{matrix} -2 \\ -1 \\ \updownarrow \\ -2 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right] \\ \end{align}\]

Now is the time to “fry” the last column of the original matrix: subtract row 4 from the rest:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right]\begin{matrix} +1 \\ -3 \\ -2 \\ \uparrow \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right] \\ \end{align}\]

Final throw: “burn out” the second column by subtracting line 2 from lines 1 and 3:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right]\begin{matrix} 6 \\ \updownarrow \\ -5 \\ \ \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 0 & 0 & 0 & 33 & -6 & -26 & 17 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 0 & 1 & 0 & -25 & 5 & 20 & -13 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right] \\ \end{align}\]

And again the identity matrix is ​​on the left, which means the inverse is on the right. :)

Answer. $\left[ \begin{matrix} 33 & -6 & -26 & 17 \\ 6 & -1 & -5 & 3 \\ -25 & 5 & 20 & -13 \\ -2 & 0 & 2 & -1 \\\end{matrix} \right]$

That's all. Do the check yourself - I can't be bothered. :)

1. Find the determinant of the original matrix. If it is equal to zero, the matrix is singular and there is no inverse matrix. If it is nonzero, the matrix is non-singular and an inverse matrix exists.

2. Find the matrix transposed to the original one.

3. Find the algebraic complements of its elements and compose the adjoint matrix from them.

4. Compose the inverse matrix using the formula.

5. Check the correctness of the calculation of the inverse matrix, based on its definition: A·A⁻¹ = A⁻¹·A = E.

Example. Find the matrix inverse of this: .

Solution.

1) Matrix determinant

.

2) Find the algebraic complements of the matrix elements and compose the adjoint matrix from them:

3) Calculate the inverse matrix:

,

4) Check:

№4 Matrix rank. Linear independence of matrix rows

For solving and studying a number of mathematical and applied problems, the concept of matrix rank is important.

In a matrix of size m×n, by deleting any rows and columns, you can isolate square submatrices of order k, where k ≤ min(m; n). The determinants of such submatrices are called minors of order k of the matrix.

For example, from such matrices you can obtain submatrices of the 1st, 2nd and 3rd order.

Definition. The rank of a matrix is the highest order of the nonzero minors of that matrix. Notation: r(A).

From the definition it follows:

1) The rank of a matrix does not exceed the smaller of its dimensions, i.e. r(A) ≤ min(m; n).

2) r(A) = 0 if and only if all elements of the matrix are equal to zero.

3) For a square matrix of order n, r(A) = n if and only if the matrix is non-singular.

Since directly enumerating all possible minors of the matrix, starting with the largest size, is difficult (time-consuming), they use elementary matrix transformations that preserve the rank of the matrix.

Elementary matrix transformations:

1) Discarding the zero row (column).

2) Multiplying all elements of a row (column) by a nonzero number.

3) Changing the order of rows (columns) of the matrix.

4) Adding to each element of one row (column) the corresponding elements of another row (column), multiplied by any number.

5) Matrix transposition.

Definition. A matrix obtained from a given matrix using elementary transformations is called equivalent to it and is denoted A ~ B.

Theorem. The rank of the matrix does not change during elementary matrix transformations.

Using elementary transformations, you can reduce a matrix to the so-called echelon (step) form, for which calculating the rank is not difficult.

A matrix is called an echelon matrix if it has the form:

Obviously, the rank of an echelon matrix is equal to the number of its non-zero rows, since they contain a nonzero minor of exactly that order:

.

Example. Determine the rank of a matrix using elementary transformations.

The rank of the matrix is ​​equal to the number of non-zero rows, i.e. .
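For a quick check of such hand computations, the rank is available directly in NumPy (an illustrative matrix, not the one from the example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # equals 2 * row 1, so it adds nothing to the rank
              [1.0, 0.0, 1.0]])
print(np.linalg.matrix_rank(A))  # 2
```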

№5 Linear independence of matrix rows

Let a matrix of size m×n be given

Let's denote the rows of the matrix as follows:

Two rows are called equal if their corresponding elements are equal.

Let us introduce the operations of multiplying a row by a number and of adding rows, performed element by element:

Definition. A row is called a linear combination of the rows of a matrix if it is equal to the sum of the products of these rows by arbitrary real numbers (any numbers):

Definition. The rows of a matrix are called linearly dependent if there exist numbers, not all equal to zero simultaneously, such that the linear combination of the matrix rows equals the zero row:

Where . (1.1)

Linear dependence of the matrix rows means that at least one row of the matrix is a linear combination of the others.

Definition. If the linear combination of rows (1.1) equals the zero row only when all the coefficients are zero, then the rows are called linearly independent.

Matrix rank theorem. The rank of a matrix is equal to the maximum number of its linearly independent rows (columns), through which all of its other rows (columns) can be linearly expressed.

The theorem plays a fundamental role in matrix analysis, in particular, in the study of systems of linear equations.
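Following the theorem, rows are linearly dependent exactly when the rank is smaller than the number of rows; this is easy to test numerically (an illustrative matrix):

```python
import numpy as np

rows = np.array([[1.0, 2.0, -1.0],
                 [0.0, 1.0,  3.0],
                 [1.0, 4.0,  5.0]])   # row 3 = row 1 + 2 * row 2
rank = np.linalg.matrix_rank(rows)
print(rank < rows.shape[0])           # True: the rows are linearly dependent
```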

№6 Solving a system of linear equations with unknowns

Systems of linear equations are widely used in economics.

The system of linear equations with variables has the form:

,

where () are arbitrary numbers called, respectively, the coefficients of the variables and the free terms of the equations.

Brief entry: ().

Definition. A solution of the system is a set of values of the variables which, when substituted, turns each equation of the system into a true equality.

1) A system of equations is called consistent if it has at least one solution, and inconsistent if it has no solutions.

2) A consistent system of equations is called determinate if it has a unique solution, and indeterminate if it has more than one solution.

3) Two systems of equations are called equivalent if they have the same set of solutions (for example, one and the same solution).
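For a consistent and determinate system (a square, non-singular coefficient matrix) the unique solution can be computed directly; a made-up example:

```python
import numpy as np

# illustrative system:  x + 2y = 5,  3x + 4y = 11
A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([5.0, 11.0])
x = np.linalg.solve(A, b)   # equivalent to A^-1 * b for a non-singular A
print(x)                    # [1. 2.]
```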