Welcome back to the second part. If you’ve stumbled directly onto this part, then visit the first part here:

**Linear Algebra for Machine Learning – part 1/2**

So in the last article, we covered basics of linear algebra such as coordinate systems, random variables, and linear equations.


## Linear Algebra for Machine Learning – Working with Matrices

In this part, we’ll focus on the most important task: solving a system of linear equations.

### System of Linear Equations

A **System of Linear Equations** is when we have two or more linear equations working together.

**Example**: Here are two linear equations:

`2x + y = 5`

`−x + y = 2`

Together they are a system of linear equations.

There can be many ways to solve linear equations. The **substitution method** is one of them:

From the first equation, we get: `y = 5 - 2x`.

Substituting that value in the second equation:

`-x + 5 - 2x = 2`

`5 - 3x = 2`

`3x = 3`

`x = 1`

Substituting back into the first equation: `y = 5 - 2(1) = 3`.
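The arithmetic above can be double-checked in a few lines of plain Python (an illustrative sketch, not part of the original article):

```python
# Back-substitute the solution of the system 2x + y = 5, -x + y = 2.
x = 1            # from 3x = 3
y = 5 - 2 * x    # back-substitute into y = 5 - 2x

# Both original equations should hold for (x, y) = (1, 3):
assert 2 * x + y == 5
assert -x + y == 2
print(x, y)
```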

But if the number of equations is greater than 4, this method becomes really cumbersome. So we use another method: the **matrix method**.

Any system of linear equations can be expressed in the form:

`Ax = B`

where,

- **A** = matrix of coefficients (the coefficient matrix)
- **x** = column matrix of unknown variables
- **B** = constant matrix

## Matrices and Matrix Operations

A rectangular array of `m x n` numbers (real or complex), arranged in m horizontal lines (called rows) and n vertical lines (called columns), is called a matrix of order m by n, written as an m x n matrix.

Such an array is enclosed by **[ ]** or **( )**.

An m x n matrix is usually written as `A = [a_ij]_{m x n}`, where `a_ij` denotes the element in the i-th row and j-th column.

Consider the system:

which can be represented as:

**Ax = B**

Once we solve this: x = 1/6, y = -29/6 and z = 25/6.
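In code, the matrix method amounts to a single call. As an illustrative sketch (reusing the small system from the substitution example above, since the three-variable system was shown in an image), NumPy's `np.linalg.solve` solves `Ax = B` directly:

```python
import numpy as np

# The system 2x + y = 5, -x + y = 2 in matrix form Ax = B.
A = np.array([[2.0, 1.0],
              [-1.0, 1.0]])   # coefficient matrix
B = np.array([5.0, 2.0])      # constant matrix

x = np.linalg.solve(A, B)     # solves Ax = B
print(x)                      # approximately [1. 3.]
```

This agrees with the substitution method's result, x = 1 and y = 3.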

### 1. Matrix Addition/Subtraction

The **sum** of two matrices can only be found if both matrices have the same dimension. We add matrices by adding corresponding elements.

Similarly, the **difference** of two matrices can only be found if both matrices have the same dimension. We subtract matrices by subtracting corresponding elements.
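A minimal sketch of both operations, using two illustrative 2x2 matrices of my own (not from the article):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

S = A + B   # element-wise sum:        [[6, 8], [10, 12]]
D = A - B   # element-wise difference: [[-4, -4], [-4, -4]]
print(S)
print(D)
```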

### 2. Matrix Multiplication

To multiply two matrices, follow these steps:

- Check that the number of columns in the first matrix equals the number of rows in the second matrix. (If they don’t match, the two matrices cannot be multiplied.)
- Multiply element by element along each row of the first matrix and down each column of the second matrix, and sum the products.
- The result is a matrix with the same number of rows as the first matrix and the same number of columns as the second matrix.

Let’s take an example. Multiplying these matrices:
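The steps above can be sketched with two illustrative matrices of my own (the article's example was shown in an image):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # 2 x 3
B = np.array([[7,  8],
              [9, 10],
              [11, 12]])       # 3 x 2

# Columns of A (3) match rows of B (3), so the product is defined.
C = A @ B                      # result is 2 x 2: [[58, 64], [139, 154]]
print(C)

# One entry spelled out: row 1 of A times column 1 of B.
assert C[0, 0] == 1*7 + 2*9 + 3*11   # 58
```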

### 3. Determinant of a Matrix

The determinant of a square matrix A is written as |A|. For example, consider a matrix:

We find the determinant as:

`|A| = 4(1×3×1 + (−1)×1×3 + 3×(−3)×3 − (3×3×3 + 3×1×1 + 1×(−3)×(−1)))`

`= 4(3 − 3 − 27 − (27 + 3 + 3))`

`= 4 × (−60)`

`= −240`
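Since the article's matrix was shown in an image, here is a hedged sketch with an illustrative 3x3 matrix of my own, checking a term-by-term expansion (the rule of Sarrus) against NumPy's `np.linalg.det`:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, 1, 4],
              [5, 6, 0]])

# Rule of Sarrus for a 3x3 matrix: (aei + bfg + cdh) - (ceg + afh + bdi).
det_manual = (1*1*0 + 2*4*5 + 3*0*6) - (3*1*5 + 1*4*6 + 2*0*0)
print(det_manual)                 # 1
print(np.linalg.det(A))           # approximately 1.0 (returned as a float)
```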

### 4. **Transpose** of a Matrix

The transpose operation is very important and is one of the easiest.

For any matrix `A_{m x n}`, the transpose is given by `A^T_{n x m}`, and the elements are given by:

`A^T = [A_ji]_{n x m}`
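A quick illustrative sketch: transposing a 2x3 matrix gives a 3x2 matrix whose rows are the original columns.

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 x 3

T = A.T                     # 3 x 2: [[1, 4], [2, 5], [3, 6]]
print(T)
print(T.shape)              # (3, 2)
```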

### 5. **Inverse** of a Matrix

The **inverse** of a square matrix A, written `A^{-1}`, is the matrix that satisfies `A × A^{-1} = I` (the identity matrix). It exists only when `|A| ≠ 0`.
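Since the article's worked example was shown in an image, here is a minimal sketch with an illustrative 2x2 matrix of my own, using NumPy's `np.linalg.inv`:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])   # det = 4*6 - 7*2 = 10, non-zero, so A is invertible

A_inv = np.linalg.inv(A)
print(A_inv)                 # approximately [[0.6, -0.7], [-0.2, 0.4]]

# Check the defining property: A times its inverse is the identity matrix.
print(np.allclose(A @ A_inv, np.eye(2)))   # True
```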

And voila! Now you know 90% of the basics of linear algebra that you need for machine learning. I would, however, suggest you go through the recommended books below.

## Books Recommended

- Schaum’s Outline of Linear Algebra, Sixth Edition (Schaum’s Outlines) (If you buy ONE book, this is definitely it.)
- Introduction to Linear Algebra, Fifth Edition (Gilbert Strang)
- Linear Algebra and Its Applications by David Lay, Steven Lay