This part covers the remaining mathematics of time series analysis.

Check out the first part of this end-to-end tutorial on time series analysis:

**Time Series and Machine Learning- An introduction**

Check out the second part here:

**Time Series and Machine Learning – The mathematics beneath**

And welcome to the third part!


## 1. Time Series Autocorrelation

**Definition.** Autocorrelation is the degree of association between values of the same variable at different points in time. It measures how a lagged version of a time series is related to the original series.

Autocorrelation is also known as **serial correlation**. The autoregressive moving-average (ARMA) and autoregressive integrated moving-average (ARIMA) models, which we will cover here, rely heavily on autocorrelation.

The autocorrelation value ranges from -1 to 1. A value between -1 and 0 indicates negative autocorrelation; a value between 0 and 1 indicates positive autocorrelation.
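As a quick illustration, here is a minimal NumPy sketch (the function name and data are my own, for demonstration) that computes the sample autocorrelation of a series at a given lag:

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of a 1-D series at a given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Covariance between the series and its lagged copy,
    # normalized by the lag-0 term (the overall variance).
    return np.dot(x[lag:], x[:-lag]) / np.dot(x, x) if lag else 1.0

# A steadily rising series is strongly positively autocorrelated at lag 1.
trend = np.arange(50, dtype=float)
print(round(autocorr(trend, 1), 3))
```

A value close to 1 here reflects the strong positive dependence between consecutive points of a trending series.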

## 2. Heteroskedasticity

In statistics and economics, **heteroskedasticity** is a major phenomenon: it describes data in which the variance of the residual (error) term is not constant across observations.

The opposite is homoscedasticity, meaning “having the same dispersion.”

For comparison, here’s an example I drew of random points in homoscedasticity vs heteroskedasticity:

In regression modeling, **heteroskedasticity** is an important concept, and regression models are widely used in finance to describe the performance of stocks and investment portfolios.

The best known of these is the Capital Asset Pricing Model (CAPM), which explains the return of a security in terms of its volatility (beta) relative to the market as a whole. Extensions of this model have introduced other predictor variables such as size, momentum, quality, and style (value vs. growth).
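To make the contrast concrete, here is a small simulated sketch (illustrative data and parameters, not the drawing mentioned above) comparing residuals with constant spread against residuals whose spread grows with the predictor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = np.linspace(1.0, 10.0, n)

# Homoscedastic residuals: constant spread regardless of x.
homo = rng.normal(0.0, 1.0, n)
# Heteroskedastic residuals: spread grows with x, so variance is not constant.
hetero = rng.normal(0.0, 1.0, n) * x

# Compare residual standard deviation in the low-x and high-x halves.
for name, resid in [("homoscedastic", homo), ("heteroskedastic", hetero)]:
    lo, hi = resid[: n // 2].std(), resid[n // 2 :].std()
    print(f"{name}: low-x std={lo:.2f}, high-x std={hi:.2f}")
```

In the homoscedastic case the two halves have roughly equal spread; in the heteroskedastic case the spread in the high-x half is clearly larger.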

### Moving Average (1) process

The first-order MA process is defined as:

X_{t} = Z_{t} + β_{1}Z_{t-1} = θ(B)Z_{t}

where θ(B) = (1 + β_{1}B), B is the backshift operator (BZ_{t} = Z_{t-1}), and {Z_{t}} is white noise.

### Moving Average (2) process

The second-order moving average process is defined as:

X_{t} = Z_{t} + β_{1}Z_{t-1} + β_{2}Z_{t-2}

The MA(2) process is stationary for all values of β_{1} and β_{2}.

Conceptually, a **moving-average model** is a linear regression of the series’ current value against current and past (unobserved) white-noise error terms, or random shocks. The random shocks at each point are assumed to be mutually independent and to come from the same distribution, typically a normal distribution with zero mean and constant variance.
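The MA(2) definition above can be simulated directly. This sketch uses illustrative coefficients (β₁ = 0.6, β₂ = 0.3, my own choice) and checks the textbook property that an MA(2) autocorrelation cuts off after lag 2:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
beta1, beta2 = 0.6, 0.3   # illustrative coefficients (an assumption)

z = rng.normal(0.0, 1.0, n + 2)          # white-noise shocks Z_t
# MA(2): X_t = Z_t + beta1 * Z_{t-1} + beta2 * Z_{t-2}
x = z[2:] + beta1 * z[1:-1] + beta2 * z[:-2]

def acf(series, lag):
    """Sample autocorrelation at a positive lag."""
    s = series - series.mean()
    return np.dot(s[lag:], s[:-lag]) / np.dot(s, s)

# Lags 1 and 2 are clearly nonzero; lags 3 and 4 are near zero.
print([round(acf(x, k), 2) for k in range(1, 5)])
```

The sharp cutoff of the sample ACF after lag q is exactly how MA order is identified in practice.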

### Autoregressive (AR) process

Let {Z_{t}} be a purely random process with mean zero and variance σ_{Z}^{2} .

The process {X_{t}}, in which the current value X_{t} is expressed as a finite linear aggregate of the previous p values of the process plus a random component Z_{t}, is called an **autoregressive process** of order p:

X_{t} = Φ_{1}X_{t-1} + Φ_{2}X_{t-2} + ... + Φ_{p}X_{t-p} + Z_{t}

The model is analogous to a multiple linear regression model, except that X_{t} is regressed on past values of X_{t} rather than on separate predictor variables.

And that is exactly why it is called **autoregressive**.
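The AR definition can be sketched in a few lines. The AR(2) coefficients here are illustrative (my own choice), and the root check mirrors the stationarity condition that the roots of Φ(B) = 0 lie outside the unit circle:

```python
import numpy as np

rng = np.random.default_rng(2)
phi1, phi2 = 0.5, 0.3      # illustrative AR(2) coefficients (an assumption)

# Stationarity check: the roots of phi(B) = 1 - phi1*B - phi2*B^2 = 0
# must lie outside the unit circle.
roots = np.roots([-phi2, -phi1, 1.0])   # coefficients in decreasing powers of B
print(all(abs(r) > 1.0 for r in roots))

# Simulate X_t = phi1 * X_{t-1} + phi2 * X_{t-2} + Z_t
n = 500
x = np.zeros(n)
z = rng.normal(0.0, 1.0, n)
for t in range(2, n):
    x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + z[t]
```

Because the roots lie outside the unit circle, the simulated series stays bounded around its mean rather than drifting off.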

**The differences between the AR and MA processes are summarized in the table below:**

| | AR | MA |
|---|---|---|
| Stationarity | Stationary when the roots of Φ(B) = 0 lie outside the unit circle. | Always stationary. |
| Invertibility | Always invertible. | Invertible when the roots of θ(B) = 0 lie outside the unit circle. |
| ACF | Infinite (tails off). | Finite (cuts off after lag q). |

## 3. ARIMA model

It is one of the most popular forecasting methods, popularized by Box and Jenkins in 1970.

An ARIMA model with two seasonal cycles has been suggested for modeling emergency medical service calls.

There are three components to the ARIMA model:

- an **AR(p) process** – the series is regressed on its own previous values
- **Integrated (d)** – raw observations are differenced to make the time series stationary
- an **MA(q) process** – the regression error is a linear combination of current and past white-noise terms

A stationary series is one whose statistical properties are stable over time. Most economic and market data exhibit trends, so the purpose of differencing is to remove any trend or seasonal structure.
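As a small illustration of the “integrated” step, here is a sketch (simulated trending series, my own example) that removes a linear trend with first-order differencing:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# A trending series: linear trend plus noise, so the mean is not stable over time.
t = np.arange(n, dtype=float)
series = 0.5 * t + rng.normal(0.0, 1.0, n)

# The "I" in ARIMA: first-order differencing, y_t = x_t - x_{t-1},
# removes the linear trend and leaves a series with a stable mean.
diffed = np.diff(series)

# The raw series has very different means in its two halves;
# the differenced series does not.
print(round(series[: n // 2].mean(), 1), round(series[n // 2 :].mean(), 1))
print(round(diffed[: n // 2].mean(), 2), round(diffed[n // 2 :].mean(), 2))
```

If one round of differencing is not enough (d > 1), the same operation is simply applied again to the differenced series.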

## Ending Note

And that’s it! You’re now caught up with the mathematics of time series. Next up, we’ll take a real time-series dataset and have some fun with it, so bookmark the site and keep yourself updated.