Time Series & Machine Learning – Autocorrelation, Heteroskedasticity, ARIMA [3/4]


In this part, we cover the remaining time series mathematics.

Check out the first part of this end-to-end tutorial on time series analysis:

Time Series and Machine Learning- An introduction

Check out the second part here:

Time Series and Machine Learning – The mathematics beneath

And welcome to the third part!

1. Time Series Autocorrelation

Definition. Autocorrelation is the degree of association between a variable’s values at different points in time. It measures how the lagged version of a time series is related to its original, unlagged version.

Autocorrelation is also known as serial correlation. The autoregressive moving-average (ARMA) and autoregressive integrated moving-average (ARIMA) models, which we will be learning about here, both rely heavily on autocorrelation.

The autocorrelation value ranges from -1 to 1: a value between -1 and 0 indicates negative autocorrelation, and a value between 0 and 1 indicates positive autocorrelation.

[Figure: Autocorrelation function]
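As a quick illustration, here is a minimal sketch (assuming numpy and pandas are available; the data is simulated, not from any real series) that computes the autocorrelation of a series at a few lags:

import numpy as np
import pandas as pd

# Hypothetical data: a noisy upward trend, so nearby values are strongly related
rng = np.random.default_rng(0)
series = pd.Series(np.arange(100) * 0.5 + rng.normal(scale=5, size=100))

# Correlation of the series with a lagged copy of itself
for lag in (1, 5, 10):
    print(f"lag {lag}: autocorrelation = {series.autocorr(lag=lag):.3f}")

Values close to 1 at small lags indicate strong positive serial correlation; values near 0 suggest little linear dependence between the series and its lagged copy.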

2. Heteroskedasticity

In statistics and economics, heteroskedasticity is an important phenomenon.

It refers to data in which the variance (scatter) of the residual or error term is not constant across observations.

Its opposite is homoscedasticity, meaning “having the same dispersion.”

For comparison, here’s an example I drew of random points in homoscedasticity vs heteroskedasticity:

[Figure: Homoskedasticity vs heteroskedasticity]

Heteroskedasticity is an important concept in regression modeling, and in finance regression models are used to describe the performance of stocks and investment portfolios.

The Capital Asset Pricing Model (CAPM) is the best known of these; it describes the expected return of an asset in terms of its volatility relative to the market as a whole. Extensions of this model have introduced other predictor variables such as size, momentum, quality, and style (value vs. growth).
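To make the difference concrete, here is a small sketch (assumptions: numpy is available, and the residuals are simulated for illustration, not taken from any real model) that generates homoskedastic and heteroskedastic residuals and compares their spread:

import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(1, 10, 500)

# Homoskedastic residuals: the error variance is constant everywhere
homo = rng.normal(scale=1.0, size=x.size)

# Heteroskedastic residuals: the error variance grows with x
hetero = rng.normal(scale=0.3 * x)

# Compare the spread of each set of residuals on the lower and upper halves of x
for name, resid in (("homoskedastic", homo), ("heteroskedastic", hetero)):
    print(f"{name}: std(x < 5) = {resid[x < 5].std():.2f}, "
          f"std(x >= 5) = {resid[x >= 5].std():.2f}")

For the homoskedastic residuals the two standard deviations are roughly equal; for the heteroskedastic ones the spread clearly grows with x.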

Moving Average(1) process

The first order MA process is defined as:

Xt = Zt + β1Zt-1 = θ(B)Zt

where θ(B) = (1 + β1B)
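A minimal numpy sketch of simulating this MA(1) process (the coefficient value 0.6 is an illustrative choice, not from the definition above):

import numpy as np

rng = np.random.default_rng(1)
n, beta1 = 500, 0.6  # beta1 is an assumed, illustrative coefficient

# Purely random shocks Z_t with mean zero
Z = rng.normal(size=n + 1)

# MA(1): X_t = Z_t + beta1 * Z_{t-1}
X = Z[1:] + beta1 * Z[:-1]
print(X[:5])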

Moving Average(2) process

The second order moving average process is defined as:

Xt = Zt + β1Zt-1 + β2Zt-2 = θ(B)Zt

where θ(B) = (1 + β1B + β2B²)

The MA(2) process is stationary for all values of β1 and β2 .

Conceptually, a moving-average model is a linear regression of the series’ current value against current and past (observed) white noise error terms or random shocks. The random shocks at each point are presumed to be mutually independent and to come from the same distribution, usually a normal distribution with zero mean and constant variance.
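For the MA(2) case, statsmodels’ ArmaProcess can encode θ(B) directly; the following is a sketch under the assumption that statsmodels is installed, with β1 = 0.5 and β2 = 0.3 as illustrative coefficients:

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

# theta(B) = 1 + 0.5B + 0.3B^2 (illustrative values); no AR part, so the AR polynomial is just 1
ma = np.array([1.0, 0.5, 0.3])
ar = np.array([1.0])

process = ArmaProcess(ar, ma)
sample = process.generate_sample(nsample=500)
print(sample[:5])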

Autoregressive(AR) process

Let {Zt} be a purely random process with mean zero and variance σZ².

The process {Xt}, in which the current value Xt is expressed as a finite linear aggregate of the previous p values of the process plus a random component Zt, is called an autoregressive process of order p.

Xt = φ1Xt-1 + φ2Xt-2 + … + φpXt-p + Zt

or equivalently Φ(B)Xt = Zt, where Φ(B) = (1 − φ1B − φ2B² − … − φpB^p)

The model is analogous to a multiple linear regression model, except that Xt is regressed on past values of Xt rather than on separate predictor variables.

And that is exactly why it is called autoregressive.
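Here is a quick sketch of generating an AR(2) series by applying the recursion directly (φ1 = 0.6 and φ2 = 0.2 are illustrative values chosen so that the process is stationary):

import numpy as np

rng = np.random.default_rng(2)
n, phi1, phi2 = 500, 0.6, 0.2  # illustrative coefficients, not from the article

Z = rng.normal(size=n)  # purely random shocks
X = np.zeros(n)

# AR(2): X_t = phi1 * X_{t-1} + phi2 * X_{t-2} + Z_t
for t in range(2, n):
    X[t] = phi1 * X[t - 1] + phi2 * X[t - 2] + Z[t]

print(X[:5])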

Comparison between AR and MA processes:

  1. Stationarity: an AR process is stationary when the roots of Φ(B) = 0 lie outside the unit circle; an MA process is always stationary.
  2. Invertibility: an AR process is always invertible; an MA process is invertible when the roots of θ(B) = 0 lie outside the unit circle.
  3. Autocorrelation function: infinite (it tails off) for an AR process; finite (it cuts off after lag q) for an MA process.
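The finite-versus-infinite ACF behaviour above is easy to check numerically. Here is a sketch, assuming statsmodels is installed and using illustrative coefficients of 0.7, that compares the sample ACF of simulated AR(1) and MA(1) series:

import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import acf

# AR(1) with phi = 0.7 and MA(1) with beta = 0.7 (assumed, illustrative values)
ar1 = ArmaProcess(ar=[1, -0.7], ma=[1]).generate_sample(nsample=2000)
ma1 = ArmaProcess(ar=[1], ma=[1, 0.7]).generate_sample(nsample=2000)

# The AR(1) ACF decays gradually; the MA(1) ACF is close to zero after lag 1
print("AR(1) ACF:", np.round(acf(ar1, nlags=5), 2))
print("MA(1) ACF:", np.round(acf(ma1, nlags=5), 2))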

The different forecasting methods are summarized in the table below:

[Table: Different forecasting methods based on different conditions]

3. ARIMA model

It is one of the most popular forecasting methods, popularized by Box and Jenkins in the 1970s.

An ARIMA model with two seasonal cycles has, for example, been suggested for modeling emergency medical system calls.

There are three components to the ARIMA model:

  1. an AR(p) process
  2. Integrated – the differencing of raw observations to allow for the time series to become stationary
  3. an MA(q) process

In an ARIMA model, the data is differenced in order to make it stationary.

A model that exhibits stationarity is one in which the data is stable over time. Most economic and market data show trends, so the purpose of differencing is to remove any trends or seasonal structures.
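Putting the pieces together, here is a hedged sketch of fitting an ARIMA(1, 1, 1) model with statsmodels. The series is simulated and the order (1, 1, 1) is just an assumed starting point, not a recommendation for any particular dataset:

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Simulated trending series: one round of differencing (the "I" part) removes the trend
rng = np.random.default_rng(3)
series = pd.Series(np.cumsum(rng.normal(loc=0.2, scale=1.0, size=300)))

# order=(p, d, q): p for the AR part, d rounds of differencing, q for the MA part
model = ARIMA(series, order=(1, 1, 1))
result = model.fit()

print(result.summary())
print(result.forecast(steps=5))  # forecast the next five points

In practice, p, d, and q are chosen by inspecting the ACF/PACF or by comparing information criteria such as AIC across candidate orders.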

Ending Note

And that’s it! You’re now caught up with everything in Time Series. Next up, we’ll take a real time-series dataset and have some fun with it, so bookmark the site and keep yourself updated.
