A time series is a set of observations on a variable's outcomes in different time periods, recorded at discrete and equally spaced intervals. For instance, a historical series of quarterly stock returns is a time series. Time-series data involve many observations from different time periods for the same asset class, company, person, country or other entity.

There are two types of time series: univariate and multivariate. In univariate time-series analysis only one variable is measured over time, while in multivariate time-series analysis more than one variable is measured simultaneously.

Typically, a time series comprises four components: a trend component (long-term upward or downward movement), a seasonal component (regular and predictable patterns of movement within the year), a cyclic component (fluctuations of unknown periodicity) and an irregular component (the chaotic, noisy residual left over once the trend, seasonal and cyclic components have been accounted for).
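These components can be illustrated with a small NumPy sketch. All numbers below are made up for illustration, and the centred moving average is just one simple way to estimate the trend component:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 48  # four years of hypothetical monthly data

t = np.arange(n)
trend = 100 + 0.5 * t                       # upward long-term movement
seasonal = 10 * np.sin(2 * np.pi * t / 12)  # regular within-year pattern
irregular = rng.normal(0, 2, n)             # noisy residual

series = trend + seasonal + irregular

# A crude trend estimate: a 12-month moving average smooths out
# the seasonal pattern and most of the irregular component.
trend_est = np.convolve(series, np.ones(12) / 12, mode="valid")
```

Subtracting the estimated trend from the series would then leave the seasonal and irregular components for further analysis.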

Trend and seasonality are inextricably mixed, and it is not possible to isolate one without also trying to isolate the other.

Time-series analysis is used extensively in forecasting and in making investment decisions. We may use time-series models to explain the past or to predict the future. We may fit a regression to a time series for two purposes:

1- To predict the future behavior of a variable based on causal relationships with other variables.

2- To predict the future behavior of a variable based on the past behavior of the same variable.

In case 2, we often run an autoregressive (AR) time-series model, in which we regress a time series on its own past values.
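As a sketch of case 2, the snippet below simulates an AR(1) series with made-up coefficients and then recovers them by regressing the series on its own first lag with ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) process: x_t = 0.2 + 0.8 * x_{t-1} + e_t
n, b0, b1 = 500, 0.2, 0.8
x = np.empty(n)
x[0] = b0 / (1 - b1)  # start at the unconditional mean
for i in range(1, n):
    x[i] = b0 + b1 * x[i - 1] + rng.normal(0, 0.5)

# Regress the series on its own first lag (AR(1) by least squares)
X = np.column_stack([np.ones(n - 1), x[:-1]])
y = x[1:]
b0_hat, b1_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# One-step-ahead forecast from the last observation
forecast = b0_hat + b1_hat * x[-1]
```

With enough observations, the estimated coefficients land close to the true values used in the simulation, and the last line shows how the fitted model produces a one-step-ahead forecast.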

Our objective is to fit a linear regression to a given time series. To apply time-series analysis, the classical linear regression model assumptions must be satisfied for valid conclusions.

**Assumptions**

1- There is a linear relationship between the variables.

2- The independent variable X is not random.

3- The expected value of the error term is zero.

4- The variance of the error term is the same for all observations.

5- The error term is uncorrelated across observations.

6- The error term is normally distributed.

If these assumptions are violated, the estimated regression coefficients (b̂₀, b̂₁) may be biased and inconsistent.
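One violation that is especially common in time series is serial correlation of the errors (assumption 5). A quick diagnostic, sketched here on made-up data, is the Durbin-Watson statistic, which is close to 2 when the residuals show no lag-1 autocorrelation:

```python
import numpy as np

def durbin_watson(resid):
    """DW statistic: values near 2 suggest no lag-1 serial correlation;
    values well below 2 suggest positive serial correlation."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(2)
white_noise = rng.normal(size=200)
dw = durbin_watson(white_noise)  # should come out close to 2
```

Running the same check on residuals from a fitted trend model is a simple first screen before trusting the regression output.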

In case of such a violation, however, we can transform the time series (for example, by taking logs) or specify the regression model differently, so that the linear regression assumptions are met.

To evaluate the uncertainty of time-series forecasts, we may apply the same techniques that we use for linear regression models.
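As a rough illustration on made-up data, the sketch below fits a linear trend and builds an approximate 95% interval for the next observation from the residual standard error; for simplicity it ignores the extra uncertainty from estimating the coefficients themselves:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(30, dtype=float)
y = 2 + 0.5 * t + rng.normal(0, 1, 30)  # simulated trending series

# Fit a linear trend and compute the residual standard error
b1, b0 = np.polyfit(t, y, 1)            # slope, intercept
resid = y - (b0 + b1 * t)
s = np.sqrt(np.sum(resid ** 2) / (len(y) - 2))  # df = n - 2

# Approximate 95% interval for the next observation
t_next = 30.0
point = b0 + b1 * t_next
lower, upper = point - 1.96 * s, point + 1.96 * s
```

A full treatment would widen the interval to account for coefficient-estimation error, exactly as in ordinary regression forecasting.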

When we suspect that a time series has more than one regime, we may split the series into two or more subsamples and apply an appropriate model to each regime. Note that our statistical conclusions may be sensitive to the starting and ending dates of the time-series sample.
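A minimal sketch of regime splitting, assuming the break date is known in advance (detecting it is a separate problem) and using simulated data:

```python
import numpy as np

def fit_linear_trend(y):
    """Least-squares intercept and slope of y on its time index."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)  # highest degree first
    return intercept, slope

# Hypothetical series with a regime change at t = 50:
# flat before the break, rising after it.
rng = np.random.default_rng(3)
y = np.concatenate([
    10 + rng.normal(0, 0.5, 50),                         # regime 1: no trend
    10 + 0.3 * np.arange(50) + rng.normal(0, 0.5, 50),   # regime 2: uptrend
])

break_point = 50  # assumed known here
_, slope1 = fit_linear_trend(y[:break_point])
_, slope2 = fit_linear_trend(y[break_point:])
```

Fitting a single trend across both regimes would blur the two slopes together; the split recovers a near-zero slope in the first regime and a clearly positive slope in the second.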

We start the time-series analysis with trend models.

**Trend Models:**

We need to estimate a trend before using it to predict future values of any time series. We first determine whether a linear, exponential growth, quadratic or S-curve trend seems most sensible, commonly by plotting the series. To judge whether the time series has a rising or declining trend, we examine how closely the fitted trend line tracks the actual observations in a chart.
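As an illustration of choosing among trend shapes, the sketch below simulates a genuinely quadratic series and compares the residual sum of squares of linear and quadratic trend fits; in practice we would also inspect a plot rather than rely on one number:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(60, dtype=float)
y = 5 + 0.1 * t + 0.02 * t ** 2 + rng.normal(0, 1, 60)  # quadratic + noise

def sse(y, fitted):
    """Residual sum of squares of a fitted trend."""
    return float(np.sum((y - fitted) ** 2))

lin = np.polyval(np.polyfit(t, y, 1), t)   # fitted linear trend
quad = np.polyval(np.polyfit(t, y, 2), t)  # fitted quadratic trend

# The quadratic trend fits this series markedly better.
sse_lin, sse_quad = sse(y, lin), sse(y, quad)
```

A much smaller residual sum of squares for the quadratic fit is the numerical counterpart of the visual check described above.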

When we select a trend other than a linear one, we typically transform the series into a different form (for example, by taking the natural logarithm of the variable) and then apply linear regression to the transformed data.
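For example, a series with exponential growth becomes linear in its logarithm, so a log-linear regression recovers the growth rate. A sketch with simulated data and made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(40, dtype=float)

# Exponential growth: y_t = a * exp(g * t) with multiplicative noise
y = 50 * np.exp(0.05 * t) * np.exp(rng.normal(0, 0.02, 40))

# Taking logs gives a linear model: ln(y_t) = ln(a) + g * t + e_t
g_hat, ln_a_hat = np.polyfit(t, np.log(y), 1)  # slope, intercept

growth_rate = g_hat            # estimated per-period growth rate
level = np.exp(ln_a_hat)       # estimated starting level a
```

The estimated slope of the log series is the per-period growth rate, and exponentiating the intercept recovers the starting level, which is why the log transform makes exponential trends amenable to linear regression.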

Now, we move on to explore each trend model and learn how to use these models to make forecasts.