
Forecast error

Last updated 21 August 2023

What does forecast error mean?

Forecast error refers to the difference between the actual value of a variable and the forecasted value of that variable. It is, in other words, the discrepancy between the expected outcome and the actual outcome. Forecast error is a common measure used to evaluate the accuracy of forecasting models: a low forecast error naturally indicates that the model is accurate in its predictions, while a high forecast error suggests that the model may need to be revised or improved.

There are different types of forecast error. Let's look at some of the popular ones.


Mean Absolute Error (MAE)

The Mean Absolute Error (MAE) is the average absolute difference between the actual and predicted values of a dataset.

MAE is similar to MAD (the Mean Absolute Deviation), which averages the absolute differences between each value and the mean of all values. Thus, MAD is a measure of the average distance between each observation and the mean of all observations.
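As a minimal sketch (function names and sample values are illustrative, not from the original), MAE can be computed as:

```python
def mae(actual, forecast):
    """Mean Absolute Error: the average of the absolute
    differences between actual and forecasted values."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Illustrative example: errors are |3-2|=1, |5-5|=0, |2-4|=2
print(mae([3, 5, 2], [2, 5, 4]))  # → 1.0
```

Because MAE averages absolute differences, it is expressed in the same units as the data, which makes it easy to interpret.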


Mean Absolute Percentage Error (MAPE)

The Mean Absolute Percentage Error (MAPE) is a measure of the accuracy of predictions in a dataset, expressed as a percentage.

MAPE is useful when comparing the accuracy of different models or forecasting methods across different datasets, as it allows for a standardized comparison of forecasting performance. Because MAPE represents the average percentage deviation of the forecasts from the actual values, it ranges from 0% to infinity, where a lower MAPE indicates better forecasting performance.
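A minimal sketch of the calculation (names and sample values are illustrative; note that MAPE is undefined when an actual value is zero, which this sketch does not handle):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent.
    Assumes no actual value is zero (division by zero otherwise)."""
    n = len(actual)
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / n

# Illustrative example: deviations are 10/100 = 10% and 20/200 = 10%
print(mape([100, 200], [110, 180]))  # → 10.0
```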

Weighted MAPE

Weighted MAPE (WMAPE) is a modified version of MAPE that accounts for the relative importance or weight of each data point in the calculations. It is commonly used when the data has different levels of variability or when some observations are more important than others.

By assigning weights to each observation, WMAPE gives more emphasis to the observations that are most critical to the forecasting problem, thus providing a more accurate measure of forecast error than MAPE.
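One common way to express this weighting (a sketch under the assumption that each observation carries an explicit weight; names and values are illustrative) is:

```python
def wmape(actual, forecast, weights):
    """Weighted MAPE, in percent: each observation's percentage
    error is scaled by its weight, then normalized by the total weight.
    Assumes no actual value is zero."""
    weighted_errors = sum(
        w * abs((a - f) / a)
        for a, f, w in zip(actual, forecast, weights)
    )
    return 100 * weighted_errors / sum(weights)

# Illustrative example: errors are 20% and 5%; the second
# observation is weighted three times as heavily as the first.
print(wmape([100, 200], [120, 190], [1, 3]))  # → 8.75
```

With equal weights, WMAPE reduces to plain MAPE; the weights only change the result when some observations matter more than others.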


Root Mean Squared Error (RMSE)

Root Mean Squared Error (RMSE) is the square root of the average squared difference between values predicted by a model and the actual values. In other words, it describes the spread of the residuals. The advantage of using RMSE is that it puts a higher weight on large errors, as the squared difference grows faster than the absolute difference.
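As a minimal sketch (names and sample values are illustrative), RMSE can be computed as:

```python
import math

def rmse(actual, forecast):
    """Root Mean Squared Error: the square root of the average
    squared difference between actual and forecasted values."""
    n = len(actual)
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)

# Illustrative example: squared errors are 1 and 1, mean is 1, sqrt is 1
print(rmse([2, 4], [1, 3]))  # → 1.0
```

Because the errors are squared before averaging, a single large miss raises RMSE much more than it raises MAE, which is exactly the emphasis on large errors described above.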