APAE

timecave.validation_strategy_metrics.APAE(estimated_error, test_error)

Compute the Absolute Predictive Accuracy Error (APAE).

This function computes the APAE metric. Both the estimated (i.e. validation) error and the test error must be passed as parameters.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `estimated_error` | `float \| int` | Validation error. | *required* |
| `test_error` | `float \| int` | True (i.e. test) error. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `float` | Absolute Predictive Accuracy Error. |

See also

PAE: Predictive Accuracy Error.

RPAE: Relative Predictive Accuracy Error.

RAPAE: Relative Absolute Predictive Accuracy Error.

sMPAE: Symmetric Mean Predictive Accuracy Error.

Notes

The Absolute Predictive Accuracy Error is defined as the absolute value of the difference between the estimate of a model's error given by a validation method and the model's true error. In other words, it is the absolute value of the Predictive Accuracy Error:

\[ APAE = |\hat{L}_m - L_m| = |PAE| \]

Since the APAE is always non-negative, this metric cannot be used to determine whether the validation method is overestimating or underestimating the model's true error.

Note that, in all likelihood, the true error will not be known. It is usually estimated using an independent test set. For more details, please refer to [1].
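As a minimal sketch of how this metric is used in practice, the snippet below compares two hypothetical validation strategies by their APAE. The helper re-implements the one-line formula from this page so the snippet is self-contained; all error values are made up for illustration only.

```python
def apae(estimated_error: float, test_error: float) -> float:
    """Absolute Predictive Accuracy Error: |estimated - true|."""
    return abs(estimated_error - test_error)

# "True" error, in practice estimated on an independent test set.
test_error = 4.0

# Errors estimated by two hypothetical validation strategies.
holdout_estimate = 6.5
cv_estimate = 4.5

# The strategy with the smaller APAE tracked the true error more closely.
print(apae(holdout_estimate, test_error))  # 2.5
print(apae(cv_estimate, test_error))       # 0.5
```

Note that both estimates above overshoot the true error, but APAE alone cannot reveal this; the signed PAE would be needed to see the direction of the bias.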

References

[1] Cerqueira, V., Torgo, L., Mozetič, I., 2020. Evaluating time series forecasting models: An empirical study on performance estimation methods. Machine Learning 109, 1997–2028.

Examples:

>>> from timecave.validation_strategy_metrics import APAE
>>> APAE(10, 3)
7
>>> APAE(1, 5)
4
>>> APAE(8, 8)
0
Source code in timecave/validation_strategy_metrics.py
def APAE(estimated_error: float | int, test_error: float | int) -> float:
    r"""
    Compute the Absolute Predictive Accuracy Error (APAE).

    This function computes the APAE metric. Both the estimated (i.e. validation) error
    and the test error must be passed as parameters.

    Parameters
    ----------
    estimated_error : float | int
        Validation error.

    test_error : float | int
        True (i.e. test) error.

    Returns
    -------
    float
        Absolute Predictive Accuracy Error.

    See also
    --------
    [PAE](pae.md):
        Predictive Accuracy Error.

    [RPAE](rpae.md): 
        Relative Predictive Accuracy Error.

    [RAPAE](rapae.md):
        Relative Absolute Predictive Accuracy Error.

    [sMPAE](smpae.md):
        Symmetric Mean Predictive Accuracy Error.

    Notes
    -----
    The Absolute Predictive Accuracy Error is defined as the absolute value of the difference between the \
    estimate of a model's error given by a validation method \
    and the model's true error. In other words, it is the absolute value of the Predictive Accuracy Error:

    $$
    APAE = |\hat{L}_m - L_m| = |PAE|
    $$ 

    Since the APAE is always non-negative, this metric cannot be used to determine whether the validation method is overestimating or underestimating\
    the model's true error.

    Note that, in all likelihood, the true error will not be known. It is usually estimated using an independent test set. For more details, please refer to [[1]](#1).

    References
    ----------
    ##1
    Cerqueira, V., Torgo, L., Mozetič, I., 2020. Evaluating time series forecasting
    models: An empirical study on performance estimation methods.
    Machine Learning 109, 1997–2028.

    Examples
    --------
    >>> from timecave.validation_strategy_metrics import APAE
    >>> APAE(10, 3)
    7
    >>> APAE(1, 5)
    4
    >>> APAE(8, 8)
    0
    """

    return abs(estimated_error - test_error)