RPAE

timecave.validation_strategy_metrics.RPAE(estimated_error, test_error)

Compute the Relative Predictive Accuracy Error (RPAE).

This function computes the RPAE metric. Both the estimated (i.e. validation) error and the test error must be passed as parameters.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `estimated_error` | `float \| int` | Validation error. | *required* |
| `test_error` | `float \| int` | True (i.e. test) error. | *required* |

Returns:

| Type | Description |
| --- | --- |
| `float` | Relative Predictive Accuracy Error. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `test_error` is zero. |

See also

PAE: Predictive Accuracy Error.

APAE: Absolute Predictive Accuracy Error.

RAPAE: Relative Absolute Predictive Accuracy Error.

sMPAE: Symmetric Mean Predictive Accuracy Error.

Notes

The Relative Predictive Accuracy Error is obtained by dividing the Predictive Accuracy Error (PAE) by the model's true error:

\[ RPAE = \frac{\hat{L}_m - L_m}{L_m} = \frac{PAE}{L_m} \]

This makes the metric scale-independent with respect to the model's true error, which in turn makes it useful for comparing validation methods across different time series and/or forecasting models. Since the RPAE is essentially a scaled version of the PAE, the sign retains its significance (negative for underestimation, positive for overestimation). Note, however, that the RPAE is asymmetric: if the error is underestimated, its values are contained in the interval \([-1, 0[\); if it is overestimated, the RPAE can take any value in \(]0, \infty[\). A value of zero denotes a perfect estimate.
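
As a quick illustration of this scale-independence, two estimates that overshoot the true error by the same relative amount yield the same RPAE, regardless of scale (the error values below are purely illustrative):

>>> from timecave.validation_strategy_metrics import RPAE
>>> RPAE(6, 5)    # small-scale series
0.2
>>> RPAE(60, 50)  # same relative overestimation, larger scale
0.2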

Note that, in all likelihood, the true error will not be known. It is usually estimated using an independent test set.

Examples:

>>> from timecave.validation_strategy_metrics import RPAE
>>> RPAE(15, 5)
2.0
>>> RPAE(1, 5)
-0.8
>>> RPAE(8, 8)
0.0

If the true error is zero, the metric is undefined:

>>> RPAE(5, 0)
Traceback (most recent call last):
...
ValueError: The test error is zero. RPAE is undefined.
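
Because the metric is scale-free, it can also be used to rank competing validation strategies for the same model: the estimate whose RPAE lies closest to zero is the most accurate one. A minimal sketch (the strategy names and error values below are hypothetical):

>>> from timecave.validation_strategy_metrics import RPAE
>>> estimates = {"holdout": 4.1, "expanding_window": 5.6, "k_fold": 7.9}
>>> test_err = 5.0
>>> {name: round(RPAE(err, test_err), 2) for name, err in estimates.items()}
{'holdout': -0.18, 'expanding_window': 0.12, 'k_fold': 0.58}
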
Source code in timecave/validation_strategy_metrics.py
def RPAE(estimated_error: float | int, test_error: float | int) -> float:
    """
    Compute the Relative Predictive Accuracy Error (RPAE).

    This function computes the RPAE metric. Both the estimated (i.e. validation) error
    and the test error must be passed as parameters.

    Parameters
    ----------
    estimated_error : float | int
        Validation error.

    test_error : float | int
        True (i.e. test) error.

    Returns
    -------
    float
        Relative Predictive Accuracy Error.

    Raises
    ------
    ValueError
        If `test_error` is zero.

    See also
    --------
    [PAE](pae.md):
        Predictive Accuracy Error.

    [APAE](apae.md): 
        Absolute Predictive Accuracy Error.

    [RAPAE](rapae.md):
        Relative Absolute Predictive Accuracy Error.

    [sMPAE](smpae.md):
        Symmetric Mean Predictive Accuracy Error.

    Notes
    -----
    The Relative Predictive Accuracy Error is obtained by dividing the Predictive Accuracy Error (PAE) by the model's true error:

    $$
    RPAE = \\frac{\hat{L}_m - L_m}{L_m} = \\frac{PAE}{L_m}
    $$ 

    This makes the metric scale-independent with respect to the model's true error, which in turn
    makes it useful for comparing validation methods across different time series and/or forecasting
    models. Since the RPAE is essentially a scaled version of the PAE, the sign retains its
    significance (negative for underestimation, positive for overestimation). Note, however, that
    the RPAE is asymmetric: if the error is underestimated, its values are contained in the
    interval $[-1, 0[$; if it is overestimated, the RPAE can take any value in $]0, \infty[$.
    A value of zero denotes a perfect estimate.

    Note that, in all likelihood, the true error will not be known. It is usually estimated using an independent test set.

    Examples
    --------
    >>> from timecave.validation_strategy_metrics import RPAE
    >>> RPAE(15, 5)
    2.0
    >>> RPAE(1, 5)
    -0.8
    >>> RPAE(8, 8)
    0.0

    If the true error is zero, the metric is undefined:

    >>> RPAE(5, 0)
    Traceback (most recent call last):
    ...
    ValueError: The test error is zero. RPAE is undefined.
    """

    # The RPAE is undefined when the true (test) error is zero.
    if test_error == 0:
        raise ValueError("The test error is zero. RPAE is undefined.")

    return (estimated_error - test_error) / test_error