RAPAE

timecave.validation_strategy_metrics.RAPAE(estimated_error, test_error)

Compute the Relative Absolute Predictive Accuracy Error (RAPAE).

This function computes the RAPAE metric. Both the estimated (i.e. validation) error and the test error must be passed as parameters.

Parameters:

estimated_error : float | int
    Validation error. Required.

test_error : float | int
    True (i.e. test) error. Required.

Returns:

float
    Relative Absolute Predictive Accuracy Error.

Raises:

ValueError
    If test_error is zero.

See also

PAE: Predictive Accuracy Error.

APAE: Absolute Predictive Accuracy Error.

RPAE: Relative Predictive Accuracy Error.

sMPAE: Symmetric Mean Predictive Accuracy Error.

Notes

The Relative Absolute Predictive Accuracy Error is defined as the Absolute Predictive Accuracy Error (APAE) divided by the model's true error. It can also be seen as the absolute value of the Relative Predictive Accuracy Error (RPAE):

\[ RAPAE = \frac{|\hat{L}_m - L_m|}{L_m} = \frac{|PAE|}{L_m} = \frac{APAE}{L_m} = |RPAE| \]

Since the RAPAE is simply the absolute value of the RPAE, it can be used in a similar fashion. However, because the sign is discarded, it cannot be used to determine whether a validation method is overestimating or underestimating the model's true error. Like the RPAE, it is an asymmetric measure: for non-negative errors, underestimation yields values of at most 1, whereas overestimation is unbounded.
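
For instance, both the identity RAPAE = |RPAE| and the asymmetry can be checked numerically. A minimal sketch, assuming RPAE takes the same (estimated_error, test_error) arguments as RAPAE (as the See also entries suggest):

>>> from timecave.validation_strategy_metrics import RAPAE, RPAE
>>> RAPAE(1, 5) == abs(RPAE(1, 5))  # |RPAE| identity
True
>>> RAPAE(5, 10)   # estimate is half the true error
0.5
>>> RAPAE(20, 10)  # estimate is double the true error
1.0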

Note that, in all likelihood, the true error will not be known. It is usually estimated using an independent test set.
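
As an illustration, here is a hypothetical sketch of how the two errors fed to RAPAE might be obtained; the arrays and the choice of mean squared error are made up for this example, and any error metric works:

import numpy as np
from timecave.validation_strategy_metrics import RAPAE

# Hypothetical predictions and targets for the validation and test sets.
rng = np.random.default_rng(0)
y_val, y_val_pred = rng.normal(size=50), rng.normal(size=50)
y_test, y_test_pred = rng.normal(size=50), rng.normal(size=50)

# Validation error (the estimate) and test error (the proxy for the true error).
validation_mse = np.mean((y_val - y_val_pred) ** 2)
test_mse = np.mean((y_test - y_test_pred) ** 2)

RAPAE(validation_mse, test_mse)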

Examples:

>>> from timecave.validation_strategy_metrics import RAPAE
>>> RAPAE(15, 5)
2.0
>>> RAPAE(1, 5)
0.8
>>> RAPAE(8, 8)
0.0

If the true error is zero, the metric is undefined:

>>> RAPAE(5, 0)
Traceback (most recent call last):
...
ValueError: The test error is zero. RAPAE is undefined.
Source code in timecave/validation_strategy_metrics.py
def RAPAE(estimated_error: float | int, test_error: float | int) -> float:
    """
    Compute the Relative Absolute Predictive Accuracy Error (RAPAE).

    This function computes the RAPAE metric. Both the estimated (i.e. validation) error
    and the test error must be passed as parameters.

    Parameters
    ----------
    estimated_error : float | int
        Validation error.

    test_error : float | int
        True (i.e. test) error.

    Returns
    -------
    float
        Relative Absolute Predictive Accuracy Error.

    Raises
    ------
    ValueError
        If `test_error` is zero.

    See also
    --------
    [PAE](pae.md):
        Predictive Accuracy Error.

    [APAE](apae.md): 
        Absolute Predictive Accuracy Error.

    [RPAE](rpae.md):
        Relative Predictive Accuracy Error.

    [sMPAE](smpae.md):
        Symmetric Mean Predictive Accuracy Error.    

    Notes
    -----
    The Relative Absolute Predictive Accuracy Error is defined as the Absolute Predictive Accuracy Error (APAE) divided by the \
    model's true error. It can also be seen as the absolute value of the Relative Predictive Accuracy Error (RPAE):

    $$
    RAPAE = \\frac{|\\hat{L}_m - L_m|}{L_m} = \\frac{|PAE|}{L_m} = \\frac{APAE}{L_m} = |RPAE|
    $$ 

    Since the RAPAE is simply the absolute value of the RPAE, it can be used in a similar fashion. However, because the \
    sign is discarded, it cannot be used to determine whether a validation method is overestimating or underestimating \
    the model's true error. Like the RPAE, it is an asymmetric measure: for non-negative errors, underestimation yields \
    values of at most 1, whereas overestimation is unbounded.

    Note that, in all likelihood, the true error will not be known. It is usually estimated using an independent test set.

    Examples
    --------
    >>> from timecave.validation_strategy_metrics import RAPAE
    >>> RAPAE(15, 5)
    2.0
    >>> RAPAE(1, 5)
    0.8
    >>> RAPAE(8, 8)
    0.0

    If the true error is zero, the metric is undefined:

    >>> RAPAE(5, 0)
    Traceback (most recent call last):
    ...
    ValueError: The test error is zero. RAPAE is undefined.
    """

    if test_error == 0:

        raise ValueError("The test error is zero. RAPAE is undefined.")

    return abs(estimated_error - test_error) / test_error