Adapted hv Block Cross-validation method
timecave.validation_methods.CV.AdaptedhvBlockCV(splits, ts, fs=1, h=1)
Bases: BaseSplitter
Implements the Adapted hv Block Cross-validation method.
This class is similar to the BlockCV class, but it does not support weight generation. To use a weighted version of this method, the user must therefore either implement their own derived class or compute the weights separately; a sketch of the latter option is shown below.
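As a rough illustration of the second option, the sketch below combines per-fold validation errors using weights computed outside the class. The toy series, the placeholder model (a simple mean forecast) and the choice of weights (proportional to the size of each validation fold) are assumptions made for illustration only, not part of the library.
>>> import numpy as np
>>> from timecave.validation_methods.CV import AdaptedhvBlockCV
>>> ts = np.arange(100, dtype=float);  # toy series
>>> splitter = AdaptedhvBlockCV(5, ts, h=2);
>>> errors = [];
>>> weights = [];
>>> for train, val, _ in splitter.split():
...
...     prediction = ts[train].mean();  # placeholder model: forecast the training mean
...     errors.append(np.mean((ts[val] - prediction) ** 2));  # MSE on the validation fold
...     weights.append(len(val));  # user-defined weight: validation fold size
>>> weights = np.array(weights) / np.sum(weights);  # normalise the weights
>>> weighted_estimate = float(np.dot(weights, np.array(errors)));  # weighted error estimate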
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| splits | int | The number of folds used to partition the data. | required |
| ts | ndarray \| Series | Univariate time series. | required |
| fs | float \| int | Sampling frequency (Hz). | 1 |
| h | int | Controls the number of samples that will be removed from the training set. | 1 |
Attributes:
| Name | Type | Description |
|---|---|---|
| n_splits | int | The number of splits. |
| sampling_freq | int \| float | The series' sampling frequency (Hz). |
Methods:
| Name | Description |
|---|---|
| split | Split the time series into training and validation sets. |
| info | Provide additional information on the validation method. |
| statistics | Compute relevant statistics for both training and validation sets. |
| plot | Plot the partitioned time series. |
Raises:
| Type | Description |
|---|---|
| TypeError | If … |
| ValueError | If … |
| ValueError | If … |
See also
Block CV: The original Block CV method, where no training samples are removed.
Notes
The Adapted hv Block Cross-validation method splits the data into \(N\) different folds. Then, in every iteration \(i\), the model is validated on data from the \(i^{th}\) fold and trained on data from the remaining folds. There is, however, one subtle difference from the original Block Cross-validation method: the \(h\) training samples that lie closest to the validation set (on either side of it) are removed, thereby reducing the correlation between the training set and the validation set. The average error on the validation sets is then taken as the estimate of the model's true error. This method does not preserve the temporal order of the observations.

This method was proposed by Cerqueira et al. [1].
References
[1] Vitor Cerqueira, Luis Torgo, and Igor Mozetič. Evaluating time series forecasting models: An empirical study on performance estimation methods. Machine Learning, 109(11):1997–2028, 2020.
Source code in timecave/validation_methods/CV.py
info()
Provide some basic information on the training and validation sets.
This method displays the number of splits and the fold size.
Examples:
>>> import numpy as np
>>> from timecave.validation_methods.CV import AdaptedhvBlockCV
>>> ts = np.ones(10);
>>> splitter = AdaptedhvBlockCV(5, ts);
>>> splitter.info();
Adapted hv Block CV method
---------------
Time series size: 10 samples
Number of splits: 5
Fold size: 1 to 2 samples (10.0 to 20.0 %)
Source code in timecave/validation_methods/CV.py
plot(height, width)
Plot the partitioned time series.
This method allows the user to plot the partitioned time series. The training and validation sets are plotted using different colours.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| height | int | The figure's height. | required |
| width | int | The figure's width. | required |
Examples:
>>> import numpy as np
>>> from timecave.validation_methods.CV import AdaptedhvBlockCV
>>> ts = np.ones(100);
>>> splitter = AdaptedhvBlockCV(5, ts, h=5);
>>> splitter.plot(10, 10);

Source code in timecave/validation_methods/CV.py
split()
Split the time series into training and validation sets.
This method splits the series' indices into disjoint sets containing the training and validation indices.
At every iteration, an array of training indices and another one containing the validation indices are generated.
Note that this method is a generator. To access the indices, use the next() method or a for loop.
Yields:
| Type | Description |
|---|---|
| ndarray | Array of training indices. |
| ndarray | Array of validation indices. |
| float | Weight assigned to the error estimate. |
Examples:
>>> import numpy as np
>>> from timecave.validation_methods.CV import AdaptedhvBlockCV
>>> ts = np.ones(10);
>>> splitter = AdaptedhvBlockCV(5, ts); # Split the data into 5 different folds
>>> for ind, (train, val, _) in enumerate(splitter.split()):
...
... print(f"Iteration {ind+1}");
... print(f"Training set indices: {train}");
... print(f"Validation set indices: {val}");
Iteration 1
Training set indices: [3 4 5 6 7 8 9]
Validation set indices: [0 1]
Iteration 2
Training set indices: [0 5 6 7 8 9]
Validation set indices: [2 3]
Iteration 3
Training set indices: [0 1 2 7 8 9]
Validation set indices: [4 5]
Iteration 4
Training set indices: [0 1 2 3 4 9]
Validation set indices: [6 7]
Iteration 5
Training set indices: [0 1 2 3 4 5 6]
Validation set indices: [8 9]
If the number of samples is not divisible by the number of folds, the first folds will contain more samples:
>>> ts2 = np.ones(17);
>>> splitter = AdaptedhvBlockCV(5, ts2, h=2);
>>> for ind, (train, val, _) in enumerate(splitter.split()):
...
... print(f"Iteration {ind+1}");
... print(f"Training set indices: {train}");
... print(f"Validation set indices: {val}");
Iteration 1
Training set indices: [ 6 7 8 9 10 11 12 13 14 15 16]
Validation set indices: [0 1 2 3]
Iteration 2
Training set indices: [ 0 1 10 11 12 13 14 15 16]
Validation set indices: [4 5 6 7]
Iteration 3
Training set indices: [ 0 1 2 3 4 5 13 14 15 16]
Validation set indices: [ 8 9 10]
Iteration 4
Training set indices: [ 0 1 2 3 4 5 6 7 8 16]
Validation set indices: [11 12 13]
Iteration 5
Training set indices: [ 0 1 2 3 4 5 6 7 8 9 10 11]
Validation set indices: [14 15 16]
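Since split() is a generator, a single split can also be retrieved with next() instead of a for loop. Reusing the 10-sample series from the first example:
>>> splitter = AdaptedhvBlockCV(5, ts);
>>> train, val, _ = next(splitter.split());
>>> print(f"Validation set indices: {val}");
Validation set indices: [0 1]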
Source code in timecave/validation_methods/CV.py
statistics()
Compute relevant statistics for both training and validation sets.
This method computes relevant time series features, such as the mean, strength-of-trend, etc., for the whole time series as well as for the training and validation sets. It can and should be used to ensure that the characteristics of the training and validation sets are, statistically speaking, similar to those of the time series one wishes to forecast. If this is not the case, using the validation method will most likely lead to a poor assessment of the model's performance.
Returns:
| Type | Description |
|---|---|
| DataFrame | Relevant features for the entire time series. |
| DataFrame | Relevant features for the training set. |
| DataFrame | Relevant features for the validation set. |
Raises:
| Type | Description |
|---|---|
| ValueError | If the time series is composed of fewer than three samples. |
| ValueError | If the folds comprise fewer than two samples. |
Examples:
>>> import numpy as np
>>> from timecave.validation_methods.CV import AdaptedhvBlockCV
>>> ts = np.hstack((np.ones(5), np.zeros(5)));
>>> splitter = AdaptedhvBlockCV(5, ts);
>>> ts_stats, training_stats, validation_stats = splitter.statistics();
>>> ts_stats
Mean Median Min Max Variance P2P_amplitude Trend_slope Spectral_centroid Spectral_rolloff Spectral_entropy Strength_of_trend Mean_crossing_rate Median_crossing_rate
0 0.5 0.5 0.0 1.0 0.25 1.0 -0.151515 0.114058 0.5 0.38717 1.59099 0.111111 0.111111
>>> training_stats
Mean Median Min Max Variance P2P_amplitude Trend_slope Spectral_centroid Spectral_rolloff Spectral_entropy Strength_of_trend Mean_crossing_rate Median_crossing_rate
0 0.285714 0.0 0.0 1.0 0.204082 1.0 -0.178571 0.146421 0.428571 0.702232 1.212183 0.166667 0.166667
0 0.166667 0.0 0.0 1.0 0.138889 1.0 -0.142857 0.250000 0.500000 1.000000 0.931695 0.200000 0.200000
0 0.500000 0.5 0.0 1.0 0.250000 1.0 -0.257143 0.138889 0.500000 0.455486 1.250000 0.200000 0.200000
0 0.833333 1.0 0.0 1.0 0.138889 1.0 -0.142857 0.125000 0.500000 0.792481 0.931695 0.200000 0.200000
0 0.714286 1.0 0.0 1.0 0.204082 1.0 -0.178571 0.094706 0.428571 0.556506 1.212183 0.166667 0.166667
>>> validation_stats
Mean Median Min Max Variance P2P_amplitude Trend_slope Spectral_centroid Spectral_rolloff Spectral_entropy Strength_of_trend Mean_crossing_rate Median_crossing_rate
0 1.0 1.0 1.0 1.0 0.00 0.0 -7.850462e-17 0.00 0.0 0.0 inf 0.0 0.0
0 1.0 1.0 1.0 1.0 0.00 0.0 -7.850462e-17 0.00 0.0 0.0 inf 0.0 0.0
0 0.5 0.5 0.0 1.0 0.25 1.0 -1.000000e+00 0.25 0.5 0.0 inf 1.0 1.0
0 0.0 0.0 0.0 0.0 0.00 0.0 0.000000e+00 0.00 0.0 0.0 inf 0.0 0.0
0 0.0 0.0 0.0 0.0 0.00 0.0 0.000000e+00 0.00 0.0 0.0 inf 0.0 0.0
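Because the three DataFrames share the same columns, they can be compared directly to check whether the folds resemble the full series. Below is a minimal sketch of such a check; the toy series, the features chosen for comparison (mean and variance) and the 20 % tolerance are arbitrary assumptions made for illustration, not part of the library.
>>> ts_toy = np.sin(np.linspace(0, 10 * np.pi, 200)) + np.linspace(0, 1, 200);  # toy series
>>> splitter = AdaptedhvBlockCV(5, ts_toy);
>>> full_stats, train_stats, val_stats = splitter.statistics();
>>> ref_mean = full_stats["Mean"].iloc[0];
>>> ref_var = full_stats["Variance"].iloc[0];
>>> for fold, (mean, var) in enumerate(zip(train_stats["Mean"], train_stats["Variance"]), start=1):
...
...     # arbitrary 20 % tolerance on the mean shift and the variance ratio
...     if abs(mean - ref_mean) > 0.2 * abs(ref_mean) or not (0.8 <= var / ref_var <= 1.2):
...
...         print(f"Fold {fold}: training set statistics deviate from the full series");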
Source code in timecave/validation_methods/CV.py