MAPIE is based on the resampling methods introduced in a state-of-the-art research paper by R. Foygel-Barber et al. (2021) [1] for estimating prediction intervals in regression settings, methods that come with strong theoretical guarantees. MAPIE implements no fewer than eight different methods from this paper, in particular the Jackknife+ and the CV+.

The so-called Jackknife+ method is based on the construction of a set of leave-one-out models: each perturbed model is trained on the entire training set with one point removed. Prediction intervals are then estimated from the distribution of the leave-one-out residuals computed by these perturbed models. The novelty of this elegant method is that predictions on a new test sample are no longer centered on the predictions of the base model, as in the standard jackknife method, but on the predictions of each perturbed model. This small and seemingly minor change has a major consequence: the estimated prediction intervals are always stable and come with theoretical coverage guarantees!
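To make this concrete, here is a minimal sketch of how the Jackknife+ can be requested with MAPIE, assuming the v0.x MapieRegressor API in which cv=-1 selects a leave-one-out scheme and method="plus" centers the intervals on the perturbed models' predictions (parameter names may differ in later releases):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from mapie.regression import MapieRegressor

# Toy regression problem for illustration.
X, y = make_regression(n_samples=500, n_features=5, noise=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# cv=-1 requests leave-one-out perturbed models; method="plus" centers the
# intervals on those models' predictions, i.e. the Jackknife+ method.
mapie = MapieRegressor(LinearRegression(), method="plus", cv=-1)
mapie.fit(X_train, y_train)

# alpha=0.1 asks for 90% prediction intervals:
# y_pis has shape (n_samples, 2, n_alpha) with lower and upper bounds.
y_pred, y_pis = mapie.predict(X_test, alpha=0.1)
```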

In practice, when you aim for a confidence level of, say, 90%, it means that you want to be 90% sure that the true value of a new observation lies within your prediction interval. Historical methods like the standard bootstrap or jackknife give no guarantee on this claim and can suffer from high instability. With this method, the theorems proved by Foygel-Barber et al. guarantee that the coverage is always higher than 80% (1 − 2α for α = 0.1) and, in practice, very close to 90% most of the time. In other words, roughly 90% of the target values of your new test samples will lie in the prediction intervals estimated with the Jackknife+ method.
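If you want to check this claim empirically, a quick coverage computation might look like the following; it reuses y_test and y_pis from the sketch above and assumes the regression_coverage_score helper exposed in MAPIE's v0.x metrics module:

```python
from mapie.metrics import regression_coverage_score

# Fraction of test targets falling inside the estimated intervals.
coverage = regression_coverage_score(y_test, y_pis[:, 0, 0], y_pis[:, 1, 0])
print(f"Empirical coverage: {coverage:.2f}")  # expected close to 0.90, guaranteed > 0.80
```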

However, the standard Jackknife+ method is computationally heavy, as it requires fitting as many models as there are training samples. It is therefore possible to adopt a lighter cross-validation approach, called CV+. The CV+ method works like a standard cross-validation: K perturbed models are trained, with K typically ranging from 5 to 10, each on the training set with one fold removed, and the corresponding out-of-fold residuals are computed. As with the Jackknife+, the prediction intervals are centered on the predictions made by each out-of-fold model. The same stability is therefore guaranteed by the theory, although the prediction intervals are usually slightly wider since each perturbed model is trained on a smaller number of samples (see the sketch below).
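In the same sketch as above, switching from the Jackknife+ to the CV+ is just a matter of passing an integer number of folds instead of cv=-1 (again assuming the v0.x MapieRegressor API):

```python
# CV+: only K=5 perturbed models are fitted instead of one per training
# sample, trading slightly wider intervals for a much lower training cost.
mapie_cv = MapieRegressor(LinearRegression(), method="plus", cv=5)
mapie_cv.fit(X_train, y_train)
y_pred_cv, y_pis_cv = mapie_cv.predict(X_test, alpha=0.1)
```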

Continue reading: https://towardsdatascience.com/with-mapie-uncertainties-are-back-in-machine-learning-882d5c17fdc3?source=rss—-7f60cf5620c9—4
