---
title: "Universal Rates for Regression: Separations between Cut-Off and Absolute Loss"
section: Original Papers
abstract: 'In this work we initiate the study of regression in the universal rates framework of Bousquet et al. Unlike the traditional uniform learning setting, we are interested in obtaining learning guarantees that hold for all fixed data-generating distributions, but do not hold uniformly across them. We focus on the realizable setting and we consider two different well-studied loss functions: the cut-off loss at scale $\gamma > 0$, which asks for predictions that are $\gamma$-close to the correct one, and the absolute loss, which measures how far away the prediction is from the correct one. Our results show that the landscape of the achievable rates in the two cases is completely different. First, we give a trichotomic characterization of the optimal learning rates under the cut-off loss: each class either is learnable at an exponential rate, is learnable at a (nearly) linear rate, or requires arbitrarily slow rates. Moving to the absolute loss, we show that the achievable learning rates are significantly more involved, by illustrating that an infinite number of different optimal learning rates is achievable. This is the first time that such a rich landscape of rates has been obtained in the universal rates literature.'
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: attias24a
month: 0
tex_title: "Universal Rates for Regression: Separations between Cut-Off and Absolute Loss"
firstpage: 359
lastpage: 405
page: 359-405
order: 359
cycles: false
bibtex_author: Attias, Idan and Hanneke, Steve and Kalavasis, Alkis and Karbasi, Amin and Velegkas, Grigoris
author:
- given: Idan
  family: Attias
- given: Steve
  family: Hanneke
- given: Alkis
  family: Kalavasis
- given: Amin
  family: Karbasi
- given: Grigoris
  family: Velegkas
date: 2024-06-30
address:
container-title: Proceedings of Thirty Seventh Conference on Learning Theory
volume: '247'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 6
  - 30
pdf:
extras: []
---