Lorenzo Rosasco (DIBRIS)
Title: An implicit tour of regularization
Abstract: Regularization is a key ingredient in the design of learning algorithms. Classically, it amounts to defining a constrained or penalized empirical objective to be minimized, with optimization aspects considered separately. In practice, these distinctions are much more blurred. Indeed, it is a classical observation that an optimization process can have a self-regularizing effect by (implicitly) enforcing some inductive bias. This observation has recently become popular in machine learning. On the one hand, it seems to help explain learning curves in deep learning. On the other hand, controlling regularization through optimization can improve efficiency in learning. In this talk, I will provide an overview of classical and recent results on the topic.
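A standard textbook illustration of the self-regularizing effect described in the abstract (this sketch is not material from the talk): gradient descent on an underdetermined least-squares problem, initialized at zero, converges to the minimum-Euclidean-norm interpolant, which is the limit of the ridge-regression solution as its penalty vanishes.

```python
import numpy as np

# Illustrative sketch: implicit regularization of gradient descent on
# underdetermined least squares. Started at zero, the iterates remain in
# the row space of X, so the method converges to the minimum-norm
# solution among all interpolants -- no explicit penalty is used.
rng = np.random.default_rng(0)
n, d = 20, 100                          # fewer samples than features
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                         # zero initialization matters
step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / largest squared singular value
for _ in range(5000):
    w -= step * (X.T @ (X @ w - y))     # gradient of 0.5 * ||Xw - y||^2

w_min_norm = np.linalg.pinv(X) @ y      # minimum-norm interpolant
print(np.linalg.norm(w - w_min_norm))   # close to zero
```

Stopping the iteration early instead of running it to convergence gives a second form of implicit regularization (early stopping), which trades data fit against the norm of the iterate.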
Seminar MTL Machine Learning and Optimization (MTL MLOpt)
Please subscribe to the mailing list: