Title (eng)
High probability and risk-averse guarantees for stochastic saddle point problems
Speaker / Lecturer
Mert Gürbüzbalaban
Rutgers University
Description (eng)
We first consider stochastic strongly-convex-strongly-concave (SCSC) saddle point (SP) problems, which arise frequently in applications ranging from distributionally robust learning to game theory and fairness in machine learning. We focus on the recently developed stochastic accelerated primal-dual algorithm (SAPD), which, as an accelerated method, achieves optimal complexity in several settings. We provide high-probability guarantees for convergence to a neighborhood of the saddle point that reflect the accelerated convergence behavior. We also derive an analytical formula for the limiting covariance matrix of the iterates for a class of stochastic SCSC quadratic problems in which the gradient noise is additive and Gaussian. This allows us to develop lower bounds for this class of quadratic problems, showing that our analysis is tight in terms of the dependence of the high-probability bounds on the problem parameters. We further provide a risk-averse convergence analysis characterizing the "Conditional Value at Risk", the "Entropic Value at Risk", and the χ²-divergence of the distance to the saddle point, highlighting the trade-offs between the bias and the risk associated with an approximate solution obtained by terminating the algorithm at any iteration. Second, we discuss how our results and techniques extend to non-convex minimax problems, where we provide high-probability guarantees for nonconvex-concave problems.
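As a brief illustrative sketch (the notation below is assumed for exposition and is not taken verbatim from the talk), the SCSC saddle point problem and the Conditional Value at Risk referenced above can be written in LaTeX as
% Generic SCSC saddle point problem; mu_x, mu_y > 0 denote the strong convexity/concavity moduli.
\[
  \min_{x \in \mathbb{R}^{d_x}} \; \max_{y \in \mathbb{R}^{d_y}} \; f(x, y),
\]
where $f(\cdot, y)$ is $\mu_x$-strongly convex for every $y$ and $f(x, \cdot)$ is $\mu_y$-strongly concave for every $x$, and, for an iterate $(x_K, y_K)$ and saddle point $(x^*, y^*)$,
% Rockafellar-Uryasev form of CVaR of the distance to the saddle point at confidence level alpha.
\[
  \mathrm{CVaR}_{\alpha}\!\big[\|(x_K, y_K) - (x^*, y^*)\|\big]
  = \inf_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{1 - \alpha}\,
    \mathbb{E}\big[\big(\|(x_K, y_K) - (x^*, y^*)\| - t\big)_{+}\big] \Big\},
\]
which is the standard definition of the Conditional Value at Risk at confidence level $\alpha \in (0, 1)$.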
Keywords (deu)
One World Optimization Seminar
Subject (eng)
ÖFOS 2012 -- 101 -- Mathematics
Type (eng)
Language
[eng]
Persistent identifier
https://phaidra.univie.ac.at/o:2069741
Date created
2024-06-03
Place of creation (eng)
ESI
Duration
32 minutes 9 seconds
License
Media Package Identifier
id=94241307-e1f3-473b-ad0d-66ed12df27c2
