Logarithmic regret algorithms for online convex optimization

2007, Machine Learning

https://doi.org/10.1007/S10994-007-5016-8

Abstract

In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich (ICML 2003) introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover's Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret O(√T), for an arbitrary sequence of T convex cost functions (of bounded gradients), with respect to the best single decision in hindsight. In this paper, we give algorithms that achieve regret O(log T) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth (EuroCOLT 1999), and Universal Portfolios by Cover (Math. Finance 1:1-19, 1991). We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement. The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection between the natural follow-the-leader approach and the Newton method. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover's algorithm and gradient descent.
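To make the setting concrete: after T rounds, regret is sum_{t=1}^T f_t(x_t) - min_{x in K} sum_{t=1}^T f_t(x), the gap to the best fixed point in the feasible set K in hindsight. The simplest of the paper's logarithmic-regret schemes is online gradient descent run with step sizes eta_t = 1/(Ht) when every cost is H-strongly convex (Zinkevich's O(√T) bound instead uses eta_t proportional to 1/√t). Below is a minimal Python sketch of that variant; it is an illustration, not the paper's code, and the ball-shaped feasible set, the function names, and the quadratic loss family are assumptions made for the example.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the feasible set K = {x : ||x||_2 <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def ogd_strongly_convex(grad_fns, H, dim, radius=1.0):
    """Online gradient descent with step sizes eta_t = 1/(H*t).

    grad_fns yields, for each round t, a callable returning the gradient of
    the (possibly adversarial) cost f_t at the queried point. H is the
    assumed strong-convexity parameter of every f_t.
    """
    x = np.zeros(dim)                       # any starting point in K works
    plays = []
    for t, grad in enumerate(grad_fns, start=1):
        plays.append(x.copy())              # commit to x_t, then observe f_t
        g = grad(x)                         # gradient feedback for round t
        x = project_ball(x - g / (H * t), radius)  # step, project back into K
    return plays

# Toy usage: quadratic losses f_t(x) = ||x - z_t||^2 are 2-strongly convex.
rng = np.random.default_rng(0)
targets = rng.uniform(-0.5, 0.5, size=(1000, 2))
grads = [lambda x, z=z: 2.0 * (x - z) for z in targets]
plays = ogd_strongly_convex(grads, H=2.0, dim=2)
```

The Newton-method-based algorithm highlighted in the abstract refines this idea: rather than a scalar step size it maintains a matrix built from accumulated outer products of observed gradients and uses it to precondition the update (with a generalized projection), which removes the need for strong convexity in favor of a weaker exp-concavity condition.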

References

  1. Blum, A., & Kalai, A. (1997). Universal portfolios with and without transaction costs. In COLT '97: Proceedings of the tenth annual conference on computational learning theory (pp. 309-313). New York: ACM.
  2. Brookes, M. (2005). The matrix reference manual. http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html.
  3. Boyd, S., & Vandenberghe, L. (2004). Convex optimization. New York: Cambridge University Press.
  4. Cesa-Bianchi, N., & Lugosi, G. (2006). Prediction, learning, and games. Cambridge: Cambridge University Press.
  5. Cover, T. (1991). Universal portfolios. Mathematical Finance, 1, 1-19.
  6. Gaivoronski, A. A., & Stella, F. (2000). Stochastic nonstationary optimization for finding universal portfolios. Annals of Operations Research, 100, 165-188.
  7. Hannan, J. (1957). Approximation to Bayes risk in repeated play. In M. Dresher, A. W. Tucker, & P. Wolfe (Eds.), Contributions to the theory of games (Vol. III, pp. 97-139). Princeton: Princeton University Press.
  8. Hazan, E. (2006). Efficient algorithms for online convex optimization and their applications. PhD thesis, Princeton University.
  9. Kakade, S. (2005). Personal communication.
  10. Kalai, A., & Vempala, S. (2003). Efficient algorithms for universal portfolios. Journal of Machine Learning Research, 3, 423-440.
  11. Kalai, A., & Vempala, S. (2005). Efficient algorithms for on-line optimization. Journal of Computer and System Sciences, 71(3), 291-307.
  12. Kivinen, J., & Warmuth, M. K. (1998). Relative loss bounds for multidimensional regression problems. In M. I. Jordan, M. J. Kearns, & S. A. Solla (Eds.), Advances in neural information processing systems (Vol. 10). Cambridge: MIT Press.
  13. Kivinen, J., & Warmuth, M. K. (1999). Averaging expert predictions. In Computational learning theory: 4th European conference (EuroCOLT '99) (pp. 153-167). Berlin: Springer.
  14. Lovász, L., & Vempala, S. (2003a). The geometry of logconcave functions and an O*(n^3) sampling algorithm. Technical Report MSR-TR-2003-04, Microsoft Research.
  15. Lovász, L., & Vempala, S. (2003b). Simulated annealing in convex bodies and an O*(n^4) volume algorithm. In Proceedings of the 44th symposium on foundations of computer science (FOCS) (pp. 650-659).
  16. Lobo, M. S., Vandenberghe, L., Boyd, S., & Lebret, H. (1998). Applications of second-order cone programming. Linear Algebra and its Applications, 284(1-3), 193-228.
  17. Merhav, N., & Feder, M. (1992). Universal sequential learning and decision from individual data sequences. In COLT '92: Proceedings of the fifth annual workshop on computational learning theory (pp. 413-427). New York: ACM.
  18. Riedel, K. (1992). A Sherman-Morrison-Woodbury identity for rank augmenting matrices with application to centering. SIAM Journal on Matrix Analysis and Applications, 13(2), 659-662.
  19. Vaidya, P. M. (1996). A new algorithm for minimizing convex functions over convex sets. Mathematical Programming, 73(3), 291-341.
  20. Zinkevich, M. (2003). Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the twentieth international conference on machine learning (ICML) (pp. 928-936).