Online Non-Convex Learning: Following the Perturbed Leader is Optimal

A. Suggala, Praneeth Netrapalli

Published 2019 in International Conference on Algorithmic Learning Theory

ABSTRACT

We study the problem of online learning with non-convex losses, where the learner has access to an offline optimization oracle. We show that the classical Follow the Perturbed Leader (FTPL) algorithm achieves optimal regret rate of $O(T^{-1/2})$ in this setting. This improves upon the previous best-known regret rate of $O(T^{-1/3})$ for FTPL. We further show that an optimistic variant of FTPL achieves better regret bounds when the sequence of losses encountered by the learner is "predictable".
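The FTPL scheme described above can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the decision set is a finite grid over $[0,1]$ so the offline oracle can be exact brute force, the losses are arbitrary sinusoids, and the perturbation scale of $\sqrt{T}$ is an assumed choice consistent with an $O(T^{-1/2})$ average-regret rate; all names (`offline_oracle`, `ftpl`, `scale`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Decision set: a finite grid over [0, 1], standing in for a compact
# non-convex domain so the "offline oracle" below can be exact brute force.
grid = np.linspace(0.0, 1.0, 201)

def offline_oracle(cum_loss, sigma):
    # Exact offline optimizer: minimize the perturbed cumulative loss over
    # the grid. cum_loss[i] holds the summed losses so far at grid[i];
    # the linear term -sigma * x is the FTPL perturbation.
    return int(np.argmin(cum_loss - sigma * grid))

def ftpl(loss_fns, scale):
    # Follow the Perturbed Leader: before each round, draw a fresh
    # exponential perturbation, then play the oracle's minimizer of the
    # perturbed cumulative loss of all previously revealed rounds.
    cum = np.zeros_like(grid)
    plays = []
    for f in loss_fns:
        sigma = rng.exponential(scale)
        plays.append(grid[offline_oracle(cum, sigma)])
        cum += f(grid)  # reveal this round's loss, update cumulative sums
    return plays

# Toy stream of bounded non-convex (sinusoidal) losses with random phases.
T = 200
loss_fns = [lambda x, c=rng.uniform(): np.sin(6 * np.pi * (x - c))
            for _ in range(T)]
plays = ftpl(loss_fns, scale=np.sqrt(T))  # assumed O(sqrt(T)) scale
```

Each round only invokes the offline oracle once, which is the setting's key appeal: the learner never needs to solve the online problem directly, only repeated perturbed offline minimizations.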

PUBLICATION RECORD

  • Publication year

    2019

  • Venue

    International Conference on Algorithmic Learning Theory

  • Publication date

    2019-03-19

  • Fields of study

    Mathematics, Computer Science


  • Source metadata

    Semantic Scholar


CITED BY

64 citing papers.