An asynchronous parallel stochastic coordinate descent algorithm
Ji Liu, Stephen J. Wright, C. Ré, Victor Bittorf, Srikrishna Sridhar
Published 2013 in Journal of Machine Learning Research
ABSTRACT
We describe an asynchronous parallel stochastic coordinate descent algorithm for minimizing smooth unconstrained or separably constrained functions. The method achieves a linear convergence rate on functions that satisfy an essential strong convexity property and a sublinear rate (1/K) on general convex functions. Near-linear speedup on a multicore system can be expected if the number of processors is O(n^(1/2)) in unconstrained optimization and O(n^(1/4)) in the separable-constrained case, where n is the number of variables. We describe results from implementation on 40-core processors.
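The asynchronous scheme described in the abstract has workers repeatedly pick a random coordinate, evaluate its partial gradient from the shared (possibly stale) iterate, and write the update back without locking. A minimal sketch of that pattern, assuming an illustrative least-squares objective f(x) = 0.5·||Ax − b||² and thread/step-size choices that are not the paper's experimental setup:

```python
# Hedged sketch of asynchronous stochastic coordinate descent.
# The problem (diagonally dominant least squares), thread count, and
# step size are illustrative assumptions, not the paper's configuration.
import random
import threading

def make_problem(n, seed=0):
    """Build a small, well-conditioned least-squares problem A x = b."""
    rng = random.Random(seed)
    A = [[(3.0 if i == j else rng.gauss(0, 0.3)) for j in range(n)]
         for i in range(n)]
    x_true = [rng.gauss(0, 1) for _ in range(n)]
    b = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(n)]
    return A, b

def grad_coord(A, b, x, j):
    """Partial derivative of 0.5*||Ax - b||^2 with respect to x_j."""
    n = len(b)
    return sum(A[i][j] * (sum(A[i][k] * x[k] for k in range(n)) - b[i])
               for i in range(n))

def async_scd(A, b, n_threads=4, iters_per_thread=1500, step=0.05, seed=1):
    """Run n_threads workers that update the shared iterate without locks."""
    n = len(b)
    x = [0.0] * n  # shared iterate; reads may see a stale mix of updates

    def worker(tid):
        rng = random.Random(seed + tid)
        for _ in range(iters_per_thread):
            j = rng.randrange(n)        # pick a coordinate uniformly at random
            g = grad_coord(A, b, x, j)  # gradient from the current shared state
            x[j] -= step * g            # single-coordinate write-back, no lock

    threads = [threading.Thread(target=worker, args=(t,))
               for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x
```

In this sketch the workers never synchronize between updates, so each partial gradient may be computed from an iterate that mixes older and newer coordinate values; the paper's analysis is about when such staleness still permits linear convergence and near-linear speedup.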
PUBLICATION RECORD
- Publication year: 2013
- Publication date: 2013-11-08
- Venue: Journal of Machine Learning Research
- Fields of study: Mathematics, Computer Science
- Source metadata: Semantic Scholar