Multi-stage multi-task feature learning
Pinghua Gong, Jieping Ye, Changshui Zhang
Published 2012 in Journal of Machine Learning Research
ABSTRACT
Multi-task sparse feature learning aims to improve generalization performance by exploiting features shared among tasks. It has been applied successfully in many areas, including computer vision and biomedical informatics. Most existing multi-task sparse feature learning algorithms are formulated as a convex sparse regularization problem, which is usually suboptimal because of the looseness with which it approximates an ℓ0-type regularizer. In this paper, we propose a non-convex formulation for multi-task sparse feature learning based on a novel regularizer. To solve the non-convex optimization problem, we propose a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm. Moreover, we present a detailed theoretical analysis showing that MSMTFL achieves a better parameter estimation error bound than the convex formulation. Empirical studies on both synthetic and real-world data sets demonstrate the effectiveness of MSMTFL in comparison with state-of-the-art multi-task sparse feature learning algorithms.
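The multi-stage procedure named in the abstract can be pictured as alternating between a weighted ℓ1-regularized fit and a reweighting step driven by a capped-ℓ1 rule: features whose row ℓ1-norm has already exceeded a threshold θ stop being penalized in the next stage. Below is a minimal sketch of that idea, not the authors' implementation: it assumes a squared loss with a design matrix shared across tasks, uses ISTA as the inner solver, and all function names and parameters (`ista_weighted_l1`, `msmtfl`, `lam`, `theta`) are illustrative.

```python
import numpy as np

def ista_weighted_l1(X, Y, lam_vec, n_iter=500):
    """Approximately solve min_W 0.5*||XW - Y||_F^2 + sum_j lam_vec[j]*||w^j||_1,
    where w^j is the j-th row of W (feature j across all tasks)."""
    n, d = X.shape
    T = Y.shape[1]
    W = np.zeros((d, T))
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        G = X.T @ (X @ W - Y)              # gradient of 0.5*||XW - Y||_F^2
        Z = W - G / L                      # gradient step
        thresh = (lam_vec / L)[:, None]    # per-row soft-threshold level
        W = np.sign(Z) * np.maximum(np.abs(Z) - thresh, 0.0)
    return W

def msmtfl(X, Y, lam=0.1, theta=0.5, n_stages=5):
    """Multi-stage sketch: stage 0 is a plain Lasso; later stages drop the
    penalty on rows whose l1-norm already exceeds theta (capped-l1 rule)."""
    d = X.shape[1]
    lam_vec = np.full(d, lam)
    W = None
    for _ in range(n_stages):
        W = ista_weighted_l1(X, Y, lam_vec)
        row_l1 = np.abs(W).sum(axis=1)
        lam_vec = lam * (row_l1 < theta)   # keep penalizing only small rows
    return W
```

On row-sparse synthetic data, the later stages leave the large (shared) feature rows unpenalized while still shrinking the noise-driven rows toward zero, which is the intuition behind the improved error bound claimed in the abstract.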
PUBLICATION RECORD
- Publication year
2012
- Venue
Journal of Machine Learning Research
- Publication date
2012-10-22
- Fields of study
Medicine, Computer Science, Mathematics
- Source metadata
Semantic Scholar, PubMed
CONCEPTS
- convex sparse regularization
A convex sparsity-inducing regularization framework used for multi-task feature learning.
- MSMTFL algorithm
A multi-stage optimization procedure for solving the proposed multi-task feature learning problem.
Aliases: Multi-Stage Multi-Task Feature Learning, MSMTFL
- multi-task sparse feature learning
A learning setting that seeks shared sparse features across multiple related tasks.
Aliases: MTFL
- non-convex formulation
A sparsity-promoting optimization model that uses a non-convex regularizer instead of a convex one.
- parameter estimation error bound
A theoretical bound on the estimation error of the learned parameters under the model.
- synthetic and real-world data sets
Benchmark datasets used to test the method on simulated and practical multi-task learning problems.
Aliases: synthetic data sets and real-world data sets
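The "non-convex formulation" entry above refers to a capped-ℓ1 penalty; a sketch of its standard form, using my own notation (W ∈ R^{d×m} with rows w^j, tuning parameters λ, θ > 0 — not symbols taken from this record):

```latex
% Capped-l1 regularizer on the rows of the weight matrix W:
% each feature j contributes at most lambda*theta, so large shared
% rows are not over-penalized the way a plain l1 penalty would.
\[
  r(W) \;=\; \lambda \sum_{j=1}^{d} \min\!\bigl(\lVert \mathbf{w}^{j}\rVert_{1},\, \theta\bigr)
\]
% As theta -> 0, r(W)/theta approaches lambda times the number of
% nonzero rows, i.e. an l0-type row-support penalty.
```

This capping is what makes the problem non-convex, and the per-row min is exactly what the multi-stage reweighting exploits: once a row's ℓ1-norm passes θ, its penalty is constant and can be dropped.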