Mean Field Variational Approximation for Continuous-Time Bayesian Networks
Ido Cohn, T. El-Hay, N. Friedman, R. Kupferman
Published 2009 in Journal of Machine Learning Research
ABSTRACT
Continuous-time Bayesian networks are a natural structured representation language for multi-component stochastic processes that evolve continuously over time. Despite the compact representation this language provides, inference in such models is intractable even in relatively simple structured networks. We introduce a mean field variational approximation in which a product of inhomogeneous Markov processes approximates the joint distribution over trajectories. This variational approach yields a globally consistent distribution that can be queried efficiently, and it provides a lower bound on the probability of observations, making it attractive for learning tasks. We describe the theoretical foundations of the approximation and an efficient implementation that exploits the wide range of highly optimized ordinary differential equation (ODE) solvers, experimentally characterize the processes for which the approximation is suitable, and demonstrate an application to a large-scale real-world inference problem.
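To illustrate the ODE connection mentioned in the abstract, the sketch below integrates the master equation d/dt mu(t) = mu(t) Q(t) for a single two-state inhomogeneous Markov process, the kind of component process the mean field approximation uses as its building block. This is a minimal illustration under assumed rate functions, not the paper's actual algorithm; the specific rates and the hand-rolled RK4 integrator stand in for the optimized ODE solvers the paper exploits.

```python
import numpy as np

def rate_matrix(t):
    # Time-dependent intensity matrix Q(t) of an inhomogeneous two-state
    # Markov process; each row sums to zero. Rate forms are assumed for
    # illustration only.
    a = 1.0 + 0.5 * np.sin(t)   # transition rate 0 -> 1
    b = 2.0                      # transition rate 1 -> 0
    return np.array([[-a, a], [b, -b]])

def step_rk4(mu, t, h):
    # One classical Runge-Kutta step for the master equation
    # d/dt mu = mu @ Q(t).
    f = lambda s, m: m @ rate_matrix(s)
    k1 = f(t, mu)
    k2 = f(t + h / 2, mu + h / 2 * k1)
    k3 = f(t + h / 2, mu + h / 2 * k2)
    k4 = f(t + h, mu + h * k3)
    return mu + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Start deterministically in state 0 and propagate the marginals to t = 5.
mu = np.array([1.0, 0.0])
t, h = 0.0, 0.01
while t < 5.0:
    mu = step_rk4(mu, t, h)
    t += h

print(mu)  # marginal distribution at t = 5; entries sum to 1
```

In the paper's setting, each component of the network is represented by such a process, and the variational parameters (the time-dependent rates) are themselves updated by integrating coupled ODEs, which is what makes off-the-shelf solvers applicable.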
PUBLICATION RECORD
- Publication year
2009
- Venue
Journal of machine learning research
- Publication date
2009-06-18
- Fields of study
Mathematics, Computer Science
- Source metadata
Semantic Scholar
REFERENCES
34 references
CITED BY
74 citing papers