
Proceedings Paper

Dynamic Bayesian learning by expectation propagation
Author(s): Tao Wei

Paper Abstract

For modeling time-series data, it is natural to use directed graphical models, since they can capture the flow of time. If all arcs of a graphical model are directed, both within and between time slices, the model is called a dynamic Bayesian network (DBN). Dynamic Bayesian networks are becoming increasingly important for research and applications in machine learning, artificial intelligence, and signal processing. They offer several advantages over other data-analysis methods, including rule bases, neural networks, and decision trees. In this paper, we explore dynamic Bayesian learning over DBNs with a deterministic approximate inference method called Expectation Propagation (EP). EP is an extension of belief propagation developed in the machine learning community. A crucial step of EP is the recycling of likelihoods, which makes possible further improvement over the extended Kalman smoother. This study examines EP solutions to a nonlinear state-space model and compares their performance with that of other inference methods, such as the particle filter and the extended Kalman filter.
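To make the likelihood-recycling step concrete, below is a minimal Python sketch of EP smoothing on a toy nonlinear state-space model. The model (linear-Gaussian dynamics with a sin() observation), all parameter values, and the function names are illustrative assumptions, not the model or code from the paper; the site refinement here linearises the observation at the cavity mean, an EKF-style moment-matching choice.

import numpy as np

# Toy model (an assumption for illustration, not the paper's model):
#   x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)     (linear-Gaussian dynamics)
#   y_t = sin(x_t) + e_t,   e_t ~ N(0, R)     (nonlinear observation)
# EP approximates each likelihood p(y_t | x_t) with a Gaussian "site".
# Each sweep removes a site (forming the cavity), re-fits it against the
# exact likelihood by moment matching, and puts it back -- the recycling
# of likelihoods described in the abstract.

A, Q, R, P0 = 0.9, 0.1, 0.05, 1.0

def prior_precision(T):
    """Tridiagonal precision matrix of the Gaussian prior over x_0..x_{T-1}."""
    L = np.zeros((T, T))
    for t in range(T):
        L[t, t] = 1.0 / P0 if t == 0 else 1.0 / Q
        if t + 1 < T:
            L[t, t] += A**2 / Q
            L[t, t + 1] = L[t + 1, t] = -A / Q
    return L

def ep_smoother(y, n_sweeps=10):
    T = len(y)
    L0 = prior_precision(T)
    site_prec = np.zeros(T)   # per-site precision (natural parameters)
    site_h = np.zeros(T)      # per-site precision times mean
    for _ in range(n_sweeps):
        for t in range(T):
            # Full posterior = Markov prior x current Gaussian sites.
            cov = np.linalg.inv(L0 + np.diag(site_prec))
            mean = cov @ site_h
            m, v = mean[t], cov[t, t]
            # Cavity: the marginal at t with site t divided out.
            cp = 1.0 / v - site_prec[t]
            if cp <= 0:
                continue                      # skip improper cavities
            cm = (m / v - site_h[t]) / cp
            # Moment-match cavity x p(y_t | x_t); the sin() observation is
            # linearised at the cavity mean (quadrature would also work).
            H = np.cos(cm)
            S = H**2 / cp + R
            K = (H / cp) / S
            nm = cm + K * (y[t] - np.sin(cm))
            nv = (1.0 - K * H) / cp
            # Refined site = new tilted marginal divided by the cavity.
            site_prec[t] = 1.0 / nv - cp
            site_h[t] = nm / nv - cm * cp
    cov = np.linalg.inv(L0 + np.diag(site_prec))
    return cov @ site_h, np.diag(cov)         # smoothed means and variances

# Usage: simulate a trajectory and smooth the noisy observations.
rng = np.random.default_rng(0)
x = np.zeros(30)
for t in range(1, 30):
    x[t] = A * x[t - 1] + rng.normal(0, np.sqrt(Q))
y = np.sin(x) + rng.normal(0, np.sqrt(R), size=30)
means, variances = ep_smoother(y)
print(np.sqrt(np.mean((means - x) ** 2)))     # smoothed RMSE

A single sweep with this linearisation behaves roughly like one extended Kalman pass; the further sweeps, which revisit and re-approximate every likelihood against an updated cavity, are what the recycling buys over the extended Kalman smoother.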

Paper Details

Date Published: 20 February 2006
PDF: 8 pages
Proc. SPIE 6041, ICMIT 2005: Information Systems and Signal Processing, 60411N (20 February 2006); doi: 10.1117/12.664342
Author Affiliations:
Tao Wei, The Univ. of Texas at San Antonio (United States)


Published in SPIE Proceedings Vol. 6041:
ICMIT 2005: Information Systems and Signal Processing
Yunlong Wei; Kil To Chong; Takayuki Takahashi, Editor(s)

© SPIE