
Proceedings Paper

Prediction-error variance in Bayesian model updating: a comparative study
Author(s): Parisa Asadollahi; Jian Li; Yong Huang

Paper Abstract

In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum-information-entropy probability model of the prediction errors plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. This selection is therefore critical to the robustness of structural model updating, especially in the presence of modeling errors. To date, three ways of treating prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit to the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of these different treatments of the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction error variances. Different levels of modeling uncertainty and complexity are represented by three finite element (FE) models: a true model, a model with added complexity, and a model with modeling error. Bayesian updating is performed for the three FE models under the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined.
The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
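To make the maximum-entropy likelihood concrete, the following is a minimal sketch of the Gaussian log-likelihood with an i.i.d. prediction-error variance, contrasting a fixed empirical variance (treatment 1) with evaluating the likelihood over candidate variances as an uncertain parameter (as in treatment 3, where it would be sampled jointly with the stiffness parameters, e.g. by TMCMC). All names and numbers here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def log_likelihood(predicted, measured, sigma2):
    """Gaussian log-likelihood assuming i.i.d. zero-mean prediction
    errors with variance sigma2 -- the maximum-information-entropy
    probability model given the first two moments as constraints."""
    residual = measured - predicted
    n = residual.size
    return (-0.5 * n * np.log(2.0 * np.pi * sigma2)
            - 0.5 * np.sum(residual**2) / sigma2)

# Illustrative measured vs. model-predicted responses (hypothetical data).
measured = np.array([1.02, 0.98, 1.05, 0.97])
predicted = np.array([1.00, 1.00, 1.00, 1.00])

# Treatment 1: prediction error variance fixed at an empirical value.
ll_fixed = log_likelihood(predicted, measured, sigma2=0.01)

# Treatment 3 (schematic): sigma2 treated as uncertain -- here simply
# evaluated on a grid; in the paper it is sampled jointly with the
# structural model parameters by TMCMC.
sigma2_grid = np.array([0.001, 0.01, 0.1])
ll_grid = [log_likelihood(predicted, measured, s2) for s2 in sigma2_grid]
```

The trade-off noted in the abstract shows up directly here: a too-small variance penalizes residual misfit heavily, while a too-large variance flattens the likelihood and extracts little information from the data.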

Paper Details

Date Published: 12 April 2017
PDF: 8 pages
Proc. SPIE 10168, Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2017, 101683P (12 April 2017); doi: 10.1117/12.2260398
Author Affiliations:
Parisa Asadollahi, The Univ. of Kansas (United States)
Jian Li, The Univ. of Kansas (United States)
Yong Huang, Harbin Institute of Technology (China)


Published in SPIE Proceedings Vol. 10168:
Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2017
Jerome P. Lynch, Editor(s)

© SPIE.