Links for fulltext (may require subscription):
- Publisher (DOI): 10.1016/j.ress.2010.09.013
- Scopus: eid_2-s2.0-79959591356
- Web of Science: WOS:000293107900013
Article: Bayesian uncertainty analysis with applications to turbulence modeling
Title | Bayesian uncertainty analysis with applications to turbulence modeling |
---|---|
Authors | Cheung, Sai Hung; Oliver, Todd A.; Prudencio, Ernesto E.; Prudhomme, Serge; Moser, Robert D. |
Keywords | Model validation under uncertainty; Model inadequacy representations; Forward propagation of uncertainty; Stochastic model classes; Bayesian analysis; Turbulence modeling |
Issue Date | 2011 |
Citation | Reliability Engineering and System Safety, 2011, v. 96, n. 9, p. 1137-1149 |
Abstract | In this paper, we apply Bayesian uncertainty quantification techniques to the processes of calibrating complex mathematical models and predicting quantities of interest (QoI's) with such models. These techniques also enable the systematic comparison of competing model classes. The processes of calibration and comparison constitute the building blocks of a larger validation process, the goal of which is to accept or reject a given mathematical model for the prediction of a particular QoI for a particular scenario. In this work, we take the first step in this process by applying the methodology to the analysis of the Spalart-Allmaras turbulence model in the context of incompressible, boundary layer flows. Three competing model classes based on the Spalart-Allmaras model are formulated, calibrated against experimental data, and used to issue a prediction with quantified uncertainty. The model classes are compared in terms of their posterior probabilities and their prediction of QoI's. The model posterior probability represents the relative plausibility of a model class given the data. Thus, it incorporates the model's ability to fit experimental observations. Alternatively, comparing models using the predicted QoI connects the process to the needs of decision makers that use the results of the model. We show that by using both the model plausibility and predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges. © 2011 Elsevier Ltd. All rights reserved. |
Persistent Identifier | http://hdl.handle.net/10722/296069 |
ISSN | 0951-8320 (2023 Impact Factor: 9.4; 2023 SCImago Journal Rankings: 2.028) |
ISI Accession Number ID | WOS:000293107900013 |
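The abstract's central computation — ranking competing model classes by their posterior plausibilities — can be sketched as follows. This is a hypothetical toy illustration with synthetic data and made-up scalar model classes, not code from the paper; the actual work calibrates variants of the Spalart-Allmaras turbulence model against boundary-layer experiments.

```python
import numpy as np

# Hypothetical sketch of Bayesian model-class comparison (not from the paper):
# the posterior plausibility of model class M_j given data d is
#     P(M_j | d) ∝ p(d | M_j) * P(M_j),
# where the evidence p(d | M_j) = ∫ p(d | θ, M_j) p(θ | M_j) dθ
# marginalizes the likelihood over the parameter prior.

rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.5, size=20)   # synthetic "experimental" observations

theta = np.linspace(-5.0, 5.0, 2001)   # parameter grid for the evidence integral
dtheta = theta[1] - theta[0]

def evidence(noise_sigma, prior_sigma):
    """Evidence of a toy model d_i ~ N(theta, noise_sigma) with a N(0, prior_sigma) prior."""
    # log-likelihood of the whole data set at each grid point theta
    loglik = (-0.5 * ((data[:, None] - theta[None, :]) / noise_sigma) ** 2
              - np.log(noise_sigma * np.sqrt(2.0 * np.pi))).sum(axis=0)
    prior = (np.exp(-0.5 * (theta / prior_sigma) ** 2)
             / (prior_sigma * np.sqrt(2.0 * np.pi)))
    return (np.exp(loglik) * prior).sum() * dtheta   # Riemann-sum quadrature

# Two competing model classes: correct noise level vs. an overly diffuse one.
ev = np.array([evidence(0.5, 2.0), evidence(2.0, 2.0)])
plaus = ev / ev.sum()                  # equal prior probability on each class
print(plaus)                           # the better-fitting class dominates
```

A QoI prediction with quantified uncertainty would then be issued by propagating the posterior parameter distribution of a surviving model class forward, rather than using a single best-fit point.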
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cheung, Sai Hung | - |
dc.contributor.author | Oliver, Todd A. | - |
dc.contributor.author | Prudencio, Ernesto E. | - |
dc.contributor.author | Prudhomme, Serge | - |
dc.contributor.author | Moser, Robert D. | - |
dc.date.accessioned | 2021-02-11T04:52:46Z | - |
dc.date.available | 2021-02-11T04:52:46Z | - |
dc.date.issued | 2011 | - |
dc.identifier.citation | Reliability Engineering and System Safety, 2011, v. 96, n. 9, p. 1137-1149 | - |
dc.identifier.issn | 0951-8320 | - |
dc.identifier.uri | http://hdl.handle.net/10722/296069 | - |
dc.description.abstract | In this paper, we apply Bayesian uncertainty quantification techniques to the processes of calibrating complex mathematical models and predicting quantities of interest (QoI's) with such models. These techniques also enable the systematic comparison of competing model classes. The processes of calibration and comparison constitute the building blocks of a larger validation process, the goal of which is to accept or reject a given mathematical model for the prediction of a particular QoI for a particular scenario. In this work, we take the first step in this process by applying the methodology to the analysis of the Spalart-Allmaras turbulence model in the context of incompressible, boundary layer flows. Three competing model classes based on the Spalart-Allmaras model are formulated, calibrated against experimental data, and used to issue a prediction with quantified uncertainty. The model classes are compared in terms of their posterior probabilities and their prediction of QoI's. The model posterior probability represents the relative plausibility of a model class given the data. Thus, it incorporates the model's ability to fit experimental observations. Alternatively, comparing models using the predicted QoI connects the process to the needs of decision makers that use the results of the model. We show that by using both the model plausibility and predicted QoI, one has the opportunity to reject some model classes after calibration, before subjecting the remaining classes to additional validation challenges. © 2011 Elsevier Ltd. All rights reserved. | -
dc.language | eng | - |
dc.relation.ispartof | Reliability Engineering and System Safety | - |
dc.subject | Model validation under uncertainty | - |
dc.subject | Model inadequacy representations | - |
dc.subject | Forward propagation of uncertainty | - |
dc.subject | Stochastic model classes | - |
dc.subject | Bayesian analysis | - |
dc.subject | Turbulence modeling | - |
dc.title | Bayesian uncertainty analysis with applications to turbulence modeling | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1016/j.ress.2010.09.013 | - |
dc.identifier.scopus | eid_2-s2.0-79959591356 | - |
dc.identifier.volume | 96 | - |
dc.identifier.issue | 9 | - |
dc.identifier.spage | 1137 | - |
dc.identifier.epage | 1149 | - |
dc.identifier.isi | WOS:000293107900013 | - |
dc.identifier.issnl | 0951-8320 | - |