Search results

1 – 7 of 7
Article
Publication date: 9 January 2019

Xiaoyu Hu, Evan Chodora, Saurabh Prabhu, Akshay Gupte and Sez Atamturktur

Abstract

Purpose

This paper aims to present an approach for calibrating the numerical models of dynamical systems that have spatially localized nonlinear components. The approach implements the extended constitutive relation error (ECRE) method using multi-harmonic coefficients and is conceived to separate, through a two-step process, the errors in the representation of the global (linear) and local (nonlinear) components of the dynamical system.

Design/methodology/approach

The first step focuses on the system’s predominantly linear dynamic response under a low-magnitude periodic excitation. In this step, the discrepancy between measured and predicted multi-harmonic coefficients is calculated in terms of residual energy. This residual energy is in turn used to spatially locate errors in the model, through which the erroneous model inputs governing the linear behavior can be identified for calibration. The second step involves measuring the system’s nonlinear dynamic response under a high-magnitude periodic excitation. In this step, the response measurements under both low- and high-magnitude excitation are used to iteratively calibrate the identified linear and nonlinear input parameters.
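As an illustrative aside, the error-localization idea in the first step can be sketched as a per-element residual-energy computation. The element data and the simple diagonal weighting below are hypothetical stand-ins, not values or formulas from the paper:

```python
def elementwise_residual_energy(measured, predicted, k):
    """Per-element contribution to the residual energy between measured and
    predicted multi-harmonic coefficients; large entries point to the
    spatial location of model error."""
    # k holds a diagonal stiffness-like weight per element (assumed form).
    return [ki * (m - p) ** 2 for ki, m, p in zip(k, measured, predicted)]

# Hypothetical three-element model: the response coefficient of
# element 1 is under-predicted, i.e. the model error lives there.
k = [2.0, 2.0, 2.0]
measured = [1.0, 1.5, 1.0]
predicted = [1.0, 1.1, 1.0]

energy = elementwise_residual_energy(measured, predicted, k)
worst = max(range(len(energy)), key=energy.__getitem__)  # element to calibrate
```

The element with the largest residual-energy contribution flags the model inputs most in need of calibration.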

Findings

When model error is present in both the linear and nonlinear components, the proposed iterative combined multi-harmonic balance (MHB)-ECRE calibration approach is shown to be superior to the conventional MHB-ECRE method, providing more reliable calibration of the nonlinear parameter with less dependence on a priori knowledge of the associated linear system.

Originality/value

This two-step process is advantageous as it reduces the confounding effects of the uncertain model parameters associated with the linear and locally nonlinear components of the system.

Details

Engineering Computations, vol. 36 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 16 April 2018

Garrison N. Stevens, Sez Atamturktur, D. Andrew Brown, Brian J. Williams and Cetin Unal

Abstract

Purpose

Partitioned analysis is an increasingly popular approach for modeling complex systems with behaviors governed by multiple, interdependent physical phenomena. Yielding accurate representations of reality from partitioned models depends on the availability of all necessary constituent models representing the relevant physical phenomena. However, there are many engineering problems where one or more of the constituents may be unavailable, because of a lack of knowledge regarding the underlying principles governing the behavior or because of the inability to observe the constituent behavior in an isolated manner through separate-effect experiments. This study aims to enable partitioned analysis in such situations with an incomplete representation of the full system by inferring the behavior of the missing constituent.

Design/methodology/approach

This paper presents a statistical method for inverse analysis to infer missing constituent physics. The feasibility of the method is demonstrated using a physics-based visco-plastic self-consistent (VPSC) model that represents the mechanics of slip and twinning behavior in 5182 aluminum alloy; however, a constituent model to carry out thermal analysis, representing the dependence of hardening parameters on temperature, is unavailable. Using integral-effect experimental data, the proposed approach is used to infer an empirical constituent model, which is then coupled with VPSC to obtain an experimentally augmented partitioned model representing the thermo-mechanical properties of 5182 aluminum alloy.

Findings

Results demonstrate the capability of the method to enable model predictions that depend on the relevant operational conditions. The VPSC model is coupled with the empirical constituent, and the newly enabled thermal-dependent predictions are compared with experimental data.

Originality/value

The method developed in this paper enables the empirical inference of a functional representation of input parameter values in lieu of a missing constituent model. Through this approach, development of partitioned models in the presence of uncertainty regarding a constituent model is made possible.

Details

Engineering Computations, vol. 35 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 3 July 2017

Saurabh Prabhu, Sez Atamturktur and Scott Cogan

Abstract

Purpose

This paper aims to focus on the assessment of the ability of computer models with imperfect functional forms and uncertain input parameters to represent reality.

Design/methodology/approach

In this assessment, both the agreement between a model’s predictions and available experiments and the robustness of this agreement to uncertainty have been evaluated. The concept of satisfying boundaries to represent input parameter sets that yield model predictions with acceptable fidelity to observed experiments has been introduced.

Findings

Satisfying boundaries provide several useful indicators for model assessment and, when calculated for varying fidelity thresholds and input parameter uncertainties, reveal the trade-off between robustness to uncertainty in model parameters, the threshold for satisfactory fidelity and the probability of satisfying the given fidelity threshold. Using a controlled case-study example, important modeling decisions, such as the acceptable level of uncertainty, the fidelity requirements and the allocation of resources for additional experiments, are demonstrated.

Originality/value

Traditional methods of model assessment are based solely on fidelity to experiments, leading to a single parameter set that is considered fidelity-optimal and that essentially represents the values yielding the optimal compensation between various sources of errors and uncertainties. Rather than maximizing fidelity, this study advocates basing model assessment on the model’s ability to satisfy a required fidelity (or error tolerance). Evaluating the trade-off between error tolerance, parameter uncertainty and the probability of satisfying this predefined error threshold provides a powerful tool for model assessment and resource allocation.
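The trade-off described above can be illustrated with a minimal Monte Carlo sketch. The one-parameter model, the "experimental" value of 2.0 and all numbers below are invented for illustration, not taken from the paper:

```python
import random

random.seed(0)

def model(theta):
    """Hypothetical one-parameter model prediction."""
    return theta ** 2

EXPERIMENT = 2.0  # invented 'measured' value

def prob_satisfying(nominal, spread, tol, n=20_000):
    """Probability that a parameter drawn from the uncertainty band
    [nominal - spread, nominal + spread] yields a prediction within the
    fidelity tolerance, i.e. lies inside the satisfying boundary."""
    hits = 0
    for _ in range(n):
        theta = random.uniform(nominal - spread, nominal + spread)
        if abs(model(theta) - EXPERIMENT) <= tol:
            hits += 1
    return hits / n

p_loose = prob_satisfying(nominal=1.4, spread=0.2, tol=0.5)
p_tight = prob_satisfying(nominal=1.4, spread=0.2, tol=0.1)
```

Tightening the error tolerance (or widening the parameter uncertainty) lowers the probability of satisfying the threshold, which is the trade-off the satisfying-boundary concept makes explicit.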

Details

Engineering Computations, vol. 34 no. 5
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 12 March 2018

Sepideh Yazdekhasti, Kalyan Ram Piratla, John C. Matthews, Abdul Khan and Sez Atamturktur

Abstract

Purpose

There has been sustained interest over the past couple of decades in developing sophisticated leak detection techniques (LDTs) that are economical and reliable. The majority of current commercial LDTs are acoustics-based, and they are not equally suitable for all pipe materials and sizes. There is also limited knowledge of the comparative merits of such acoustics-based leak detection techniques (ALDTs). The purpose of this paper is to review six commercial ALDTs based on four decisive criteria and subsequently develop guidance for the optimal selection of an ALDT.

Design/methodology/approach

In this study, numerous publications and field demonstration reports are reviewed to evaluate the performance of various ALDTs and to inform their optimal selection using an integrated multi-criteria decision analysis (MCDA) framework. The findings are validated through interviews with water utility experts.

Findings

The study approach and the findings will have a broad impact on the water utility industry by identifying a suite of suitable ALDTs for a range of typical application scenarios. The evaluated ALDTs include listening devices, noise loggers, leak-noise correlators, free-swimming acoustic, tethered acoustic, and acoustic emissions. The evaluation criteria include cost, reliability, access requirements, and the ability to quantify leakage severity. The guidance presented in this paper will support efficient decision making in water utility management to minimize pipeline leakage.

Originality/value

This study attempts to address the severe dearth of performance data for pipeline inspection techniques. Performance data reported in the published literature on various ALDTs are appropriately aggregated and compared using an MCDA, while the uncertainty in the performance data is addressed using a Monte Carlo simulation approach.
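As a rough illustration of how MCDA aggregation under Monte Carlo uncertainty might look, the sketch below scores two hypothetical techniques against the four criteria named in the paper; the weights, scores and uncertainty band are invented, and the paper's actual framework may differ:

```python
import random

random.seed(1)

# Assumed criterion weights: cost, reliability, access requirements,
# ability to quantify leakage severity (hypothetical values).
WEIGHTS = [0.3, 0.4, 0.1, 0.2]

# Hypothetical 0-1 scores (higher is better) for two ALDTs.
SCORES = {
    "noise logger":          [0.8, 0.6, 0.9, 0.4],
    "leak-noise correlator": [0.5, 0.8, 0.6, 0.7],
}

def mc_weighted_score(scores, halfwidth=0.05, n=5_000):
    """Weighted-sum MCDA score, averaged over Monte Carlo perturbations of
    each criterion score within +/- halfwidth to reflect data uncertainty."""
    total = 0.0
    for _ in range(n):
        sample = [s + random.uniform(-halfwidth, halfwidth) for s in scores]
        total += sum(w * x for w, x in zip(WEIGHTS, sample))
    return total / n

ranking = sorted(SCORES, key=lambda t: mc_weighted_score(SCORES[t]),
                 reverse=True)
```

The ranking that emerges reflects both the weighted criteria and the spread of the underlying performance data.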

Details

Management of Environmental Quality: An International Journal, vol. 29 no. 2
Type: Research Article
ISSN: 1477-7835

Article
Publication date: 13 June 2016

Garrison Stevens, Sez Atamturktur, Ricardo Lebensohn and George Kaschner

Abstract

Purpose

Highly anisotropic zirconium is a material used in the cladding of nuclear fuel rods, ensuring containment of the radioactive material within. The complex material structure of anisotropic zirconium requires model developers to replicate not only the macro-scale stresses but also the meso-scale material behavior as the crystal structure evolves, leading to strongly coupled multi-scale plasticity models. Such strongly coupled models can be achieved through partitioned analysis techniques, which couple independently developed constituent models through an iterative exchange of inputs and outputs. Throughout this iterative process, biases and uncertainties inherent in constituent model predictions are inevitably transferred between constituents, either compensating for each other or accumulating during iterations. The paper aims to discuss these issues.

Design/methodology/approach

A finite element model at the macro-scale is coupled in an iterative manner with a meso-scale viscoplastic self-consistent model, where the former supplies the stress input and the latter represents the changing material properties. The authors present a systematic framework for experiment-based validation that takes advantage of both separate-effect experiments, conducted within each constituent’s domain to calibrate the constituents at their respective scales, and integral-effect experiments, executed within the coupled domain to test the validity of the coupled system.
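The iterative macro-meso exchange follows the usual fixed-point pattern of partitioned analysis. A toy sketch of that pattern is below; the two constituent functions are invented stand-ins, not the FE or VPSC models:

```python
def macro_model(material_property):
    """Macro-scale constituent (stand-in): returns a stress for a given
    material property."""
    return 10.0 / (1.0 + material_property)

def meso_model(stress):
    """Meso-scale constituent (stand-in): returns an updated material
    property for a given stress."""
    return 0.5 + 0.1 * stress

def couple(tol=1e-10, max_iter=200):
    """Iteratively exchange outputs between the constituents until the
    material property stops changing (fixed-point convergence)."""
    prop = 0.0                         # initial guess
    for _ in range(max_iter):
        stress = macro_model(prop)     # macro supplies stress to meso
        new_prop = meso_model(stress)  # meso updates the property
        if abs(new_prop - prop) < tol:
            return stress, new_prop
        prop = new_prop
    raise RuntimeError("coupling did not converge")

stress, prop = couple()
```

Each pass through the loop is one coupling iteration; it is exactly this repeated hand-off through which constituent biases can compensate for each other or accumulate.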

Findings

The framework developed here is shown to improve the predictive capability of a multi-scale plasticity model of highly anisotropic zirconium.

Originality/value

For multi-scale models to be implemented to support high-consequence decisions, such as the containment of radioactive material, this transfer of biases and uncertainties must be evaluated to ensure accuracy of the predictions of the coupled model. This framework takes advantage of the transparency of partitioned analysis to reduce the accumulation of errors and uncertainties.

Details

Multidiscipline Modeling in Materials and Structures, vol. 12 no. 1
Type: Research Article
ISSN: 1573-6105

Article
Publication date: 5 October 2015

Sez Atamturktur and Ismail Farajpour

Abstract

Purpose

Physical phenomena interact with each other in ways such that one phenomenon cannot be analyzed without considering the others. To account for such interactions among multiple phenomena, partitioning has become a widely implemented computational approach. Partitioned analysis involves the exchange of inputs and outputs from constituent models (partitions) via iterative coupling operations, through which the individually developed constituent models are allowed to affect each other’s inputs and outputs. Partitioning, whether multi-scale or multi-physics in nature, is a powerful technique that can yield coupled models that predict the behavior of a system more complex than the individual constituents themselves. The paper aims to discuss these issues.

Design/methodology/approach

Although partitioned analysis has been a key mechanism in developing more realistic predictive models over the last decade, its iterative coupling operations may lead to the propagation and accumulation of uncertainties and errors that, if unaccounted for, can severely degrade the coupled model predictions. This problem can be alleviated by reducing uncertainties and errors in individual constituent models through further code development. However, finite resources may limit code development efforts to just a portion of possible constituents, making it necessary to prioritize constituent model development for efficient use of resources. Thus, the authors propose here an approach along with its associated metric to rank constituents by tracing uncertainties and errors in coupled model predictions back to uncertainties and errors in constituent model predictions.

Findings

The proposed approach evaluates the deficiency (relative degree of imprecision and inaccuracy), importance (relative sensitivity) and cost of further code development for each constituent model, and combines these three factors in a quantitative prioritization metric. The benefits of the proposed metric are demonstrated on a structural portal frame using an optimization-based uncertainty inference and coupling approach.
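One simple way to combine the three factors into a single score is sketched below. The multiplicative form and the constituent data are hypothetical illustrations; the paper's actual metric may be formulated differently:

```python
def priority(deficiency, importance, cost):
    """Prioritization score for further code development of a constituent:
    higher deficiency (imprecision/inaccuracy) and higher importance
    (sensitivity) raise priority; higher development cost lowers it."""
    return deficiency * importance / cost

# Hypothetical constituents of a coupled structural model.
constituents = {
    "structural frame": priority(deficiency=0.8, importance=0.9, cost=2.0),
    "connection model": priority(deficiency=0.3, importance=0.5, cost=1.0),
    "damping model":    priority(deficiency=0.6, importance=0.2, cost=0.5),
}

# Constituent whose further development best repays the effort.
best = max(constituents, key=constituents.get)
```

Ranking constituents by such a score concentrates limited code-development resources where they most reduce error in the coupled predictions.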

Originality/value

This study proposes an approach and its corresponding metric for prioritizing the improvement of constituents by quantifying the uncertainty and bias contributions, the sensitivities and the further development costs of the constituent models.

Details

Engineering Computations, vol. 32 no. 7
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 5 May 2015

Garrison Stevens, Kendra Van Buren, Elizabeth Wheeler and Sez Atamturktur

Abstract

Purpose

Numerical models are being increasingly relied upon to evaluate wind turbine performance by simulating phenomena that are infeasible to measure experimentally. These numerical models, however, require a large number of input parameters that often need to be calibrated against available experiments. Owing to the unavoidable scarcity of experiments and inherent uncertainties in measurements, this calibration process may yield non-unique solutions, i.e. multiple sets of parameters may reproduce the available experiments with similar fidelity. The purpose of this paper is to study the trade-off between fidelity to measurements and the robustness of this fidelity to uncertainty in calibrated input parameters.

Design/methodology/approach

Here, fidelity is defined as the ability of the model to reproduce measurements and robustness is defined as the allowable variation in the input parameters with which the model maintains a predefined level of threshold fidelity. These two vital attributes of model predictiveness are evaluated in the development of a simplified finite element beam model of the CX-100 wind turbine blade.

Findings

Findings of this study show that calibrating the input parameters of a numerical model with the sole objective of improving fidelity to available measurements degrades the robustness of model predictions at both tested and untested settings. A better-performing model may be obtained by calibration methods that consider both fidelity and robustness. Multi-criteria decision making further confirms the conclusion that optimal model performance is achieved by maintaining a balance between fidelity and robustness during calibration.

Originality/value

Current methods for model calibration focus solely on fidelity, while the authors focus on the trade-off between fidelity and robustness.

Details

Engineering Computations, vol. 32 no. 3
Type: Research Article
ISSN: 0264-4401
