Rainald Löhner, Fumiya Togashi and Joseph David Baum
Abstract
Purpose
A common observation made when computing chemically reacting flows is how central processing unit (CPU)-intensive these are in comparison to cold flow cases. The update of tens or hundreds of species with hundreds or thousands of reactions can easily consume more than 95% of the total CPU time. In many cases, the region where reactions (combustion) are actually taking place comprises only a very small percentage of the volume. Typical examples are flame fronts propagating through a domain. In such cases, only a small fraction of points/cells needs a full chemistry update. This leads to extreme load imbalances on parallel machines. The purpose of the present work is to develop a methodology to balance the work in an optimal way.
Design/methodology/approach
Points that require a full chemistry update are identified, gathered and distributed across the network, so that work is evenly distributed. Once the chemistry has been updated, the unknowns are gathered back.
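The gather–redistribute–gather-back step described above can be sketched as follows. This is a minimal serial illustration of the bookkeeping only; the function name and the even-chunk policy are assumptions, and a production code would perform the exchange with MPI all-to-all communication rather than in-memory lists:

```python
# Sketch: balance chemistry work by redistributing only the "active"
# points (those needing a full chemistry update) evenly across ranks.
# Serial illustration; in a parallel code the flattened list would be
# exchanged via MPI all-to-all calls.

def balance_active_points(active_per_rank):
    """Given the list of active point IDs owned by each rank, return a
    new assignment in which every rank receives an (almost) equal share
    of the total chemistry work. Each entry is a (home_rank, point_id)
    pair so results can be gathered back after the update."""
    nranks = len(active_per_rank)
    # Gather: flatten all active points, remembering their home rank
    flat = [(rank, pid)
            for rank, pids in enumerate(active_per_rank)
            for pid in pids]
    # Distribute: split the flat list into nranks nearly equal chunks
    base, extra = divmod(len(flat), nranks)
    new_assignment, start = [], 0
    for r in range(nranks):
        size = base + (1 if r < extra else 0)
        new_assignment.append(flat[start:start + size])
        start += size
    return new_assignment
```

After the chemistry update, the stored `(home_rank, point_id)` pairs tell each rank where to send the updated unknowns back, completing the scatter step.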
Findings
The procedure has been found to work extremely well, leading to optimal load balance with insignificant communication overheads.
Research limitations/implications
In many production runs, the procedure leads to a reduction in CPU requirements of more than an order of magnitude. This allows much larger and longer runs, improving accuracy and statistics.
Practical implications
The procedure has allowed the calculation of chemically reacting flow cases that were hitherto not possible.
Originality/value
To the authors’ knowledge, this type of load balancing has not been published before.
Rainald Löhner, Lingquan Li, Orlando Antonio Soto and Joseph David Baum
Abstract
Purpose
This study aims to evaluate blast loads on and the response of submerged structures.
Design/methodology/approach
An arbitrary Lagrangian–Eulerian method is developed to model fluid–structure interaction (FSI) problems of close-in underwater explosions (UNDEX). The “fluid” part, which provides the loads for the structure, considers air, water and high-explosive materials. The spatial discretization of the fluid domain is performed with a second-order vertex-based finite volume scheme with a tangent-of-hyperbola interface-capturing technique. The temporal discretization is based on explicit Runge–Kutta methods. The structure is described by a large-deformation Lagrangian formulation and discretized via finite elements. First, one-dimensional test cases are given to show that the numerical method is free of mesh-movement effects. Thereafter, three-dimensional FSI problems of close-in UNDEX are studied. Finally, the computation of an UNDEX near a ship compartment is performed.
Findings
The difference in the flow mechanisms between rigid targets and deforming targets is quantified and evaluated.
Research limitations/implications
Cavitation is modeled only approximately and may require further refinement/modeling.
Practical implications
The results demonstrate that the proposed numerical method is accurate, robust and versatile for practical use.
Social implications
Better design of naval infrastructure (such as bridges, ports, etc.).
Originality/value
To the best of the authors’ knowledge, this study has been conducted for the first time.
Thiroshnee Naidoo and Charlene Lew
Abstract
Learning outcomes
The learning outcomes are as follows: understanding of the principles of choice overload and the impact of consumer choice overload on company sustainability and growth prospects; understanding of how several heuristics inform consumer decision-making; applying nudge theory to interpret and clarify the impact and consequences of nudges on consumer decision-making; and considering the challenge of a newly appointed CEO to influence consumer choice.
Case overview/synopsis
The case study and teaching note offer insights into the use of behavioural economics principles in consumer choice. The case study methodology was used to design, analyse and interpret the real-life application of behavioural economics in the retail sector. The case demonstrates how choice overload, dual process theory, decision heuristics and nudge theory play a role in consumer decision-making. The case offers insights into the application of behavioural economics to support the sustainability of a company in an emerging market context. Managers can use the findings to consider how to use behavioural economics principles to drive consumer choice. The application of behavioural economics to an industry facing challenges of sustainability offers new insights into how to design spaces and cues for consumer choice.
Complexity academic level
The case study is suitable for courses in business administration, specifically at the postgraduate level.
Supplementary materials
Teaching notes are available for educators only.
Subject code
CSS 8: Marketing
Joel A.C. Baum and Joseph Lampel
Abstract
All knowledge claims are also to some extent legitimacy claims. No theory can receive serious attention, let alone gain credence, unless it is also seen as legitimate. Philosophers of science have spent decades trying to frame criteria that determine the legitimacy of theories, only to agree to disagree on the matter. Sociologists of science, on the other hand, take a broader view, arguing that rather than seeking ex ante criteria of knowledge it is best to examine how researchers legitimate knowledge in practice.
Orlando A. Soto, Joseph D. Baum, Fumiya Togashi, Rainald Löhner, Robert A. Frank and Ali Amini
Abstract
Purpose
The purpose of this paper is to determine the reason for the discrepancy in estimated and observed damage caused by fragmenting charges in closed environments.
Design/methodology/approach
A series of carefully designed physical and numerical experiments was conducted. The results were analyzed and compared.
Findings
The analysis shows that for fragmenting charges in closed environments, dust plays a far larger role than previously thought, leading to much lower pressures and damage.
Research limitations/implications
In light of these findings, many assumptions and results for fragmenting charges in closed environments need to be reconsidered.
Practical implications
This implies that for a far larger class of problems than previously estimated it is imperative to take into consideration dust production and its effect on the resulting pressures.
Originality/value
This is the first time such a finding has been reported in this context.
Rainald Löhner and Joseph Baum
Abstract
Purpose
Limitations in space and city planning constraints have led to the search for alternative shock mitigation devices that are architecturally appealing. The purpose of this paper is to consider a compromise solution which consists of partially open, thick, bending-resistant shapes made of acrylic material that may be Kevlar- or steel-reinforced. Seven different configurations were analyzed numerically.
Design/methodology/approach
For the flow solver, the FEM-FCT scheme as implemented in FEFLO is used. The flowfields are initialized from the output of highly detailed 1-D (spherically symmetric) runs. Peak pressure and impulse are stored and compared.
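Initializing a 3-D flowfield from a 1-D spherically symmetric run amounts to evaluating the radial profile at each mesh point's distance from the charge center. A minimal sketch is given below; the function name, linear interpolation and ambient-value extrapolation are assumptions for illustration, not the FEFLO implementation:

```python
import numpy as np

def init_from_1d(points, center, r_1d, u_1d):
    """Map a 1-D spherically symmetric profile u_1d(r_1d) onto 3-D mesh
    points by interpolating at each point's radius from the charge center.
    points: (npoin, 3) array of mesh coordinates
    center: (3,) coordinates of the charge center
    r_1d, u_1d: radial stations and values from the detailed 1-D run"""
    radius = np.linalg.norm(points - center, axis=1)
    # Linear interpolation in radius; points beyond the 1-D domain
    # keep the outermost (ambient) value, as np.interp clamps.
    return np.interp(radius, r_1d, u_1d)
```

The same call would be repeated for each unknown (density, pressure, radial velocity), with the velocity additionally projected onto the unit radial direction at each point.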
Findings
It is found that for some of these, the maximum pressure is comparable to that of conventional closed walls, while the maximum impulse is approximately 50 percent higher. This would indicate that such designs offer a blast mitigation device eminently suitable for built-up city environments.
Research limitations/implications
Future work will consider fully coupled fluid-structure runs for the more appealing designs, in order to assess whether such devices can be manufactured from commonly available materials such as acrylics or other polycarbonates.
Practical implications
This would indicate that such designs offer a blast mitigation device eminently suitable for built-up city environments.
Originality/value
This is the first time such a semi-open blastwall approach has been tried and analyzed.
Rainald Löhner and Joseph D. Baum
Abstract
Purpose
Prompted by the empirical evidence that achievable flow solver speeds for large problems are limited by a floor of the order of 0.1 s per timestep regardless of the number of cores used, the purpose of this paper is to identify why this phenomenon occurs.
Design/methodology/approach
A series of timing studies, as well as in-depth analysis of memory and inter-processors transfer requirements were carried out for a typical field solver. The results were analyzed and compared to the expected performance.
Findings
The analysis shows that, at present, flow solver speeds per core are already limited by the achievable transfer rate to RAM. For smaller domains/larger numbers of processors, the limiting speed of CFD solvers is set by the MPI communication network.
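The RAM-bandwidth limit can be illustrated with a back-of-the-envelope estimate: if a solver must stream a fixed number of bytes per point per timestep, sustained memory bandwidth sets a hard floor on the time per step. The numbers below are illustrative assumptions, not measurements from the paper:

```python
def min_time_per_step(npoints, bytes_per_point, bandwidth_gbs):
    """Lower bound on the wall-clock time of one timestep when the
    solver is limited purely by the rate at which unknowns can be
    streamed from RAM (bandwidth in GB/s)."""
    return npoints * bytes_per_point / (bandwidth_gbs * 1.0e9)

# Illustrative numbers: 1e6 points per core, ~200 bytes touched per
# point per step, ~20 GB/s sustained bandwidth -> ~0.01 s/step,
# independent of how fast the arithmetic units are.
```

Such an estimate makes the "limiting useful size" argument concrete: adding cores below a certain points-per-core count no longer reduces the time per step, because communication and memory traffic, not flops, dominate.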
Research limitations/implications
This implies that at present, there is a “limiting useful size” for domains, and that there is a lower limit for the time it takes to update a flowfield.
Practical implications
For practical calculations this implies that the time required for running large-scale problems will not decrease markedly once these applications migrate to machines with hundreds of thousands of cores.
Originality/value
This is the first time such a finding has been reported in this context.