Search results
1 – 10 of over 17,000
Yanhu Guo, Guangbin He and Andrew T. Hsu
Abstract
Proposes the use of genetic algorithms to assist the development of turbulence models. A variable Schmidt number model for scalar mixing in jet-in-crossflows is developed through theoretical analysis, and a uniform micro-genetic algorithm is implemented to optimize it. This is the first known application of the genetic algorithm (GA) technique to turbulence model development, and overall the GA technique worked exceptionally well for this problem in a cost-effective and time-efficient manner. A set of experimental data on a single round jet issued into a confined crossflow is used to calibrate and optimize the model constants with the micro-genetic algorithm, and three further sets of jet-in-crossflow data are used to validate the new model. The numerical results show that the proposed scheme of using genetic algorithms to develop turbulence models is very promising.
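For illustration only (a minimal sketch, not the authors' code): a micro-genetic algorithm of the kind described above, with a tiny population, elitism, uniform crossover and a restart on convergence, here calibrating two hypothetical model constants against a placeholder objective that stands in for the error norm computed from the jet-in-crossflow data.

// Hypothetical sketch of a micro-genetic algorithm: population of 5, elitism,
// uniform crossover, no mutation, restart when the population converges.
// The objective is a placeholder for the error norm against reference data;
// the constants, bounds and "true" values are purely illustrative.
#include <algorithm>
#include <array>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

struct Individual {
    std::array<double, 2> c{};   // two model constants to calibrate
    double fitness = 0.0;        // lower = better agreement with the data
};

// Placeholder objective: squared distance from an assumed "true" calibration.
double objective(const std::array<double, 2>& c) {
    return std::pow(c[0] - 0.7, 2) + std::pow(c[1] - 1.3, 2);
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> unif(0.1, 2.0);  // search bounds
    const int pop_size = 5;                                 // "micro": tiny population
    const int generations = 200;

    auto random_individual = [&] {
        Individual ind;
        for (double& x : ind.c) x = unif(rng);
        ind.fitness = objective(ind.c);
        return ind;
    };
    auto by_fitness = [](const Individual& a, const Individual& b) {
        return a.fitness < b.fitness;
    };

    std::vector<Individual> pop(pop_size);
    for (auto& ind : pop) ind = random_individual();

    for (int g = 0; g < generations; ++g) {
        std::sort(pop.begin(), pop.end(), by_fitness);       // elite ends up in slot 0

        // Restart on convergence: if everyone matches the elite, re-seed the rest.
        double spread = 0.0;
        for (const auto& ind : pop) spread += std::abs(ind.fitness - pop[0].fitness);
        if (spread < 1e-12) {
            for (int i = 1; i < pop_size; ++i) pop[i] = random_individual();
            continue;
        }

        // New generation: keep the elite, fill up by uniform crossover
        // between the elite and a randomly chosen parent.
        std::uniform_int_distribution<int> pick(0, pop_size - 1);
        std::bernoulli_distribution coin(0.5);
        std::vector<Individual> next = {pop[0]};
        while (static_cast<int>(next.size()) < pop_size) {
            const Individual& parent = pop[pick(rng)];
            Individual child;
            for (int k = 0; k < 2; ++k) child.c[k] = coin(rng) ? pop[0].c[k] : parent.c[k];
            child.fitness = objective(child.c);
            next.push_back(child);
        }
        pop = std::move(next);
    }

    std::sort(pop.begin(), pop.end(), by_fitness);
    std::cout << "calibrated constants: " << pop[0].c[0] << ", " << pop[0].c[1]
              << " (error " << pop[0].fitness << ")\n";
}

The absence of a mutation operator, compensated by the restart step, is what distinguishes a micro-GA from a conventional GA and keeps the number of flow-solver evaluations small.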
Robert E. Spekman, Derek A. Newton and Alexandra Ranson
Abstract
This case serves as an introduction to field sales management. A manager must address three sales representatives' ingrained behaviors in order to implement a major shift in marketing strategy. Students should recognize the nature of the "man-in-the-middle" squeeze: the manager caught between the pressure of implementing a new strategy from the top and the resistance to change from the bottom.
Abstract
This study was prompted by Appreciative Inquiry (AI) practitioners' calls for research on AI outcomes. Its goal was to identify the salient AI processes and levers and the rate of AI success and failure. The study was specific to U.S. municipalities because a researcher had found a probability of AI failure in that setting. In contrast, eight U.S. municipalities were identified in the literature as having used AI in 14 projects, all of which were successful even though resistance was present in three of them. A survey then identified 15 further AI initiatives that were successful even though resistance was present in eight, providing validation. The study used a sequential mixed-methods exploratory case study design consisting of a literature review and two unique instruments applied to three populations.
Christine Amsler, Robert James, Artem Prokhorov and Peter Schmidt
Abstract
The traditional predictor of technical inefficiency proposed by Jondrow, Lovell, Materov, and Schmidt (1982) is a conditional expectation. This chapter explores whether, and by how much, the predictor can be improved by using auxiliary information in the conditioning set. It considers two types of stochastic frontier models. The first type is a panel data model where composed errors from past and future time periods contain information about contemporaneous technical inefficiency. The second type is when the stochastic frontier model is augmented by input ratio equations in which allocative inefficiency is correlated with technical inefficiency. Compared to the standard kernel-smoothing estimator, a newer estimator based on a local linear random forest helps mitigate the curse of dimensionality when the conditioning set is large. Besides numerous simulations, there is an illustrative empirical example.
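For reference, under the textbook normal/half-normal specification (y_i = x_i'beta + v_i - u_i, with v_i normal, u_i half-normal and composed error epsilon_i = v_i - u_i), the Jondrow et al. (1982) predictor takes the closed form below; this is a standard result, not reproduced from the chapter itself, and the chapter's extensions replace the single epsilon_i in the conditioning set with a richer vector (past and future composed errors, or the residuals of the input ratio equations).

\[
  \mathbb{E}\!\left[u_i \mid \varepsilon_i\right]
    = \mu_{*i} + \sigma_*\,\frac{\phi(\mu_{*i}/\sigma_*)}{\Phi(\mu_{*i}/\sigma_*)},
  \qquad
  \mu_{*i} = -\frac{\sigma_u^2\,\varepsilon_i}{\sigma_u^2+\sigma_v^2},
  \qquad
  \sigma_*^2 = \frac{\sigma_u^2\,\sigma_v^2}{\sigma_u^2+\sigma_v^2},
\]

where \phi and \Phi denote the standard normal density and distribution function.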
Abstract
Purpose
The purpose of this paper is to improve Kingdon's (1984, 2011) concept of policy entrepreneurs (PE) with regard to the theoretical development of their definition, identification and level of agency by supplementing it with elements of Schmidt's (2008, 2010, 2011, 2012) sentient agents. The improved concept of discursive policy entrepreneurs (DPEs) is then applied in an in-depth case study of the agenda setting process for micro and macro whole-of-government accounting in Australia in the late 1990s and early 2000s.
Design/methodology/approach
Based on the concept of DPEs, a series of operationalised characteristics and proxies are developed to identify them and describe their behaviour. These are then applied in the case study. The two main data sources are semi-structured in-depth interviews and archival documents.
Findings
The findings show that the focus on DPEs’ discursive and coordination activities is critical for identifying and investigating the key actors of the Generally Accepted Accounting Principles (GAAP)/Government Finance Statistics (GFS) harmonisation agenda setting process. The study also finds that the two relevant decision-making bodies, the Financial Reporting Council and the Australian Accounting Standards Board, lost control over their agendas due to the actions of DPEs.
Research limitations/implications
The improved concepts of DPEs will allow researchers to better identify the main agents of policy change and differentiate them from other supporters of policy ideas. Due to the qualitative nature of the study, the findings are not necessarily generalisable.
Practical implications
The findings from this study can help participants in agenda setting processes to gain a better understanding of the actions and behaviours of DPEs. This might allow standard setting bodies to guard against undue influence by DPEs.
Originality/value
This is the first study to use Schmidt's concept of the sentient agent to address the limitations of Kingdon's concept of PEs, and to develop and apply characteristics for identifying PEs and their actions. It is also the only study to date to investigate the GAAP/GFS harmonisation agenda setting process.
Adam T. Schmidt, Jacquelynn Duron, Becca K. Bergquist, Alexandra C. Bammel, Kelsey A. Maloney, Abigail Williams-Butler and Gerri R. Hanten
Abstract
Purpose
Though prosocial attributes are linked to positive outcomes among justice-involved adolescents and are a mainstay of numerous interventions, few measures have been specifically designed to evaluate prosocial functioning within this population. Although multiple instruments measuring aspects of prosocial behavior exist, they were not designed for youth in juvenile justice settings. This study aims to provide a preliminary validation of a new measure of prosocial attributes, the Prosocial Status Inventory (PSI), designed to evaluate in greater depth the prosocial functioning of urban, justice-involved youth.
Design/methodology/approach
Youth (n = 51) were recruited as part of a larger study and were participants in a community-based mentoring program in a large, urban county in the Southern USA. They completed the PSI at baseline, before beginning the program, and the authors obtained follow-up data on recidivism from the county juvenile justice department.
Findings
Higher PSI scores were associated with a lower rate of recidivism and a decrease in offending frequency over a 12-month follow-up period.
Originality/value
The current findings complement previous work suggesting that prosocial attributes are measurable and related to important outcomes among justice-involved youth, and they support the utility of strengths-based treatment approaches. Moreover, the study provides preliminary evidence of the utility of a new self-report measure for assessing these traits within a juvenile justice population.
Alex A. Schmidt, Alice de Jesus Kozakevicius and Stefan Jakobsson
Abstract
Purpose
The current work aims to present a parallel code, using the open multi-processing (OpenMP) programming model, for an adaptive multi-resolution high-order finite difference scheme for solving 2D conservation laws, and to compare the efficiencies obtained with those of a previous message passing interface formulation of the same serial scheme applied to the same type of 2D conservation laws.
Design/methodology/approach
The serial version of the code is naturally suitable for parallelization because the spatial operator is formulated as a splitting scheme per direction, in which the flux components are computed numerically by a Lax–Friedrichs flux splitting independently for each row or column. High-order approximations of the numerical fluxes are computed by the third-order essentially non-oscillatory (ENO) and fifth-order weighted essentially non-oscillatory (WENO) interpolation schemes, assuming sparse grids in each direction. Grid adaptivity is obtained by a cubic interpolating wavelet transform applied in each space dimension, combined with a thresholding operator. Time integration uses a third-order TVD Runge–Kutta method, written out below for reference.
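For reference, the third-order TVD (strong-stability-preserving) Runge–Kutta method is usually written in the Shu–Osher form for a semi-discrete system u_t = L(u); this is a standard statement, not reproduced from the paper:

\begin{aligned}
  u^{(1)} &= u^{n} + \Delta t\,L\!\left(u^{n}\right),\\
  u^{(2)} &= \tfrac{3}{4}\,u^{n} + \tfrac{1}{4}\,u^{(1)} + \tfrac{1}{4}\,\Delta t\,L\!\left(u^{(1)}\right),\\
  u^{n+1} &= \tfrac{1}{3}\,u^{n} + \tfrac{2}{3}\,u^{(2)} + \tfrac{2}{3}\,\Delta t\,L\!\left(u^{(2)}\right).
\end{aligned}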
Findings
The parallel formulation is implemented automatically at compile time by the OpenMP library routines and is virtually transparent to the programmer. This greatly simplifies the management and updating of the adaptive grid compared with what is required by other parallel approaches. Numerical simulation results and the large speedups obtained for the Euler equations of gas dynamics highlight the efficiency of the OpenMP approach; a minimal sketch of the parallelization pattern follows.
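A minimal sketch (assumed structure, not the authors' code) of why the directional splitting parallelizes so transparently with OpenMP: each row of the x-sweep can be updated independently, so a single pragma over the outer loop distributes the work. The first-order Lax–Friedrichs update stands in for the ENO/WENO reconstruction, and lf_row_update, the grid sizes and the advection speed are illustrative names and values.

// Hypothetical sketch: directional splitting makes each row independent, so
// one OpenMP pragma over the outer loop parallelizes the x-sweep.  The
// per-row update is a first-order Lax-Friedrichs stand-in for ENO/WENO.
#include <vector>

// Stand-in per-row update for u_t + a u_x = 0 on a periodic row of length nx.
void lf_row_update(std::vector<double>& u, int row, int nx,
                   double a, double dt_over_dx) {
    std::vector<double> un(u.begin() + row * nx, u.begin() + (row + 1) * nx);
    for (int i = 0; i < nx; ++i) {
        int im = (i - 1 + nx) % nx, ip = (i + 1) % nx;
        // Lax-Friedrichs: average of neighbours minus a centred flux difference.
        u[row * nx + i] = 0.5 * (un[im] + un[ip])
                        - 0.5 * a * dt_over_dx * (un[ip] - un[im]);
    }
}

void x_sweep(std::vector<double>& u, int nx, int ny, double a, double dt_over_dx) {
    // Rows share no data, so no locks or explicit domain decomposition are
    // needed; OpenMP distributes the iterations of the j-loop across threads.
    #pragma omp parallel for schedule(static)
    for (int j = 0; j < ny; ++j)
        lf_row_update(u, j, nx, a, dt_over_dx);
}

int main() {
    const int nx = 64, ny = 64;
    std::vector<double> u(nx * ny, 1.0);
    u[(ny / 2) * nx + nx / 2] = 2.0;               // small initial bump
    x_sweep(u, nx, ny, /*a=*/1.0, /*dt_over_dx=*/0.4);
    return 0;
}

Compiling with -fopenmp (GCC/Clang) activates the pragma; without it, the same source compiles and runs serially, which is the transparency referred to above.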
Research limitations/implications
The resulting speedups reflect the effectiveness of the OpenMP approach but are, to a large extent, limited by the hardware used (two Intel Xeon E5-2620 processors, each with 6 cores and 2 threads per core, hyper-threading enabled). As the number of OpenMP threads increases, the code starts to make explicit use of the second logical thread available in each E5-2620 core and efficiency drops. The speedup peak is reached at about 22 to 23 threads, close to the possible maximum of 24. This peak reflects the hardware configuration; the true software limit should lie well beyond this value.
Practical implications
So far, no attempt has been made to parallelize other possible code segments (for instance, the ENO-WENO-TVD code lines that process the different data components), which could push the speedup limit even higher. The fact that the speedup peak lies close to the present hardware limit reflects the scalability of both the OpenMP programming model and the splitting scheme. Consequently, it is likely that the speedup peak of the OpenMP approach for this kind of problem formulation will remain close to the physical (and/or logical) limit of the hardware used.
Social implications
This work is the result of a successful collaboration between researchers from two institutions, one internationally well known, with long-term experience in applied mathematics for industrial applications, and the other at an early stage of international academic engagement. This scientific partnership therefore has the potential to promote further knowledge exchange involving students and other collaborators.
Originality/value
The proposed methodology (the use of the OpenMP programming model for the wavelet-adaptive splitting scheme) is original and contributes to an area of very active research in recent years, namely adaptive methods for conservation laws and their parallel formulations, which is of great interest to the wider scientific community.
Jerome L. Antonio, Alexander Lennart Schmidt, Dominik K. Kanbach and Natanya Meyer
Abstract
Purpose
Entrepreneurial ventures aspiring to disrupt existing market incumbents often use business-model innovation to increase the attractiveness of their offerings. A value proposition is the central element of a business model, and is critical for this purpose. However, how entrepreneurial ventures modify their value propositions to increase the attractiveness of their comparatively inferior offerings is not well understood. The purpose of this paper is to analyze the value proposition innovation (VPI) of aspiring disruptors.
Design/methodology/approach
The authors used a flexible pattern matching approach to ground the inductive findings in extant theory. The authors conducted 21 semi-structured interviews with managers from startups in the global electric vehicle industry.
Findings
The authors developed a framework showing two factors, determinants and tactics, that play a key role in VPI and are connected by a continuous feedback loop. Directed by the determinants (cognitive antecedents, development drivers and realization capabilities), aspiring disruptors set the scope, focus and priorities of various configuration and support tactics to enable and secure the success of their value proposition.
Originality/value
The authors contribute to theory by showing how cognitive antecedents, development drivers and capabilities determine VPI tactics to disrupt existing market incumbents, furthering the understanding of configuration tactics. The results have important implications for disruptive innovation theory, and entrepreneurship research and practice, as they offer an explanatory framework to analyze strategies of aspiring disruptors who increase the attractiveness of sustainable technologies, thereby accelerating their diffusion.
Abstract
Despite its central role in the influence process, power has largely been overlooked by scholars seeking to understand global leaders' influence over their constituents. As a consequence, we currently have limited understanding of the varieties of power that global leaders hold, how power is exercised in global contexts, and what impact exercising power has in global organizations. The intended purpose of this chapter is to mobilize research on this important topic through systematic review. The review is organized around the following guiding questions: (i) how is power defined in global leadership research? (ii) what power bases do global leaders possess? (iii) how do global leaders exercise power? (iv) what factors influence global leaders' exercise of power? and (v) what are the outcomes of global leaders' exercise of power? Based on a synthesis of extant insights, this chapter develops a foundation for future research on power in global leadership by mapping critical knowledge gaps and outlining paths for further inquiry.
Juan Sebastian Gomez Bonilla, Maximilian Alexander Dechet, Jochen Schmidt, Wolfgang Peukert and Andreas Bück
Abstract
Purpose
The purpose of this paper is to investigate the effect of different heating approaches during thermal rounding of polymer powders on powder bulk properties such as particle size, shape and flowability, as well as on the yield of the process.
Design/methodology/approach
This study focuses on the rounding of commercial high-density polyethylene polymer particles in two different downer reactor designs using heated walls (indirect heating) and preheated carrier gas (direct heating). Powder bulk properties of the product obtained from both designs are characterized and compared.
Findings
Particle rounding with direct heating leads to a considerable increase in process yield and a reduction in powder agglomeration compared with the design using indirect heating, which in turn results in higher powder flowability. In terms of shape, indirect heating not only yields particles with higher sphericity but also entails substantial agglomeration of the rounded particles.
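For reference (the paper may use a different shape descriptor), a common definition of sphericity is Wadell's: the ratio of the surface area of the volume-equivalent sphere to the actual particle surface area,

\[
  \Psi = \frac{\pi^{1/3}\,(6V_p)^{2/3}}{A_p},
\]

where V_p is the particle volume and A_p its surface area; \Psi = 1 for a perfect sphere and decreases as the particle becomes less round.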
Originality/value
Shape modification via thermal rounding is the decisive step for the success of a top-down process chain for selective laser sintering powders with excellent flowability, starting with polymer particles from comminution. This report provides new information on the influence of the heating mode (direct/indirect) on the performance of the rounding process and particle properties.