Animesh Patari, Shantanu Pramanik and Tanmoy Mondal
Abstract
Purpose
The present study scrutinizes the relative performance of various near-wall treatments coupled with two-equation RANS models, with two aims: to explore the turbulence transport mechanism in a plane wall jet in terms of the kinetic energy budget, together with the significance of near-wall molecular and turbulent shear, and to select the model combination that reveals wall jet characteristics most efficiently.
Design/methodology/approach
A two-dimensional steady incompressible plane wall jet in a quiescent surrounding is simulated using the ANSYS Fluent solver. Three near-wall treatments, namely the Standard Wall Function (SWF), Enhanced Wall Treatment (EWT) and Menter-Lechner (ML) treatment, coupled with the Realisable, RNG and Standard k-ε models, as well as the Standard and Shear-Stress Transport (SST) k-ω models, are employed for this investigation.
Findings
The ML treatment slightly overestimated the budget components at the outer scale, whereas the k-ω models strikingly underestimated them. In the buffer layer at the inner scale, the SWF highly over-predicts turbulent production and dissipation, and the k-ω models over-predict dissipation. Appreciably accurate inner- and outer-scale k-budgets are observed with the EWT schemes. With a sufficiently resolved near-wall mesh, the Realisable model with EWT predicts the mean flow, turbulence characteristics and turbulence energy transport even better than the SST k-ω model.
Originality/value
Three distinct near-wall strategies are chosen for comparative performance analysis, focusing not only on the mean flow and turbulence characteristics but also on the turbulence energy budget, to identify the best combination, which has potential as a viable, low-cost alternative to LES and DNS for wall jet simulation in industrial applications.
Abstract
Purpose
This paper aims to improve the refractive index sensor performance for analytes with large refractive index by adopting the technology of microstructured fiber (MF) and surface plasmon resonance (SPR).
Design/methodology/approach
The structure adopts an MF with a hexagonal-lattice cladding composed of all-circular air holes, into which three defect regions are introduced. The liquid analyte to be tested fills the defect regions. A surface plasmon polariton mode is generated and coupled with the core mode, thus forming a refractive index sensing channel. When the resonance condition is satisfied, the resonance wavelength shifts with the refractive index of the liquid analyte. All parameters that may affect the performance of the sensor are numerically simulated, and the structure is optimized through a large number of calculations.
Findings
The results demonstrate that the maximum dynamic sensitivity (SR) can reach 24,260 nm/RIU and the average sensitivity (SR-AV) can reach 18,046 nm/RIU over the refractive index range from 1.42 to 1.47. Besides, the sensitivity linearity (R2) is approximately 0.965, and the resolution is 4.1 × 10⁻⁶ RIU. A comparison with published results shows that the proposed sensor has certain advantages over the sensors reported in those studies.
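The sensitivity figures above follow from the usual spectral-sensitivity definition S = Δλ_res/Δn_a. A minimal sketch of the computation, using illustrative (n, λ) pairs that are assumptions for demonstration, not data from the paper:

```python
# Hedged sketch: spectral sensitivity of an SPR refractive-index sensor
# from simulated resonance wavelengths. The (n, lambda) pairs below are
# illustrative placeholders, not results from the study.
n_analyte = [1.42, 1.43, 1.44, 1.45, 1.46, 1.47]               # refractive index
lam_res_nm = [1010.0, 1120.0, 1245.0, 1390.0, 1560.0, 1760.0]  # resonance peak (nm)

def sensitivities(n, lam):
    """Local sensitivity S = d(lambda_res)/dn between adjacent indices, nm/RIU."""
    return [(lam[i + 1] - lam[i]) / (n[i + 1] - n[i]) for i in range(len(n) - 1)]

S = sensitivities(n_analyte, lam_res_nm)
S_max = max(S)            # maximum (dynamic) sensitivity, nm/RIU
S_avg = sum(S) / len(S)   # average sensitivity, nm/RIU
# Assuming a 0.1 nm wavelength interrogation limit, the RI resolution is:
resolution = 0.1 / S_max  # RIU
```

The reported resolution of 4.1 × 10⁻⁶ RIU is consistent with this standard 0.1 nm/S_max convention applied to the paper's measured S_max.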
Originality/value
This work proposes an SPR-based refractive index sensor with a simple MF structure, offering a useful reference for the design and optimization of SPR-based MF sensors. Moreover, owing to its simple structure, high refractive index sensitivity and linear sensing performance, the sensor can play an important role in the detection of high-refractive-index liquid analytes.
Unmesa Ray, Cristian Arteaga, Yonghan Ahn and JeeWoong Park
Abstract
Purpose
Equipment failure is a critical factor in construction accidents, often leading to severe consequences. Therefore, this study addresses two significant gaps in construction safety research: (1) effectively using historical data to investigate equipment failure and (2) understanding the classification of equipment failure according to Occupational Safety and Health Administration (OSHA) standards.
Design/methodology/approach
Our research utilized a multi-stage methodology. We curated data from the OSHA database, distinguishing accidents involving equipment failures. Then we developed a framework using generative artificial intelligence (AI) and large language models (LLMs) to minimize manual processing. This framework employed a two-step prompting strategy: (1) classifying narratives that describe equipment failures and (2) analyzing these cases to extract specific failure details (e.g. names, types, categories). To ensure accuracy, we conducted a manual analysis of a subset of reports to establish ground truth and tested two different LLMs within our approach, comparing their performance against this ground truth.
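The two-step prompting strategy can be sketched as follows. This is a hedged illustration: `call_llm` is a hypothetical placeholder for a real LLM API call, stubbed here with a trivial keyword heuristic so the pipeline shape is runnable; the prompt wording and JSON schema are assumptions, not the paper's actual prompts.

```python
import json

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would query an LLM API here.
    # This stub uses a keyword heuristic purely to make the sketch runnable.
    narrative = prompt.split("NARRATIVE:", 1)[1].strip().lower()
    if "classify" in prompt.lower():
        return "yes" if "failed" in narrative or "failure" in narrative else "no"
    return json.dumps({"equipment_name": "crane"})  # illustrative extraction

def analyze_narrative(narrative: str):
    # Step 1: classify whether the narrative describes an equipment failure.
    answer = call_llm(
        "Classify: does this accident narrative describe an equipment "
        f"failure? Answer yes/no.\nNARRATIVE: {narrative}"
    )
    if answer.strip().lower() != "yes":
        return None
    # Step 2: extract structured failure details from positive cases only.
    details = call_llm(
        'Extract the equipment name as JSON {"equipment_name": ...}.\n'
        f"NARRATIVE: {narrative}"
    )
    return json.loads(details)
```

Filtering in step 1 before extracting in step 2 keeps the expensive structured-extraction prompt off the majority of narratives that involve no equipment failure.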
Findings
The tested LLMs demonstrated 95% accuracy in determining whether narratives describe equipment failures and 73% accuracy in extracting equipment names, enabling automated categorical identification. These findings highlight LLMs’ promising identification accuracy compared to manual methods.
Research limitations/implications
The research’s focus on equipment data not only validates the research framework but also highlights its potential for broader application across various accident categories beyond construction, extending into any domain with accessible accident narratives. Given that such data are essential for regulatory bodies like OSHA, the framework’s adoption could significantly enhance safety analysis and reporting, contributing to more robust safety protocols industry-wide.
Practical implications
Using the developed approach, the research enables us to use accident narratives, a reliable source of accident data, in accident analysis. It provides deeper insights than traditional data types, enabling a more detailed understanding of accidents at an unprecedented level. This enhanced understanding can significantly inform and improve worker safety training, education and safety policies, with the potential for broader applications across various safety-critical domains.
Originality/value
This research presents a novel approach to analyzing construction accident reports using AI and LLMs, significantly reducing manual processing time while maintaining high accuracy. By identifying equipment failures more efficiently, our work lays the groundwork for developing targeted safety protocols, contributing to overall safety improvements in construction practices and advancing data-driven analysis processes.
Raheleh Khosravi, Maryam Mogheiseh and Reza Hasanzadeh Ghasemi
Abstract
Purpose
The present study aims to design and simulate various types of deoxyribonucleic acid (DNA) origami-based nanopores and explore their stability under different temperatures and constraints. To create DNA origami nanopores, both one-layer and two-layer structures can be utilized.
Design/methodology/approach
One of the key applications of DNA origami structures involves the creation of nanopores, which have garnered significant interest for their diverse applications across multiple scientific disciplines. DNA origami nanopores can be studied individually and in combination with other structures. The structural stability of these nanopores across various temperature conditions is crucial for enabling the passage of diverse payloads.
Findings
Comparing these DNA origami structures can provide valuable insights into the performance of these nanopores under different conditions. The results indicate that two-layer nanopores exhibit better structural stability under various temperatures compared to one-layer nanopores. Additionally, small structural changes in two-layer nanopores enable them to maintain stability even at high temperatures.
Originality/value
In this paper, various DNA origami-based nanopores were designed and simulated, focusing specifically on one-layer and two-layer configurations. The two-layer nanopore consistently exhibited superior stability across both free and restrained scenarios, undergoing fewer structural changes compared to the one-layer nanopore. As temperatures increased, the two-layer nanopore remained less susceptible to deformation, maintaining closer to its original shape. Moreover, in the free scenario, the geometric shape of the two-layer nanopore demonstrated fewer variations than the one-layer nanopore.
Abstract
Purpose
The purpose of this study is to investigate how ethical leadership impacts employees’ innovative work behavior among public employees through the mediating role of group cohesiveness. This work further offers deeper insight into the moderating mechanism of openness to experience in the relationship between ethical leadership and employees’ innovative work behavior.
Design/methodology/approach
Three time-lagged sets of data (n = 532) were collected among Vietnamese public employees. The partial least squares – structural equation modeling method was applied to test the research hypotheses.
Findings
Ethical leadership positively relates to employees’ innovative work behavior. Furthermore, group cohesiveness plays a mediating role in the link between ethical leadership and employees’ innovative work behavior. The moderating impact of openness to experience between ethical leadership and employees’ innovative work behavior is supported.
Originality/value
This inquiry is probably the first attempt to explore the mechanism linking ethical leadership and employees’ innovative work behavior through the mediator of group cohesiveness. Additionally, this study extends current knowledge by investigating the moderating role of openness to experience in the nexus between ethical leadership and employees’ innovative work behavior.
Ning Huang, Qiang Du, Libiao Bai and Qian Chen
Abstract
Purpose
In recent decades, infrastructure has continued to develop as an important basis for social development and people's lives. Resource management of these large-scale projects has drawn immense concern because dozens of construction enterprises (CEs) often work together. In this situation, resource collaboration among enterprises has become a key measure to ensure project implementation. Thus, this study aims to propose a systematic multi-agent resource collaborative decision-making optimization model for large projects from a matching perspective.
Design/methodology/approach
The main contribution of this work is to advance current research by: (1) generalizing the resource matching decision-making problem and quantifying the relationship between CEs; (2) building the mathematical model based on the matching domain, through a comprehensive analysis of the resource input costs and benefits of each enterprise in the associated group, and incorporating prospect theory to map more realistic decisions; and (3) determining the optimal resource input in different situations according to the influencing factors of resource decision-making, such as cost, benefit and the attitude of decision-makers.
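The prospect-theory component can be illustrated with the standard Tversky-Kahneman value function, which is commonly used to model decision-makers' psychological preferences toward gains and losses. This is a generic sketch, not necessarily the paper's exact formulation; the parameter values are the classic 1992 estimates, not values fitted in this study.

```python
# Hedged sketch: the Tversky-Kahneman (1992) prospect-theory value function.
# alpha/beta control diminishing sensitivity; lam > 1 encodes loss aversion.
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of an outcome x relative to a reference point of 0:
    concave for gains, convex and steeper for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta
```

In a matching-decision model, such a function would replace raw monetary costs and benefits so that losses weigh more heavily than equal-sized gains in each enterprise's resource-input decision.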
Findings
Numerical experiments were used to verify the effectiveness of the multi-agent resource matching decision (MARMD) method. The results indicated that the model can guide optimal decision-making for each participating enterprise in the resource association group under different situations, and that the psychological preferences of decision-makers have an important influence on decision performance.
Research limitations/implications
While the MARMD method has been proposed in this research, it still has many limitations. The matching relationship between different resource types in CEs has not been fully analyzed, and future studies should establish more accurate parameters for decision-makers’ psychological preferences.
Practical implications
Compared with traditional projects, large-scale engineering construction is characterized by huge resource consumption and more participants. Although decision-makers can determine the matching relationship between related enterprises, this relationship is ambiguous, and its range widens with more participants or a more complex environment. The MARMD method provided in this paper is an effective methodological tool with clearer decision-making positioning and stronger practical operability, which can serve as a reference for large-scale project resource management.
Social implications
Large-scale engineering projects are complex infrastructure projects that ensure national security, increase economic development, improve people's lives and promote social progress. During the implementation of large-scale projects, CEs realize value-added through resource exchange and integration. Studying the optimal collaborative decision of multi-agent resources from a matching perspective can improve resource transformation efficiency and promote the development of large-scale engineering projects.
Originality/value
Current research on engineering resource decision-making lacks a matching relationship, which leads to unclear decision objectives, ambiguous decision processes and decision methods with poor operability. To address these issues, a novel approach is proposed to reveal the decision mechanism of multi-agent resource optimization in large-scale projects. This paper can inspire further research on large-scale project resource management.
Feng Kong and Kaixin Chen
Abstract
Purpose
In realistic multi-project scheduling, resources are not always shared among multiple projects, nor are they available to perform activities throughout the planning horizon. Besides, according to construction technology, some architectural jobs cannot be interrupted for any reason. However, these characteristics of resources and activities have not been fully studied, which may lead to reduced engineering quality and the failure of scheduling work. Therefore, this paper aims to model a multi-project scheduling problem with the above characteristics and provide an effective method that meets the actual needs of the construction industry.
Design/methodology/approach
A three-phase CPLEX with quota auction mechanism (TPCP–QAM) is developed to solve this problem, which significantly improves the solving performance of CPLEX by adjusting the search strategy and implementing a distributed procedure. In this approach, resources are dedicated to individual projects through a global coordination mechanism, while each project is independently scheduled by a local scheduling algorithm.
Findings
(1) For the proposed problem, CPLEX 2019's default search strategy (Auto) is far inferior to another search strategy (Multi-point) in optimizing the project total cost and average resource capacity. (2) Compared with the other two algorithms, TPCP–QAM has obvious advantages in the multi-project total cost (MPTC) and CPU time, especially for large-size instances. (3) Even though the number of non-working days may not be changed for the protection of labor resources, managers can reduce MPTC or shorten the multi-project total makespan (TMS) by appropriately adjusting the distribution of non-working days.
Originality/value
This paper fulfils an identified need to investigate how to complete a multi-project portfolio with the minimum cost while ensuring engineering quality under a practical multi-project scheduling environment.
Mohammad Yaghtin and Youness Javid
Abstract
Purpose
The purpose of this research is to address the complex multiobjective unrelated parallel machine scheduling problem with real-world constraints, including sequence-dependent setup times and periodic machine maintenance. The primary goal is to minimize total tardiness, earliness and total completion times simultaneously. This study aims to provide effective solution methods, including a Mixed-Integer Programming (MIP) model, an Epsilon-constraint method and the Nondominated Sorting Genetic Algorithm (NSGA-II), to offer valuable insights into solving large-sized instances of this challenging problem.
Design/methodology/approach
This study addresses a multiobjective unrelated parallel machine scheduling problem with sequence-dependent setup times and periodic machine maintenance activities. An MIP model is introduced to formulate the problem, and an Epsilon-constraint method is applied for a solution. To handle the NP-hard nature of the problem for larger instances, an NSGA-II is developed. The research involves the creation of 45 problem instances for computational experiments, which evaluate the performance of the algorithms in terms of proposed measures.
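The Epsilon-constraint idea behind the exact solution approach can be illustrated on a toy bi-objective instance: keep one objective (total tardiness) as the objective function while bounding another (total earliness) by a sweep of ε values. Here the toy instance is solved by brute-force enumeration rather than the paper's MIP model, and all instance numbers are illustrative assumptions.

```python
# Hedged sketch: epsilon-constraint method on a toy unrelated-parallel-machine
# instance (2 machines, 3 jobs, jobs processed in index order on their machine).
from itertools import product

p = [[3, 5], [4, 2], [6, 3]]   # p[j][m]: processing time of job j on machine m
due = [4, 5, 7]                # due dates

def objectives(assign):
    """Total tardiness and total earliness for a job->machine assignment."""
    t = [0, 0]                 # running completion time on each machine
    tard = earl = 0
    for j, m in enumerate(assign):
        t[m] += p[j][m]
        tard += max(0, t[m] - due[j])
        earl += max(0, due[j] - t[m])
    return tard, earl

def eps_constraint(eps):
    """Minimize total tardiness subject to total earliness <= eps."""
    points = [objectives(a) for a in product(range(2), repeat=len(p))]
    return min(t for t, e in points if e <= eps)

# Sweeping eps traces the trade-off between the two objectives.
frontier = {eps: eps_constraint(eps) for eps in (0, 1, 6)}
```

On real instances, each eps_constraint call would be one MIP solve, and the sweep over ε yields the set of Pareto-optimal schedules that NSGA-II then approximates for large instances.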
Findings
The research findings demonstrate the effectiveness of the proposed solution approaches for the multiobjective unrelated parallel machine scheduling problem. Computational experiments on 45 generated problem instances reveal that the NSGA-II algorithm outperforms the Epsilon-constraint method, particularly for larger instances. The algorithms successfully minimize total tardiness, earliness and total completion times, showcasing their practical applicability and efficiency in handling real-world scheduling scenarios.
Originality/value
This study contributes original value by addressing a complex multiobjective unrelated parallel machine scheduling problem with real-world constraints, including sequence-dependent setup times and periodic machine maintenance activities. The introduction of an MIP model, the application of the Epsilon-constraint method and the development of the NSGA-II algorithm offer innovative approaches to solving this NP-hard problem. The research provides valuable insights into efficient scheduling methods applicable in various industries, enhancing decision-making processes and operational efficiency.
Linlin Xie, Ziyi Yu and Xianbo Zhao
Abstract
Purpose
To meet an ever-increasing urbanization demand, urban complex projects have evolved into the development type of HOPSCA (an acronym for Hotel, Office, Park, Shopping mall, Convention and Apartment, representing a new type of urban complex). Its integrated functions, complex structures and superior siting expose HOPSCA’s construction phase to higher and more uncertain safety risks. Despite this, research on the construction safety risks of large urban complexes is scarce. This study addresses this gap by introducing the interval ordinal priority approach (Interval-OPA) method to build a safety risk assessment model for HOPSCA, targeting its construction safety risk management.
Design/methodology/approach
This study first identifies risk factors via a literature review, a field survey and three rounds of the Delphi method, forming a construction safety risk list for HOPSCA projects. Then, Interval-OPA is employed to create a safety risk assessment model, and its validity is confirmed through a representative case study of an ongoing project. Lastly, uncertainty and weighting analyses of the model results identify the most probable major construction accidents and safety risk factors, along with targeted prevention strategies for the construction phase of urban complex projects.
Findings
The findings reveal that (1) there are 33 construction safety risks in HOPSCA’s construction phase across four aspects, “man-machine-environment-management”; (2) object strikes are the most prominent accident type and need to be prioritized for prevention, especially when managerial risks arise; (3) falls from height carry the highest level of uncertainty, representing an ambiguous area for safety management; and (4) the risk evaluation identifies nine critical construction safety risk factors for the HOPSCA project, with most management-level risk factors exhibiting high uncertainty. This study explores and provides effective measures to combat these factors.
Originality/value
This study innovatively applies the Interval-OPA method to risk assessment, offering a fitting method for evaluating the HOPSCA project’s construction safety risks and accidents. The model aids decision-makers in appropriate risk classification and selection of scientific risk prevention strategies, enhances HOPSCA’s construction safety management system and even benefits all under-construction projects, promoting the construction industry’s sustainable development.
Kai Li, Cheng Zhu, Jianjiang Wang and Junhui Gao
Abstract
Purpose
With burgeoning interest in the low-altitude economy, applications of long-endurance unmanned aerial vehicles (LE-UAVs) have increased in remote logistics distribution. Given LE-UAVs’ advantages of wide coverage, strong versatility and low cost, in addition to logistics distribution, they are widely used in military reconnaissance, communication relay, disaster monitoring and other activities. With limited autonomous intelligence, LE-UAVs require regular periodic and non-periodic control from ground control resources (GCRs) during flights and mission execution. However, the lack of GCRs significantly restricts the applications of LE-UAVs in parallel.
Design/methodology/approach
We consider the constraints of GCRs, investigating an integrated optimization problem of multi-LE-UAV mission planning and GCR allocation (Multi-U&G IOP). The problem integrates GCR allocation into traditional multi-UAV cooperative mission planning. The coupled decision of mission planning and GCR allocation enlarges the decision space and adds complexity to the problem’s structure. Through characterizing the problem, this study establishes a mixed integer linear programming (MILP) model for the integrated optimization problem. To solve the problem, we develop a three-stage iterative optimization algorithm combining a hybrid genetic algorithm with local search-variable neighborhood descent, heuristic conflict elimination and post-optimization of GCR allocation.
Findings
Numerical experimental results show that our developed algorithm solves the problem efficiently and outperforms the solver CPLEX. For small-scale instances, our algorithm obtains optimal solutions in less time than CPLEX. For large-scale instances, our algorithm produces better results in one hour than CPLEX does. Implementing our approach allows efficient coordination of multiple UAVs, enabling faster mission completion with a minimal number of GCRs.
Originality/value
Drawing on the interplay between LE-UAVs and GCRs and considering the practical applications of LE-UAVs, we propose the Multi-U&G IOP problem. We formulate this problem as a MILP model aiming to minimize the maximum task completion time (makespan). Furthermore, we present a relaxation model for this problem. To efficiently address the MILP model, we develop a three-stage iterative optimization algorithm. Subsequently, we verify the efficacy of our algorithm through extensive experimentation across various scenarios.