Nima Gerami Seresht, Rodolfo Lourenzutti, Ahmad Salah and Aminah Robinson Fayek
Abstract
Due to the increasing size and complexity of construction projects, construction engineering and management involves the coordination of many complex and dynamic processes and relies on the analysis of uncertain, imprecise and incomplete information, including subjective and linguistically expressed information. Various modelling and computing techniques, including fuzzy hybrid techniques, have been used by construction researchers and applied to practical construction problems in order to overcome these challenges. Fuzzy hybrid techniques combine the human-like reasoning capabilities of fuzzy logic with the capabilities of other techniques, such as optimization, machine learning, multi-criteria decision-making (MCDM) and simulation, to capitalise on their strengths and overcome their limitations. Based on a review of the construction literature, this chapter identifies the most common types of fuzzy hybrid techniques applied to construction problems and reviews selected papers in each category to illustrate their capabilities for addressing construction challenges. Finally, this chapter discusses areas for future development of fuzzy hybrid techniques that will increase their capabilities for solving construction-related problems. The contributions of this chapter are threefold: (1) the limitations of some standard techniques for solving construction problems are discussed, as are the ways that fuzzy methods have been hybridized with these techniques to address those limitations; (2) a review of existing applications of fuzzy hybrid techniques in construction is provided to illustrate the capabilities of these techniques for solving a variety of construction problems; and (3) potential improvements in each category of fuzzy hybrid technique in construction are identified as areas for future research.
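To make the idea of a fuzzy hybrid technique concrete, the following minimal Python sketch combines triangular fuzzy numbers with a simple weighted-aggregation MCDM step to rank construction alternatives rated linguistically. The scale, weights, criteria and alternatives are hypothetical illustrations, not examples taken from the chapter.

```python
# Minimal sketch of a fuzzy hybrid MCDM step: linguistic ratings are mapped to
# triangular fuzzy numbers (TFNs), aggregated with criterion weights, and
# defuzzified to rank alternatives.  All terms, weights and ratings below are
# hypothetical illustrations.

# Linguistic scale expressed as TFNs (low, mode, high) on a 0-10 scale.
SCALE = {
    "poor":      (0.0, 1.0, 3.0),
    "fair":      (2.0, 4.5, 7.0),
    "good":      (6.0, 7.5, 9.0),
    "excellent": (8.0, 9.5, 10.0),
}

def tfn_scale(tfn, w):
    """Multiply a TFN by a crisp, non-negative weight."""
    a, b, c = tfn
    return (a * w, b * w, c * w)

def tfn_add(t1, t2):
    """Add two TFNs component-wise."""
    return tuple(x + y for x, y in zip(t1, t2))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

def fuzzy_score(ratings, weights):
    """Aggregate linguistic ratings (one per criterion) into a crisp score."""
    total = (0.0, 0.0, 0.0)
    for criterion, term in ratings.items():
        total = tfn_add(total, tfn_scale(SCALE[term], weights[criterion]))
    return defuzzify(total)

# Hypothetical evaluation of two construction alternatives on three criteria.
weights = {"cost": 0.5, "schedule": 0.3, "safety": 0.2}
alternatives = {
    "precast":      {"cost": "good", "schedule": "excellent", "safety": "fair"},
    "cast-in-situ": {"cost": "fair", "schedule": "good", "safety": "good"},
}

ranking = sorted(alternatives, key=lambda a: fuzzy_score(alternatives[a], weights), reverse=True)
for name in ranking:
    print(f"{name}: {fuzzy_score(alternatives[name], weights):.2f}")
```

In a full fuzzy hybrid method, the aggregation step would typically be coupled with an optimization, simulation or machine learning component rather than a fixed scoring rule.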
Waqar Ahmed Khan, S.H. Chung, Muhammad Usman Awan and Xin Wen
Abstract
Purpose
The purpose of this paper is threefold: to review the categories of optimization algorithms (techniques) needed to improve the generalization performance and learning speed of the feedforward neural network (FNN); to discover the change in research trends by analyzing all six categories (i.e. gradient learning algorithms for network training, gradient-free learning algorithms, optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks, and metaheuristic search algorithms) collectively; and to recommend new research directions for researchers and help users understand the algorithms' real-world applications in solving complex management, engineering and health sciences problems.
Design/methodology/approach
The FNN has gained much attention from researchers over the last few decades because of its ability to support more informed decisions. The literature survey focuses on the learning algorithms and optimization techniques proposed in the last three decades. This paper (Part II) is an extension of Part I. For the sake of simplicity, the paper entitled “Machine learning facilitated business intelligence (Part I): Neural networks learning algorithms and applications” is referred to as Part I. To keep the study consistent with Part I, the approach and survey methodology in this paper are kept similar to those in Part I.
Findings
Combining the work performed in Part I, the authors studied a total of 80 articles identified through searches on popular keywords. The FNN learning algorithms and optimization techniques identified in the selected literature are classified into six categories based on their problem identification, mathematical model, technical reasoning and proposed solution. In Part I, the two categories focusing on learning algorithms (i.e. gradient learning algorithms for network training and gradient-free learning algorithms) were reviewed together with their real-world applications in management, engineering and health sciences. Therefore, in the current paper, Part II, the remaining four categories, which explore optimization techniques (i.e. optimization algorithms for learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks, and metaheuristic search algorithms), are studied in detail. The explanation of each algorithm is enriched by discussing its technical merits, limitations and applications within its respective category. Finally, the authors recommend new research directions that can contribute to strengthening the literature.
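As a minimal illustration of two of the categories named above (learning-rate optimization and overfitting control), the sketch below trains a one-hidden-layer FNN with plain gradient descent, a step-decay learning rate and early stopping on a held-out split. It is a generic NumPy sketch under assumed data and hyperparameters, not an implementation of any specific algorithm surveyed in the paper.

```python
import numpy as np

# Minimal FNN sketch: one hidden layer, sigmoid activations, squared-error loss,
# step-decay learning rate and early stopping on a validation split.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy regression data: y = sin(x) with noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

hidden = 10
W1 = rng.standard_normal((1, hidden)) * 0.5
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, 1)) * 0.5
b2 = np.zeros(1)

def forward(X):
    H = sigmoid(X @ W1 + b1)
    return H, H @ W2 + b2

lr, best_val, patience, wait = 0.1, np.inf, 50, 0
for epoch in range(5000):
    if epoch and epoch % 1000 == 0:
        lr *= 0.5                        # step-decay learning-rate schedule
    H, out = forward(X_tr)
    err = out - y_tr                     # gradient of 0.5*MSE w.r.t. the output
    # Backpropagation through the two layers.
    gW2 = H.T @ err / len(X_tr)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1 - H)
    gW1 = X_tr.T @ dH / len(X_tr)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2
    val = float(np.mean((forward(X_va)[1] - y_va) ** 2))
    if val < best_val - 1e-6:
        best_val, wait = val, 0
    else:
        wait += 1
        if wait >= patience:             # early stopping to limit overfitting
            break

print(f"stopped at epoch {epoch}, validation MSE {best_val:.4f}")
```

The surveyed algorithms replace these two simple ingredients (the decay schedule and the stopping rule) with more principled learning-rate, topology or regularization strategies.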
Research limitations/implications
Contributions to the FNN are increasing rapidly because of its ability to support reliable, informed decisions. As with the learning algorithms reviewed in Part I, the focus here is to enrich the comprehensive study by reviewing the remaining categories, which concern optimization techniques. However, future efforts may be needed to incorporate other algorithms into the six identified categories, or to suggest new categories, in order to continuously monitor the shift in research trends.
Practical implications
The authors studied the shift in research trends over three decades by collectively analyzing the learning algorithms and optimization techniques with their applications. This may help researchers identify future research gaps for improving generalization performance and learning speed, and help users understand the application areas of the FNN. For instance, research contributions in FNN over the last three decades have shifted from complex gradient-based algorithms to gradient-free algorithms, from trial-and-error fixed-topology approaches with a preset number of hidden units to cascade topologies, from initial guessing of hyperparameters to their analytical calculation, and from algorithms converging at a local minimum to algorithms converging at a global minimum.
Originality/value
Existing literature surveys compare algorithms, identify their application areas and focus on specific techniques; as a result, they may not identify algorithm categories, shifts in research trends over time, frequently analyzed application areas, common research gaps or collective future directions. Parts I and II attempt to overcome these limitations by classifying articles into six categories covering a wide range of algorithms proposed to improve the FNN's generalization performance and convergence rate. The classification of algorithms into six categories helps to analyze the shift in research trends, which makes the classification scheme significant and innovative.
Qasim Zaheer, Mir Majaid Manzoor and Muhammad Jawad Ahamad
Abstract
Purpose
The purpose of this article is to analyze the optimization process in depth, elaborating on the components of the entire process and the techniques used. Researchers have been drawn to the expanding trend of optimization since the turn of the century, and the rate of published research can be used to measure the progress of this trend. This study helps readers understand the optimization process and different algorithms, in addition to their applications, keeping in mind that current computational power has increased their implementation in several engineering applications.
Design/methodology/approach
A two-part analysis has been carried out of the optimization process and of the approaches used to address optimization problems, whose implementation has increased with computational power. The first part focuses on a thorough examination of the optimization process, its objectives and the development of its stages. The second part evaluates established optimization techniques as well as some new ones that have emerged to overcome the above-mentioned problems.
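To ground the generic stages of the optimization process described above (formulate an objective, impose constraints, select an algorithm, iterate to a solution), here is a minimal, hypothetical example: sizing a rectangular beam section for minimum cross-sectional area subject to a simple section-modulus requirement, solved with SciPy's SLSQP routine. The numbers, bounds and capacity expression are illustrative assumptions, not values from the article.

```python
from scipy.optimize import minimize

# Hypothetical sizing problem: choose beam width b and depth d (in metres) to
# minimise cross-sectional area while providing at least a required elastic
# section modulus S = b*d**2/6.  The demand and bounds are made up purely to
# illustrate the optimization workflow.

S_REQUIRED = 0.012  # m^3, assumed demand

def objective(x):
    b, d = x
    return b * d                             # cross-sectional area to minimise

def capacity_margin(x):
    b, d = x
    return b * d**2 / 6.0 - S_REQUIRED       # >= 0 when the section is adequate

result = minimize(
    objective,
    x0=[0.3, 0.5],                           # initial guess
    method="SLSQP",
    bounds=[(0.2, 0.6), (0.3, 1.0)],         # practical size limits
    constraints=[{"type": "ineq", "fun": capacity_margin}],
)

b_opt, d_opt = result.x
print(f"optimal width  b = {b_opt:.3f} m")
print(f"optimal depth  d = {d_opt:.3f} m")
print(f"area = {result.fun:.4f} m^2, feasible: {capacity_margin(result.x) >= -1e-9}")
```

The same skeleton (objective, constraints, bounds, solver) carries over to the structural, geotechnical, hydraulic and transportation applications discussed in the article; only the models and the chosen algorithm change.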
Findings
This paper provides detailed knowledge of optimization, several of its approaches and their applications in civil engineering, i.e. structural, geotechnical, hydraulic, transportation and many more. It also highlights emerging techniques for which exploratory studies are still lacking and should be undertaken soon.
Originality/value
Optimization processes have been studied in engineering for a very long time, but current computational power has increased their implementation in several engineering applications. Besides that, different techniques and their prediction modes often require high computational strength; such demands can be mitigated by using different techniques to reduce computational cost and increase accuracy.
Abstract
Purpose
This study aims to examine the theoretical foundations for multivariate portfolio optimization algorithms under illiquid market conditions. Special emphasis is devoted to the application of a risk engine, based on the contemporary concept of liquidity-adjusted value-at-risk (LVaR), to the multivariate optimization of investment portfolios.
Design/methodology/approach
This paper examines the modeling parameters of the LVaR technique under event market settings and discusses how to integrate asset liquidity risk into LVaR models. Finally, the authors discuss scenario optimization algorithms for the assessment of structured investment portfolios and present a detailed operational methodology for computer programming purposes and prospective research design, supported by a graphical flowchart.
Findings
Using the proposed algorithms, the portfolio/risk manager can specify different closeout horizons and dependence measures and calculate the necessary LVaR and the resulting investable portfolios. In addition, portfolio managers can compare the return/risk ratio and asset allocation of the obtained investable portfolios, under different liquidation horizons, with those of the conventional Markowitz mean-variance approach.
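The sketch below illustrates, in highly simplified form, how a closeout horizon can enter a portfolio screen: parametric VaR is computed from an assumed covariance matrix and scaled by a square-root-of-horizon factor standing in for a liquidity adjustment, then random long-only portfolios are ranked by return per unit of LVaR. The returns, volatilities, correlations and the scaling rule are assumptions for illustration; the paper's risk engine is more elaborate than this.

```python
import numpy as np

# Minimal sketch of liquidity-adjusted portfolio screening.  All inputs and the
# liquidity scaling are hypothetical; this is not the paper's LVaR engine.

rng = np.random.default_rng(1)
Z_99 = 2.33                     # one-tailed 99% normal quantile

mu = np.array([0.08, 0.05, 0.11])                 # assumed annual returns
vol = np.array([0.15, 0.10, 0.25])                # assumed annual volatilities
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.1],
                 [0.2, 0.1, 1.0]])
cov = np.outer(vol, vol) * corr

def lvar(weights, horizon_days):
    """Liquidity-adjusted VaR as a fraction of portfolio value (assumed scaling)."""
    sigma_p = float(np.sqrt(weights @ cov @ weights))
    liquidity_factor = np.sqrt(horizon_days)      # crude closeout-horizon scaling
    return Z_99 * sigma_p * np.sqrt(1.0 / 252.0) * liquidity_factor

def random_weights(n):
    w = rng.random(n)
    return w / w.sum()                            # long-only, fully invested

best = None
for _ in range(20_000):                           # crude search over portfolios
    w = random_weights(3)
    risk = lvar(w, horizon_days=5)
    ratio = float(mu @ w) / risk                  # return per unit of LVaR
    if best is None or ratio > best[0]:
        best = (ratio, w, risk)

ratio, w, risk = best
print("weights:", np.round(w, 3), f" LVaR(5d): {risk:.4f}  return/LVaR: {ratio:.2f}")
```

Rerunning the search with different `horizon_days` values mimics the comparison of investable portfolios across liquidation horizons described above.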
Practical implications
The examined optimization algorithms and modeling techniques have important practical applications for portfolio management and risk assessment, and can have many uses within machine learning and artificial intelligence, expert systems and smart financial applications, financial technology (FinTech) and big data environments. In addition, they provide key real-world implications for portfolio/risk managers, treasury directors, risk management executives, policymakers and financial regulators seeking to comply with the requirements of Basel III best practices on liquidity risk.
Originality/value
The proposed optimization algorithms can aid in advancing portfolio selection and management in financial markets by assessing investable portfolios subject to meaningful operational and financial constraints. Furthermore, the robust risk algorithms and portfolio optimization techniques can aid in solving some real-world dilemmas under stressed and adverse market conditions, such as the effect of liquidity when it dries up in financial and commodity markets, the impact of correlation factors when their signs switch and the integration of the influence of the nonlinear and non-normal distribution of assets’ returns into portfolio optimization and management.
Jinlin Gong, Frédéric Gillon and Nicolas Bracikowski
Abstract
Purpose
This paper aims to investigate three low-evaluation-budget optimization techniques: output space mapping (OSM), manifold mapping (MM) and Kriging-OSM. Kriging-OSM is an original approach having high-order mapping.
Design/methodology/approach
The electromagnetic device to be optimally sized is a five-phase linear induction motor, represented through two levels of modeling: coarse (a Kriging model) and fine. The comparison of the three optimization techniques on the five-phase linear induction motor is discussed.
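To illustrate the output space mapping idea behind this comparison, here is a minimal additive-correction OSM loop on a one-dimensional toy problem. The "fine" and "coarse" response functions and the target value are made-up stand-ins for the motor's finite-element model and its cheap surrogate, not the models used in the paper.

```python
import numpy as np

# Minimal output space mapping (OSM) sketch with an additive output correction:
# at each iterate the cheap coarse response is shifted so that it matches the
# expensive fine response at the current design, and the shifted surrogate is
# re-optimised against a target response.  All functions and values are toys.

TARGET = 4.0                                # desired response value (assumed)

def fine_response(x):                       # expensive, accurate response
    return x ** 2 + 0.4 * np.sin(3.0 * x)

def coarse_response(x):                     # cheap, biased response
    return 0.9 * x ** 2 + 0.3

def argmin_1d(f, lo=0.0, hi=4.0, n=4001):
    """Brute-force minimiser of a cheap 1-D function on [lo, hi]."""
    xs = np.linspace(lo, hi, n)
    return xs[np.argmin(f(xs))]

# Start from the design that makes the uncorrected coarse response hit the target.
x_k = argmin_1d(lambda x: (coarse_response(x) - TARGET) ** 2)
for it in range(15):
    shift = fine_response(x_k) - coarse_response(x_k)   # align outputs at x_k
    corrected = lambda x, s=shift: coarse_response(x) + s
    x_next = argmin_1d(lambda x: (corrected(x) - TARGET) ** 2)
    print(f"iter {it}: x = {x_next:.4f}, fine response = {fine_response(x_next):.4f}")
    if abs(x_next - x_k) < 1e-4:            # design-space convergence
        break
    x_k = x_next
```

MM and Kriging-OSM replace this constant output shift with higher-order corrections (and, for Kriging-OSM, a surrogate rebuilt from the accumulated fine-model evaluations), which is why they cope better with a poor-quality coarse model.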
Findings
The optimization results show that OSM takes more time and more iterations to converge to the optimal solution than MM and Kriging-OSM. This is mainly because of the poor quality of the initial Kriging model. In the case of a high-quality coarse model, the OSM technique would dominate the other two techniques. In the case of a poor-quality coarse model, the MM and Kriging-OSM techniques are more efficient at converging to the accurate optimum.
Originality/value
Kriging-OSM is an original approach having high-order mapping. An advantage of this new technique is its capability to provide a sufficiently accurate model for each objective and constraint function, which makes the coarse model converge toward the fine model more effectively.
Ibrahim T. Teke and Ahmet H. Ertas
Abstract
Purpose
The paper's goal is to examine and illustrate the practical uses of submodeling in finite element modeling for topology optimization and stress analysis, and to demonstrate how submodeling, more specifically a 1D approach, can reliably and effectively produce optimal solutions for challenging structural problems. The paper aims to demonstrate the usefulness of submodeling in obtaining converged solutions for stress analysis and optimized geometry for improved fatigue life by studying a cantilever beam case and using beam formulations. To guarantee the precision and dependability of the optimization process, the developed approach is also validated through experimental testing, such as 3-point bending tests on 3D-printed specimens. Using 3D finite element models, the 1D submodeling approach is further validated in the final step, showing a strong correlation with experimental data for deflection calculations.
Design/methodology/approach
The authors conducted a literature review to understand the existing research on submodeling and its practical applications in finite element modeling. They selected a cantilever beam case as a test subject to demonstrate stress analysis and topology optimization through submodeling, and they developed a 1D submodeling approach to streamline the optimization process and ensure the validity of the results. Beam formulations were used to optimize and validate the outcomes of the submodeling approach. The authors 3D-printed the optimized models and subjected them to a 3-point bending test to confirm the accuracy of the developed approach. They then employed 3D finite element models for submodeling to validate the 1D approach, focusing on specific finite elements for deflection calculations, and analyzed the results to demonstrate a strong correlation between the theoretical models and the experimental data, showcasing the effectiveness of the submodeling methodology in achieving optimal solutions efficiently and accurately.
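As a hedged illustration of what a 1D beam formulation contributes to such a workflow, the sketch below assembles standard Euler-Bernoulli beam elements for a cantilever with a tip load and checks the finite element tip deflection against the closed-form result P*L^3/(3*E*I). The geometry, material and load values are hypothetical, not those of the paper's specimens.

```python
import numpy as np

# 1D Euler-Bernoulli cantilever sketch: assemble two-node beam elements
# (transverse deflection + rotation per node), apply a tip load, and compare
# the FE tip deflection with the closed-form result P*L^3/(3*E*I).
# Geometry, material and load values are hypothetical.

E = 210e9            # Pa, steel
b, h = 0.02, 0.03    # m, rectangular cross-section
I = b * h**3 / 12.0  # second moment of area
L = 0.5              # m, beam length
P = 200.0            # N, tip load
n_el = 20            # number of beam elements

le = L / n_el
n_nodes = n_el + 1
ndof = 2 * n_nodes                       # (deflection, rotation) per node
K = np.zeros((ndof, ndof))

# Standard Euler-Bernoulli beam element stiffness matrix.
ke = (E * I / le**3) * np.array([
    [ 12,      6*le,  -12,      6*le   ],
    [ 6*le,  4*le**2, -6*le,  2*le**2  ],
    [-12,     -6*le,   12,     -6*le   ],
    [ 6*le,  2*le**2, -6*le,  4*le**2  ],
])

for e in range(n_el):
    dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
    K[np.ix_(dofs, dofs)] += ke

F = np.zeros(ndof)
F[-2] = -P                               # downward load at the free-end node

# Clamp node 0 (deflection and rotation fixed) and solve the reduced system.
free = np.arange(2, ndof)
u = np.zeros(ndof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])

fe_tip = u[-2]
analytic_tip = -P * L**3 / (3 * E * I)
print(f"FE tip deflection        : {fe_tip: .6e} m")
print(f"analytical tip deflection: {analytic_tip: .6e} m")
```

The appeal of the 1D submodel is exactly this: a few dozen degrees of freedom reproduce the deflection behaviour that a full 3D model captures with orders of magnitude more.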
Findings
The findings of the paper are as follows:
1. The use of submodeling, specifically a 1D submodeling approach, proved to be effective in achieving optimal solutions more efficiently and accurately in finite element modeling.
2. The study conducted on a cantilever beam case demonstrated successful stress analysis and topology optimization through submodeling, resulting in optimized geometry for enhanced fatigue life.
3. Beam formulations were utilized to optimize and validate the outcomes of the submodeling approach, leading to the successful 3D printing and testing of the optimized models through a 3-point bending test.
4. Experimental results confirmed the accuracy and validity of the developed submodeling approach in streamlining the optimization process.
5. The use of 3D finite element models for submodeling further validated the 1D approach, with specific finite elements showing a strong correlation with experimental data in deflection calculations.
Overall, the findings highlight the effectiveness of submodeling techniques in achieving optimal solutions and validating results in finite element modeling, stress analysis and optimization processes.
Originality/value
The originality and value of the paper lie in its innovative approach to utilizing submodeling techniques in finite element modeling for structural analysis and optimization. By focusing on the reduction of finite element models and the creation of smaller, more manageable models through submodeling, the paper offers designers a more efficient and accurate way to achieve optimal solutions for complex problems. The study's use of a cantilever beam case to demonstrate stress analysis and topology optimization showcases the practical applications of submodeling in real-world scenarios. The development of a 1D submodeling approach, along with the utilization of beam formulations and 3D printing for experimental validation, adds a novel dimension to the research. Furthermore, the paper's integration of 1D and 3D submodeling techniques for deflection calculations and validation highlights the thoroughness and rigor of the study. The strong correlation between the finite element models and experimental data underscores the reliability and accuracy of the developed approach. Overall, the originality and value of this paper lie in its comprehensive exploration of submodeling techniques, its practical applications in structural analysis and optimization and its successful validation through experimental testing.
Abstract
Purpose
This paper aims to examine from commodity portfolio managers’ perspective the performance of liquidity adjusted risk modeling in assessing the market risk parameters of a large commodity portfolio and in obtaining efficient and coherent portfolios under different market circumstances.
Design/methodology/approach
The implemented market risk modeling algorithm and investment portfolio analytics, which use reinforcement machine learning techniques, can simultaneously handle the risk-return characteristics of commodity investments under regular and crisis market settings while also considering the particular effects of the time-varying liquidity constraints of multiple-asset commodity portfolios.
Findings
In particular, the paper applies a robust machine learning method to optimal commodity portfolio selection within a liquidity-adjusted value-at-risk (LVaR) framework. In addition, the paper explains how the adapted LVaR modeling algorithms can be used by a commodity trading unit in a dynamic asset allocation framework for estimating risk exposure, assessing risk reduction alternatives and creating efficient and coherent market portfolios.
Originality/value
The optimization parameters subject to meaningful operational and financial constraints, the investment portfolio analytics and the empirical results can have important practical uses and applications for commodity portfolio managers, particularly in the wake of the 2007–2009 global financial crisis. In addition, the recommended reinforcement machine learning optimization algorithms can aid in solving some real-world dilemmas under stressed and adverse market conditions (e.g. illiquidity, switching in the signs of correlation factors, and nonlinear and non-normal distributions of assets’ returns) and can have key applications in machine learning, expert systems, smart financial functions, the internet of things (IoT) and financial technology (FinTech) in big data ecosystems.
Habib Karimi, Hossein Ahmadi Danesh Ashtiani and Cyrus Aghanajafi
Abstract
Purpose
This paper aims to examine, from an economic viewpoint, the total annual cost of mixed-material heat exchangers based on three optimization algorithms. The study compares the use of three optimization algorithms in the economic optimization of shell-and-tube mixed-material heat exchanger designs.
Design/methodology/approach
A shell-and-tube mixed-material heat exchanger optimization design approach is developed based on the total annual cost, measured by attributing the costs of the heat exchanger to its surface area and its power consumption. In this study, the optimization and minimization of the total annual cost is considered as the objective function. There are three types of exchangers: cheap, expensive and mixed. Mixed materials are used for corrosive flows in the heat exchanger network. The present study explores the use of three optimization techniques, namely, hybrid genetic-particle swarm optimization, the shuffled frog leaping algorithm and ant colony optimization.
Findings
Three parameters are considered as decision variables for optimization: tube outer diameter, shell diameter and central baffle spacing. The results have been compared with the findings of previous studies to demonstrate the accuracy of the algorithms.
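To make the optimization setup concrete, here is a minimal particle swarm optimization loop over the three decision variables named above, applied to a made-up smooth cost surrogate standing in for the total annual cost. The bounds, PSO constants and cost expression are illustrative assumptions only, and plain PSO is used here as a simplification of the hybrid techniques compared in the paper.

```python
import numpy as np

# Minimal particle swarm optimization (PSO) over the three decision variables
# (tube outer diameter d_o, shell diameter D_s, central baffle spacing B, all
# in metres).  The cost surrogate, bounds and PSO constants are made up; this
# is not the paper's total-annual-cost model.

rng = np.random.default_rng(42)

LO = np.array([0.010, 0.30, 0.10])      # lower bounds [d_o, D_s, B]
HI = np.array([0.050, 1.50, 0.50])      # upper bounds

def annual_cost(x):
    d_o, D_s, B = x
    area_cost = 8000.0 * D_s**1.2 / d_o**0.2       # grows with a surface-area proxy
    pumping_cost = 1500.0 * d_o / (B**1.5 * D_s)   # grows as baffle spacing shrinks
    return area_cost + pumping_cost

n_particles, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5                          # inertia and acceleration terms

pos = rng.uniform(LO, HI, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([annual_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, LO, HI)               # keep designs within bounds
    vals = np.array([annual_cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

d_o, D_s, B = gbest
print(f"d_o = {d_o*1000:.1f} mm, D_s = {D_s:.3f} m, B = {B:.3f} m, "
      f"cost = {annual_cost(gbest):.0f} (arbitrary units)")
```

Swapping `annual_cost` for a thermohydraulic cost model and replacing the update rule recovers the GA-PSO, SFLA and ACO variants compared in the paper.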
Originality/value
The present study explores the use of three optimization techniques, namely, hybrid genetic-particle swarm optimization, the shuffled frog leaping algorithm and ant colony optimization. It demonstrates the successful application of each technique for the optimal design of a mixed-material shell-and-tube heat exchanger from an economic viewpoint.
D.D. Devisasi Kala and D. Thiripura Sundari
Abstract
Purpose
Optimization involves varying the input parameters of a process under different conditions to obtain the maximum or minimum result. Antenna researchers are showing increasing interest in finding optimum solutions for the design of complex antenna arrays, which is made possible by optimization techniques.
Design/methodology/approach
The design of antenna arrays is a significant electromagnetic optimization problem in the current era. The philosophy of optimization is to find the best solution among several available alternatives. In an antenna array, energy is wasted through side lobes, whose levels can be reduced by various optimization techniques. Researchers are currently focused on developing optimization techniques applicable to various types of antenna arrays.
Findings
In this paper, different optimization algorithms for reducing the side lobe level of an antenna array are presented. Specifically, the genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), the cuckoo search algorithm (CSA), invasive weed optimization (IWO), the whale optimization algorithm (WOA), the fruit fly optimization algorithm (FOA), the firefly algorithm (FA), cat swarm optimization (CSO), the dragonfly algorithm (DA), the enhanced firefly algorithm (EFA) and the bat flower pollinator (BFP) are the most popular optimization techniques. Various metrics of these algorithms, such as gain enhancement, side lobe reduction, convergence speed and directivity, are discussed. The GA, which randomizes its genetic operators, provides faster convergence. It also provides improved computational efficiency with highly optimal results and outperforms the other optimization algorithms in finding the best solution.
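As a minimal, self-contained illustration of how a GA can shape the amplitude taper of a linear array to lower the peak side lobe level, consider the sketch below. The array size, main-beam exclusion region, fitness definition and GA settings are simplified assumptions, not those of any study in the review.

```python
import numpy as np

# Minimal GA sketch for side lobe level (SLL) reduction in a 16-element,
# half-wavelength-spaced linear array by optimizing a symmetric amplitude taper.

rng = np.random.default_rng(7)

N_ELEM, D_OVER_LAMBDA = 16, 0.5
u = np.linspace(-1.0, 1.0, 2001)                     # u = sin(theta)
n = np.arange(N_ELEM)
steering = np.exp(1j * 2 * np.pi * D_OVER_LAMBDA * np.outer(u, n))

def peak_sll_db(weights):
    """Peak side lobe level (dB relative to the main-beam peak)."""
    af = np.abs(steering @ weights)
    af /= af.max()
    sidelobe_region = np.abs(u) > 0.2                # crude main-beam exclusion
    return 20 * np.log10(af[sidelobe_region].max())

def make_symmetric(half):
    return np.concatenate([half, half[::-1]])        # enforce a symmetric taper

pop_size, n_gen, half_len = 40, 150, N_ELEM // 2
pop = rng.random((pop_size, half_len))               # amplitudes in [0, 1]

for gen in range(n_gen):
    fitness = np.array([peak_sll_db(make_symmetric(ind)) for ind in pop])
    order = np.argsort(fitness)                      # lower SLL is better
    parents = pop[order[: pop_size // 2]]            # elitist selection
    children = []
    while len(children) < pop_size - len(parents):
        i, j = rng.integers(0, len(parents), size=2)
        alpha = rng.random()
        child = alpha * parents[i] + (1 - alpha) * parents[j]   # blend crossover
        child += 0.05 * rng.standard_normal(half_len)           # Gaussian mutation
        children.append(np.clip(child, 0.05, 1.0))
    pop = np.vstack([parents, np.array(children)])

best = make_symmetric(pop[np.argmin([peak_sll_db(make_symmetric(ind)) for ind in pop])])
print("best taper:", np.round(best / best.max(), 3))
print(f"peak SLL: {peak_sll_db(best):.1f} dB")
```

A uniform taper gives a peak side lobe near -13 dB; the evolved taper should push this noticeably lower, at the cost of a wider main beam, which is the trade-off the reviewed metrics (gain, side lobe level, directivity) capture.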
Originality/value
The originality of the paper lies in a study that reveals the usage of different antennas and their importance in various applications.
Keywords
- Particle swarm optimization (PSO)
- Ant colony optimization (ACO)
- Cuckoo search algorithm (CSA)
- Invasive weed optimization (IWO)
- Whale optimization algorithm (WOA)
- Fruit fly optimization algorithm (FOA)
- Genetic algorithm (GA)
- Firefly algorithm (FA)
- Cat swarm optimization (CSO)
- Dragonfly algorithm (DA)
- Enhanced firefly algorithm (EFA)
- Bat flower pollinator (BFP)
Kamal Sharma, Varsha Shirwalkar and Prabir K. Pal
Abstract
Purpose
This paper aims to provide a solution to the first phase of a force-controlled circular Peg-In-Hole assembly using an industrial robot. The paper proposes motion planning of the robot’s end-effector to perform the Peg-In-Hole search with minimal a priori information about the working environment.
Design/methodology/approach
The paper models the Peg-In-Hole search problem as the problem of finding the minimum of the depth profile for a particular assembly. Thereafter, various optimization techniques are used to guide the robot to locate this minimum and complete the hole search. The approach is inspired by the way a human searches for a hole by moving the peg in various directions to find the point of maximum insertion, which corresponds to the minimum of the depth profile.
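To make this formulation concrete, the sketch below simulates a depth probe over a surface with a single dimple-shaped hole and uses a simple derivative-free pattern search over in-plane (x, y) offsets to find the point of maximum insertion, i.e. the depth minimum. The probe model, hole geometry, noise level and step schedule are hypothetical stand-ins, not the optimization techniques or robot setup of the paper; the search is also assumed to start close enough to the nominal hole position to sense the depth gradient.

```python
import numpy as np

# Pattern-search sketch of the hole-search phase: the robot probes the surface
# at an (x, y) offset and records the insertion depth; the hole centre is the
# minimum of this depth profile.  The simulated depth function, hole position,
# noise level and step schedule are hypothetical.

rng = np.random.default_rng(3)
HOLE = np.array([2.6, -1.9])           # mm, unknown to the search
HOLE_RADIUS = 4.0                      # mm, width of the sensed dimple
HOLE_DEPTH = 5.0                       # mm

def probe_depth(p):
    """Simulated probe: flat surface that dips by up to HOLE_DEPTH inside the hole."""
    r = np.linalg.norm(p - HOLE)
    dip = -HOLE_DEPTH * max(0.0, 1.0 - (r / HOLE_RADIUS) ** 2)   # smooth dimple
    return dip + 0.01 * rng.standard_normal()                     # sensor noise

def pattern_search(start, step=1.0, min_step=0.05, max_probes=400):
    """Compass/pattern search over (x, y) offsets, shrinking the step on failure."""
    x = np.array(start, dtype=float)
    fx, probes = probe_depth(x), 1
    while step > min_step and probes < max_probes:
        improved = False
        for d in (np.array([step, 0]), np.array([-step, 0]),
                  np.array([0, step]), np.array([0, -step])):
            cand = x + d
            fc = probe_depth(cand)
            probes += 1
            if fc < fx:                 # deeper insertion found: move there
                x, fx, improved = cand, fc, True
                break
        if not improved:
            step *= 0.5                 # no deeper point nearby: refine the grid
    return x, fx, probes

est, depth, probes = pattern_search(start=(0.0, 0.0))
print(f"estimated hole centre: ({est[0]:.2f}, {est[1]:.2f}) mm after {probes} probes")
print(f"true hole centre     : ({HOLE[0]:.2f}, {HOLE[1]:.2f}) mm, depth found {depth:.2f} mm")
```

Because each "function evaluation" is a physical probe, derivative-free iterative schemes of this kind also absorb disturbances: if the setup shifts, subsequent probes simply steer the search toward the new minimum.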
Findings
The use of optimization techniques for the hole search allows the robot to work with minimal a priori information about the working environment. Also, the iterative nature of the techniques allows the robot to adapt to any disturbance during assembly.
Practical implications
The techniques discussed here are quite useful when a force-controlled assembly needs to be performed in a highly unknown environment, and also when the assembly setup may be disturbed partway through.
Originality/value
The concept is original and provides a non-conventional use of optimization techniques, not for optimization of some process directly but for an industrial robot’s motion planning.