Search results
1 – 10 of over 2000
The governance of our towns and cities requires an approach that connects people with nature and places. Digital technology can be the glue that does this, if it serves the needs…
Abstract
The governance of our towns and cities requires an approach that connects people with nature and places. Digital technology can be the glue that does this, if it serves the needs of the various stakeholders, including urban communities. Doing so means identifying the potential connections across people, digital and place themes, examining successful approaches, and exploring current practice (or the lack of it) in spatial planning and smart cities. These connections can be examined using a range of Internet of Things (IoT) technologies together with methodologies that combine socioeconomic and environmental data about the urban environment. Such ambient domain sensing can provide the ecological and other data needed to show how digital connectivity is addressing placemaking challenges, along with the implications for urban governance and communities.
Lin Yang, Xiaoyue Lv and Xianbo Zhao
Abnormal behaviors such as rework, backlog, changes and claims generated by project organizations are unavoidable in complex projects. When abnormal behaviors emerge, the…
Abstract
Purpose
Abnormal behaviors such as rework, backlog, changes and claims generated by project organizations are unavoidable in complex projects. When abnormal behaviors emerge, the previously normal state of interactions between organizations will be altered to some extent. However, previous studies have ignored the associations and interactions between organizations in the context of abnormal organizational behaviors (AOBs), making it challenging to cope with AOBs. The objective of this paper is therefore to explore how to reduce AOBs in complex projects at the organizational level from a network perspective.
Design/methodology/approach
To overcome the inherent limitations of a single case study, this research integrated two data collection methods: a questionnaire survey and the expert scoring method. The questionnaire survey captured universal data on the influence possibility of AOBs between complex project organizations, and the expert scoring method obtained the influence probability scores of AOBs between organizations in the case. Using these data, four case-based organizational influence network models of AOBs were developed to demonstrate how AOB networks in complex projects can be dismantled using network attack theory (NAT).
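The abstract does not give the model details, but the network-attack logic it describes (rank organizations by their influence in the AOB network, control the most influential ones first, and watch how quickly the network degrades) can be sketched as follows. The organizations, edges and weights below are hypothetical placeholders, not the study's case data.

```python
# Hypothetical sketch of a targeted "network attack" on an AOB influence
# network, in the spirit of network attack theory (NAT); the nodes, edges
# and weights are illustrative placeholders, not the paper's case data.
import networkx as nx

G = nx.DiGraph()
edges = [  # (source org, target org, influence probability of AOBs)
    ("owner", "designer", 0.8), ("owner", "contractor", 0.7),
    ("government", "owner", 0.6), ("designer", "contractor", 0.5),
    ("material_supplier", "contractor", 0.6), ("contractor", "subcontractor", 0.4),
    ("designer", "subcontractor", 0.3), ("government", "designer", 0.4),
]
G.add_weighted_edges_from(edges)

def largest_weak_component(graph):
    """Size of the largest weakly connected component, a simple robustness proxy."""
    return max((len(c) for c in nx.weakly_connected_components(graph)), default=0)

# Targeted attack: repeatedly remove the node with the highest weighted
# out-degree (most influential source of AOBs) and track network decay.
H = G.copy()
while H.number_of_nodes() > 0:
    key_org = max(H.nodes, key=lambda n: H.out_degree(n, weight="weight"))
    H.remove_node(key_org)
    print(f"removed {key_org:18s} -> largest component: {largest_weak_component(H)}")
```

Under this kind of attack, removing high-centrality "key organizations" first typically shrinks the network far faster than random removal, which is the intuition behind targeting them for intervention.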
Findings
First, the findings show that preferentially controlling AOBs generated by key organizations and improving the ability of key organizations can weaken the AOB network, enabling more effective coping strategies. Second, the owners, government, material suppliers and designers are identified as key organizations across all four influence networks of AOBs. Third, change and claim behaviors are more manageable at the organizational level.
Practical implications
Project managers can target specific organizations for intervention, weaken the AOB network by applying NAT and achieve better project outcomes through coping strategies. Additionally, by taking a network perspective, this research provides a novel approach to comprehending the associations and interactions between organizations in the context of complex projects.
Originality/value
This paper proposes a new approach to investigating AOBs in complex projects by simultaneously examining rework, backlog, change and claim behaviors. Leveraging NAT as a novel tool for managing the harmful effects of influence networks, this study extends the body of knowledge in the field of organizational behavior (OB) management and complex project management.
Beatrice Audifasi Nyallu, Xiaopeng Deng and Melckzedeck Michael Mgimba
Knowledge loss (KL) is still an unfortunate fact, causing many challenges, including poor organisational performance, despite prior efforts to investigate knowledge retention…
Abstract
Purpose
Knowledge loss (KL) is still an unfortunate fact, causing many challenges, including poor organisational performance, despite prior efforts to investigate knowledge retention methods. Therefore, this study identifies early approaches to combat KL and poor organisational performance, shifting the focus to employee personality traits.
Design/methodology/approach
Grounded in social exchange theory (SET), cross-sectional data from 400 Chinese construction knowledge employees were used to investigate the role of internal work locus of control (IWLC) in job rotation (JR), KL and organisational performance. The data were analysed using IBM SPSS Statistics 25 and SmartPLS 4.
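SmartPLS estimates these paths with PLS-SEM; purely as an illustration of the mediation logic (IWLC → JR → KL), here is a minimal product-of-coefficients sketch using ordinary least squares. The input file and column names are hypothetical, and this is not the authors' analysis pipeline.

```python
# Minimal product-of-coefficients mediation sketch (IWLC -> JR -> KL).
# This is an OLS illustration of the mediation logic only, not the
# PLS-SEM analysis run in SmartPLS 4; file and columns are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_scores.csv")   # hypothetical file with IWLC, JR, KL columns

# Path a: IWLC -> JR
a = sm.OLS(df["JR"], sm.add_constant(df[["IWLC"]])).fit().params["IWLC"]
# Path b: JR -> KL, controlling for IWLC
b = sm.OLS(df["KL"], sm.add_constant(df[["JR", "IWLC"]])).fit().params["JR"]

indirect_effect = a * b   # expected negative if JR transmits the KL-reducing effect
print(f"indirect effect of IWLC on KL via JR: {indirect_effect:.3f}")
```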
Findings
The results demonstrate that IWLC minimises KL and positively influences JR. In turn, JR negatively influences both KL and the decrease in organisational performance. A negative mediating effect of JR in the relationship between IWLC, KL and decreased organisational performance was also found. Finally, KL positively influences the decrease in organisational performance.
Research limitations/implications
This study contributes to the new understanding of individual behaviour and its influence on organisational outcome variables. Specifically, for ultimate KL prevention and organisational performance improvement, an organisation should understand its employees' behaviours to establish progressive collective learning and knowledge sharing.
Practical implications
This study contributes to the new understanding of individual behaviour and its influence on organisational outcome variables. Specifically, for ultimate KL prevention and organisational performance improvement, an organisation should understand its employees’ behaviours to establish progressive collective learning and knowledge sharing.
Originality/value
This study is the first attempt to explore the influence of personality traits in the early minimisation of KL, particularly the role of IWLC and JR in combating KL and improving organisational performance.
Hai Le and Phuong Nguyen
This study examines the importance of exchange rate and credit growth fluctuations when designing monetary policy in Thailand. To this end, the authors construct a small open…
Abstract
Purpose
This study examines the importance of exchange rate and credit growth fluctuations when designing monetary policy in Thailand. To this end, the authors construct a small open economy New Keynesian dynamic stochastic general equilibrium (DSGE) model. The model encompasses several essential characteristics, including incomplete financial markets, incomplete exchange rate pass-through, deviations from the law of one price and a banking sector. The authors consider generalized Taylor rules, in which policymakers adjust policy rates in response to output, inflation, credit growth and exchange rate fluctuations. The marginal likelihoods are then employed to investigate whether the central bank responds to fluctuations in the exchange rate and credit growth.
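The abstract does not reproduce the rule itself, but a generalized Taylor rule of the kind described, with interest rate smoothing and responses to inflation, output, exchange rate depreciation and credit growth, typically takes a form such as (notation assumed here for illustration):

$$ R_t = \rho_R R_{t-1} + (1 - \rho_R)\left(\phi_\pi \pi_t + \phi_y y_t + \phi_e \Delta e_t + \phi_b \Delta b_t\right) + \varepsilon_{R,t}, $$

where $R_t$ is the policy rate (in log deviations), $\pi_t$ inflation, $y_t$ output, $\Delta e_t$ nominal depreciation, $\Delta b_t$ credit growth and $\varepsilon_{R,t}$ a monetary policy shock; the marginal-likelihood comparison then amounts to asking whether versions of the rule with non-zero $\phi_e$ and $\phi_b$ are favored by the data.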
Design/methodology/approach
This study constructs a small open economy DSGE model and then estimates the model using Bayesian methods.
Findings
The authors demonstrate that the monetary authority does target exchange rates, whereas there is no evidence in favor of incorporating credit growth into the policy rules. These findings survive various robustness checks. Furthermore, the authors demonstrate that domestic shocks contribute significantly to domestic business cycles. Although the terms of trade shock plays a minor role in business cycles, it explains the most significant proportion of exchange rate fluctuations, followed by the country risk premium shock.
Originality/value
This study is the first attempt at exploring the relevance of exchange rate and credit growth fluctuations when designing monetary policy in Thailand.
Christina Anderl and Guglielmo Maria Caporale
The article aims to establish whether the degree of aversion to inflation and the responsiveness to deviations from potential output have changed over time.
Abstract
Purpose
The article aims to establish whether the degree of aversion to inflation and the responsiveness to deviations from potential output have changed over time.
Design/methodology/approach
This paper assesses time variation in monetary policy rules by applying a time-varying parameter generalised method of moments (TVP-GMM) framework.
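A time-varying Taylor rule of the sort estimated in this framework is commonly written as (symbols assumed for illustration):

$$ i_t = \alpha_t + \beta_t \pi_t + \gamma_t \tilde{y}_t + u_t, \qquad (\alpha_t, \beta_t, \gamma_t)' = (\alpha_{t-1}, \beta_{t-1}, \gamma_{t-1})' + \eta_t, $$

where $i_t$ is the policy rate, $\pi_t$ inflation and $\tilde{y}_t$ the output gap, with the coefficients drifting as random walks; the TVP-GMM estimates then trace how the inflation response $\beta_t$ and the output-gap response $\gamma_t$ evolve over the sample.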
Findings
Using monthly data until December 2022 for five inflation targeting countries (the UK, Canada, Australia, New Zealand, Sweden) and five countries with alternative monetary regimes (the US, Japan, Denmark, the Euro Area, Switzerland), we find that monetary policy has become more averse to inflation and more responsive to the output gap in both sets of countries over time. In particular, there has been a clear shift in inflation targeting countries towards a more hawkish stance on inflation since the adoption of this regime and a greater response to both inflation and the output gap in most countries after the global financial crisis, which indicates a stronger reliance on monetary rules to stabilise the economy in recent years. It also appears that inflation targeting countries pay greater attention to the exchange rate pass-through channel when setting interest rates. Finally, monetary surprises do not seem to be an important determinant of the evolution over time of the Taylor rule parameters, which suggests a high degree of monetary policy transparency in the countries under examination.
Originality/value
It provides new evidence on changes over time in monetary policy rules.
Yufeng Ren, Changqing Bai and Hongyan Zhang
This study aims to investigate the formation and characteristics of Taylor bubbles resulting from short-time gas injection in liquid-conveying pipelines. Understanding these…
Abstract
Purpose
This study aims to investigate the formation and characteristics of Taylor bubbles resulting from short-time gas injection in liquid-conveying pipelines. Understanding these characteristics is crucial for optimizing pipeline efficiency and enhancing production safety.
Design/methodology/approach
The authors conducted short-time gas injection experiments in a vertical rectangular pipe, focusing on Taylor bubble formation time and stable length. Computational fluid dynamics simulations using large eddy simulation and volume of fluid models were used to complement the experiments.
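For readers unfamiliar with the volume of fluid (VOF) approach, the gas-liquid interface is typically tracked by advecting a gas volume fraction $\alpha$ and weighting the mixture properties by it; a generic form (not necessarily the paper's exact implementation) is:

$$ \frac{\partial \alpha}{\partial t} + \nabla \cdot (\alpha \mathbf{u}) = 0, \qquad \rho = \alpha \rho_g + (1 - \alpha)\rho_l, \qquad \mu = \alpha \mu_g + (1 - \alpha)\mu_l, $$

while the large eddy simulation resolves the energy-carrying turbulent scales of the flow and models only the sub-grid scales.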
Findings
Results reveal that the stable length of Taylor bubbles is significantly influenced by gas injection velocity and duration. Specifically, high injection velocity and duration lead to increased bubble aggregation and recirculation region capture, extending the stable length. Additionally, a higher injection velocity accelerates reaching the critical local gas volume fraction, thereby reducing formation time. The developed fitting formulas for stable length and formation time show good agreement with experimental data, with average errors of 6.5% and 7.39%, respectively. The predicted values of the formulas in glycerol-water and ethanol solutions are also in good agreement with the simulation results.
Originality/value
This research provides new insights into Taylor bubble dynamics under short-time gas injection, offering predictive formulas for bubble formation time and stable length. These findings are valuable for optimizing industrial pipeline designs and mitigating potential safety issues.
This study explores the immobilisation of enzymes within porous catalysts of various geometries, including spheres, cylinders and flat pellets. The objective is to understand the…
Abstract
Purpose
This study explores the immobilisation of enzymes within porous catalysts of various geometries, including spheres, cylinders and flat pellets. The objective is to understand the irreversible Michaelis-Menten kinetic process within immobilised enzymes through advanced mathematical modelling.
Design/methodology/approach
Mathematical models were developed based on reaction-diffusion equations incorporating nonlinear variables associated with Michaelis-Menten kinetics. This research introduces fractional derivatives to investigate enzyme reaction kinetics, addressing a significant gap in the existing literature. A novel approximation method, based on the independent polynomials of the complete bipartite graph, is employed to explore solutions for substrate concentration and effectiveness factor across a spectrum of parameter values. The analytical solutions generated through the bipartite polynomial approximation method (BPAM) are rigorously tested against established methods, including the Bernoulli wavelet method (BWM), Taylor series method (TSM), Adomian decomposition method (ADM) and fourth-order Runge-Kutta method (RKM).
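Written in one common dimensionless form, as an illustration of the setting (the paper's fractional-order models generalise the integer derivatives), the underlying reaction-diffusion problem is:

$$ \frac{1}{\rho^{n}} \frac{d}{d\rho}\!\left(\rho^{n} \frac{du}{d\rho}\right) = \phi^{2} \frac{u}{1 + \alpha u}, \qquad u'(0) = 0, \quad u(1) = 1, $$

where $u$ is the dimensionless substrate concentration, $\rho$ the dimensionless spatial coordinate, $n = 0, 1, 2$ for flat, cylindrical and spherical pellets, $\phi$ the Thiele modulus and $\alpha$ the Michaelis-Menten saturation parameter; the effectiveness factor is the ratio of the observed reaction rate to the rate at surface conditions, which under this scaling reduces to $\eta = (n+1)(1+\alpha)\,u'(1)/\phi^{2}$.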
Findings
The study identifies two main findings. Firstly, the behaviour of dimensionless substrate concentration with distance is analysed for planar, cylindrical and spherical catalysts using both integer and fractional order Michaelis-Menten modelling. Secondly, the research investigates the variability of the dimensionless effectiveness factor with the Thiele modulus.
Research limitations/implications
The study primarily focuses on mathematical modelling and theoretical analysis, with limited experimental validation. Future research should involve more extensive experimental verification to corroborate the findings. Additionally, the study assumes ideal conditions and uniform catalyst properties, which may not fully reflect real-world complexities. Incorporating factors such as mass transfer limitations, non-uniform catalyst structures and enzyme deactivation kinetics could enhance the model’s accuracy and broaden its applicability. Furthermore, extending the analysis to include multi-enzyme systems and complex reaction networks would provide a more comprehensive understanding of biocatalytic processes.
Practical implications
The validated bipartite polynomial approximation method presents a practical tool for optimizing enzyme reactor design and operation in industrial settings. By accurately predicting substrate concentration and effectiveness factor, this approach enables efficient utilization of immobilised enzymes within porous catalysts. Implementation of these findings can lead to enhanced process efficiency, reduced operating costs and improved product yields in various biocatalytic applications such as pharmaceuticals, food processing and biofuel production. Additionally, this research fosters innovation in enzyme immobilisation techniques, offering practical insights for engineers and researchers striving to develop sustainable and economically viable bioprocesses.
Social implications
The advancement of enzyme immobilisation techniques holds promise for addressing societal challenges such as sustainable production, environmental protection and healthcare. By enabling more efficient biocatalytic processes, this research contributes to reducing industrial waste, minimizing energy consumption and enhancing access to pharmaceuticals and bio-based products. Moreover, the development of eco-friendly manufacturing practices through biocatalysis aligns with global efforts towards sustainability and mitigating climate change. The widespread adoption of these technologies can foster a more environmentally conscious society while stimulating economic growth and innovation in biotechnology and related industries.
Originality/value
This study offers a pioneering approximation method using the independent polynomials of the complete bipartite graph to investigate enzyme reaction kinetics. The comprehensive validation of this method through comparison with established solution techniques ensures its reliability and accuracy. The findings hold promise for advancing the field of biocatalysts and provide valuable insights for designing efficient enzyme reactors.
Mohanaphriya US and Tanmoy Chakraborty
This research focuses on controlling the irreversibilities in a radiative, chemically reactive electromagnetohydrodynamics (EMHD) flow of a nanofluid toward a stagnation point…
Abstract
Purpose
This research focuses on controlling the irreversibilities in a radiative, chemically reactive electromagnetohydrodynamics (EMHD) flow of a nanofluid toward a stagnation point. Key considerations include Ohmic dissipation, linear thermal radiation and a second-order chemical reaction with multiple slips. With these factors, this study aims to provide insights for practical applications where thermal management and energy efficiency are paramount.
Design/methodology/approach
A Lie group transformation is used to reduce the governing partial differential equations to nonlinear ordinary differential equation (ODE) form. Solutions are then obtained analytically through the differential transformation method (DTM)-Padé technique and numerically using the Runge–Kutta–Fehlberg method with a shooting procedure, ensuring a precise and reliable determination of the solution. This dual approach highlights the robustness and versatility of the methods.
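The full EMHD system is not given in the abstract; as a rough, self-contained stand-in for the kind of boundary value problem involved, the sketch below solves the classical Hiemenz stagnation-point equation $f''' + f f'' + 1 - f'^2 = 0$ with SciPy's collocation solver (the authors use DTM-Padé and an RKF shooting scheme on a much richer model).

```python
# Illustrative only: the classical Hiemenz stagnation-point equation
# f''' + f*f'' + 1 - f'^2 = 0, f(0)=0, f'(0)=0, f'(inf)=1,
# solved by collocation as a simplified stand-in for the paper's
# EMHD boundary layer system (solved there by DTM-Pade and RKF shooting).
import numpy as np
from scipy.integrate import solve_bvp

def rhs(eta, y):
    f, fp, fpp = y
    return np.vstack([fp, fpp, -(f * fpp + 1.0 - fp**2)])

def bc(ya, yb):
    # f(0) = 0, f'(0) = 0, f'(eta_max) = 1 (far-field condition)
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0.0, 8.0, 81)
y_guess = np.zeros((3, eta.size))
y_guess[0] = eta**2 / (2.0 * eta[-1])   # crude guess consistent with f' ramping to 1
y_guess[1] = eta / eta[-1]
y_guess[2] = 1.0 / eta[-1]

sol = solve_bvp(rhs, bc, eta, y_guess)
print(f"wall shear f''(0) = {sol.y[2, 0]:.4f}   (literature value is about 1.2326)")
```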
Findings
The system’s entropy generation is enhanced by increasing the magnetic field parameter (M), while the electric field (E) and velocity slip (ξ) parameters control its growth. Mass transport irreversibility and the Bejan number (Be) are significantly increased by the chemical reaction rate (Cr). In addition, the heat transport rate increases by 3.66% for 0.05 ⩽ ξ ⩽ 0.2, while the mass transport rate is enhanced by 12.87% for 0.2 ⩽ ξ ⩽ 1.1.
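For reference, the Bejan number reported here is conventionally the share of heat and mass transfer irreversibility in the total entropy generation (the exact grouping of terms varies across studies):

$$ Be = \frac{\dot{S}_{\mathrm{heat}} + \dot{S}_{\mathrm{mass}}}{\dot{S}_{\mathrm{heat}} + \dot{S}_{\mathrm{mass}} + \dot{S}_{\mathrm{friction}} + \dot{S}_{\mathrm{Joule}}}, $$

so $Be \to 1$ when thermal and solutal irreversibilities dominate and $Be \to 0$ when viscous and Ohmic dissipation dominate.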
Originality/value
This paper presents a novel approach to analyzing the entropy optimization in a radiative, chemically reactive EMHD nanofluid flow near a stagnation point. Moreover, this research represents a significant advancement in the application of analytical techniques, complemented by numerical approaches to study boundary layer equations.
Francesco Busato, Maria Ferrara and Monica Varlese
This paper analyzes the real and welfare effects of a permanent change in the inflation rate, focusing on macroprudential policy's role and its interaction with monetary policy.
Abstract
Purpose
This paper analyzes the real and welfare effects of a permanent change in the inflation rate, focusing on macroprudential policy's role and its interaction with monetary policy.
Design/methodology/approach
To investigate disinflation costs, the authors simulate a medium-scale dynamic general equilibrium model with borrowing constraints, credit frictions and a macroprudential authority.
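The abstract does not spell out the constraint, but in this class of models the borrowing limit is usually of the Iacoviello type, tying loans to the expected value of collateral; a representative form (notation assumed for illustration) is:

$$ B_t \le m\, \mathbb{E}_t\!\left[\frac{q_{t+1} H_t \pi_{t+1}}{R_t}\right], $$

where $B_t$ is borrowing, $m$ the loan-to-value ratio, $q_{t+1}$ the collateral price, $H_t$ the collateral stock, $\pi_{t+1}$ inflation and $R_t$ the nominal loan rate; macroprudential policy can then be modelled as moving $m$ countercyclically with credit conditions.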
Findings
Discussing different policy scenarios in a context where high inflation is still expected, the paper offers three key contributions. First, when the macroprudential authority actively operates to improve financial stability, the losses caused by disinflation are limited. Second, a Taylor rule that responds directly to financial variables might entail a trade-off between price and financial stability objectives by increasing disinflation costs. Third, disinflation is welfare improving for savers, while costly for borrowers and banks. While savers benefit from policies that reduce the price stickiness distortion, borrowers are adversely affected by the credit frictions arising from the collateral constraint.
Practical implications
The paper suggests three policy implications: the macroprudential authority should actively intervene during a disinflation process to minimize the costs and financial instability deriving from it; policymakers should implement a disinflationary policy that also stabilizes output; and the central bank and the macroprudential regulator should pursue price and financial stability goals separately.
Originality/value
This paper is the first attempt to study the effects of a permanent inflation target reduction with a focus on macroprudential policy's role.