Search results

1 – 10 of 648
Article
Publication date: 30 December 2024

Thi Thuy An Hoang, Doaa Aly, Muath Abdelqader, Muntaser J Melhem, Tamer K Darwish and Anas Al Tweijer

Abstract

Purpose

This study aims to explore the extent of Intellectual Capital Disclosure (ICD) in the annual reports of the top 50 listed Vietnamese companies. It assesses the influence of firm characteristics and corporate governance structure on ICD practices.

Design/methodology/approach

ICD was measured using content analysis, specifically word count percentage. Panel data regression analysis was employed to examine the relationship between firm characteristics, governance structures and the level of ICD.
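The word-count measure described here can be sketched in a minimal, hedged way. The keyword lists and category names below are invented placeholders for illustration, not the study's actual coding instrument:

```python
# Illustrative word-count content analysis for intellectual capital
# disclosure. The keyword scheme is a placeholder assumption.

IC_KEYWORDS = {
    "relational": ["customer", "brand", "partnership"],
    "human": ["training", "employee", "expertise"],
    "structural": ["patent", "process", "system"],
}

def icd_percentage(report_text: str) -> dict:
    """Return the share of report words matching each IC category."""
    words = report_text.lower().split()
    total = len(words)
    shares = {}
    for category, keywords in IC_KEYWORDS.items():
        hits = sum(1 for w in words if w.strip(".,;") in keywords)
        shares[category] = 100.0 * hits / total if total else 0.0
    shares["overall"] = sum(shares[c] for c in IC_KEYWORDS)
    return shares

sample = "Employee training strengthens our brand and customer partnership programs."
result = icd_percentage(sample)
```

A real coding instrument would use a validated phrase dictionary and handle multi-word terms; this sketch only shows the word-count-percentage mechanics.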

Findings

Results reveal that ICD levels among the sampled Vietnamese firms are relatively low, averaging 17.43% of the overall annual report word count. Relational capital emerges as the most disclosed category of IC. Firm size, profitability, industry type, the number of independent board members and CEO duality significantly impact the level of ICD. However, leverage, board size and the presence of an audit committee show no significant influence on ICD.

Practical implications

These findings offer insights into agency and signaling theories. They provide empirical evidence for stakeholders, academics and regulatory bodies to comprehend ICD practices and identify factors that could enhance ICD in emerging markets like Vietnam.

Originality/value

This study contributes to the literature by examining ICD practices in an emerging market context and identifying the impact of firm characteristics and governance structures on ICD levels, offering valuable implications for both theory and practice.

Details

Journal of Financial Reporting and Accounting, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1985-2517

Article
Publication date: 1 May 2003

M. Zeng and W.Q. Tao

Abstract

A comparative study is performed to reveal the convergence characteristics and the robustness of four variants of the semi-implicit method for pressure-linked equations (SIMPLE) family: SIMPLE, SIMPLE revised (SIMPLER), SIMPLE consistent (SIMPLEC) and SIMPLE extrapolation (SIMPLEX). The focus is on solutions at fine grid systems. Four typical fluid flow and heat transfer problems are taken as numerical examples: lid-driven cavity flow, flow in an axisymmetric sudden expansion, flow in an annulus with a rotating inner surface, and natural convection in a square enclosure. It is found that an appropriate convergence condition should include both mass conservation and momentum conservation requirements. For the four problems computed, SIMPLEX always requires the largest computational time, SIMPLER comes next, and SIMPLE and SIMPLEC require the least. As far as robustness is concerned, SIMPLE is the worst, SIMPLER comes next, and SIMPLEX and SIMPLEC are superior to the others. The SIMPLEC algorithm is therefore recommended, especially for computation on a fine grid system. A brief discussion further explores the reasons that may account for the differences among the four algorithms.
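The recommendation that convergence be judged on both mass and momentum residuals can be sketched as follows; the residual fields and tolerance are illustrative, not the paper's exact criteria:

```python
def l2_norm(residual):
    """Root-mean-square norm of a residual field."""
    return (sum(r * r for r in residual) / len(residual)) ** 0.5

def converged(mass_residual, momentum_residual, tol=1e-6):
    """Declare convergence only when BOTH the continuity (mass) and
    momentum residual norms fall below the tolerance; checking mass
    conservation alone can stop the iteration prematurely."""
    return max(l2_norm(mass_residual), l2_norm(momentum_residual)) < tol

# A field whose mass residual looks converged may still carry a
# significant momentum imbalance:
mass_ok = [1e-8, -2e-8, 5e-9]
mom_bad = [1e-3, -4e-4, 2e-3]
```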

Details

Engineering Computations, vol. 20 no. 3
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 6 March 2017

Chung-Ho Chen and Chao-Yu Chou

Abstract

Purpose

The quality level setting problem determines the optimal process mean, standard deviation and specification limits of product/process characteristic to minimize the expected total cost associated with products. Traditionally, it is assumed that the product/process characteristic is normally distributed. However, this may not be true. This paper aims to explore the quality level setting problem when the probability distribution of the process characteristic deviates from normality.

Design/methodology/approach

Burr developed a density function that can represent a wide range of normal and non-normal distributions. This can be applied to investigate the effect of non-normality on the studies of statistical quality control, for example, designs of control charts and sampling plans. The quality level setting problem is examined by introducing Burr’s density function as the underlying probability distribution of product/process characteristic such that the effect of non-normality to the determination of optimal process mean, standard deviation and specification limits of product/process characteristic can be studied. The expected total cost associated with products includes the quality loss of conforming products, the rework cost of non-conforming products and the scrap cost of non-conforming products.
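A minimal numerical sketch of a model of this kind follows, assuming a Burr XII density with the often-cited normal-approximating parameters (c ≈ 4.874, k ≈ 6.158); the specification limits, target and cost figures are invented placeholders, not the paper's formulation:

```python
def burr_pdf(z, c=4.873717, k=6.157568):
    """Burr XII density f(z) = c*k*z^(c-1) / (1 + z^c)^(k+1), z > 0."""
    if z <= 0.0:
        return 0.0
    return c * k * z ** (c - 1) * (1.0 + z ** c) ** (-(k + 1))

def expected_cost(mu, sigma, lsl, usl, target,
                  loss_coeff=1.0, rework=5.0, scrap=8.0,
                  burr_mean=0.644727, burr_sd=0.161983, n=2000):
    """Numerically integrate E[cost] when X = mu + sigma*(Z - m)/s with Z
    standard Burr XII: quadratic quality loss inside the limits,
    rework cost above the USL, scrap cost below the LSL."""
    zmax, total = 5.0, 0.0
    dz = zmax / n
    for i in range(1, n + 1):
        z = i * dz
        x = mu + sigma * (z - burr_mean) / burr_sd
        w = burr_pdf(z) * dz            # probability mass of this slice
        if x < lsl:
            total += scrap * w
        elif x > usl:
            total += rework * w
        else:
            total += loss_coeff * (x - target) ** 2 * w
    return total

cost_centered = expected_cost(10.0, 1.0, 7.0, 13.0, 10.0)
cost_shifted = expected_cost(12.5, 1.0, 7.0, 13.0, 10.0)
```

Shifting the mean away from the target raises the expected cost here; a full treatment would search jointly over the mean, standard deviation and specification limits.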

Findings

Numerical results show that the expected total cost associated with products is significantly influenced by the parameter of Burr’s density function, the target value of product/process characteristic, quality loss coefficient, unit rework cost and unit scrap cost.

Research limitations/implications

The major assumption of the proposed model is that the lower specification limit must be positive for practical applications, which definitely affects the space of feasible solution for the different combinations of process mean and standard deviation.

Social implications

The proposed model can be applied in industry and business to promote product and service quality assurance for the customer.

Originality/value

The authors adopt the Burr distribution to determine the optimum process mean, standard deviation and specification limits under non-normality. To the best of their knowledge, this is a new method for determining the optimum process and product policy, and it can be widely applied.

Details

Engineering Computations, vol. 34 no. 1
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 2 December 2024

Rim Gafsi

Abstract

This chapter examines the significant role of non-fungible tokens (NFTs) and blockchain technology in fostering a sustainable economy in the metaverse. Blockchain allows the saving and transfer of decentralized, secure data. As a primary component of the metaverse economy, NFTs are distinct, secure virtual assets saved on the blockchain. They facilitate the possession, trading and monetization of digital work. These advancing technologies have also revolutionized the way creators and artists test and exchange their digital work, introducing a new era of ownership and value in the digital realm. However, the negative environmental effects of some blockchain technologies constitute a considerable constraint, prompting a shift to a sustainable economy. Platforms like The Sandbox have implemented initiatives to address environmental concerns. As a case study, The Sandbox play-to-earn model with tokenized assets showcases its ability to create value and encourage user participation, demonstrating the potential of NFTs and blockchain to support a sustainable economy.

Details

The Metaverse Dilemma: Challenges and Opportunities for Business and Society
Type: Book
ISBN: 978-1-83797-525-9

Article
Publication date: 1 June 2010

M.A. Darwish and S.O. Duffuaa

Abstract

Purpose

The purpose of this paper is to integrate the decisions regarding optimal process mean and the parameters of a sampling plan.

Design/methodology/approach

A model is developed to determine these parameters. The model maximizes producer expected profit while protecting the consumer through a constraint on the probability of accepting lots with low incoming quality. The model is presented for two cases: one for non-destructive testing and the other for destructive testing. An example is presented to demonstrate the utility of the model, and a sensitivity analysis on its key parameters has been conducted.
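The integration idea can be sketched with a joint grid search over the process mean and a single attribute sampling plan (n, c) under a consumer-risk constraint. All prices, costs and risk levels below are invented placeholders, and the normal-tail defect model is a stand-in rather than the paper's formulation:

```python
import math

def p_accept(p_defect, n, c):
    """Acceptance probability of an (n, c) attribute sampling plan."""
    return sum(math.comb(n, d) * p_defect ** d * (1.0 - p_defect) ** (n - d)
               for d in range(c + 1))

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def best_plan(lsl=10.0, sigma=1.0, price=20.0, fill_cost=1.0,
              insp_cost=0.05, p_bad=0.08, beta=0.10):
    """Maximize producer profit over the mean and (n, c) jointly, while
    keeping the probability of accepting a bad lot (p_bad) below beta."""
    best = None
    for mu_tenths in range(105, 140, 5):        # mean from 10.5 to 13.5
        mu = mu_tenths / 10.0
        p_def = normal_cdf((lsl - mu) / sigma)  # share falling below LSL
        for n in (20, 50, 80):
            for c in range(4):
                if p_accept(p_bad, n, c) > beta:
                    continue                    # consumer not protected
                profit = price * (1.0 - p_def) - fill_cost * mu - insp_cost * n
                if best is None or profit > best[0]:
                    best = (profit, mu, n, c)
    return best

profit, mu, n, c = best_plan()
```

With these placeholder numbers the search settles on a mean well above the lower limit and rejects the small sample size, which cannot meet the consumer-risk constraint; the point is only that the mean and the plan are chosen together rather than separately.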

Findings

The findings indicated that the optimal parameters for the process and the sampling plan are significantly different from those obtained when the two are determined separately. The sensitivity analysis showed that the process parameters are very sensitive to changes in the process variance, moderately sensitive to the limit on incoming quality, and insensitive to the consumer risk and inspection cost.

Practical implications

The models developed offer an alternative approach for quality managers to address setting process targets, taking into consideration a sampling plan.

Originality/value

The originality of the paper is in the integration of two elements of quality that are usually treated separately in the literature.

Details

Journal of Quality in Maintenance Engineering, vol. 16 no. 2
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 1 August 2006

Krista Nerinckx, Jan Vierendeels and Erik Dick

Abstract

Purpose

To present conversion of the advection upwind splitting method (AUSM+) from the conventional density‐based and coupled formulation to the pressure‐based and segregated formulation.

Design/methodology/approach

The spatial discretization is done by a finite volume method. A collocated grid cell‐center formulation is used. The pressure‐correction procedure is set up in the usual way for a compressible flow problem. The conventional Rhie‐Chow interpolation methodology for the determination of the transporting velocity, and the conventional central interpolation for the pressure at the control volume faces, are replaced by AUSM+ definitions.
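The AUSM+ flux definitions referred to here rest on Liou's polynomial split functions for the interface Mach number and pressure. A hedged sketch with the standard coefficients alpha = 3/16 and beta = 1/8 follows; this is the textbook form of the splittings, not necessarily the paper's exact implementation:

```python
def m_plus(M, beta=1.0 / 8.0):
    """AUSM+ positive Mach-number split."""
    if abs(M) >= 1.0:
        return 0.5 * (M + abs(M))
    return 0.25 * (M + 1.0) ** 2 + beta * (M * M - 1.0) ** 2

def m_minus(M, beta=1.0 / 8.0):
    """AUSM+ negative Mach-number split."""
    if abs(M) >= 1.0:
        return 0.5 * (M - abs(M))
    return -0.25 * (M - 1.0) ** 2 - beta * (M * M - 1.0) ** 2

def p_plus(M, alpha=3.0 / 16.0):
    """AUSM+ positive pressure split."""
    if abs(M) >= 1.0:
        return 1.0 if M > 0.0 else 0.0
    return 0.25 * (M + 1.0) ** 2 * (2.0 - M) + alpha * M * (M * M - 1.0) ** 2

def p_minus(M, alpha=3.0 / 16.0):
    """AUSM+ negative pressure split."""
    if abs(M) >= 1.0:
        return 0.0 if M > 0.0 else 1.0
    return 0.25 * (M - 1.0) ** 2 * (2.0 + M) - alpha * M * (M * M - 1.0) ** 2

def interface_mach(M_left, M_right):
    """Transporting interface Mach number built from the two splits."""
    return m_plus(M_left) + m_minus(M_right)
```

The splits satisfy the consistency identities M+(M) + M-(M) = M and P+(M) + P-(M) = 1 for all Mach numbers, which is what allows them to stand in for the Rhie-Chow transporting velocity and the central pressure interpolation at the control volume faces.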

Findings

The AUSM+ flux definitions are naturally well suited for use in a collocated pressure-correction formulation; the formulation does not require extensions to these flux definitions. As a consequence, the results of a density-based fully coupled method are identical to the results of a pressure-based segregated formulation. The advantage of the pressure-correction method with respect to the density-based method is its higher efficiency for low Mach number applications. The advantage of the AUSM+ flux definition for the transporting velocity with respect to the conventional Rhie-Chow interpolation is its improved accuracy in high Mach number flows. As a consequence, combining AUSM+ with a pressure-correction method leads to an algorithm with improved performance for flows at all Mach numbers.

Originality/value

A new methodology with clear advantages is composed by combining ingredients of an existing spatial discretization method (AUSM+) and an existing time-stepping method (pressure-correction).

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 16 no. 6
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 3 November 2021

Ifeoluwa Benjamin Oluleye, Abiodun Kolawole Oyetunji, Michael Ayodele Olukolajo and Daniel W.M. Chan

Abstract

Purpose

Building information modelling (BIM) is a novel technological advancement in the built environment. Despite the potential of BIM, its adoption and implementation are undermined in facility management (FM) operations. This might be because of limited information on the critical success factors (CSFs) that can enhance its adoption. The study aims to integrate building information modelling to improve facility management operations by adopting a fuzzy synthetic evaluation approach to assess the critical success factors.

Design/methodology/approach

Data for the study were sourced from practising and registered facility managers within Lagos metropolis, Nigeria. The data collected were analysed using a combination of methods which include mean item score, factor analysis and fuzzy synthetic evaluation (FSE).
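The fuzzy synthetic evaluation step can be sketched as weights applied to a membership matrix, then defuzzified against a rating scale. The weights and membership shares below are invented for illustration; the study derives them from its survey data:

```python
# Hedged sketch of one fuzzy synthetic evaluation (FSE) step.

def fse_index(weights, membership, scale=(1, 2, 3, 4, 5)):
    """weights[i] weights CSF i; membership[i][j] is the share of
    respondents rating CSF i at scale level j. Returns the
    defuzzified criticality index on the given scale."""
    levels = len(scale)
    fuzzy = [sum(w * row[j] for w, row in zip(weights, membership))
             for j in range(levels)]
    return sum(f * s for f, s in zip(fuzzy, scale))

weights = [0.5, 0.3, 0.2]               # factor weights, summing to 1
membership = [                          # rows: CSFs; cols: scale levels 1-5
    [0.0, 0.1, 0.2, 0.4, 0.3],
    [0.1, 0.1, 0.3, 0.3, 0.2],
    [0.0, 0.2, 0.3, 0.3, 0.2],
]
index = fse_index(weights, membership)
```

An index above the scale midpoint would mark the grouping as relatively critical; the decision rule in the study ranks groupings by such indices.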

Findings

The factor analysis results showed that six underlying groups of CSFs would enhance the effective adoption of BIM in facility operations. The FSE results showed that, of the six groups, the three most important CSF groupings (CSFGs) in the decision rule would enhance the effectiveness of BIM adoption for FM operations.

Practical implications

The result of this study provides a credible road map for facility managers, policymakers and other stakeholders in FM operations on the CSFs and CSFG required for the adoption of BIM.

Originality/value

Previous studies aimed at integrating BIM into FM are limited. Hence, this study provides a broad perspective on the CSFs required for BIM adoption and implementation in FM operations using the FSE approach.

Details

Journal of Facilities Management, vol. 21 no. 2
Type: Research Article
ISSN: 1472-5967

Article
Publication date: 5 April 2013

Kevin Yessian, Pat DeLaquil, Bruno Merven, Maurizio Gargiulo and Gary Goldstein

Abstract

Purpose

An economic assessment was performed of the potential for clean energy options to contribute to the power and desalination needs of the State of Kuwait over the next 20 to 40 years. The paper summarizes two analyses performed for the Kuwait Institute for Scientific Research: one to develop a strategy promoting renewable energy and one to evaluate alternative technologies, including nuclear energy.

Design/methodology/approach

The analyses were performed using a power and water model for Kuwait that was constructed using the International Energy Agency – Energy Technology Systems Analysis Programme (IEA‐ETSAP) TIMES modeling framework. Data provided by the Ministry of Electricity and Water (MEW) and the Kuwait Petroleum Company (KPC) characterizes the projected demand for power and water; the existing and planned power generation and water desalination plants, including the expected retirement of existing plants; and future fossil fuel prices and availability. New power generation options – including renewable energy (RE), nuclear, combined cycle gas turbines (CCGT) and reheat steam power plants (RHSPP) – were compared in this least‐cost optimization framework.
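The least-cost logic of a TIMES-style model can be suggested, very loosely, by a single-period merit-order dispatch. The technology names, capacities and unit costs below are invented placeholders; the real model optimizes investment and dispatch over decades under many more constraints:

```python
# Hedged, single-period sketch of least-cost dispatch (merit order).

def dispatch(demand_gwh, technologies):
    """Greedy merit-order dispatch: fill demand from the cheapest
    technology first; returns (total_cost, generation_by_tech)."""
    plan, cost, remaining = {}, 0.0, demand_gwh
    for name, capacity, unit_cost in sorted(technologies, key=lambda t: t[2]):
        g = min(capacity, remaining)
        plan[name] = g
        cost += g * unit_cost
        remaining -= g
        if remaining <= 0:
            break
    return cost, plan

# Placeholder technologies: (name, capacity in GWh, cost per GWh).
technologies = [("CCGT", 40.0, 60.0), ("RE", 10.0, 45.0), ("steam", 60.0, 80.0)]
cost, plan = dispatch(70.0, technologies)
```

In this toy setting the cheapest option is used to its capacity limit first, which is the intuition behind a cost-effective RE share emerging from the optimization.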

Findings

The model results indicate that by 2030 the cost-effective RE share is 11 percent of electricity generation in the reference case and 8 percent in the case with the nuclear option. The RE technologies alone provide a 2030 net-back value of US$2.35 billion compared to the reference case, while in the nuclear case they increase the 2030 net-back value by an additional US$1.5 billion. Increasing the RE share, as a government policy, to 10 percent, 15 percent and 20 percent decreases the 2030 net-back benefit by US$1.0, $3.6 and $8.3 billion, respectively.

Research limitations/implications

Sensitivity runs were analyzed for scenarios assuming higher RE costs or lower RE availability, lower demand growth, lower oil and gas prices, higher nuclear plant investment costs, and RE capacity credit.

Practical implications

The analysis provides a compelling economic basis for initiating a renewable energy program in the State of Kuwait. However, these forecasted benefits will only materialize if the projected RE investments begin in earnest soon.

Originality/value

The analysis identifies a cost‐effective share of renewable energy use in Kuwait as about 11 percent of electricity generation in 2030. The investment in renewable energy provides the State of Kuwait with a net‐back value of US$2.35 billion, due to the fuel savings that are generated by using renewables.

Details

International Journal of Energy Sector Management, vol. 7 no. 1
Type: Research Article
ISSN: 1750-6220

Article
Publication date: 12 February 2018

Noor Sharifatul Hana Yeop, Zaleha Md Isa, Khadijah Shamsuddin, Khor Geok Lin, Zaleha Abdullah Mahdy, Haslinda Hassan and Hasanain Ghazi

Abstract

Purpose

The aim of this study is to determine the prevalence of hypocalcaemia among first-trimester pregnant women and its contributing factors.

Design/methodology/approach

A cross-sectional study was carried out among first-trimester pregnant women recruited during their first antenatal visit. A total of 396 respondents aged 18-40 years completed a self-administered questionnaire (socio-demographic, socio-economic and obstetric information), a validated semi-quantitative food frequency questionnaire for calcium (FFQ-calcium), anthropometric measurements (weight and height) and a blood test for serum calcium during their first trimester.

Findings

The prevalence of hypocalcaemia, defined as a serum calcium level below 2.11 mmol/L, was 26.0 per cent (n = 103). The median serum calcium level was 2.2 mmol/L (IQR, 25th and 75th percentiles: 2.1 and 2.3, respectively). Milk intake of less than two glasses per day during pregnancy was associated with a twofold increase in the odds of hypocalcaemia (OR, 2.231; 95 per cent CI, 1.399, 3.588). In addition, being underweight (aOR, 2.038; 95 per cent CI, 1.088, 3.820) or obese (aOR, 1.954; 95 per cent CI, 1.007, 3.790) before pregnancy also predicted hypocalcaemia.
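Odds ratios of the kind reported here can be illustrated with an unadjusted odds ratio and Wald confidence interval computed from a 2x2 table. The counts below are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = cases/non-cases among the exposed group,
    c/d = cases/non-cases among the unexposed group."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Invented example: 60 of 180 low-milk-intake women hypocalcaemic,
# versus 43 of 216 in the comparison group.
or_, lower, upper = odds_ratio_ci(60, 120, 43, 173)
```

The adjusted odds ratios (aOR) in the study come from a multivariable model rather than a single 2x2 table; this sketch only shows the unadjusted calculation.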

Originality/value

The prevalence of hypocalcaemia among first-trimester pregnant women in this study was 26.0 per cent. Intake of two or more glasses of milk per day can help prevent hypocalcaemia at this stage of pregnancy.

Details

Nutrition & Food Science, vol. 48 no. 1
Type: Research Article
ISSN: 0034-6659

Article
Publication date: 1 October 2004

M.F. Webster, I.J. Keshtiban and F. Belblidia

Abstract

We introduce a second-order accurate time-marching pressure-correction algorithm to accommodate weakly-compressible highly-viscous liquid flows at low Mach number. As the incompressible limit is approached (Ma ≈ 0), the consistency of the compressible scheme is highlighted in recovering equivalent incompressible solutions. In the viscous-dominated regime of low Reynolds number (the zone of interest), the algorithm treats the viscous part of the equations in semi-implicit form. Two discrete representations are proposed to interpolate density: a piecewise-constant form with gradient recovery and a linear interpolation form, akin to that on pressure. Numerical performance is considered on a number of classical benchmark problems for highly viscous liquid flows to highlight consistency, accuracy and stability properties. Validation bears out the high quality of performance of both compressible flow implementations at low to vanishing Mach number. Neither the linear nor the piecewise-constant density interpolation scheme degrades the second-order accuracy of the original incompressible fractional-staged pressure-correction scheme. The piecewise-constant interpolation scheme is advocated as a viable method of choice, with its advantages of order retention and efficiency in implementation.

Details

Engineering Computations, vol. 21 no. 7
Type: Research Article
ISSN: 0264-4401
