Shitao Liu, Rong Cui, Hongwei Cao and Jinhong Qiu
Abstract
Purpose
This paper aims to present a resin-flowing model based on Darcy’s law that describes the flow behaviour of prepreg during lamination. The agreement between the model and experimental results demonstrates that it can provide a guideline for printed circuit board (PCB) lamination.
Design/methodology/approach
Based on theoretical derivations from Darcy’s law, the paper analyzes the flow of prepreg during the pressing process and, on that basis, formulates a theoretical model, namely the resin-flowing model.
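The abstract does not give the authors’ exact formulation, so the following is only a minimal sketch of Darcy’s law applied to resin flow, with an assumed temperature-dependent viscosity, to illustrate why pressure and prepreg thickness govern the amount of flowing resin; all function names and parameter values below are illustrative assumptions.

```python
# Minimal sketch of Darcy's law applied to resin flow during lamination.
# The viscosity model and all parameter values are illustrative assumptions,
# not the authors' calibrated resin-flowing model.
import math

def darcy_flow_rate(permeability_m2, area_m2, viscosity_pa_s,
                    pressure_drop_pa, thickness_m):
    """Volumetric flow rate Q = k * A * dP / (mu * L) from Darcy's law."""
    return permeability_m2 * area_m2 * pressure_drop_pa / (viscosity_pa_s * thickness_m)

def resin_viscosity(temp_c, mu_ref=50.0, temp_ref_c=100.0, activation=0.05):
    """Hypothetical exponential decrease of resin viscosity with temperature."""
    return mu_ref * math.exp(-activation * (temp_c - temp_ref_c))

if __name__ == "__main__":
    # Higher pressure or a thinner prepreg raises the pressure gradient and the flow.
    for dp in (0.5e6, 1.0e6, 2.0e6):          # applied pressure drop, Pa
        q = darcy_flow_rate(1e-14, 0.01, resin_viscosity(120.0), dp, 0.2e-3)
        print(f"dP = {dp:.1e} Pa -> Q = {q:.3e} m^3/s")
```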
Findings
This paper establishes a resin-flowing model, from which two experimentally verified conclusions can be drawn: first, the resin-flowing properties of materials A and B can be improved when the heating rate is between 1.5 and 2.5 min/°C; second, an increased pressure gradient increases the amount of flowing resin, achieved mainly by raising the pressure and reducing the filled thickness of the prepreg.
Originality/value
This model provides guidance on setting lamination parameters for most kinds of prepreg and on reducing the risk of resin starvation in PCB production.
Liang Chen, Leitao Cui, Rong Huang and Zhengyun Ren
Abstract
Purpose
This paper aims to present a bio-inspired neural network for improvement of information processing capability of the existing artificial neural networks.
Design/methodology/approach
In the network, the authors introduce a property often found in biological neural systems – hysteresis – as the neuron activation function and a bionic algorithm – extreme learning machine (ELM) – as the learning scheme. The authors give the gradient descent procedure to optimize parameters of the hysteretic function and develop an algorithm to select ELM parameters online, including the number of hidden-layer nodes and the hidden-layer parameters. The algorithm combines the idea of cross-validation with the random assignment used in the original ELM. Finally, the authors demonstrate the advantages of the hysteretic ELM neural network by applying it to automatic license plate recognition.
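The abstract does not specify the authors’ hysteretic function or their online node-selection procedure, so the following is only a minimal NumPy sketch of a standard ELM with a placeholder two-branch activation; the activation form, the fixed node count and all names are assumptions.

```python
# Minimal ELM sketch with a two-branch ("hysteresis-like") activation.
# The activation below is a placeholder illustration, not the authors'
# hysteretic function, and the node count is fixed rather than selected online.
import numpy as np

def hysteretic_activation(z, prev_out, width=0.1):
    """Toy two-segment activation: the branch depends on the previous output,
    giving the unit a crude form of memory."""
    upper = 1.0 / (1.0 + np.exp(-(z - width)))
    lower = 1.0 / (1.0 + np.exp(-(z + width)))
    return np.where(prev_out > 0.5, upper, lower)

def elm_train(X, Y, n_hidden=50, rng=np.random.default_rng(0)):
    """Standard ELM: random hidden layer, least-squares output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = hysteretic_activation(X @ W + b, prev_out=np.zeros(n_hidden))
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = hysteretic_activation(X @ W + b, prev_out=np.zeros(W.shape[1]))
    return H @ beta

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 8))                  # stand-in for plate features
    Y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
    W, b, beta = elm_train(X, Y)
    acc = ((elm_predict(X, W, b, beta) > 0.5) == (Y > 0.5)).mean()
    print(f"training accuracy: {acc:.2f}")
```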
Findings
Experiments on automatic license plate recognition show that the bio-inspired learning system achieves better classification accuracy and generalization capability while remaining efficient.
Originality/value
Compared with the conventional sigmoid function, hysteresis as the activation function has two advantages: the neuron’s output depends not only on its input but also on derivative information, which provides the neuron with memory; and the hysteretic function can switch between its two segments, which keeps the neuron from falling into local minima and yields a quicker learning rate. The improved ELM algorithm to some extent compensates for the performance decline caused by the original ELM’s complete randomness, at the cost of being slightly slower than before.
Abstract
Purpose
The purpose of this paper is to study a geometric process (GP) maintenance model and policy for a repairable system.
Design/methodology/approach
Lam first introduced the GP and its application to maintenance models. A replacement policy N is assumed, under which the system is replaced by a new, identical one following the Nth failure.
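As a rough illustration of how a replacement policy N can be evaluated, the sketch below uses a standard Lam-style geometric-process cost rate and searches numerically for the minimizing N; the cost structure and parameter values are assumptions for illustration, not necessarily the paper’s exact model or its analytical optimality conditions.

```python
# Minimal sketch of a Lam-style geometric-process cost-rate model.
# Cost structure and parameter values are illustrative assumptions.

def gp_cost_rate(N, lam=100.0, a=0.95, mu=5.0, b=1.05,
                 c_repair=20.0, c_replace=4000.0):
    """Long-run expected cost per unit time under replacement policy N:
    operating times form a decreasing GP (mean lam * a**(n-1)),
    repair times an increasing GP (mean mu * b**(n-1))."""
    up = sum(lam * a ** (n - 1) for n in range(1, N + 1))   # total uptime
    down = sum(mu * b ** (n - 1) for n in range(1, N))      # total repair time
    return (c_repair * down + c_replace) / (up + down)

if __name__ == "__main__":
    # For a deteriorating system the cost rate is typically U-shaped in N.
    best = min(range(1, 200), key=gp_cost_rate)
    print("optimal replacement policy N* =", best,
          "with cost rate", round(gp_cost_rate(best), 3))
```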
Findings
For a deteriorating system, an optimal replacement policy is determined analytically, and the monotonicity properties of the optimal replacement policy are then studied.
Originality/value
For an improving system, the paper shows that the optimal replacement policy is the ∞ policy, i.e., the policy without replacement.
Tetsushi Yuge, Taijiro Yoneda, Nobuyuki Tamura and Shigeru Yanagi
Abstract
Purpose
This paper aims to present a method for calculating the top event probability of a fault tree with priority AND gates.
Design/methodology/approach
The paper makes use of Merle's temporal operators for obtaining the minimal cut sequence set of a dynamic fault tree. Although Merle's expression is based on the occurrence time of an event sequence, the paper treats the expression as an event containing the order of events. This enables the authors to handle the minimal cut sequence set using static fault tree techniques. The proposed method is based on the sum of disjoint products. The method for a static fault tree is extended to a more widely applicable one that can deal with the order operators proposed by Merle et al.
Findings
First, an algorithm to obtain the minimal cut sequence set of dynamic fault trees is proposed. This algorithm enables the authors to analyze reasonably large-scale dynamic fault trees. Second, the proposed method of obtaining the top event probability of a dynamic fault tree is efficient compared with the inclusion-exclusion-based method proposed by Merle et al. and a conventional Markov chain approach. Furthermore, the paper shows that the top event probability can be derived easily when all the basic events have exponential failure rates.
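As an illustration of the exponential special case mentioned above, the sketch below computes the probability of a two-input priority AND gate whose basic events have exponential lifetimes and checks it by Monte Carlo; it is not the paper’s sum-of-disjoint-products algorithm, and the failure rates used are arbitrary assumptions.

```python
# Sketch: probability of a two-input priority AND (PAND) gate by time t when
# both basic events have exponential lifetimes, checked with Monte Carlo.
# Illustrates the exponential special case only, not the paper's algorithm.
import math
import random

def pand_exact(lam_a, lam_b, t):
    """P(T_A <= T_B <= t): A fails first, then B, both before t."""
    return (1.0 - math.exp(-lam_b * t)) \
        - lam_b / (lam_a + lam_b) * (1.0 - math.exp(-(lam_a + lam_b) * t))

def pand_monte_carlo(lam_a, lam_b, t, n=200_000, seed=0):
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(n)
        if (ta := rng.expovariate(lam_a)) <= (tb := rng.expovariate(lam_b)) <= t
    )
    return hits / n

if __name__ == "__main__":
    print("exact      :", round(pand_exact(0.001, 0.002, 1000.0), 4))
    print("monte carlo:", round(pand_monte_carlo(0.001, 0.002, 1000.0), 4))
```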
Originality/value
The methodology presented shows a new solution for calculating the top event probability of dynamic fault trees.
Xufeng Zhao, Syouji Nakamura and Toshio Nakagawa
Abstract
Purpose
The purpose of this paper is to consider maintenance policies for an operating system which works at random times for jobs. Each job causes some damage to the system and these damages are additive, and the system fails when the total damage has exceeded a failure level K.
Design/methodology/approach
Using techniques of cumulative damage models, maintenance is performed at the Nth completion of a working time for the first model, and at a damage level Z, with a limit N on the number of working times, for the second model.
Findings
For the third model, the system is maintained at the first completion of a working time after time T. For the fourth model, the system fails with probability p(x) when the total damage is x and undergoes minimal repair at failure. The expected cost rates are obtained, and optimal maintenance policies are discussed analytically and computed numerically.
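As an illustration of the first policy, the sketch below simulates a cumulative damage process and estimates the expected cost rate as a function of N by Monte Carlo; the working-time and damage distributions and the cost values are assumptions for illustration, not the paper’s analytical formulation.

```python
# Monte Carlo sketch of the first policy: preventive maintenance at the N-th
# completion of a working time, failure when cumulative damage exceeds K.
# Distributions and costs are illustrative assumptions.
import random

def cycle_cost_rate(N, K=10.0, mean_work=1.0, mean_damage=1.0,
                    c_pm=1.0, c_failure=5.0, n_runs=20_000, seed=0):
    """Average cost per unit time over a renewal cycle under policy N."""
    rng = random.Random(seed)
    total_cost = total_time = 0.0
    for _ in range(n_runs):
        damage = elapsed = 0.0
        for job in range(1, N + 1):
            elapsed += rng.expovariate(1.0 / mean_work)    # random working time
            damage += rng.expovariate(1.0 / mean_damage)   # additive damage
            if damage > K:                                 # failure replacement
                total_cost += c_failure
                break
        else:                                              # preventive maintenance
            total_cost += c_pm
        total_time += elapsed
    return total_cost / total_time

if __name__ == "__main__":
    for N in (4, 6, 8, 10, 12):
        print(f"N = {N:2d}  cost rate ~ {cycle_cost_rate(N):.3f}")
```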
Originality/value
The paper discusses four maintenance policies for an operating system which works at successive random times for jobs, where the system fails due to additive damage caused by the jobs.
Ngapuli I. Sinisuka and Herry Nugraha
Abstract
Purpose
The purpose of this paper is to study the life cycle cost (LCC) of the operation of power generation. LCC is the total cost of ownership, including the cost of project or asset acquisition, operation and maintenance, and disposal. LCC includes both deterministic costs (such as acquisition costs, improvement costs and disposal costs) and probabilistic costs (such as the costs of failure, repairs, spare parts, downtime and lost gross margin). Most of the probabilistic costs are associated directly with the reliability and maintenance characteristics of the system.
Design/methodology/approach
To analyze failure data with an appropriate cost profile, reflecting the fact that each failure has a different cost in different time periods of an economic cycle, a new methodology, the LCCA diagram, is proposed. The methodology comprises developing a criticality ranking of sub-systems, calculating the Weibull shape factor β and Weibull characteristic life η for each sub-system, calculating the time to failure of each sub-system, calculating the mean time to failure of each sub-system using Monte Carlo simulation, determining several alternatives, and carrying out failure mode and effects analysis and root cause failure analysis.
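As a sketch of the Weibull / Monte Carlo step described above, the code below samples Weibull times to failure with shape factor β and characteristic life η and estimates the mean time to failure; the parameter values are hypothetical and not taken from the paper’s case study.

```python
# Sketch of the Weibull / Monte Carlo step of an LCC analysis: sample times to
# failure of a sub-system with shape factor beta and characteristic life eta,
# then estimate the mean time to failure (MTTF). Parameter values are assumed.
import math
import random

def weibull_ttf(beta, eta, rng):
    """Inverse-transform sample of a Weibull time to failure."""
    u = rng.random()
    return eta * (-math.log(1.0 - u)) ** (1.0 / beta)

def monte_carlo_mttf(beta, eta, n=100_000, seed=0):
    rng = random.Random(seed)
    return sum(weibull_ttf(beta, eta, rng) for _ in range(n)) / n

if __name__ == "__main__":
    beta, eta = 1.8, 12_000.0            # hypothetical sub-system parameters (hours)
    analytic = eta * math.gamma(1.0 + 1.0 / beta)
    print("Monte Carlo MTTF:", round(monte_carlo_mttf(beta, eta), 1), "h")
    print("Analytic   MTTF:", round(analytic, 1), "h")
```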
Findings
As a sample case study, the LCC of operating a 330 MW coal-fired power plant (CFPP) is analyzed. Five alternative LCC calculations are simulated. A cash-flow graph, a break-even graph and a cost/benefit-versus-risk graph, produced for a period of 30 years, can be used to assess an effective management program and the costs of a power plant with low risk.
Originality/value
The paper suggests that LCC can be used to assess an effective management program and the costs of a power plant with low risk.
Anna Gustafson, Håkan Schunnesson, Diego Galar and Uday Kumar
Abstract
Purpose
The purpose of this paper is to evaluate and analyse the production and maintenance performance of a manual and a semi‐automatic load haul dump (LHD) machine to find similarities and differences.
Design/methodology/approach
Real-time process, operational and maintenance data from an underground mine in Sweden have been refined and aggregated into key performance indicators (KPIs) in order to compare the LHDs.
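As a minimal illustration of this kind of aggregation, the sketch below turns hypothetical event records into the utilization and tonnes-per-machine-hour indicators mentioned in the paper; the field names and figures are invented for illustration and do not reflect the mine’s actual data schema.

```python
# Sketch of aggregating raw event records into simple LHD KPIs
# (utilization, tonnes per machine hour). Records are hypothetical.
from collections import defaultdict

records = [
    # (machine, state, duration_h, tonnes)
    ("LHD_manual", "producing", 5.0, 110.0),
    ("LHD_manual", "maintenance", 1.5, 0.0),
    ("LHD_semi_auto", "producing", 6.0, 118.0),
    ("LHD_semi_auto", "maintenance", 2.5, 0.0),
]

def kpis(rows):
    agg = defaultdict(lambda: {"prod_h": 0.0, "total_h": 0.0, "tonnes": 0.0})
    for machine, state, hours, tonnes in rows:
        agg[machine]["total_h"] += hours
        agg[machine]["tonnes"] += tonnes
        if state == "producing":
            agg[machine]["prod_h"] += hours
    return {
        m: {
            "utilization": v["prod_h"] / v["total_h"],
            "tonnes_per_machine_hour": v["tonnes"] / v["prod_h"],
        }
        for m, v in agg.items()
    }

if __name__ == "__main__":
    for machine, values in kpis(records).items():
        print(machine, values)
```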
Findings
The main finding is a demonstration of how production and maintenance data can be improved through information fusion, showing some unexpected results for the maintenance of automatic and semi-automatic LHDs in the mining industry. Up to one third of the manually entered workshop data were found to be inconsistent with the automatically recorded production times. The two machines show similarities in utilization and filling rate but differences in produced tonnes per machine hour.
Originality/value
The originality in this paper is the information fusion between automatically produced production data and maintenance data which increases the accuracy of reliability analysis data. Combining the production indicator and the maintenance indicator gives a common tool to the production and maintenance departments. This paper shows the difference in both maintenance and production performance between a manual and semi‐automatic LHD.
Abstract
Purpose
The aim of this study is to identify and establish management skills/knowledge required for the successful management of turnaround maintenance (TAM) projects.
Design/methodology/approach
A mixed-methods research approach was adopted for this study, involving a questionnaire survey and case studies of major continuous process plants in the UK. Data were collected through questionnaires from 160 process plants and through case studies of six process plants in the UK. These data were triangulated to strengthen the findings.
Findings
The findings show that there are management skills specific to ensuring the successful management of TAM projects. They also show that TAM managers are currently appointed, for the most part, without being assessed against any specific skill set.
Practical implications
This study developed a set of management skills required for turnaround maintenance projects. The specific management skills established for TAM management should help to reduce mismatches between skills and the job. This is of paramount importance in selecting the TAM manager, to ensure that an individual with the right skills is given responsibility for managing the project.
Originality/value
Operators of process plants, and hence of engineering facilities, still appoint their TAM managers without reference to any skill set. Most of the time, maintenance managers or project managers are appointed to manage TAM without consideration of any set of requirements. This study, which is the first of its kind in this area, is a major contribution to the field.
K.C. McCrae, R.A. Shaw, H.H. Mantsch, J.A. Thliveris, R.M. Das, K. Ahmed and J.E. Scott
Abstract
Lung cancer is the leading cause of cancer death worldwide. Physical and chemical agents such as tobacco smoke are the leading cause of various lung cancers. The intrinsic heterogeneity of normal lung tissue may be affected in different ways, giving rise to different types of lung cancers classified as either small-cell lung cancer (SCLC) or non-small cell lung cancer (NSCLC). Adenocarcinoma, a NSCLC, accounts for 40 percent of all lung cancer cases, and the incidence is increasing worldwide, especially among women. The survival rate and prognosis are poorest for adenocarcinoma. Therefore, diagnosis at the earliest stage (Stage I, localized) is critical for increasing survival rates of those suffering from lung cancer. However, many factors affect early diagnosis, including the variable natural growth of tumors plus technological and human factors associated with the manipulation of tissue samples and the interpretation of results. This article reviews potential problems associated with diagnosing lung cancer and considers future directions of diagnostic technology.