Abstract
Purpose
In “Can the subaltern speak?,” Gayatri Chakravorty Spivak makes the important distinction between representation as “Vertretung” and “Darstellung.” She also produces a strong account of whom she regards as the subaltern woman. Thirty years on, both the distinction between “Vertretung” and “Darstellung” and the question of who the subaltern woman is remain extremely important, not least in methodological considerations in cross-cultural contexts. A number of questions may be asked in relation to representation, such as: how distinct are its two meanings in the interviewing context? And how do they relate to the notion of the co-production of knowledge, which has gained such traction in the past three decades? The paper aims to discuss these issues.
Design/methodology/approach
In this paper, I draw on cross-cultural interviewing experiences. Starting from the silence of illiterate rural women in a study conducted in Madhya Pradesh, India, in 2011 (Mohanraj), the paper draws on the research experiences of the author and a number of projects reported in Cross-Cultural Interviewing (Griffin, 2016) to elucidate how one might re-think both representation and subalternity in the contemporary globalized context.
Findings
The experiences of cross-cultural interviewing I draw on in this paper show that in the contemporary context subalternity may be more productively understood as a continuum rather than as the radical state of unreachable, unspeaking alterity that Spivak proposes.
Originality/value
The paper contributes new perspectives on Spivak’s notion of the unspeaking alterity of the subaltern in light of globalized developments over the past 30 years and specific experiences of cross-cultural interviewing, as these comment on Spivak’s insights.
Ifadhila Affia and Ammar Aamer
Abstract
Purpose
Real-time visibility and traceability in warehousing can be accomplished by implementing internet-of-things (IoT) technology. The purpose of this paper is to develop a roadmap for designing an IoT-based smart warehouse infrastructure and then to design and apply that infrastructure using the developed roadmap. More specifically, this study first identifies the critical components of an IoT-based smart warehouse infrastructure. Second, it identifies essential factors that contribute to the successful implementation of such an infrastructure.
Design/methodology/approach
A qualitative-descriptive method, based on a comprehensive review of the relevant studies, was used to develop the roadmap. A prototype system was then designed to simulate the actual warehouse operations of a case company, a manufacturing firm in Indonesia.
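The abstract does not detail the prototype, but the core idea of IoT-enabled visibility and traceability can be illustrated with a minimal sketch: tag-scan events update both a current-location view (visibility) and a movement log (traceability). All names here (ScanEvent, Warehouse, the location codes) are hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' prototype): RFID-style scan events
# updating an in-memory inventory view for real-time visibility/traceability.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ScanEvent:
    tag_id: str        # unique ID on the item's RFID/barcode tag (hypothetical)
    location: str      # reader location, e.g. "RECEIVING", "RACK-A3"
    timestamp: datetime

class Warehouse:
    def __init__(self):
        self.current = {}   # tag_id -> latest location (real-time visibility)
        self.history = {}   # tag_id -> [(timestamp, location)] (traceability)

    def ingest(self, event: ScanEvent):
        self.current[event.tag_id] = event.location
        self.history.setdefault(event.tag_id, []).append(
            (event.timestamp, event.location))

wh = Warehouse()
wh.ingest(ScanEvent("PAL-0001", "RECEIVING", datetime.now()))
wh.ingest(ScanEvent("PAL-0001", "RACK-A3", datetime.now()))
print(wh.current["PAL-0001"])   # current location: RACK-A3
print(wh.history["PAL-0001"])   # full movement trace
```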
Findings
A framework viable for designing an IoT-based smart warehouse infrastructure was proposed. Based on the data collected from the case company, the proposed smart warehouse infrastructure design successfully delivered real-time visibility and traceability and improved overall warehouse efficiency.
Research limitations/implications
While the framework in this research was applied in only one developing country, the study can serve as a basis for future research on smart warehouses, IoT and related topics.
Originality/value
This research adds to the limited knowledge on establishing IoT infrastructure for a smart warehouse that enables real-time visibility and traceability. This study is also the first to specifically propose a framework for designing an IoT-based smart warehouse infrastructure. The proposed framework can motivate companies in developing countries to deploy efficient and effective IoT-based smart warehouses and thereby drive those countries' economic growth.
Monojit Das, V.N.A. Naikan and Subhash Chandra Panja
Abstract
Purpose
The aim of this paper is to review the literature on the prediction of cutting tool life. Tool life is typically estimated by predicting the time to reach the threshold flank wear width. The cutting tool is a crucial component in any machining process, and its failure adversely affects the manufacturing process. Predicting cutting tool life while accounting for the several factors that affect it is crucial to managing quality, cost, availability and waste in machining processes.
Design/methodology/approach
This study critically analyses and summarises the various techniques used in the literature for predicting the life or remaining useful life (RUL) of the cutting tool by monitoring tool wear, primarily flank wear. The review covers experimental setups spanning diverse machining processes, including turning, milling, drilling, boring and slotting.
Findings
Cutting tool life is a stochastic variable. Tool failure depends on various factors, including the type and material of the cutting tool, work material, cutting conditions and machine tool. Thus, the life of the cutting tool for a particular experimental setup must be modelled by considering the cutting parameters.
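As a worked illustration of modelling tool life from cutting parameters, a classical baseline is Taylor's tool life equation, V·T^n = C, which can be fitted by log-linear regression. The reviewed studies may use richer models (feed, depth of cut, ML-based RUL estimators), and the data below are synthetic.

```python
# Sketch: fitting Taylor's tool life equation V * T**n = C to (speed, life)
# data by log-linear least squares. The data are synthetic, for illustration.
import numpy as np

V = np.array([100.0, 150.0, 200.0, 250.0])  # cutting speed, m/min
T = np.array([40.0, 18.0, 10.0, 6.5])       # tool life to threshold flank wear, min

# ln V = ln C - n ln T  =>  regress ln V on ln T
slope, intercept = np.polyfit(np.log(T), np.log(V), 1)
n, C = -slope, np.exp(intercept)
print(f"n = {n:.3f}, C = {C:.1f}")

# Predicted life at a new speed: T = (C / V)**(1/n)
print(f"predicted life at 180 m/min: {(C / 180.0) ** (1.0 / n):.1f} min")
```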
Originality/value
This paper discusses tool life prediction comprehensively, from monitoring tool wear, primarily flank wear, to modelling tool life; a comprehensive review of this kind on cutting tool life prediction has not previously been reported in the literature. The suggestions for future work provided in this review are expected to open avenues for addressing the unexplored challenges in this field.
R.V. Mahendra Gowda and S. Mohanraj
Abstract
Textile technologists have long recognized the role of fibre friction in various textile processes. Attempts have been made to develop a novel instrument system to quantify fibre friction and relate it to fibre processing, yarn quality and, ultimately, fabric handle. In this context, the current paper presents research work carried out in designing and developing a novel system to measure friction in various textile fibre assemblies. The paper discusses the novel features of the Computer Aided Friction Tester, designed and developed exclusively for characterising friction in fibres, sheets of yarn, fabrics, nonwovens, polymeric films, composites and other technical textiles. It also highlights the frictional characteristics measured in various textile fibre assemblies and the reasons for their occurrence.
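The abstract does not specify the tester's working equation; one standard way to quantify friction in fibre and yarn assemblies is the capstan relation T_out = T_in·e^(μθ), from which the friction coefficient follows. A minimal sketch with assumed tension readings, not necessarily the method implemented in the Computer Aided Friction Tester:

```python
# Sketch: estimating a coefficient of friction from capstan-type measurements,
# T_out = T_in * exp(mu * theta). Generic method with assumed values.
import math

def capstan_mu(t_in: float, t_out: float, wrap_angle_rad: float) -> float:
    """Friction coefficient from input/output tensions over a friction pin."""
    return math.log(t_out / t_in) / wrap_angle_rad

mu = capstan_mu(t_in=10.0, t_out=16.0, wrap_angle_rad=math.pi)  # 180 deg wrap
print(f"mu = {mu:.3f}")
```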
Dipankar Chatterjee and Suman Chakraborty
Abstract
Purpose
The purpose of this paper is to carry out a systematic energy analysis for predicting the first and second law efficiencies and the entropy generation during a laser surface alloying (LSA) process.
Design/methodology/approach
A three-dimensional transient macroscopic numerical model is developed to describe the turbulent transport phenomena during a typical LSA process; subsequently, an energy analysis is carried out to predict the entropy generation as well as the first and second law efficiencies. A modified k–ε model is used to address turbulent molten metal-pool convection. The phase change aspects are addressed using a modified enthalpy-porosity technique. A kinetic theory approach is adopted for modelling evaporation from the top surface of the molten pool.
Findings
It is found that heat transfer due to the strong temperature gradient is mainly responsible for the irreversible degradation of energy in the form of entropy production, while flow and mass transfer effects are less important for this type of phase change problem. The first and second law efficiencies are found to increase with effective heat input and to remain independent of the powder feed rate. With increasing scanning speed, the first law efficiency increases whereas the second law efficiency decreases.
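To make the dominant term concrete: the local volumetric entropy generation rate due to heat conduction is S‴_gen = k|∇T|²/T², which is the contribution the abstract identifies as the main one. A minimal numerical sketch on a synthetic 1-D temperature field; the paper's 3-D turbulent melt-pool model is not reproduced here.

```python
# Sketch: local entropy generation rate from heat conduction,
# S_gen''' = k * |grad T|**2 / T**2, on a synthetic 1-D field. Viscous and
# mass-transfer terms are omitted, consistent with the reported finding
# that they matter less for this phase change problem.
import numpy as np

k = 30.0                                  # thermal conductivity, W/(m K), assumed
x = np.linspace(0.0, 1e-3, 101)           # 1 mm domain
T = 300.0 + 2500.0 * (1.0 - x / x[-1])    # steep, melt-pool-like gradient (K)

dTdx = np.gradient(T, x)
s_gen = k * dTdx**2 / T**2                # W/(m^3 K)
print(f"entropy generation per unit area: {np.trapz(s_gen, x):.3e} W/(m^2 K)")
```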
Research limitations/implications
The model does not account for undulations of the top surface, which is a reasonable approximation.
Practical implications
The results obtained will eventually lead to an optimized estimation of laser parameters (such as laser power, scanning speed, etc.), which in turn improves the process control and reduces the cost substantially.
Originality/value
This paper provides essential information for modelling solid–liquid phase transition as well as a systematic analysis for entropy generation prediction.
Katerina Kassela, Marina Papalexi and David Bamford
Abstract
Purpose
The purpose of this paper is to focus on the application of quality function deployment (QFD) in a Housing Association (HA) located in the UK. Faced with the problem of improving a company’s performance, practitioners and academics have fashioned and applied a variety of models, theories and techniques.
Design/methodology/approach
The research questions were developed from a review of the quality and process improvement literature and tested using evidence from field-based action research within a UK HA company. The case study provides insight into the benefits and challenges arising from the application of QFD.
Findings
The results provided insight into the benefits and challenges arising from the application of a specific tool, QFD. The primary findings were that QFD can be successfully adapted, applied and utilised within the challenging environment of social housing and in other sectors, such as professional services; that the model can be modified for use with most processes/sub-processes; and that, to be useful, it must include both external and internal requirements and employ suitably detailed process parameters.
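For context, the core QFD calculation underlying such an application weights each engineering characteristic (EC) by customer requirement (CR) importance and CR–EC relationship strength, commonly on a 9/3/1 scale. A minimal sketch with hypothetical housing-related entries, not the HA case data:

```python
# Sketch of the core QFD calculation: EC priorities as importance-weighted
# sums of CR-EC relationship strengths (usual 9/3/1 scale). All entries
# below are hypothetical examples.
import numpy as np

cr_importance = np.array([5, 3, 4])       # e.g. repair speed, cost, communication
# rows: CRs, cols: ECs (e.g. response-time target, triage process, tenant portal)
relationship = np.array([
    [9, 3, 1],
    [3, 9, 0],
    [1, 3, 9],
])
ec_priority = cr_importance @ relationship
print(ec_priority)                         # -> [58 54 41]
```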
Practical implications
The conclusions drawn add to ongoing commentaries on aspects of quality improvement, especially the application of QFD within the service sector. The authors develop questions for future research regarding improvement projects.
Originality/value
The conclusion proposes that the implementation of QFD, if approached in the right manner, should have a positive impact upon a company. It provides a useful mechanism for developing an evidence-based strategy of operational change, control and improvement. The research proposes questions for future research into aspects of operational quality and efficiency.
Amanda de Oliveira e Silva, Alice Leonel, Maisa Tonon Bitti Perazzini and Hugo Perazzini
Abstract
Purpose
Brewer's spent grain (BSG) is the main by-product of the brewing industry, holding significant potential for biomass applications. The purpose of this paper was to determine the effective thermal conductivity (keff) of BSG and to develop an Artificial Neural Network (ANN) to predict keff, since this property is fundamental in the design and optimization of the thermochemical conversion processes toward the feasibility of bioenergy production.
Design/methodology/approach
The experimental determination of keff as a function of BSG particle diameter and heating rate was performed using the line heat source method. The resulting values were used as a database for training the ANN and testing five multiple linear regression models to predict keff under different conditions.
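The working equation of the line heat source (hot-wire) method infers keff from the slope of the temperature rise versus the logarithm of time: keff = (q/4π)·ln(t2/t1)/(T2 − T1), with q the heating power per unit probe length. A minimal sketch with assumed readings, not the paper's data:

```python
# Sketch of the line heat source working equation for k_eff:
# k = (q / (4*pi)) * ln(t2/t1) / (T2 - T1). Readings below are assumed.
import math

def k_eff_line_source(q_per_len, t1, t2, T1, T2):
    """Effective thermal conductivity from two points on the T vs ln(t) line."""
    return (q_per_len / (4.0 * math.pi)) * math.log(t2 / t1) / (T2 - T1)

k = k_eff_line_source(q_per_len=2.0, t1=10.0, t2=100.0, T1=25.0, T2=28.5)
print(f"k_eff = {k:.3f} W/(m K)")
```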
Findings
Experimental values of keff were in the range of 0.090–0.127 W m⁻¹ K⁻¹, typical for biomasses. The results showed that reducing the BSG particle diameter increases keff and that increasing the heating rate does not statistically affect this property. The developed neural model outperformed the multiple linear regression models, accurately predicting both the experimental values and new patterns not included in the training procedure.
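A small regression network of the kind described (inputs: particle diameter and heating rate; output: keff) can be sketched as follows. The architecture, scaling and tiny synthetic dataset are illustrative only and do not reproduce the authors' network or training details.

```python
# Sketch of an ANN mapping (particle diameter, heating rate) -> k_eff.
# Synthetic data mirroring the reported trend: smaller diameter -> higher
# k_eff; heating rate has little effect.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[0.5, 5], [0.5, 10], [1.0, 5], [1.0, 10], [2.0, 5], [2.0, 10]])
y = np.array([0.127, 0.125, 0.110, 0.108, 0.092, 0.090])  # k_eff, W/(m K)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[1.5, 7.5]]))  # a new pattern not seen in training
```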
Originality/value
The empirical correlations and the developed ANN can be utilized in future work. This research also discusses the practical implications of the results for biomass valorization, a subject that is scarcely addressed in the literature; no prior studies on the keff of BSG were found.
Rudolf Espada, Armando Apan and Kevin McDougall
Abstract
Purpose
The purpose of this paper was to develop an integrated framework for assessing the flood risk and climate adaptation capacity of an urban area and its critical infrastructures to help address flood risk management issues and identify climate adaptation strategies.
Design/methodology/approach
Using the January 2011 flood in the core suburbs of Brisbane City, Queensland, Australia, various spatial analytical tools (i.e. digital elevation modeling and urban morphological characterization with 3D analysis, spatial analysis with fuzzy logic, proximity analysis, line statistics, quadrat analysis, collect events analysis, spatial autocorrelation techniques with global and local Moran’s I, the inverse distance weighting method, and hot spot analysis) were implemented to transform and standardize hazard, vulnerability and exposure indicating variables. The issue of the sufficiency of indicating variables was addressed using the topological cluster analysis of a two-dimensional self-organizing neural network (SONN) structured with 100 neurons and trained over 200 epochs. Furthermore, the suitability of flood risk modeling was addressed by aggregating the indicating variables with weighted overlay and modified fuzzy gamma overlay operations using Bayesian joint conditional probability weights. Variable weights were assigned to address the limitations of normative (equal weights) and deductive (expert judgment) approaches. Applying geographic information system (GIS) tools and appropriate equations, the flood risk and climate adaptation capacity indices of the study area were calculated and the corresponding maps generated.
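The standard fuzzy gamma overlay referred to combines layer memberships as (fuzzy algebraic sum)^γ × (fuzzy algebraic product)^(1−γ). A minimal sketch on toy raster layers follows; the paper's Bayesian joint conditional probability weights and its specific modification are not reproduced.

```python
# Sketch of the standard fuzzy gamma overlay used in GIS risk mapping:
# mu = (fuzzy algebraic sum)**gamma * (fuzzy algebraic product)**(1 - gamma).
# Toy 2x2 "raster" layers with memberships in [0, 1].
import numpy as np

def fuzzy_gamma(layers, gamma=0.9):
    layers = np.stack(layers)                    # shape: (n_layers, rows, cols)
    fuzzy_product = np.prod(layers, axis=0)
    fuzzy_sum = 1.0 - np.prod(1.0 - layers, axis=0)
    return fuzzy_sum**gamma * fuzzy_product**(1.0 - gamma)

hazard = np.array([[0.9, 0.4], [0.7, 0.2]])
vulnerability = np.array([[0.8, 0.5], [0.3, 0.6]])
exposure = np.array([[0.7, 0.7], [0.5, 0.1]])

print(fuzzy_gamma([hazard, vulnerability, exposure]))
```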
Findings
The analyses showed that, on average, 36 per cent (approximately 813 ha) and 14 per cent (approximately 316 ha) of the study area were exposed to very high flood risk and low adaptation capacity, respectively. In total, 93 per cent of the study area revealed negative adaptation capacity metrics (i.e. a minimum of −23 to <0), which implies that the socio-economic resources in the area are not enough to increase the climate resilience of the urban community (i.e. Brisbane City) and its critical infrastructures.
Research limitations/implications
While the framework in this study was obtained through a robust approach, the following limitations are recommended for further examination: analyzing and incorporating the impacts of economic growth, population growth, technological advancement, climate and environmental disturbances, and climate change; and applying the framework to assessing the risks to natural environments such as agricultural areas, forest protection and production areas, biodiversity conservation areas, natural heritage sites, watersheds or river basins, parks and recreation areas, and coastal regions.
Practical implications
This study provides a tool for high-level analyses and identifies adaptation strategies to enable urban communities and critical infrastructure industries to better prepare for and mitigate future flood events. The disaster risk reduction measures and climate adaptation strategies identified in this study to increase urban community and critical infrastructure resilience include: mitigation in areas of low flood risk or very high climate adaptation capacity; mitigation to preparedness in areas of moderate flood risk and high climate adaptation capacity; mitigation to response in areas of high flood risk and moderate climate adaptation capacity; and mitigation to recovery in areas of very high flood risk and low climate adaptation capacity. The implications of integrating disaster risk reduction and climate adaptation strategies were further examined.
Originality/value
The newly developed spatially explicit analytical technique, identified in this study as the Flood Risk-Adaptation Capacity Index-Adaptation Strategies (FRACIAS) Linkage/Integrated Model, allows the integration of flood risk and climate adaptation assessments which had been treated separately in the past. By applying the FRACIAS linkage/integrated model in the context of flood risk and climate adaptation capacity assessments, the authors established a framework for enhancing measures and adaptation strategies to increase urban community and critical infrastructure resilience to flood risk and climate-related events.
Abstract
Purpose
The purpose of this paper is twofold: to incorporate the symbolic relationships among the attributes of customer requirements (CRs) and engineering characteristics (ECs), and to factor in their values numerically, so as to enhance the prioritization process for an improved, comprehensive quality function deployment (QFD) analysis. The aim is to assimilate and factor in the often-ignored interrelationships among CRs and ECs, utilizing the weighted average method for the CR and EC correlations in the overall calculations.
Design/methodology/approach
After a brief literature review of the methods utilized, the research paper discusses the framework for the correlation triangle challenge and introduces a novel mathematical solution utilizing triangle values in conjunction with computed initial raw weights for CRs and initial priority scores for ECs. The capability and applicability of the proposed model are demonstrated with a real-life example.
Findings
Through the proposed technique, the signs and symbols of the roof and the interrelationship triangle are translated into numerical values for each permutation of ECs and CRs, and the prioritization values are then processed and finalized. The proposed model successfully modifies an otherwise overlooked part of the QFD process and removes its vagueness.
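Since the abstract does not give the exact mapping, the following sketch shows one plausible scheme: roof symbols are mapped to signed numeric values and each EC's initial priority is adjusted by a weighted average over its correlated ECs. The symbol-to-value mapping and all data are hypothetical, not the paper's scheme.

```python
# Sketch: translating roof (EC-EC correlation) symbols into numbers and
# adjusting EC priorities with a weighted average over correlated ECs.
# All values are hypothetical.
import numpy as np

SYMBOL_VALUE = {"++": 1.0, "+": 0.5, "": 0.0, "-": -0.5, "--": -1.0}

roof = [["", "+", "--"],
        ["+", "", "++"],
        ["--", "++", ""]]
R = np.array([[SYMBOL_VALUE[s] for s in row] for row in roof])

initial = np.array([58.0, 54.0, 41.0])     # initial EC priority scores

# Each EC's score blended with its correlation-weighted peers
# (every EC here has at least one roof entry, so row sums are nonzero).
adjusted = initial + R @ initial / np.abs(R).sum(axis=1)
print(adjusted)
```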
Practical implications
The illustrated case study aptly proves that the proposed methodology yields more revealing and informative outcomes for engineers and designers, thus adding much-needed reliability to the outcome and its analysis. The validation conducted through the rank comparison endorses the premise, and the results obtained reflect the strength and accuracy of the progressive QFD as a product planning tool.
Originality/value
The research article proposes a fresh and unique QFD approach that solves typical procedural complications encountered in a regular QFD. Whereas the traditional methods neglect the interrelationships among CRs and ECs, this new methodology employs them in an improved, numerical way by incorporating them in quantitative analysis, which leads to judicious and improved decision-making.
Fatih Erdoğdu, Seyfullah Gökoğlu and Mehmet Kara
Abstract
Purpose
The current study aimed to develop and validate the Mobile Information Security Awareness Scale (MISAS), based on the prototype model for measuring information security awareness and the relevant literature.
Design/methodology/approach
The scale was developed and validated with the participation of 562 students from four universities. The construct validity of the scale was tested through exploratory factor analysis and confirmatory factor analysis.
Findings
The reliability of the scale was tested through corrected item-total correlations and Cronbach's alpha. The MISAS includes six factors and 17 items. The identified factors were labeled backup, instant messaging and navigation, password protection, update, access permission, and using others' devices.
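For reference, Cronbach's alpha is α = k/(k−1)·(1 − Σs_i²/s_T²), where k is the number of items, s_i² the item variances and s_T² the variance of the total score. A minimal sketch on a synthetic respondent-by-item matrix, not the MISAS data:

```python
# Sketch of the reliability statistic the authors report: Cronbach's alpha,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The 5-respondent, 4-item matrix is synthetic.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

scores = np.array([[4, 5, 4, 4],
                   [3, 3, 2, 3],
                   [5, 5, 5, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5]])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```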
Research limitations/implications
The scale included only the human aspects of mobile information security. The technical aspects are not within the scope of this study. For this reason, future studies might develop and validate a different scale focusing on the technical aspects of mobile information security.
Originality/value
The developed scale contributes to the literature on the human aspects of mobile information security.