Abstract
Purpose
Significant effort has been made to support pre-service and novice teacher learning in the K-12 context. Less attention has been paid to promoting pre-service and novice second language teacher learning via collaboration with peers and more expert educators at the university level. In order to facilitate this type of teacher collaboration, a mentoring project was incorporated into the existing practicum of a Master of Arts in Teaching English as a Second Language (ESL) program at a US University. The purpose of this paper is to examine the nature of the mentoring experiences of four ESL mentor-pre-service teacher pairs in the US University context.
Design/methodology/approach
For this research project, eight teachers – four mentor-pre-service teacher pairs – participated as pairs in mentoring sessions focussed on activities such as co-planning, co-teaching, and co-reflecting on teaching. Informed by a sociocultural perspective on teacher learning (Vygotsky, 1978), this study presents case studies of all four pairs in order to demonstrate the complex nature of mentoring. The data analysis focussed on the content of the teachers’ interactions and their perceptions of the mentoring experience.
Findings
The study traced the developmental trajectories of the participating teachers over one 15-week academic semester. The study uncovered some critical contradictions that the participants encountered during the mentoring experience, thus pointing to its complexity. The study also uncovered the varied nature of mentoring: whereas in one pair the mentor acted as a more expert other (Vygotsky, 1978), in another pair, the mentoring relationship was more reciprocal.
Practical implications
This study showed that pre-service teachers can develop further through mentoring. Such mentoring can help teachers gain confidence and share teaching strategies. At the same time, the study revealed certain challenges associated with introducing a mentoring project in a pre-service teacher practicum. It is recommended that program faculty as a whole read the rich dialogues produced by participating teachers engaged in relationships focussed on collaborative learning, thereby discovering a foundation for revisions that go beyond individual teaching practices to the programmatic level.
Originality/value
This study’s principal contribution to the field is that it showcases the complex nature of mentoring experiences and the ways in which they differ from each other.
Jayalaxmi Anem, G. Sateeshkumar and R. Madhu
Abstract
Purpose
The main aim of this paper is to design a technique for improving the quality of EEG signals by removing artefacts introduced during acquisition. Initially, pre-processing is performed on the EEG signal for quality improvement. Then, features are extracted using the wavelet transform (WT). The artefacts present in the EEG are removed using a deep convLSTM, which is trained by the proposed fractional-calculus-based flower pollination optimisation algorithm.
Design/methodology/approach
Nowadays, EEG signals play a vital role in neurophysiological research. Human brain activity can be analysed using EEG signals. These signals are frequently affected by noise during acquisition and by other external disturbances, which degrade the signal quality. Denoising of EEG signals is therefore necessary for their effective use in any application. This paper proposes a new technique, the flower pollination fractional calculus optimisation (FPFCO) algorithm, for the removal of artefacts from EEG signals through a deep learning scheme. The FPFCO algorithm integrates flower pollination optimisation (FPO) and fractional calculus (FC), taking advantage of both, and is used to train the deep convLSTM. The existing FPO algorithm updates solutions through global and local pollination; the FC method additionally incorporates past solutions by including a second-order derivative term. As a result, the proposed FPFCO algorithm approaches the best solution faster than the existing FPO method. Initially, five EEG signals are contaminated with artefacts such as EMG, EOG, ECG and random noise. These contaminated EEG signals are pre-processed to remove baseline and power-line noise. Features are then extracted using the WT and applied to the deep convLSTM, which is trained by the proposed fractional-calculus-based flower pollination optimisation algorithm for the effective removal of artefacts from the EEG signal. The proposed technique is compared with existing techniques in terms of SNR and MSE.
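The solution update described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the fractional order `alpha`, the memory coefficients and the single-population local-pollination shortcut are all assumptions, combining a standard FPO move (Lévy-flight global pollination, random local pollination) with a fractional-calculus memory term that blends in the previous position.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Levy-flight step (Mantegna's method), as used in standard FPO."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def fpfco_update(x_t, x_prev, g_best, p_switch=0.8, alpha=0.5, rng=None):
    """One hypothetical FPFCO position update.

    Global or local pollination (chosen with probability p_switch) plus a
    fractional-calculus memory term with Grunwald-Letnikov-style coefficients
    that weights the current and previous solutions.
    """
    rng = rng if rng is not None else np.random.default_rng()
    if rng.random() < p_switch:
        # Global pollination: Levy flight toward the best solution found so far.
        step = levy_step(x_t.size, rng=rng) * (g_best - x_t)
    else:
        # Local pollination: random perturbation (here a shuffled copy of x_t,
        # standing in for the difference of two random flowers).
        step = rng.random() * (rng.permutation(x_t) - x_t)
    # Fractional memory: include past solutions, as the FC extension intends.
    memory = alpha * x_t + 0.5 * alpha * (1 - alpha) * x_prev
    return memory + step
```

The memory term is what distinguishes this sketch from plain FPO: the next position depends not only on the current flower but also on where it was one iteration earlier.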
Findings
The proposed technique is compared with existing techniques in terms of SNR, RMSE and MSE.
Originality/value
100%.
Marc Wouters, Susana Morales, Sven Grollmuss and Michael Scheer
Abstract
Purpose
The paper provides an overview of research published in the innovation and operations management (IOM) literature on 15 methods for cost management in new product development, and it provides a comparison to an earlier review of the management accounting (MA) literature (Wouters & Morales, 2014).
Methodology/approach
This structured literature search covers papers published in 23 journals in IOM in the period 1990–2014.
Findings
The search yielded a sample of 208 unique papers with 275 results (one paper could refer to multiple cost management methods). The top three methods are modular design, component commonality, and product platforms, which together account for 115 results (42%). In the MA literature, these three methods accounted for 29%, but target costing was by far the most researched cost management method (26%). Simulation is the most frequently used research method in the IOM literature, whereas it was used only moderately in the MA literature; qualitative studies were the most frequently used research method in the MA literature, whereas they were used only moderately in the IOM literature. We found many papers presenting practical approaches or decision models as further developments of a particular cost management method, which is a clear difference from the MA literature.
Research limitations/implications
This review focused on the same cost management methods as the earlier MA review; future research could also consider other cost management methods that are likely to be more important in the IOM literature than in the MA literature. Future research could also investigate innovative cost management practices in more detail through longitudinal case studies.
Originality/value
This review of research on methods for cost management published outside the MA literature provides an overview for MA researchers. It highlights key differences between both literatures in their research of the same cost management methods.
Thankachan T. Pullan, M. Bhasi and G. Madhu
Abstract
Purpose
The purpose of this paper is to address the capture and documentation of the essential design for manufacture (DFM) information needed to make design decisions. Essential manufacturing information is that which can affect the fulfilment of functional requirements and product constraints. The hierarchical structures of the main components of the open architecture, namely the process planning model (PPM), manufacturing activity model (MAM) and manufacturing resource model (MRM), are discussed. The aim of the approach is to define manufacturing knowledge structures and develop a knowledge-based application for DFM.
Design/methodology/approach
This work addresses the capture and documentation of essential DFM pieces of information to make design decisions. Essential manufacturing information is that which can affect the fulfilment of functional requirements and product constraints. The hierarchical structure of the main components for the open architecture‐PPM, MAM and MRM are discussed. The aim of the approach is to define manufacturing knowledge structures and develop a knowledge‐based application for DFM.
Findings
This paper gives details of the application framework development by integrating object‐oriented technology and component‐based development. This will help to achieve large‐scale software reuse for manufacturing application development projects. This paper also gives an overview of a computer system for automated concurrent engineering, and more particularly, to a method for the concurrent design of parts, tools and processes.
Originality/value
The workability of this approach was tested in a machine‐tool manufacturing firm and the same has been presented as a case.
Jason Ellis, Mark Cropley and Sarah Hampson
Abstract
Although ageing itself does not lead to insomnia, changes in sleep architecture (the ‘typical’ physiological progression from wakefulness to deep sleep) and health status create a vulnerability to the development of insomnia, which can be precipitated by a trigger event. This review highlights some of the problems associated with insomnia in older people and offers insights into possible approaches to stop insomnia from becoming a ‘rite of passage’. The main conclusion from this review, however, is that sleep research focusing specifically on the ageing population is badly needed, alongside a unified diagnostic system and research structure (Leger, 2000). These findings are also discussed in relation to both healthcare policy and practice.
Kejia Chen, Ping Chen, Lixi Yang and Lian Jin
Abstract
Purpose
The purpose of this paper is to propose a grey clustering evaluation model based on analytic hierarchy process (AHP) and interval grey number (IGN) to solve the clustering evaluation problem with IGNs.
Design/methodology/approach
First, the centre-point triangular whitenisation weight function with real numbers is built; then, using the interval mean function, the whitenisation weight function is extended to IGNs. The weights of the evaluation indexes are determined by AHP. Finally, the model is used to evaluate the flight safety of a Chinese airline. The results indicate that the model is effective and reasonable.
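A minimal sketch of the two building blocks named above: a centre-point triangular whitenisation weight function over real numbers, extended to an interval grey number by evaluating at its interval mean. The function names and the choice of the midpoint as the interval mean are assumptions for illustration, not the authors' formulation.

```python
def triangular_weight(x, c_prev, c, c_next):
    """Centre-point triangular whitenisation weight of a crisp value x
    for the grey class centred at c, with neighbouring centres c_prev and c_next.
    Rises linearly from c_prev to c, falls linearly from c to c_next."""
    if c_prev <= x <= c:
        return (x - c_prev) / (c - c_prev)
    if c < x <= c_next:
        return (c_next - x) / (c_next - c)
    return 0.0

def ign_weight(lower, upper, c_prev, c, c_next):
    """Extend the weight function to an interval grey number [lower, upper]
    by evaluating at the interval mean (here, the midpoint)."""
    return triangular_weight((lower + upper) / 2, c_prev, c, c_next)
```

For example, with class centres at 0, 5 and 10, the IGN [4, 6] has interval mean 5 and therefore full membership in the middle grey class.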
Findings
When the IGN meets certain conditions, the centre-point triangular whitenisation weight function based on the IGN does not exhibit multiple crossings and is normative. This provides a standard and basis for obtaining effective evaluation indexes and scientifically determining the grey class.
Originality/value
The traditional grey clustering model is extended to the field of IGNs. The model can make full use of all the information in the IGN, so the result of the evaluation is more objective and reasonable, which provides support for solving practical problems.
Quratulain Mohtashim, Muriel Rigout and Sheraz Hussain Siddique Hussain Yousfani
Abstract
Purpose
Sulphur dyes provide an inexpensive medium to dye cellulosic fibres with heavy shade depths. They offer moderate to good fastness to light and wet treatments. However, owing to the environmental hazard produced by the use of sodium sulphide, the practical application of these dyes is steadily decreasing. Moreover, these dyes are prone to oxidation, causing pronounced fading on exposure to laundering. This paper aims to present the green processing of sulphur dyes by using a biodegradable reducing agent in place of sodium sulphide to dye cotton fabrics. The study also proposes after-treatments with tannin to improve the fastness properties of the dyeings.
Design/methodology/approach
In this study, dyeings were produced on cotton fabric with a range of C.I. Leuco Sulphur dyes, which were reduced with sodium sulphide and glucose. Sulphur dyeings were after-treated with an environment-friendly tannin-based product (Bayprotect CL (BP)); subsequently, the after-treated samples were evaluated for colour strength, wash, light and rubbing fastness.
Findings
A novel after-treatment method was developed, which substantially improved the wash fastness of C.I. Leuco Sulphur Black 1 dyeing to ISO 105 C06/C09 washing. However, the degree of this improvement varied for the other sulphur dyes used. The surface morphology and the possible mechanisms for the improved fastness properties were also discussed.
Research limitations/implications
The effect of after-treatment was significant for improving the wash fastness of sulphur black dyeings in particular, while the effect on other colours was minor. Significant improvements were observed for light and wet rub fastness for most of the dyeings, which signifies the importance of tannins as a finishing agent.
Practical implications
The tannin-based product BP is found to provide a photoprotective effect by improving the lightfastness of the dyeings. Future research may explore various tannins as finishing agents for sulphur dyeings.
Originality/value
This novel finishing technique is found to significantly improve the wash fastness of sulphur black 1 dyeings for both reducing systems. Improvements were also observed in light and wet rub fastness for most of the dyeings.
David Martín-Moncunill, Miguel-Ángel Sicilia-Urban, Elena García-Barriocanal and Salvador Sánchez-Alonso
Abstract
Purpose
Large terminologies usually contain a mix of terms that are either generic or domain specific, which makes the use of the terminology itself a difficult task that may limit the positive effects of these systems. The purpose of this paper is to systematically evaluate the degree of domain specificity of the AGROVOC controlled vocabulary terms as a representative of a large terminology in the agricultural domain and discuss the generic/specific boundaries across its hierarchy.
Design/methodology/approach
A user-oriented study with domain experts in conjunction with quantitative and systematic analysis. First, an in-depth analysis of AGROVOC was carried out to make a proper selection of terms for the experiment. Then domain experts were asked to classify the terms according to their domain specificity. An evaluation was conducted to analyse the domain experts’ results. Finally, the resulting data set was automatically compared with the terms in SUMO, an upper ontology, and MILO, a mid-level ontology, to analyse the coincidences.
Findings
Results show the existence of a high number of generic terms. The motivation for several of the unclear cases is also discussed. The automatic evaluation showed that there is no direct way to assess the specificity degree of a term using the SUMO and MILO ontologies; however, it provided additional validation of the results gathered from the domain experts.
Research limitations/implications
The “domain-analysis” concept has long been discussed and can be addressed from different perspectives. A summary of these perspectives and an explanation of the approach followed in this experiment are included in the background section.
Originality/value
The authors propose an approach to identify the domain specificity of terms in large domain-specific terminologies and a criterion to measure the overall domain specificity of a knowledge organisation system, based on domain-experts analysis. The authors also provide a first insight about using automated measures to determine the degree to which a given term can be considered domain specific. The resulting data set from the domain-experts’ evaluation can be reused as a gold standard for further research about these automatic measures.
Tressy Thomas and Enayat Rajabi
Abstract
Purpose
The primary aim of this study is to review the studies along different dimensions, including the type of methods, experimentation setup and evaluation metrics used in the novel approaches proposed for data imputation, particularly in the machine learning (ML) area. This ultimately provides an understanding of how well the proposed frameworks are evaluated and what types and ratios of missingness are addressed in the proposals. The review questions in this study are: (1) What ML-based imputation methods were studied and proposed during 2010–2020? (2) How are the experimentation setup, characteristics of the data sets and missingness employed in these studies? (3) What metrics were used for the evaluation of the imputation methods?
Design/methodology/approach
The review process went through the standard identification, screening and selection process. The initial search of electronic databases for missing value imputation (MVI) based on ML algorithms returned a large number of papers, totalling 2,883. Most of the papers at this stage did not describe an MVI technique relevant to this study. The papers were first screened by title for relevance, and 306 were identified as appropriate. Upon reviewing the abstracts, 151 papers not eligible for this study were dropped. This resulted in 155 research papers suitable for full-text review, from which 117 papers were used in the assessment of the review questions.
Findings
This study shows that clustering- and instance-based algorithms are the most frequently proposed MVI methods. Percentage of correct prediction (PCP) and root mean square error (RMSE) are the most used evaluation metrics in these studies. For experimentation, the majority of the studies sourced their data sets from publicly available repositories. A common approach is to treat the complete data set as the baseline and evaluate the effectiveness of imputation on test data sets with artificially induced missingness. The data set size and missingness ratio varied across the experimentations, while the missing datatype and mechanism pertain to the capability of the imputation. Computational expense is a concern, and experimentation using large data sets appears to be a challenge.
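The common evaluation protocol described here, taking a complete data set as the baseline, inducing missingness artificially, imputing, then scoring with RMSE, can be sketched as follows. This is a generic illustration, not any reviewed paper's method; the simple nearest-complete-rows kNN imputer and the function names are assumptions.

```python
import numpy as np

def knn_impute(X, k=3):
    """Naive kNN imputation: for each row with missing entries, average the
    values of the k nearest fully observed rows (Euclidean distance computed
    on the columns that row does observe)."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]  # copy of fully observed rows
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.sqrt(((complete[:, ~miss] - row[~miss]) ** 2).sum(axis=1))
        nearest = complete[np.argsort(d)[:k]]
        X[i, miss] = nearest[:, miss].mean(axis=0)
    return X

def rmse(true, imputed, mask):
    """RMSE over only the artificially masked cells."""
    return np.sqrt(((true[mask] - imputed[mask]) ** 2).mean())

# Protocol: start from a complete baseline, induce missingness, impute, score.
rng = np.random.default_rng(42)
baseline = rng.normal(size=(100, 4))
mask = rng.random(baseline.shape) < 0.05   # ~5% cells missing completely at random
X = baseline.copy()
X[mask] = np.nan
score = rmse(baseline, knn_impute(X, k=5), mask)
```

Varying the masking ratio and mechanism (here: missing completely at random) is exactly the experimental dimension the review tracks across studies.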
Originality/value
It is understood from the review that there is no single universal solution to the missing data problem. Variants of ML approaches work well with particular kinds of missingness, depending on the characteristics of the data set. Most of the methods reviewed lack generalisability with regard to applicability. Another concern related to applicability is the complexity of the formulation and implementation of an algorithm. Imputations based on k-nearest neighbours (kNN) and clustering algorithms, which are simple and easy to implement, are popular across various domains.
M. Anil Ramesh and Madhusudan Kumar Kota
Abstract
COCO TANG India is an innovation-driven company. It takes inspiration from the humble coconut water that all of us are very familiar with and have drunk since childhood. The founders of the company, Dr Neelima, a dentist by profession, and her husband Chaitanya, a pharmacist, hit upon the idea of a coconut-based drink quite by accident.
When Dr Neelima was pregnant with her first child, her doctor advised her to take fresh coconut water. It was then that Dr Neelima discovered that fresh coconut water is a paradox: the water from a tender coconut is supposed to be fresh but in many cases is not as fresh as it should be. Coconuts are harvested from remote farms in Andhra Pradesh and sent to Hyderabad. To top it all, the nutritional value of a coconut past its ideal window of consumption leaves a lot to be desired. The price factor too is a dampener: it costs Rs. 25 to have tender coconut water in a metropolis like Hyderabad.
Dr Neelima and her husband developed the product idea from their search for a nutritious, healthy drink: fresh, tender coconut pulp-based shakes, packed with nutrition, taste and health benefits, that at the same time make an aspirational product for the young, bubbly and restless youth of India.
This case deals with the problems, trials and tribulations that these young first-time entrepreneurs faced and details the marketing efforts the young company is putting in to survive in the dog-eat-dog world of the fruit drink industry.
The case details the specific marketing-related problems the company faces and examines what the promoters are doing to overcome them, specifically in relation to the four Ps: product, price, place and promotion. It looks in depth at the innovative marketing practices that COCO TANG India is deploying, including the use of social media, which enabled COCO TANG India’s founder to win the Junior Chamber International – Business Excellence Award for 2017–2018.
COCO TANG India is also the recipient of the Telugu book of records ‘certificate of national record’ as being the first brand to introduce Tender Coconut-based Mocktails and Milkshakes (A1).