Search results
Abstract
The madman Huhu in Peer Gynt suffered melancholia because ‘droves of people die misunderstood’, and his plan was to lift the restraints of speech altogether by reintroducing the ‘grin, growl, gibber and gape of the orang‐outang’. Even in rational, philosophical circles it has been suggested that knowledge is incommunicable. The problem before this Conference may be described as being how to communicate the, possibly, incommunicable so that we shall not die misunderstood.
Abstract
The origins of statistical information relating to the textile industry, as of statistics generally, are to be found mainly in the needs of Governments, which, especially in the field of export and import trade, have collected information from the earliest days; in the case of the United Kingdom, systematic trade statistics tabulated by goods and countries date from 1696.
Abstract
Although textile making is one of the oldest crafts, going back to prehistory—it is believed that weaving grew up in the Neolithic, or later Stone Age—our modern civilization is producing such rapid and numerous developments in so many aspects of the subject that the individual is hard put to keep up with even a fraction of them.
Abstract
The last of the London meetings for the winter session 1955–6 was held on 13th April, 1956, when Mr. C. N. Kington, Group Manager, British Iron and Steel Research Association, and Director of Research, Cutlery Research Council, spoke on the problem of helping small firms to make use of scientific research. Many of the steel‐using firms are too small to have information departments of their own and, moreover, have a strong tradition of craftsmanship which is often slow to appreciate the value of new techniques. Mr. Kington has had first‐hand experience of the special approach that is necessary if these firms are to be kept in touch with scientific progress. His paper is printed in full in this issue, together with an account of questions and answers in the discussion which followed. The Chair was taken by Dr. M. A. Vernon, of the Department of Scientific and Industrial Research, a branch of the Government that is particularly interested in this problem.
Abstract
Purpose
The purpose of this paper is to address the issue of optimal management of ecosystems by developing a dynamic model of strategic behavior by users/communities of an ecosystem such as a lake, which is subject to pollution resulting from the users. More specifically, it builds a model of two ecosystems that are spatially connected.
Design/methodology/approach
The paper uses the techniques of optimal control theory and game theory.
Findings
The paper uncovers sufficient conditions under which the analysis of the dynamic game can be converted to an optimization problem for a pseudo-authority. It is shown that if the discount rate on the future is high enough relative to the ecological self‐restoration parameters, then multiple stable states appear. In this case, if the pollution level is high enough, restoring the damaged system is too costly in terms of what must be given up today. Using computational methods, the paper evaluates the relative influence of lack of coordination, the strength of ecosystem self‐cleaning forces, the size of discount rates, and so on.
Originality/value
The methodology as well as findings can help to devise an optimal management strategy over time for ecosystems.
Abstract
Purpose
The focus of this paper is a survey of web‐mining research related to areas of societal benefit. The article aims to focus particularly on web mining which may benefit societal areas by extracting new knowledge, providing support for decision making and empowering the effective management of societal issues.
Design/methodology/approach
E‐commerce and e‐business are two fields that have been empowered by web mining, having many applications for increasing online sales and doing intelligent business. Have areas of social interest also been empowered by web mining applications? What are the current ongoing research and trends in e‐services fields such as e‐learning, e‐government, e‐politics and e‐democracy? What other areas of social interest can benefit from web mining? This work will try to provide the answers by reviewing the literature for the applications and methods applied to the above fields.
Findings
There is a growing interest in applications of web mining that are of social interest. This reveals that one of the current trends of web mining is toward the connection between intelligent web services and societal benefit applications, which denotes the need for interdisciplinary collaboration between researchers from various fields.
Originality/value
On the one hand, this work presents to the web‐mining community an overview of research opportunities in societal benefit areas. On the other hand, it presents to web researchers from various disciplines an approach for improving their web studies by considering web mining as a powerful research tool.
Mayank Kumar Jha, Sanku Dey and Yogesh Mani Tripathi
Abstract
Purpose
The purpose of this paper is to estimate multicomponent reliability by assuming the unit-Gompertz (UG) distribution. Both stress and strength are assumed to follow a UG distribution with a common scale parameter.
Design/methodology/approach
The reliability of a multicomponent stress–strength system is obtained by maximum likelihood estimation (MLE) and the Bayesian method of estimation. Bayes estimates of system reliability are obtained by using Lindley’s approximation and the Metropolis–Hastings (M–H) algorithm when all the parameters are unknown. The highest posterior density (HPD) credible interval is obtained by using the M–H algorithm. In addition, the uniformly minimum variance unbiased estimator (UMVUE) and exact Bayes estimates of system reliability have been obtained when the common scale parameter is known, and the results are compared for both small and large samples.
Findings
Based on the simulation results, the authors observe that the Bayes method provides better estimates than MLE. The proposed asymptotic and HPD intervals show satisfactory coverage probabilities; however, the average length of the HPD intervals tends to remain shorter than that of the corresponding asymptotic intervals. Overall, the authors have observed that better estimates of the reliability may be achieved when the common scale parameter is known.
Originality/value
Most of the lifetime distributions used in reliability analysis, such as the exponential, Lindley, gamma, lognormal, Weibull and Chen, exhibit only constant, monotonically increasing, decreasing or bathtub-shaped hazard rates. However, in many applications in reliability and survival analysis, the most realistic hazard rate is upside-down bathtub-shaped, as found in the unit-Gompertz distribution. Furthermore, when reliability is measured as a percentage or ratio, it is important to have models defined on the unit interval in order to obtain plausible results. Therefore, the authors have studied multicomponent stress–strength reliability under the unit-Gompertz distribution, comparing the MLEs, Bayes estimators and UMVUEs.
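One way to see what the multicomponent stress–strength reliability being estimated looks like is a Monte Carlo sketch (illustration only; the paper derives MLE, Bayes and UMVU estimators rather than simulating). The unit-Gompertz CDF F(x) = exp(-α(x^(-β) - 1)) on (0, 1] inverts in closed form, so sampling is straightforward; the parameter values below are arbitrary:

```python
import math
import random

def rug(alpha, beta, rng):
    """Draw from the unit-Gompertz distribution via the inverse CDF
    F(x) = exp(-alpha * (x**-beta - 1)), x in (0, 1]."""
    u = 1.0 - rng.random()                        # u in (0, 1]
    return (1.0 - math.log(u) / alpha) ** (-1.0 / beta)

def mc_reliability(s, k, a_strength, a_stress, beta, n=200_000, seed=1):
    """Monte Carlo estimate of R_{s,k} = P(at least s of the k component
    strengths exceed the common stress), all unit-Gompertz with shared beta."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        y = rug(a_stress, beta, rng)              # common stress
        surviving = sum(rug(a_strength, beta, rng) > y for _ in range(k))
        hits += surviving >= s
    return hits / n

# single-component sanity check: P(strength > stress)
r11 = mc_reliability(1, 1, a_strength=2.0, a_stress=1.0, beta=2.0)
```

For the single-component case with a common β, a direct calculation (substituting u = F_Y(y)) gives P(X > Y) = α_X / (α_X + α_Y), so r11 above should be close to 2/3, which the simulation reproduces.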
Elías Moreno and Luís Raúl Pericchi
Abstract
We put forward the idea that, for model selection, the intrinsic priors are becoming the center of a cluster of dominant methodologies for objective Bayesian model selection.
The intrinsic method and its applications have been developed over the last two decades and have stimulated closely related methods. The intrinsic methodology can be thought of as the long-sought approach to objective Bayesian model selection and hypothesis testing.
In this paper we review the foundations of the intrinsic priors, their general properties, and some of their applications.
Femi Emmanuel Ayo, Olusegun Folorunso, Friday Thomas Ibharalu and Idowu Ademola Osinuga
Abstract
Purpose
Hate speech is an expression of intense hatred. Twitter has become a popular analytical tool for the prediction and monitoring of abusive behaviors. Hate speech detection with social media data has received special research attention in recent studies; hence the need to design a generic metadata architecture and an efficient feature extraction technique to enhance hate speech detection.
Design/methodology/approach
This study proposes hybrid embeddings enhanced with a topic inference method and an improved cuckoo search neural network for hate speech detection in Twitter data. The proposed method uses a hybrid embeddings technique that combines Term Frequency-Inverse Document Frequency (TF-IDF) for word-level feature extraction with Long Short-Term Memory (LSTM), a variant of the recurrent neural network architecture, for sentence-level feature extraction. The extracted features from the hybrid embeddings then serve as input to the improved cuckoo search neural network for the prediction of a tweet as hate speech, offensive language or neither.
Findings
The proposed method showed better results when tested on the collected Twitter datasets compared to other related methods. To validate its performance, paired-sample t-tests and post hoc multiple comparisons were used to compare the means of the proposed method with those of other related hate speech detection methods.
Research limitations/implications
Finally, the evaluation results showed that the proposed method outperforms other related methods, with a mean F1-score of 91.3.
Originality/value
The main novelty of this study is the use of an automatic topic spotting measure based on naïve Bayes model to improve features representation.
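The word-level half of the hybrid embeddings is standard TF-IDF. A minimal, self-contained sketch of that step (using smoothed IDF, one common convention; the paper's exact weighting, tokenization and vocabulary handling are not specified here):

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute smoothed TF-IDF vectors for a list of token lists.
    Returns the sorted vocabulary and one dense vector per document."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vocab = sorted(df)
    # smoothed IDF: log((1 + n) / (1 + df)) + 1, never zero or negative
    idf = {t: math.log((1 + n) / (1 + df[t])) + 1 for t in vocab}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append([tf[t] / len(doc) * idf[t] for t in vocab])
    return vocab, vectors

# toy tokenized tweets (illustrative only)
docs = [["hate", "speech", "is", "bad"],
        ["free", "speech", "is", "good"],
        ["twitter", "data"]]
vocab, vecs = tfidf(docs)
```

Rarer terms such as "twitter" receive a higher IDF weight than terms shared across documents such as "speech"; in the paper these word-level vectors are concatenated with LSTM sentence-level features before classification.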
Efthimia Mavridou, Konstantinos M. Giannoutakis, Dionysios Kehagias, Dimitrios Tzovaras and George Hassapis
Abstract
Purpose
Semantic categorization of Web services comprises a fundamental requirement for enabling more efficient and accurate search and discovery of services in the semantic Web era. However, to efficiently deal with the growing presence of Web services, more automated mechanisms are required. This paper aims to introduce an automatic Web service categorization mechanism, by exploiting various techniques that aim to increase the overall prediction accuracy.
Design/methodology/approach
The paper proposes the use of Error Correcting Output Codes on top of a Logistic Model Trees-based classifier, in conjunction with a data pre-processing technique that reduces the original feature-space dimension without affecting data integrity. The proposed technique is generalized so as to apply to any Web service with a description file. A semantic matchmaking scheme is also proposed for enabling the semantic annotation of the input and output parameters of each operation.
Findings
The proposed Web service categorization framework was tested on the OWLS-TC v4.0 data set, as well as a synthetic data set, with a systematic evaluation procedure that enables comparison with well-known approaches. After exhaustive evaluation experiments, categorization efficiency was measured in terms of accuracy, precision, recall and F-measure. The presented Web service categorization framework outperformed the other benchmark techniques, which comprise different variations of it as well as third-party implementations.
Originality/value
The proposed three-level categorization approach is a significant contribution to the Web service community, as it allows the automatic semantic categorization of all functional elements of Web services that are equipped with a service description file.
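The Error Correcting Output Codes layer works by assigning each category a binary codeword, training one binary classifier per bit, and decoding a prediction to the class whose codeword is nearest in Hamming distance, so that individual bit-classifier errors can be corrected. A minimal sketch of the decoding step (the codebook and category names below are illustrative, not taken from the paper):

```python
def ecoc_decode(bit_predictions, codebook):
    """Return the class whose codeword is nearest in Hamming distance
    to the vector of binary classifier outputs."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(codebook, key=lambda c: hamming(bit_predictions, codebook[c]))

# toy 3-class codebook with 5-bit codewords; the minimum pairwise
# Hamming distance is 3, so any single bit-classifier error is corrected
codebook = {
    "communication": [0, 0, 0, 0, 0],
    "economy":       [0, 1, 1, 0, 1],
    "education":     [1, 0, 1, 1, 0],
}

# one binary classifier flipped bit 1, yet the right class is recovered
print(ecoc_decode([0, 1, 0, 0, 0], codebook))  # -> communication
```

In general, a codebook with minimum pairwise Hamming distance d corrects up to floor((d - 1) / 2) bit errors, which is the robustness the paper stacks on top of its Logistic Model Trees base classifier.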