R. Romagnoli, R.O. Batic, V.F. Vetere, J.D. Sota, I.T. Lucchini and R.O. Carbonari
Abstract
Hardened cement paste is a heterogeneous system resulting from the grouping of particles, films, microcrystals and other solid structural elements bound together in a porous mass. The cement paste microstructure must first be understood because of its influence on concrete properties. The behaviour of concrete depends greatly on the conformation of localised special structures rather than on the general structures found in the bulk cement paste. The objective of this paper was to study the cement paste microstructure, as a function of the water–cement ratio, in order to interpret the variations of the steel–mortar bond strength and the development of the corrosion process in steel–mortar specimens kept in tap water and 3% sodium chloride solutions for 1 year. A description of the steel–mortar interface was also provided.
O.R. Batic, J.D. Sota, J.L. Fernández, N. Bellotti and R. Romagnoli
Abstract
Purpose
This research aims to study the influence of limestone filler on rebar corrosion.
Design/methodology/approach
Mortar samples containing 35% calcareous filler, each with a rebar inserted along the axis, were cast. Specimens were cured either in the open air or in lime water for 28 days. After curing, they were immersed in two electrolytes (tap water and 3% NaCl), and corrosion parameters (corrosion potential and corrosion current) were monitored over time by DC techniques. Simultaneously, electrochemical noise measurements were carried out. After the corrosion tests, rebars were pulled out by lateral compression and their surfaces observed by scanning electron microscopy.
Findings
In general, carbonate additions impaired the protective properties of the mortar, especially in the presence of chloride, and changed the nature of the protective layer on the rebars. The curing process did not introduce significant differences, except for mortars with a high water–cement ratio cured in lime water, for which the beneficial effects of the simultaneous presence of carbonate and lime in the pore solution could be appreciated. The role of carbonate additions is to provide carbonate anions that passivate the rebars; this passivation kept corrosion rates from becoming excessively high. Carbonate anions also deposited on oxide spots, which were rendered passive, but this process was not uniform: certain areas on the rebar underwent intense carbonation while others showed increased corrosion rates.
Originality/value
Few corrosion studies address the influence of limestone filler on rebar corrosion. In particular, this paper deals with mortars containing high percentages of carbonate additions. Results showed that the presence of this type of admixture changes the structure of the passive layer and may, in some cases, increase corrosion rates.
Bushi Chen, Xunyu Zhong, Han Xie, Pengfei Peng, Huosheng Hu, Xungao Zhong and Qiang Liu
Abstract
Purpose
Autonomous mobile robots (AMRs) play a crucial role in industrial and service fields. The paper aims to build a LiDAR-based simultaneous localization and mapping (SLAM) system used by AMRs to overcome challenges in dynamic and changing environments.
Design/methodology/approach
This research introduces SLAM-RAMU, a lifelong SLAM system that addresses these challenges by providing precise and consistent relocalization and autonomous map updating (RAMU). During the mapping process, local odometry is obtained using iterative error state Kalman filtering, while back-end loop detection and global pose graph optimization are used for accurate trajectory correction. In addition, a fast point cloud segmentation module is incorporated to robustly distinguish between floor, walls and roof in the environment. The segmented point clouds are then used to generate a 2.5D grid map, with particular emphasis on floor detection to filter the prior map and eliminate dynamic artifacts. In the positioning process, an initial pose alignment method is designed, which combines 2D branch-and-bound search with 3D iterative closest point registration. This method ensures high accuracy even in scenes with similar characteristics. Subsequently, scan-to-map registration is performed using the segmented point cloud on the prior map. The system also includes a map updating module that takes into account historical point cloud segmentation results. It selectively incorporates or excludes new point cloud data to ensure consistent reflection of the real environment in the map.
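The point-cloud segmentation and 2.5D gridding step described above can be sketched in a few lines; the thresholds, names and toy cloud below are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch: split a point cloud into floor/wall/roof by height,
# then collapse the wall points into a 2.5D grid (max height per cell).

def segment_points(points, floor_z=0.2, roof_z=2.5):
    """Partition (x, y, z) points into floor, wall and roof sets by height."""
    floor = [p for p in points if p[2] < floor_z]
    roof = [p for p in points if p[2] > roof_z]
    walls = [p for p in points if floor_z <= p[2] <= roof_z]
    return floor, walls, roof

def to_25d_grid(points, cell=0.5):
    """Build a 2.5D grid: each (x, y) cell keeps the maximum observed height."""
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        grid[key] = max(grid.get(key, float("-inf")), z)
    return grid

cloud = [(0.1, 0.1, 0.05), (0.2, 0.3, 1.5), (0.4, 0.1, 2.8)]
floor, walls, roof = segment_points(cloud)
grid = to_25d_grid(walls)  # {(0, 0): 1.5}
```

In the real system the segmented floor points additionally filter the prior map to suppress dynamic artifacts.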
Findings
The performance of the SLAM-RAMU system was evaluated in real-world environments and compared against state-of-the-art (SOTA) methods. The results demonstrate that SLAM-RAMU achieves higher mapping quality and relocalization accuracy and exhibits robustness against dynamic obstacles and environmental changes.
Originality/value
Compared to other SOTA methods in simulation and real environments, SLAM-RAMU showed higher mapping quality, faster initial aligning speed and higher repeated localization accuracy.
Chengxi Yan, Xuemei Tang, Hao Yang and Jun Wang
Abstract
Purpose
The majority of existing studies about named entity recognition (NER) concentrate on the prediction enhancement of deep neural network (DNN)-based models themselves, but the issues about the scarcity of training corpus and the difficulty of annotation quality control are not fully solved, especially for Chinese ancient corpora. Therefore, designing a new integrated solution for Chinese historical NER, including automatic entity extraction and man-machine cooperative annotation, is quite valuable for improving the effectiveness of Chinese historical NER and fostering the development of low-resource information extraction.
Design/methodology/approach
The research provides a systematic approach for Chinese historical NER with a three-stage framework. In addition to the stage of basic preprocessing, the authors create, retrain and yield a high-performance NER model using only limited labeled resources during the stage of augmented deep active learning (ADAL), which entails three steps: DNN-based NER modeling, hybrid pool-based sampling (HPS) based on active learning (AL), and NER-oriented data augmentation (DA). ADAL is designed to keep the performance of the DNN as high as possible under the few-shot constraint. Then, to realize machine-aided quality control in crowdsourcing settings, the authors design a stage of globally-optimized automatic label consolidation (GALC). The core of GALC is a newly-designed label consolidation model called simulated annealing-based automatic label aggregation (SA-ALC), which incorporates the factors of worker reliability and global label estimation. The model can assure the annotation quality of data from a crowdsourcing annotation system.
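The core idea behind pool-based sampling in active learning is to send the model's least-certain examples to annotators first. A minimal uncertainty-sampling sketch, with a hypothetical stand-in for the NER model:

```python
import math

# Illustrative sketch of one step of pool-based active-learning sampling;
# the scoring (prediction entropy) and the fake model are assumptions,
# not the paper's HPS implementation.

def entropy(probs):
    """Shannon entropy of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(pool, predict, budget=2):
    """Pick the `budget` unlabeled examples the model is least certain about."""
    scored = [(entropy(predict(x)), x) for x in pool]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [x for _, x in scored[:budget]]

# Fake model: returns per-class probabilities for an example.
fake_predict = {"a": [0.9, 0.1], "b": [0.5, 0.5], "c": [0.6, 0.4]}
chosen = select_for_annotation(["a", "b", "c"], lambda x: fake_predict[x])
# "b" (maximum uncertainty) is selected first, then "c"
```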
Findings
Extensive experiments on two types of Chinese classical historical datasets show that the authors’ solution can effectively reduce the corpus dependency of a DNN-based NER model and alleviate the problem of label quality. Moreover, the results also show the superior performance of the authors’ pipeline approaches (i.e. HPS + DA and SA-ALC) compared to equivalent baselines in each stage.
Originality/value
The study sheds new light on the automatic extraction of Chinese historical entities as an all-technological-process integration. The solution helps to effectively reduce annotation cost and control labeling quality for the NER task. It can be further applied, both theoretically and practically, to similar information extraction tasks and other low-resource fields.
Abstract
The product life cycle is the cornerstone to understanding product/market behaviour, but it is not well suited to the analysis of industrial markets. In the industrial marketplace technology can be isolated as the dominant variable, explaining variations in sales over time for a product category. New “high‐tech” firms must make the critical decision early in their life as to whether to follow their technology through its life cycle, adapting to the needs of each stage it goes through, or whether to specialise in one particular stage of technology and continually develop new products to replace those that progress to later stages of the technological life cycle.
Weixing Wang, Yixia Chen and Mingwei Lin
Abstract
Purpose
Based on the strong feature representation ability of the convolutional neural network (CNN), numerous object detection methods in remote sensing (RS) have been proposed one after another. However, due to the large variation in scale and the omission of relevant relationships between objects, there are still great challenges for object detection in RS. Most object detection methods fail to take into account the difficulties of detecting small and medium-sized objects and of capturing global context. Moreover, inference time and model size are also major pain points in the field of RS.
Design/methodology/approach
To alleviate the aforementioned problems, this study proposes a novel method for object detection in RS, which is called lightweight object detection with a multi-receptive field and long-range dependency in RS images (MFLD). The multi-receptive field extraction (MRFE) and long-range dependency information extraction (LDIE) modules are put forward.
Findings
To address the variability of objects in RS, MRFE effectively expands the receptive field by combining atrous separable convolutions with different dilation rates. Considering the shortcomings of CNNs in extracting global information, LDIE is designed to capture the relationships between objects. Extensive experiments on public RS image datasets demonstrate that the MFLD method surpasses the state-of-the-art methods. Notably, on the NWPU VHR-10 dataset, the MFLD method achieves 94.6% mean average precision with a model volume of 4.08 M.
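The receptive-field growth from stacking dilated convolutions is easy to quantify: a kernel of size k with dilation d behaves like an effective kernel of size k + (k − 1)(d − 1). A back-of-the-envelope helper (stride 1 assumed; a generic illustration of the idea behind MRFE, not the paper's code):

```python
# Receptive field of a stack of stride-1 dilated (atrous) convolutions.
# Effective kernel size: k_eff = k + (k - 1) * (d - 1).

def receptive_field(kernel_sizes, dilations):
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        k_eff = k + (k - 1) * (d - 1)
        rf += k_eff - 1
    return rf

# Three 3x3 convolutions with dilations 1, 2, 4 cover a 15-pixel span,
# versus 7 pixels for three plain 3x3 convolutions.
rf = receptive_field([3, 3, 3], [1, 2, 4])  # 15
```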
Originality/value
This paper proposes MFLD, a lightweight object detection method with a multi-receptive field and long-range dependency for RS images.
Jiawei Liu, Zi Xiong, Yi Jiang, Yongqiang Ma, Wei Lu, Yong Huang and Qikai Cheng
Abstract
Purpose
Fine-tuning pre-trained language models (PLMs), e.g. SciBERT, generally requires large amounts of annotated data to achieve state-of-the-art performance on a range of NLP tasks in the scientific domain. However, obtaining fine-tuning data for scientific NLP tasks is still challenging and expensive. In this paper, the authors propose mix prompt tuning (MPT), a semi-supervised method that aims to alleviate the dependence on annotated data and improve the performance of multi-granularity academic function recognition tasks.
Design/methodology/approach
Specifically, the proposed method provides multi-perspective representations by combining manually designed prompt templates with automatically learned continuous prompt templates to help the given academic function recognition task take full advantage of knowledge in PLMs. Based on these prompt templates and the fine-tuned PLM, a large number of pseudo labels are assigned to the unlabelled examples. Finally, the authors further fine-tune the PLM using the pseudo training set. The authors evaluate the method on three academic function recognition tasks of different granularity including the citation function, the abstract sentence function and the keyword function, with data sets from the computer science domain and the biomedical domain.
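The pseudo-labeling step at the heart of this pipeline can be sketched minimally; the `model` callable, threshold and labels below are hypothetical stand-ins, not the authors' setup:

```python
# Minimal sketch of semi-supervised pseudo-labeling: a fine-tuned model labels
# the unlabeled pool, and only high-confidence predictions join the training set.

def build_pseudo_training_set(labeled, unlabeled, model, threshold=0.9):
    """`model` returns (label, confidence) for an example."""
    pseudo = list(labeled)
    for example in unlabeled:
        label, confidence = model(example)
        if confidence >= threshold:
            pseudo.append((example, label))
    return pseudo

# Toy model: confident on "background" sentences, unsure about "method" ones.
stub = lambda x: ("method", 0.5) if "method" in x else ("background", 0.95)
out = build_pseudo_training_set([("s1", "purpose")],
                                ["a method sentence", "other"], stub)
# Only "other" is added; the low-confidence example is left unlabeled.
```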
Findings
Extensive experiments demonstrate the effectiveness of the method and statistically significant improvements against strong baselines. In particular, it achieves an average increase of 5% in Macro-F1 score compared with fine-tuning, and 6% in Macro-F1 score compared with other semi-supervised methods under low-resource settings.
Originality/value
MPT is a general method that can be easily applied to other low-resource scientific classification tasks.
Nouhaila Bensalah, Habib Ayad, Abdellah Adib and Abdelhamid Ibn El Farouk
Abstract
Purpose
The paper aims to enhance Arabic machine translation (MT) by proposing novel approaches: (1) a dimensionality reduction technique for word embeddings tailored for Arabic text, optimizing efficiency while retaining semantic information; (2) a comprehensive comparison of meta-embedding techniques to improve translation quality; and (3) a method leveraging self-attention and Gated CNNs to capture token dependencies, including temporal and hierarchical features within sentences, and interactions between different embedding types. These approaches collectively aim to enhance translation quality by combining different embedding schemes and leveraging advanced modeling techniques.
Design/methodology/approach
Recent works on MT in general and Arabic MT in particular often pick one type of word embedding model. In this paper, we present a novel approach to enhance Arabic MT by addressing three key aspects. Firstly, we propose a new dimensionality reduction technique for word embeddings, specifically tailored for Arabic text. This technique optimizes the efficiency of embeddings while retaining their semantic information. Secondly, we conduct an extensive comparison of different meta-embedding techniques, exploring the combination of static and contextual embeddings. Through this analysis, we identify the most effective approach to improve translation quality. Lastly, we introduce a novel method that leverages self-attention and Gated convolutional neural networks (CNNs) to capture token dependencies, including temporal and hierarchical features within sentences, as well as interactions between different types of embeddings. Our experimental results demonstrate the effectiveness of our proposed approach in significantly enhancing Arabic MT performance. It outperforms baseline models with a BLEU score increase of 2 points and achieves superior results compared to state-of-the-art approaches, with an average improvement of 4.6 points across all evaluation metrics.
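One of the simplest meta-embedding schemes compared in this line of work is averaging a static and a contextual vector for the same token (after projection to a common length). A toy illustration of the idea, with made-up vectors:

```python
# Toy sketch of one meta-embedding scheme: element-wise averaging of a static
# and a contextual embedding of equal length. Values are illustrative only.

def average_meta_embedding(static_vec, contextual_vec):
    assert len(static_vec) == len(contextual_vec), "project to a common length first"
    return [(s + c) / 2 for s, c in zip(static_vec, contextual_vec)]

static = [0.2, 0.4, 0.0]       # e.g. from a static word-embedding table
contextual = [0.6, 0.0, 0.2]   # e.g. from a contextual encoder
meta = average_meta_embedding(static, contextual)
```

More elaborate schemes, such as the contextualized dynamic meta-embeddings (CDME) model referenced in the findings, learn attention weights over the component embeddings instead of averaging them uniformly.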
Findings
The proposed approaches significantly enhance Arabic MT performance. The dimensionality reduction technique improves the efficiency of word embeddings while preserving semantic information. Comprehensive comparison identifies effective meta-embedding techniques, with the contextualized dynamic meta-embeddings (CDME) model showcasing competitive results. Integration of Gated CNNs with the transformer model surpasses baseline performance, leveraging both architectures' strengths. Overall, these findings demonstrate substantial improvements in translation quality, with a BLEU score increase of 2 points and an average improvement of 4.6 points across all evaluation metrics, outperforming state-of-the-art approaches.
Originality/value
The paper’s originality lies in its departure from simply fine-tuning the transformer model for a specific task. Instead, it introduces modifications to the internal architecture of the transformer, integrating Gated CNNs to enhance translation performance. This departure from traditional fine-tuning approaches demonstrates a novel perspective on model enhancement, offering unique insights into improving translation quality without relying solely on pre-existing architectures. The originality in dimensionality reduction lies in the approach tailored for Arabic text. While dimensionality reduction techniques are not new, the paper introduces a specific method optimized for Arabic word embeddings. By employing independent component analysis (ICA) and a post-processing method, the paper effectively reduces the dimensionality of word embeddings while preserving semantic information, which has not been investigated before, especially for the MT task.
Yi Xiang, Chengzhi Zhang and Heng Zhang
Abstract
Purpose
Highlights in academic papers serve as condensed summaries of the author’s key work, allowing readers to quickly grasp the paper’s focus. However, many journals do not currently offer highlights for their articles. To address this gap, some scholars have explored using supervised learning methods to extract highlights from academic papers. A significant challenge in this approach is the need for substantial amounts of training data.
Design/methodology/approach
This study examines the effectiveness of prompt-based learning for generating highlights. We develop task-specific prompt templates, populate them with paper abstracts and use them as input for language models. We employ both locally inferable pre-trained models, such as GPT-2 and T5, and the ChatGPT model accessed via API.
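The template-filling step is straightforward to sketch; the wording below is a hypothetical example, not the exact template used in the study:

```python
# Sketch of prompt-based highlights generation: a task-specific template is
# populated with the paper abstract and handed to a language model.

TEMPLATE = ("Abstract: {abstract}\n"
            "Generate the highlights of this paper as short bullet points:")

def build_prompt(abstract: str) -> str:
    """Fill the highlights-generation template with a (stripped) abstract."""
    return TEMPLATE.format(abstract=abstract.strip())

prompt = build_prompt(" We propose a new SLAM system... ")
# The resulting string is then passed to GPT-2, T5 or the ChatGPT API.
```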
Findings
By evaluating the model’s performance across three datasets, we find that the ChatGPT model performed comparably to traditional supervised learning methods, even in the absence of training samples. Introducing a small number of training samples further enhanced the model’s performance. We also investigate the impact of prompt template content on model performance, revealing that ChatGPT’s effectiveness on specific tasks is highly contingent on the information embedded in the prompts.
Originality/value
This study advances the field of automatic highlights generation by pioneering the application of prompt learning. We employ several mainstream pre-trained language models, including the widely used ChatGPT, to facilitate text generation. A key advantage of our method is its ability to generate highlights without the need for training on domain-specific corpora, thereby broadening its applicability.
A. Valli Bhasha and B.D. Venkatramana Reddy
Abstract
Purpose
The problems of super-resolution are broadly discussed in diverse fields. Despite progress on super-resolution models for real-time images, super-resolution of hyperspectral images remains a challenging problem.
Design/methodology/approach
This paper aims to develop an enhanced image super-resolution model using optimized Non-negative Structured Sparse Representation (NSSR), Adaptive Discrete Wavelet Transform (ADWT) and an optimized deep convolutional neural network (CNN). After the HR images are converted into LR images, the NSSR images are generated by the optimized NSSR. The ADWT is then used to generate the subbands of both the NSSR and HRSB images. The residual image is obtained from this information by the optimized deep CNN. All algorithmic improvements are carried out by Opposition-based Barnacles Mating Optimization (O-BMO), with the objective of optimizing a multi-objective function based on the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index. Extensive analysis on benchmark hyperspectral image datasets shows that the proposed model achieves superior performance over typical existing super-resolution models.
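One half of the multi-objective function, PSNR, is a standard metric and easy to compute directly; the following minimal sketch evaluates it for flattened 8-bit pixel sequences (the toy values are illustrative, not from the paper):

```python
import math

# PSNR between a reference and a reconstructed image, both given as flat
# sequences of 8-bit pixel values. Higher is better; identical images give inf.

def psnr(reference, reconstructed, max_val=255.0):
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

value = psnr([100, 120, 140], [101, 119, 142])  # roughly 45 dB
```

SSIM, the other objective, additionally compares local luminance, contrast and structure rather than raw pixel error.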
Findings
The comparison of the proposed and conventional super-resolution models shows that the PSNR of the improved O-BMO-(NSSR+DWT+CNN) was 38.8% better than bicubic interpolation, 11% better than NSSR, 16.7% better than DWT+CNN, 1.3% better than NSSR+DWT+CNN, and 0.5% better than NSSR+FF-SHO-(DWT+CNN). Hence, it is confirmed that the developed O-BMO-(NSSR+DWT+CNN) performs well in converting LR images to HR images.
Originality/value
This paper adopts a recent optimization algorithm, O-BMO, together with optimized Non-negative Structured Sparse Representation (NSSR), Adaptive Discrete Wavelet Transform (ADWT) and an optimized deep convolutional neural network to develop the enhanced image super-resolution model. This is the first work to use an O-BMO-based deep CNN for image super-resolution model enhancement.