Search results

1 – 10 of 14
Article
Publication date: 2 July 2024

Partha Sarathi Mandal and Sukumar Mandal

The purpose of this study is to investigate a practical strategy for integrating application programming interfaces (APIs) and standard interchange protocols (SIPs) within library…

Abstract

Purpose

The purpose of this study is to investigate a practical strategy for integrating application programming interfaces (APIs) and standard interchange protocols (SIPs) within library and information services. It seeks to determine how such an integration strategy can improve access to resources, enhance the user experience, optimize library operations and improve the overall efficiency of library services.

Design/methodology/approach

This study uses a qualitative research approach based on a review of relevant literature, case studies and real-world examples. The data are analyzed to determine the practical application of SIP and API integration and to identify the major methods, approaches and processes that libraries use to implement integration projects successfully.

Findings

This study finds that library and information services can gain numerous benefits from API and SIP integration. The cases describe how libraries have improved access, user experience, operational efficiency and overall performance. Libraries have integrated APIs and SIP to create seamless search experiences, establish real-time communication channels and develop automated workflows and customer services. API and SIP integration will continue to transform libraries in the future.
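
As an illustration of the kind of API integration the cases describe, the following minimal Python sketch queries a hypothetical library catalogue REST endpoint and prints the titles it returns; the base URL, path and JSON field names are assumptions for demonstration and are not taken from the study.

```python
# Minimal sketch of an API-based catalogue search (hypothetical endpoint and fields).
import requests

BASE_URL = "https://library.example.org/api/v1"  # assumed endpoint, not from the study


def search_catalogue(query, limit=5):
    """Query a hypothetical catalogue search API and return a list of titles."""
    response = requests.get(
        f"{BASE_URL}/search",
        params={"q": query, "limit": limit},
        timeout=10,
    )
    response.raise_for_status()
    # Field names ("results", "title") are assumed for illustration.
    return [record["title"] for record in response.json().get("results", [])]


if __name__ == "__main__":
    for title in search_catalogue("open source library systems"):
        print(title)
```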

Originality/value

The originality of this study lies in its focus on API and SIP integration. While other authors have discussed integration from a theoretical standpoint, this study presents practical recommendations and implementation advice for librarians and researchers. It uses real cases and examples to illustrate how libraries today have improved their operations with the help of API and SIP integration.

Details

Library Hi Tech News, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 6 August 2024

Sukumar Mandal

The purpose of this study is to outline the process of transforming Wikipedia articles into audio files for library readers. The system provides a practical way of listening to…

Abstract

Purpose

The purpose of this study is to outline the process of transforming Wikipedia articles into audio files for library readers. The system provides a practical way of listening to Wikipedia content, accommodating diverse learning preferences and widening access to knowledge in the educational community.

Design/methodology/approach

This framework has been constructed using the Python programming language on the Linux operating platform. An application programming interface and Google text-to-speech (TTS) are required as additional software packages to design this prototype system. Wikitrola transforms any Wikipedia page into an audio file for libraries and information centers. Wikipedia articles are transformed into audio directly, integrating the content seamlessly into the user experience. The whole system has been designed and configured on the basis of machine learning to provide dynamic services to readers.
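
Assuming the prototype relies on the Wikipedia API and Google TTS as described, a minimal Python sketch of such a pipeline might look as follows; the package choices (wikipedia, gTTS) and the output file name are illustrative assumptions, not the authors' exact Wikitrola code.

```python
# Minimal sketch of a Wikipedia-to-audio pipeline (illustrative, not the authors' code).
import wikipedia
from gtts import gTTS


def article_to_audio(title, out_file="article.mp3", lang="en"):
    """Fetch a Wikipedia article and save its text as an MP3 file."""
    wikipedia.set_lang(lang)
    text = wikipedia.page(title).content       # full article text
    gTTS(text=text, lang=lang).save(out_file)  # synthesise speech with Google TTS
    return out_file


if __name__ == "__main__":
    print(article_to_audio("Open-source software"))
```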

Findings

Readers can use the machine learning system to turn Wikipedia articles into audio files, allowing them to listen to Wikipedia content in audio format. This makes information more accessible and adaptable to diverse learning modes, allowing written content to be engaged with in novel ways.

Originality/value

The key contribution of this paper is that it shows how to convert the text-based material in Wikipedia articles into audio through Google TTS, machine learning and Python programming. A harmonious system of information dissemination and technical education is established. This approach demonstrates the effective use of imagination and programming tools to enhance learning and knowledge-seeking processes.

Details

Library Hi Tech News, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 18 October 2021

Sujan Saha and Sukumar Mandal

This project aims to improve library services for users in the future by combining Linked Open Data (LOD) technology with data visualization. It displays and analyses search…

Abstract

Purpose

This project aims to improve library services for users in the future by combining Linked Open Data (LOD) technology with data visualization. It displays and analyses search results in an intuitive manner. These services are enhanced by integrating various LOD technologies into the authority control system.

Design/methodology/approach

The technology known as LOD is used to access, recycle, share, exchange and disseminate information, among other things. The applicability of Linked Data technologies for the development of library information services is evaluated in this study.

Findings

Apache Hadoop is used for rapidly storing and processing massive Linked Data sets. Apache Spark is a free and open-source data processing engine. Hive is an SQL-based data warehouse that enables data scientists to write, read and manage petabytes of data.

Originality/value

Apache HBase, a distributed big-data storage system, does not use SQL. The goal of this study is to search the geographic, authority and bibliographic databases for relevant links found on various websites. When data items are linked together, all of the related data elements are linked as well. The study observed and evaluated the tools and processes and recorded each data item’s URL. As a result, data can be combined across silos, enriched with third-party data sources and contextualized.
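
As a small illustration of how linked-data items can be looked up by URI, the sketch below sends a SPARQL query with the SPARQLWrapper Python library; the use of the public Wikidata endpoint and the example query are assumptions for demonstration, not the study's own geographic, authority or bibliographic setup.

```python
# Minimal sketch of a SPARQL lookup over a public linked-data endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON


def lookup_label(label):
    """Return entity URIs whose English label matches the given string."""
    sparql = SPARQLWrapper(
        "https://query.wikidata.org/sparql",
        agent="lod-demo/0.1 (illustrative example)",
    )
    sparql.setQuery(f"""
        SELECT ?item WHERE {{
          ?item rdfs:label "{label}"@en .
        }} LIMIT 5
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [row["item"]["value"] for row in results["results"]["bindings"]]


if __name__ == "__main__":
    print(lookup_label("Rabindranath Tagore"))
```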

Details

Library Hi Tech News, vol. 38 no. 6
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 31 March 2022

Sukumar Mandal

The purpose of this study is to examine three major innovative components, namely the collection tree, tag cloud and geolocation, for developing a digital library system. This…

Abstract

Purpose

The purpose of this study is to examine three major innovative components, namely the collection tree, tag cloud and geolocation, for developing a digital library system. This study aims to design and develop an integrated framework for enhancing these services.

Design/methodology/approach

This study develops a single purpose-driven framework for the domain, providing a user-friendly architecture for expanding the collection tree based on a high-level operating system and plugins. The required software programs are available in the Omeka Web repository. The whole integrated framework has been designed on the Linux operating platform with the LAMP architecture and depends on proper installation and configuration of the “Collection Tree” module in Omeka for both the administration and user interfaces.
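
For orientation, the sketch below reads the collection list from an Omeka site's REST API, the sort of data a collection tree interface is built from; the site URL is hypothetical and the JSON field names reflect a general understanding of the Omeka Classic API rather than the study's exact configuration.

```python
# Minimal sketch of listing collections from an Omeka REST API (hypothetical site).
import requests

OMEKA_API = "https://digital-library.example.org/api"  # assumed Omeka installation


def list_collections():
    """Fetch collections from the Omeka API and return (id, title) pairs."""
    response = requests.get(f"{OMEKA_API}/collections", timeout=10)
    response.raise_for_status()
    collections = []
    for record in response.json():
        # Descriptive metadata is exposed in "element_texts"; the first text
        # value is taken as the display title (assumed structure).
        texts = record.get("element_texts", [])
        title = texts[0]["text"] if texts else f"Collection {record['id']}"
        collections.append((record["id"], title))
    return collections


if __name__ == "__main__":
    for coll_id, title in list_collections():
        print(coll_id, title)
```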

Findings

With this integrated structure, users of the keyword cloud have easy access to objects and full-text content. The collection tree is helpful to readers because it saves them time. This integrated framework for constructing and designing the collection tree for the digital library also allows geolocation-based searches across multiple collections.

Originality/value

The integrated domain-specific framework has been designed and developed for libraries, making it feasible to provide better library services in terms of interoperability and crosswalks through the Omeka collection tree interface. The innovative module and techniques enhance the advanced search mechanism, enabling library professionals and advanced users to create collection trees based on tag clouds and geolocation across multiple hierarchical collections.

Details

Library Hi Tech News, vol. 39 no. 5
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 28 February 2022

Sukumar Mandal and Sujan Saha

There are various online library databases available to access, but none of them include a geographic search option. The purpose of this study is to discover a suitable solution…

Abstract

Purpose

There are various online library databases available to access, but none of them includes a geographic search option. The purpose of this study is to discover a suitable solution to this problem. To access information, it is critical to be able to search by the geographic locations available in bibliographic databases. Geographic search functionality is available in Web-scale discovery systems and services.

Design/methodology/approach

The approach and methods for incorporating geographic search capabilities into the VuFind open-source discovery software for effective information resource discovery are straightforward. Library mash-up tools and techniques were used to integrate geographic locations into a Web-scale library discovery system.

Findings

As a result, bibliographic metadata descriptions help all library users identify and access documents more quickly and easily. The system can assist users and librarians in making better use of library resources by integrating various databases with VuFind and presenting them in a single window-based interface. It implements a standardised geographic search architecture and is entirely based on the Ubuntu operating system. Furthermore, the MARC 21 034 tag is populated with coordinates taken from a latitude and longitude website.
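
To illustrate how coordinate data can be written into the MARC 21 034 field mentioned above, here is a small sketch using the pymarc library; the record, the coordinate values and the pymarc 5-style Subfield usage are assumptions for demonstration, not the study's exact procedure.

```python
# Minimal sketch: adding coded cartographic coordinates (MARC 21 field 034)
# to a bibliographic record with pymarc (pymarc >= 5 Subfield style assumed).
from pymarc import Field, Record, Subfield


def add_coordinates(record, west, east, north, south):
    """Attach an 034 field carrying bounding-box coordinates to a MARC record."""
    record.add_field(
        Field(
            tag="034",
            indicators=["1", " "],  # first indicator 1 = single scale
            subfields=[
                Subfield(code="a", value="a"),    # linear scale
                Subfield(code="d", value=west),   # westernmost longitude
                Subfield(code="e", value=east),   # easternmost longitude
                Subfield(code="f", value=north),  # northernmost latitude
                Subfield(code="g", value=south),  # southernmost latitude
            ],
        )
    )
    return record


if __name__ == "__main__":
    rec = Record()
    rec.add_field(Field(tag="245", indicators=["0", "0"],
                        subfields=[Subfield(code="a", value="A sample map title")]))
    add_coordinates(rec, "E0861800", "E0870500", "N0234500", "N0231000")
    print(rec)
```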

Originality/value

Web-scale library services are provided by OCLC WorldCat Discovery Services, Summon Web-scale Discovery, EBSCO Discovery Services, Primo Central and others. The Web-scale discovery platform developed here relies heavily on geographic search interface functionality.

Details

Library Hi Tech News, vol. 39 no. 3
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 16 February 2022

Sukumar Mandal and Sujan Saha

The purpose of this paper is to investigate linked data through the use of open-source technology. It demonstrates the transformation process from library bibliographic data to…

Abstract

Purpose

The purpose of this paper is to investigate linked data through the use of open-source technology. It demonstrates the transformation process from library bibliographic data to linked data, which allows for easy searching across numerous collections of information.

Design/methodology/approach

In generating this framework, a high-level operating system such as Ubuntu is used, with the LAMP architecture. Open-source strategies are required to build the relevant information services. LODRefine is used to convert all of Koha's bibliographic data into linked data that is now available on the Web. This framework has been conceptualized and formulated based on linked data principles and corresponding search algorithms.
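
As a hedged illustration of what such a conversion produces, the sketch below turns a single bibliographic record into RDF triples with the rdflib Python library and serialises them as Turtle; the record values and the URI pattern are illustrative assumptions, and the study itself performs the bulk conversion with LODRefine rather than a script.

```python
# Minimal sketch of expressing one bibliographic record as linked data with rdflib.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

BIBO = Namespace("http://purl.org/ontology/bibo/")


def record_to_graph(biblionumber, title, author):
    """Create an RDF graph describing a single bibliographic record."""
    g = Graph()
    book = URIRef(f"https://library.example.org/bib/{biblionumber}")  # assumed URI pattern
    g.add((book, RDF.type, BIBO.Book))
    g.add((book, DC.title, Literal(title)))
    g.add((book, DC.creator, Literal(author)))
    return g


if __name__ == "__main__":
    g = record_to_graph(101, "Gitanjali", "Rabindranath Tagore")
    print(g.serialize(format="turtle"))
```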

Findings

Linked data services have been made publicly available to library users by using a variety of different forms of data. Information may be sought quickly and easily using this interface, which is built on numerous search structures. It also meets the needs of users who use the linked data search mechanism to find information. Through modern scripts and algorithms, library users can now easily search the linked-data-enabled services.

Originality/value

This paper demonstrates how quickly and easily linked data may be developed and generated from bibliographic details using a spreadsheet. The entire procedure can be carried out by specialists in the library setting. A further advantage of the SPARQL system is that it allows visitors to explore distinct concepts and aspects through independent URIs and URLs as well as through the SPARQL endpoint.

Details

Library Hi Tech News, vol. 39 no. 3
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 28 March 2022

Sujan Saha and Sukumar Mandal

The automation of libraries has made a significant difference in the quality of service. All libraries seek to improve the quality of their services, and the library seems to be a…

Abstract

Purpose

The automation of libraries has made a significant difference in the quality of service. All libraries seek to improve the quality of their services, and the library seems to be a frequent target for modernization efforts throughout the globe. Library management software must be translated into local languages and dialects to be usable. Open-source techniques make it possible to work in a variety of languages. The integrated library management system Koha is gaining traction, and the Koha software has been translated into many languages and made accessible to a worldwide audience.

Design/methodology/approach

This experiment was based on a controlled setting and a realistic observation of the Koha translation process. A sample of a population is selected to study and draw conclusions about the entire population.

Findings

Results are based on a statistical report made available on the Koha translation site, from which the sample data were collected. This analysis demonstrates, step by step, how to translate the Koha software on a Pootle server using string inputs in the different .po files available in the translation databases.
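
For readers who want to see what working with the underlying .po files looks like, the sketch below inspects and updates one translation file with the polib Python library; the file name and the example translation are placeholders, and the actual Koha workflow described in the paper runs through the Pootle server rather than a local script.

```python
# Minimal sketch of inspecting and updating a gettext .po file with polib.
import polib


def translation_status(po_path):
    """Print how much of a .po file is translated and list a few missing strings."""
    po = polib.pofile(po_path)
    print(f"{po.percent_translated()}% translated")
    for entry in po.untranslated_entries()[:5]:
        print("untranslated:", entry.msgid)
    return po


if __name__ == "__main__":
    po = translation_status("bn-IN-opac-bootstrap.po")  # assumed file name
    # Fill in one translation locally before uploading it to the Pootle server.
    for entry in po.untranslated_entries():
        if entry.msgid == "Search":
            entry.msgstr = "অনুসন্ধান"  # illustrative Bengali translation
            break
    po.save("bn-IN-opac-bootstrap.po")
```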

Originality/value

The authors of this paper explain the procedure for translating the Koha software and provide a global overview of Koha translations into different languages throughout the world. Therefore, translation can be expected to increase the importance of Koha tremendously for speakers of different languages.

Details

Library Hi Tech News, vol. 39 no. 6
Type: Research Article
ISSN: 0741-9058

Open Access
Article
Publication date: 21 July 2020

Mrinmay Mandal and Nilanjana Das Chatterjee

Ecologically, a habitat is the area in which a particular species carries out all of its relationships with its surroundings. Therefore, every species holds a habitat that supports its survival…

Abstract

Purpose

Ecologically, a habitat is the area in which a particular species carries out all of its relationships with its surroundings. Therefore, every species holds a habitat that supports its survival. The large terrestrial herbivore, the elephant (Elephas maximus), requires different kinds of habitat for its biological behaviour. Forest habitat, one of the landscape types within its home range, plays a major role in the selection of suitable habitat. The nature of habitat selection by an elephant is deeply connected with landscape attributes.

Design/methodology/approach

The present study starts from this premise. The study area, Panchet Forest Division (PFD), has 28 forest patches of unequal size. Generally, forest patches are the most suitable habitat for elephants in any forest landscape, including PFD. Which forest patch is most suitable, however, depends on the ecological function of other geospatial attributes such as patch shape complexity, patch core, road intervention intensity, amount of water body and forest composition. The present study measures these attributes through sequential steps including field inquiry, satellite image processing and GIS application using ERDAS 9.3 and ArcGIS 10.3 software.

Findings

After measuring the values of these attributes, a Habitat Suitability Index is assessed through a combined weighted method and a suitability map is prepared. This map indicates that the Joypur-I and II, Upper Peardoba, Brindabanpur and Kalabagan forest patches are in good condition for elephants to prefer as suitable habitat in PFD.
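
A combined weighted index of this kind reduces, in its simplest form, to a weighted average of normalised attribute scores. The sketch below shows that calculation for one hypothetical patch; the attribute names, scores and weights are illustrative assumptions, not the values measured for PFD.

```python
# Minimal sketch of a combined weighted habitat-suitability score (illustrative values).
def habitat_suitability(scores, weights):
    """Weighted-sum suitability index from attribute scores normalised to 0-1."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight


if __name__ == "__main__":
    scores = {   # normalised attribute scores for one hypothetical forest patch
        "shape_complexity": 0.7,
        "core_area": 0.8,
        "road_intervention": 0.4,   # higher = less disturbed by roads
        "water_availability": 0.9,
        "forest_composition": 0.6,
    }
    weights = {  # assumed relative importance of each attribute
        "shape_complexity": 1,
        "core_area": 2,
        "road_intervention": 2,
        "water_availability": 1,
        "forest_composition": 1,
    }
    print(round(habitat_suitability(scores, weights), 2))
```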

Originality/value

Spatial classification of elephant habitat in PFD helps society and the managing authority. It facilitates better management and reduces the chance of frequent human-elephant contact.

Details

Ecofeminism and Climate Change, vol. 1 no. 3
Type: Research Article
ISSN: 2633-4062

Article
Publication date: 18 January 2023

Namjoo Choi

The purpose of this paper is to revisit and update Palmer and Choi (2014), which conducted a descriptive literature review on open source software (OSS) studies published by the…

Abstract

Purpose

The purpose of this paper is to revisit and update Palmer and Choi (2014), which conducted a descriptive literature review on open source software (OSS) studies published by the end of February 2013 in the library context.

Design/methodology/approach

The same article search and filtering procedures used in Palmer and Choi (2014) were used, resulting in a total sample size of 105 articles. These articles were then examined based on the same six variables (i.e. publication year, publication outlet, software type, article type, library type and article topic) from Palmer and Choi (2014) along with two new variables (i.e., study country and prolific authors).

Findings

The volume of research articles has been in a downward trend since 2010. As suggested by Palmer and Choi (2014), survey research increased and was found to be the second most popular article type. Regarding library types, the proportion of articles in the context of academic and research libraries was found to have expanded even further. As to article topics, a new topic, perceptions, which investigates users’ (or non-users’) various perceptions of OSS, was added and ranked fourth. Given the maturity of the research stream, two new variables (i.e. study country and prolific authors) were examined, and the findings from analyzing them are also presented.

Originality/value

By examining library OSS articles published between March 2013 and February 2022, this study uncovers changes and developments in the research since Palmer and Choi (2014), which provides a picture of where the research stands now with several updated and new implications.

Details

The Electronic Library, vol. 41 no. 1
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 2 August 2023

Aurojyoti Prusty and Amirtham Rajagopal

This study implements the fourth-order phase field method (PFM) for modeling fracture in brittle materials. The weak form of the fourth-order PFM requires C1 basis functions for…

Abstract

Purpose

This study implements the fourth-order phase field method (PFM) for modeling fracture in brittle materials. The weak form of the fourth-order PFM requires C1 basis functions for the crack evolution scalar field in a finite element framework. To address this, non-Sibsonian shape functions, which are nonpolynomial and based on distance measures, are used in the context of natural neighbor shape functions. The capability and efficiency of this method are studied for modeling cracks.

Design/methodology/approach

The weak form of the fourth-order PFM is derived from two governing equations for finite element modeling. C0 non-Sibsonian shape functions are derived using distance measures on a generalized quad element. These shape functions are then degree-elevated with a Bernstein-Bezier (BB) patch to obtain higher-order continuity (C1). The quad element is divided into several background triangular elements to apply the Gauss quadrature rule for numerical integration. Both the fourth-order and second-order PFMs are implemented in a finite element framework. The efficiency of the interpolation function is studied in terms of convergence and accuracy for capturing crack topology in the fourth-order PFM.
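
For orientation, the degree-elevation step rests on the standard Bernstein-Bezier relation: a polynomial of degree n with Bezier coefficients P_i can be rewritten exactly in the degree n+1 Bernstein basis. The generic one-dimensional form is shown below as a reminder only; it is not the paper's specific C1 construction on the quad element.

```latex
% Generic Bernstein-Bezier degree elevation (illustrative, one-dimensional case)
\[
  Q_i = \frac{i}{n+1}\,P_{i-1} + \Bigl(1 - \frac{i}{n+1}\Bigr) P_i,
  \qquad i = 0, 1, \dots, n+1,
\]
```

so that, in particular, Q_0 = P_0 and Q_{n+1} = P_n, and the represented polynomial is unchanged.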

Findings

It is observed that the fourth-order PFM has higher accuracy and convergence than the second-order PFM using non-Sibsonian interpolants. The former predicts higher failure loads and failure displacements than the second-order model due to the addition of higher-order terms in the energy equation. The fracture pattern is realistic when only the tensile part of the strain energy drives fracture evolution. When both tensile and compressive energy are taken into account for crack evolution, fracture is also observed in the compressive region, which is unrealistic. The length scale has a specific effect on the failure load of the specimen.

Originality/value

Fourth-order PFM is implemented using C1 non-Sibsonian type of shape functions. The derivation and implementation are carried out for both the second-order and fourth-order PFM. The length scale effect on both models is shown. The better accuracy and convergence rate of the fourth-order PFM over second-order PFM are studied using the current approach. The critical difference between the isotropic phase field and the hybrid phase field approach is also presented to showcase the importance of strain energy decomposition in PFM.

Details

Engineering Computations, vol. 40 no. 6
Type: Research Article
ISSN: 0264-4401
