Search results

1 – 10 of 15
Article
Publication date: 3 April 2017

Mahsa Nikzad, Nadjla Hariri, Fahimeh Babalhavaeji and Fatemeh Nooshinfard

Abstract

Purpose

This study aims to apply some concepts of actuarial statistics to the authorship of Iranian ISI papers in the field of chemistry based on Price’s model. The study determines scientific birth rate, death rate, infant mortality rate, natural increase rate and life expectancy.

Design/methodology/approach

Price maintained that authors in any given period in any field fall into four categories: newcomers, transients, continuants and terminators. He suggested that actuarial statistics could be applied to authorship to calculate the death rate and birth rate in scientific fields. A total of 25,573 papers written by 59,661 Iranian chemistry authors between 1973 and 2012 were downloaded from Web of Science (WoS) and subjected to statistical analysis.
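The rates reported in this study follow from Price's four categories. As a rough illustration, the sketch below computes birth, death and infant mortality rates for one year of an invented toy dataset; the category definitions used here are an assumption for illustration, not necessarily the paper's exact formulas.

```python
# Toy illustration of Price-style authorship demography.
# Category definitions are one plausible operationalization,
# not necessarily the paper's exact formulas.

def classify(authors, year):
    """Split authors active in `year` into Price's four categories.

    `authors` maps an author name to the list of years in which
    that author published.
    """
    newcomers, transients, continuants, terminators = set(), set(), set(), set()
    for name, years in authors.items():
        if year not in years:
            continue
        first, last = min(years), max(years)
        if first == year and last == year:
            transients.add(name)      # entered and exited in the same year
        elif first == year:
            newcomers.add(name)       # first appearance, continues later
        elif last == year:
            terminators.add(name)     # last appearance
        else:
            continuants.add(name)     # active before and after
    return newcomers, transients, continuants, terminators

toy = {"A": [2010], "B": [2010, 2011], "C": [2009, 2010], "D": [2009, 2010, 2011]}
new, tra, con, ter = classify(toy, 2010)
active = len(new) + len(tra) + len(con) + len(ter)
birth_rate = (len(new) + len(tra)) / active          # entrants per active author
death_rate = (len(ter) + len(tra)) / active          # exits per active author
infant_mortality = len(tra) / (len(new) + len(tra))  # entrants who exit at once
print(birth_rate, death_rate, infant_mortality)      # 0.5 0.5 0.5
```

With real data, these per-year rates would then be averaged over the 1973-2012 window to obtain figures like those reported in the Findings.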

Findings

The average birth rate was 66.7 per cent, the average death rate was 19.4 per cent, infant mortality rate was 51.2 per cent, average natural increase was 47.3 per cent, the average life expectancy was 1.98 years and the longest scientific age was 22 years. The results show that although a large number of people start their scientific activity, the number of those who terminate their activity in the same year as they start (infant mortality rate) is also large and little continuity exists in the publishing activities of Iranian chemists.

Research limitations/implications

The findings have implications for the planning of human resources in science. They could help maintain a stable scientific labor force and inform decisions such as whether a larger number of scientists should be trained and hired, or whether barriers should be removed so that existing scientists can remain active for more years. The limitation is that the study is restricted to ISI articles, although these are not the only kind of scientific output.

Originality/value

This is the first study of its kind on Iranian scientific output. It shows that the overall labor force in the field of chemistry in Iran is not satisfactory, as the majority of authors in each period are transients. There is a need for better planning for the labor force.

Details

The Electronic Library, vol. 35 no. 2
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 6 February 2017

Zeinab Papi, Saeid Rezaei Sharifabadi, Sedigheh Mohammadesmaeil and Nadjla Hariri

Abstract

Purpose

This study aims to determine the technical requirements for copyright protection of theses and dissertations in order to propose a model for application in Iran's National System for Theses and Dissertations (INSTD).

Design/methodology/approach

This study used a mixed research methodology. Grounded theory was used in the qualitative phase, and a researcher-made checklist was applied in the quantitative phase to survey the status of the INSTD. The research population included the INSTD as well as six information specialists and copyright experts. Data were analysed using open, axial and selective coding.

Findings

Based on data extracted from the completed checklists, some technical requirements had already been provided in the system. The technical requirements pointed out by interviewees, explored in the grounded theory phase, fell into two main classes: technical components and technical-software infrastructures. The individual categories included access control, copy control, technical-software challenges, protection standards, hypertext transfer protocol secure (HTTPS), certificate authorities, documentation of thesis and dissertation information, the use of digital object identifiers, copy detection systems, integrated thesis and dissertation systems, digital rights management systems and electronic copyright management systems.

Research limitations/implications

Considering the subject of this study, only the technical aspect was investigated; other aspects were not included. In addition, electronic theses and dissertations (ETD) providers were not well aware of copyright issues.

Practical implications

Implementing the technical requirements with a high level of security helps the INSTD gain the trust of authors and encourages them to deposit their ETDs.

Social implications

The increased use of the system encourages the authors to be more innovative in conducting their research.

Originality/value

Considering the continued violation of copyright in electronic databases, applying technical requirements for copyright protection and regulating users’ access to the information of theses and dissertations are needed in the INSTD.

Article
Publication date: 6 February 2024

Somayeh Tamjid, Fatemeh Nooshinfard, Molouk Sadat Hosseini Beheshti, Nadjla Hariri and Fahimeh Babalhavaeji

Abstract

Purpose

The purpose of this study is to develop a domain independent, cost-effective, time-saving and semi-automated ontology generation framework that could extract taxonomic concepts from unstructured text corpus. In the human disease domain, ontologies are found to be extremely useful for managing the diversity of technical expressions in favour of information retrieval objectives. The boundaries of these domains are expanding so fast that it is essential to continuously develop new ontologies or upgrade available ones.

Design/methodology/approach

This paper proposes a semi-automated approach that extracts entities and relations via text mining of scientific publications. A code named text-mining-based ontology (TmbOnt) is generated to assist a user in capturing, processing and establishing ontology elements. This code takes a pile of unstructured text files as input and projects them into high-valued entities or relations as output. As a semi-automated approach, a user supervises the process, filters meaningful predecessor/successor phrases and finalizes the demanded ontology-taxonomy. To verify the practical capabilities of the scheme, a case study was performed to derive a glaucoma ontology-taxonomy. For this purpose, text files containing 10,000 records were collected from PubMed.
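The predecessor/successor filtering step can be pictured with a simple pattern-based extractor. The sketch below is purely illustrative and is not the authors' TmbOnt code: it harvests candidate hyponym/hypernym pairs with a Hearst-style lexical pattern and leaves the accept/reject decision to a human reviewer, mirroring the semi-automated workflow described above.

```python
import re
from collections import Counter

# Illustrative sketch only, not the authors' TmbOnt code: harvest
# candidate taxonomy pairs with a Hearst-style pattern and rank them
# by frequency, so a human reviewer can filter the final taxonomy.

PATTERN = re.compile(r"(\w[\w -]*?) is a (?:kind|type) of (\w[\w -]*)", re.I)

def candidate_pairs(corpus):
    """Return (hyponym, hypernym) candidates with their frequencies."""
    pairs = Counter()
    for sentence in corpus:
        for hypo, hyper in PATTERN.findall(sentence):
            pairs[(hypo.strip().lower(), hyper.strip().lower())] += 1
    return pairs

corpus = [
    "Open-angle glaucoma is a type of glaucoma.",
    "Glaucoma is a kind of optic neuropathy.",
    "Open-angle glaucoma is a type of glaucoma.",
]
for (hypo, hyper), n in candidate_pairs(corpus).most_common():
    print(f"{hypo} -> {hyper}  ({n} hits)")  # reviewer filters this list
```

A production pipeline would tokenize millions of terms and use richer patterns, but the supervise-then-finalize loop is the same.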

Findings

The proposed approach processed over 3.8 million tokenized terms from those records and yielded the resultant glaucoma ontology-taxonomy. The TmbOnt-driven taxonomy demonstrated a 60%-100% coverage ratio against well-known medical thesauruses and ontology taxonomies, such as the Human Disease Ontology, Medical Subject Headings and the National Cancer Institute Thesaurus, with an average of 70% additional terms recommended for ontology development.

Originality/value

According to the literature, the proposed scheme demonstrated novel capability in expanding the ontology-taxonomy structure with a semi-automated text mining approach, aiming for future fully-automated approaches.

Details

The Electronic Library, vol. 42 no. 2
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 30 July 2018

Marzieh Yari Zanganeh and Nadjla Hariri

Abstract

Purpose

The purpose of this paper is to identify the role of emotional aspects in information retrieval of PhD students from the web.

Design/methodology/approach

From the methodological perspective, the present study is experimental, and it is applied in type. The study population is PhD students from various fields of science. The study sample consists of 50 students selected by the stratified purposive sampling method. Data were gathered by recording users' facial expressions, capturing log files with Morae software, and administering pre-search and post-search questionnaires. Data were analysed by canonical correlation analysis.

Findings

The findings showed a significant relationship between emotional expressions and searchers' individual characteristics. Searchers' satisfaction with results, frequency of internet searching, search experience, interest in the search task and familiarity with similar searches were correlated with increased happiness. Examination of users' emotions during the search showed that users experiencing happiness dedicated more time to searching and viewing results, visited more internet addresses and issued more queries; conversely, users experiencing anger and disgust made the least effort to complete the search process.

Practical implications

The results imply that web information retrieval systems should identify users' emotional expressions, such as facial emotional states, as part of the set of perceivable signs in human-computer interaction during searching and information retrieval on the web.

Originality/value

The automatic identification of users' emotional expressions can add new dimensions to information retrieval systems on the web and pave the way for the design of emotional information retrieval systems that support successful retrieval by users.

Details

Online Information Review, vol. 42 no. 4
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 3 February 2022

Omm Al-Banin Feyzbakhsh, Fahimeh Babalhavaeji, Navid Nezafati, Nadjla Hariri and Fatemeh Nooshinfard

Abstract

Purpose

This study aimed to present a model for open-data management for developing innovative information flow in Iranian knowledge-based companies (businesses).

Design/methodology/approach

The method was mixed (qualitative-quantitative), and the data collection tools were interviews and a questionnaire. The qualitative part identified the influential components of open data management (the ecosystem) using the grounded theory method. A questionnaire was then developed based on the results of the qualitative section and the theoretical foundations; the quantitative section was conducted by the analytical survey method, and the model was extracted using factor analysis and integration with the qualitative section.

Findings

Seven main categories should be considered in open data management (the ecosystem): entrepreneurial incentives, sustainable value, innovative features, challenges and barriers, actors, business model and requirements. All of these categories have a significant relationship with open data management.

Originality/value

The study focused on open data management from an innovation paradigm perspective and its role in developing innovative information flow. It aimed to identify the key components of the open data ecosystem, open-data value creation, and the need to use the "open data" approach to develop data-driven and knowledge-based businesses in Iran, an emerging approach that has been largely ignored.

Details

Aslib Journal of Information Management, vol. 74 no. 3
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 1 April 2019

Hadi Harati, Fatemeh Nooshinfard, Alireza Isfandyari-Moghaddam, Fahimeh Babalhavaeji and Nadjla Hariri

Abstract

Purpose

The purpose of this paper is to identify and design the axial coding pattern of the factors affecting the unplanned use behavior of users of the academic libraries and information centers.

Design/methodology/approach

The study, as applied research with a qualitative approach, employed grounded theory. The data collection tool was deep, semi-structured interviews. The interview data were analyzed and coded in three stages of open, axial and selective coding using the MAXQDA 10 qualitative analysis software. The research population consisted of faculty members and experts in three areas: library and information science, management and psychology. Using a combined purposive sampling method (purposive and then snowball), 12 subjects were selected as the sample.

Findings

According to the research findings, the factors affecting the unplanned behavior of users in the use of academic libraries resources and services were identified as factors related to technology, environmental factors, information resources, information services, human resources, individual features, time position factor, cultural factors and social factors. Accordingly, the axial coding pattern of this type of behaviors was designed.

Research limitations/implications

The research limitations include the lack of a theoretical basis for the unplanned behavior issue in the field of library and information science and the limited familiarity of most experts in the field with this topic. These factors make it necessary to explain the subject under discussion.

Originality/value

The unplanned behavior of clients can be utilized to persuade users to use the information resources and library services so that the costs spent on their preparation and collection will be justifiable. The current research addressed this aspect of the unplanned information-seeking behavior.

Details

Aslib Journal of Information Management, vol. 71 no. 2
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 11 March 2014

Sayyed Mahdi Taheri, Nadjla Hariri and Sayyed Rahmatollah Fattahi

Abstract

Purpose

The aim of this research was to examine the use of the data island method for creating metadata records based on DCXML, MARCXML, and MODS with indexability and visibility of element tag names in web search engines.

Design/methodology/approach

A total of 600 metadata records were developed in two groups: 300 HTML-based records in the experimental group, with a special structure embedded in the <pre> tag of HTML based on the data island method, and 300 XML-based records in the control group with the normal structure. These records were analyzed through an experimental approach. The records of the two groups were published on two independent websites and submitted to the Google and Bing search engines.
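The data island idea can be illustrated in a few lines: an XML metadata record is escaped and embedded inside an HTML <pre> element, so the element tag names appear as literal, indexable page text rather than being parsed away as markup. The Dublin Core record below is hypothetical, not one of the study's 600 records.

```python
import html

# Hypothetical DCXML record (illustration only, not a study record).
dcxml_record = """<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Sample thesis title</dc:title>
  <dc:creator>Sample Author</dc:creator>
  <dc:date>2013</dc:date>
</record>"""

# Escaping < and > keeps the tags visible as literal text inside <pre>,
# so a crawler can index "dc:title" etc. as ordinary page content.
page = "<html><body><pre>{}</pre></body></html>".format(html.escape(dcxml_record))
print(page)
```

In the normal XML serving used for the control group, the same tags are consumed by the parser, which is why only the element values remain searchable there.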

Findings

Findings show that all the tag names of the metadata records created with the data island method (the experimental group) were indexed by Google and Bing and were visible in the search results, but the tag names in the control group's metadata records were not indexed by the search engines. Accordingly, it is possible to index and retrieve metadata records by their tag names in these search engines, whereas the records of the control group are accessible by their element values only. The research suggests some patterns to metadata creators and end users for better indexing and retrieval.

Originality/value

The research used the data island method for creating the metadata records, and deals with the indexability and visibility of the metadata element tag names for the first time.

Details

Library Hi Tech, vol. 32 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 9 August 2011

Nadjla Hariri

Abstract

Purpose

The main purpose of this study is to evaluate the effectiveness of relevance ranking on Google by comparing the system's assessment of relevance with the users' views. The research aims to find out whether the presumably objective relevance ranking of Google based on the PageRank and some other factors in fact matches users' subjective judgments of relevance.

Design/methodology/approach

This research investigated the relevance ranking of Google's retrieved results using 34 searches conducted by users in real search sessions. The results pages 1‐4 (i.e. the first 40 results) were examined by the users to identify relevant documents. Based on these data the frequency of relevant documents according to the appearance order of retrieved documents in the first four results pages was calculated. The four results pages were also compared in terms of precision.
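The per-page comparison rests on a simple precision measure: the number of documents judged relevant on a page divided by the page size (10 results). The sketch below applies it to an invented relevance-judgment vector, not the study's data.

```python
# Per-page precision over the first four results pages (10 results
# per page). The judgment vector is invented for illustration.

def page_precision(judgments, page_size=10):
    """Precision of each results page: relevant hits / page size."""
    pages = [judgments[i:i + page_size] for i in range(0, len(judgments), page_size)]
    return [sum(page) / len(page) for page in pages]

# 1 = user judged the document relevant, 0 = not relevant,
# positions 1-40 of one hypothetical search.
judged = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0] + [1, 0, 0, 1, 0, 1, 0, 0, 1, 0] \
       + [0, 1, 0, 0, 1, 0, 0, 1, 0, 0] + [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]
print(page_precision(judged))  # one precision value per results page
```

Averaging these per-page values over all 34 searches would give the figures compared in the Findings.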

Findings

In 50 per cent and 47.06 per cent of the searches, the documents ranked 5th and 1st respectively (i.e. on the first page of the retrieved results) were judged most relevant from the users' viewpoint. Yet even on the fourth results page there were three documents that were judged most relevant by the users in more than 40 per cent of the searches. There were no significant differences between the precision of the four results pages, except between pages 1 and 3.

Practical implications

The results will help users of search engines, especially Google, to decide how many pages of the retrieved results to examine.

Originality/value

Search engine design will benefit from the results of this study as it experimentally evaluates the effectiveness of Google's relevance ranking.

Details

Online Information Review, vol. 35 no. 4
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 3 August 2012

Sayyed Mahdi Taheri and Nadjla Hariri

Abstract

Purpose

The purpose of this research was to assess and compare the indexing and ranking of XML‐based content objects containing MARCXML and XML‐based Dublin Core (DCXML) metadata elements by general search engines (Google and Yahoo!), in a comparative analytical study.

Design/methodology/approach

One hundred XML content objects in two groups were analyzed: 50 records with MARCXML elements and 50 records with DCXML elements, published on two websites (www.dcmixml.islamicdoc.org and www.marcxml.islamicdoc.org). The websites were then introduced to the Google and Yahoo! search engines.

Findings

The indexing of the metadata records and the differences in their indexing and ranking were examined using descriptive statistics and a non-parametric Mann-Whitney U test. The findings show that the content objects were visible through all their metadata elements. There was no significant difference between the two groups' indexing, but a difference was observed in terms of ranking.

Practical implications

The findings of this research can help search engine designers in the optimum use of metadata elements to improve their indexing and ranking process with the aim of increasing availability. The findings can also help web content object providers in the proper and efficient use of metadata systems.

Originality/value

This is the first research to examine the interoperability between XML‐based metadata and web search engines, and compares the MARC format and DCMI in a research approach.

Details

The Electronic Library, vol. 30 no. 4
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 20 June 2008

Nadjla Hariri

Abstract

Purpose

One of the relevance feedback techniques used in search engines is providing a link to similar documents for each retrieved document in a results page. The purpose of this paper is to assess whether the “similar pages” relevance feedback feature of Google is truly effective in retrieving documents relevant to the information needs of users.

Design/methodology/approach

The effectiveness of the “similar pages” feature of Google was investigated using 30 paired searches conducted by 30 users with real information needs. The precision ratio of the results of the initial searches and of the searches conducted by clicking the “similar pages” links of the four most relevant results of each initial search were compared. The time spent and the overlapped results of the two kinds of searches were also compared.

Findings

The mean values for precision of, and time spent on, the “similar pages” searches were significantly lower than those for the initial searches. Although the number of overlapping documents in the “similar pages” searches was higher than in the initial searches, the difference was not statistically significant.

Practical implications

The findings of this research would be useful for search engine designers as well as the numerous users of common search engines, especially Google, to decide if “similar pages” features truly enhance the quality of information retrieval on the web.

Originality/value

The experimental evidence provided in this paper relates to system design of information retrieval systems on the web.

Details

Online Information Review, vol. 32 no. 3
Type: Research Article
ISSN: 1468-4527
