Chia‐Hung Lin, Chia‐Wei Yen, Jen‐Shin Hong and Samuel Cruz‐Lara
Abstract
Purpose
The purpose of this paper is to show how previous studies have demonstrated that non‐professional users prefer using event‐based conceptual descriptions, such as “a woman wearing a hat”, to describe and search images. In many art image archives, these conceptual descriptions are manually annotated in free‐text fields. This study aims to explore technologies to automate event‐based knowledge extractions from these free‐text image descriptions.
Design/methodology/approach
This study presents an approach based on semantic role labeling technologies for automatically extracting event‐based knowledge, including subject, verb, object, location and temporal information, from free‐text image descriptions. A query expansion module is applied to further improve retrieval recall. The effectiveness of the proposed approach is evaluated by measuring retrieval precision and recall in experiments with real‐life art image collections in museums.
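The extraction step described above can be illustrated with a minimal sketch. This is not the authors' system: real semantic role labeling uses a trained labeler, whereas the rule‐based parser below only handles captions of the rough shape "subject verb‐ing object in Location in Year". The function names and the toy synonym table for query expansion are hypothetical, chosen for illustration only.

```python
import re

def extract_event(caption: str) -> dict:
    """Toy stand-in for semantic role labeling: split a caption of the
    rough form '<subject> <verb>ing <object> in <Location> in <year>'
    into event-based fields (subject, verb, object, location, time)."""
    fields = {"subject": None, "verb": None, "object": None,
              "location": None, "time": None}
    # Treat a trailing "in <four-digit year>" as the temporal slot.
    m = re.search(r"\bin (\d{4})\s*$", caption)
    if m:
        fields["time"] = m.group(1)
        caption = caption[:m.start()].strip(" ,.")
    # Treat a trailing "in <Capitalised phrase>" as the location slot.
    m = re.search(r"\bin ([A-Z][\w ]*?)\s*$", caption)
    if m:
        fields["location"] = m.group(1).strip()
        caption = caption[:m.start()].strip(" ,.")
    # Take the first present-participle token as the event verb;
    # everything before it is the subject, everything after the object.
    tokens = caption.split()
    for i, tok in enumerate(tokens):
        if tok.endswith("ing"):
            fields["verb"] = tok
            fields["subject"] = " ".join(tokens[:i])
            fields["object"] = " ".join(tokens[i + 1:])
            break
    return fields

# Hypothetical synonym table standing in for the query expansion module.
SYNONYMS = {"hat": ["cap", "bonnet"], "woman": ["lady"]}

def expand_query(term: str) -> list:
    """Expand one query term with its synonyms to improve recall."""
    return [term] + SYNONYMS.get(term, [])
```

For example, `extract_event("A woman wearing a hat in Paris in 1920")` yields subject "A woman", verb "wearing", object "a hat", location "Paris" and time "1920", and `expand_query("hat")` returns `["hat", "cap", "bonnet"]`. A production system would replace both functions with a statistical semantic role labeler and a thesaurus‐backed expansion module.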
Findings
Evaluation results indicate that the proposed method can achieve substantially higher retrieval precision than conventional keyword‐based approaches. The proposed methodology is highly applicable to large‐scale collections where image retrieval precision is more critical than recall.
Originality/value
The study represents the first attempt in the literature to automate the extraction of event‐based knowledge from free‐text image descriptions. The effectiveness and ease of implementation of the proposed approach make it feasible for practical applications.
Elaine Menard and Margaret Smithglass
Abstract
Purpose
The purpose of this paper is to present the results of the first phase of a research project that aims to develop a bilingual interface for the retrieval of digital images. The main objective of this extensive exploration was to identify the characteristics and functionalities of existing search interfaces and similar tools available for image retrieval.
Design/methodology/approach
An examination of 159 resources that offer image retrieval was carried out. First, general search functionalities offered by content-based image retrieval systems and text-based systems are described. Second, image retrieval in a multilingual context is explored. Finally, the search functionalities provided by four types of organisations (libraries, museums, image search engines and stock photography databases) are investigated.
Findings
The analysis of functionalities offered by online image resources revealed a very high degree of consistency within the types of resources examined. The resources found to be the most navigable and interesting to use were those built with standardised vocabularies combined with a clear, compact and efficient user interface. The analysis also highlights that many search engines are equipped with multiple language support features. A translation device, however, is implemented in only a few search engines.
Originality/value
The examination of best practices for image retrieval, together with the analysis of real users' expectations to be obtained in the next phase of the research project, constitutes the foundation of the search interface model the authors propose to develop. It also provides valuable suggestions and guidelines for search engine researchers, designers and developers.