Abstract
This paper presents an approach inspired by the Functional Grammar (FG) of S.C. Dik, which has been rethought to adapt it to the context of automatic analysis of bureaucratic documents. The research is part of a larger ongoing project for the automated processing of land registry documents. The model presented has been demonstrated for automatic content analysis of documents concerning technical operations on real estate (division and regrouping of lots) and will also be extended to mortgages. The first sections examine the theoretical principles of the application; the process of understanding document content through the production of predications is then presented, and the content analysis strategy is briefly sketched.
Abstract
This review reports on the current state and the potential of tools and systems designed to aid online searching, referred to here as online searching aids. Intermediary mechanisms are examined in terms of the two‐stage model, i.e. end‐user, intermediary, ‘raw database’, and different forms of user‐system interaction are discussed. The evolution of the terminology of online searching aids is presented with special emphasis on the expert/non‐expert division. Terms defined include gateways, front‐end systems, intermediary systems and post‐processing. The alternative configurations that such systems can have and the approaches to the design of the user interface are discussed. The review then analyses the functions of online searching aids, i.e. logon procedures, access to hosts, help features, search formulation, query reformulation, database selection, uploading, downloading and post‐processing. Costs are then briefly examined. The review concludes by looking at future trends following recent developments in computer science and elsewhere. Distributed expert‐based information systems (DEBIS), the Standard Generalised Mark‐up Language (SGML), the client‐server model, object orientation and parallel processing are expected to influence, if they have not done so already, the design and implementation of future online searching aids.
Abstract
The objective of the paper is to amalgamate theories of text retrieval from various research traditions into a cognitive theory for information retrieval interaction. Set in a cognitive framework, the paper outlines the concept of polyrepresentation applied to both the user's cognitive space and the information space of IR systems. The concept seeks to represent the current user's information need, problem state, and domain work task or interest in a structure of causality. Further, it implies that we should apply different methods of representation and a variety of IR techniques of different cognitive and functional origin simultaneously to each semantic full‐text entity in the information space. The cognitive differences imply that by applying cognitive overlaps of information objects, originating from different interpretations of such objects through time and by type, the degree of uncertainty inherent in IR is decreased. Polyrepresentation and the use of cognitive overlaps are associated with, but not identical to, data fusion in IR. By explicitly incorporating all the cognitive structures participating in the interactive communication processes during IR, the cognitive theory provides a comprehensive view of these processes. It encompasses the ad hoc theories of text retrieval and IR techniques hitherto developed in mainstream retrieval research. It has elements in common with van Rijsbergen and Lalmas' logical uncertainty theory and may be regarded as compatible with that conception of IR. Epistemologically speaking, the theory views IR interaction as processes of cognition, potentially occurring in all the information processing components of IR, that may be applied, in particular, to the user in a situational context. The theory draws upon basic empirical results from information seeking investigations in the operational online environment, and from mainstream IR research on partial matching techniques and relevance feedback. 
By viewing users, source systems, intermediary mechanisms and information in a global context, the cognitive perspective attempts a comprehensive understanding of essential IR phenomena and concepts, such as the nature of information needs, cognitive inconsistency and retrieval overlaps, logical uncertainty, the concept of ‘document’, relevance measures and experimental settings. An inescapable consequence of this approach is to rely more on sociological and psychological investigative methods when evaluating systems and to view relevance in IR as situational, relative, partial, differentiated and non‐linear. The lack of consistency among authors, indexers, evaluators or users is of an identical cognitive nature. It is unavoidable, and indeed favourable to IR. In particular, for full‐text retrieval, alternative semantic entities, including Salton et al.'s ‘passage retrieval’, are proposed to replace the traditional document record as the basic retrieval entity. These empirically observed phenomena of inconsistency and of semantic entities and values associated with data interpretation support strongly a cognitive approach to IR and the logical use of polyrepresentation, cognitive overlaps, and both data fusion and data diffusion.
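The overlap mechanism this abstract describes, ranking documents higher when cognitively and functionally different representations retrieve them simultaneously, can be sketched as follows. This is a minimal illustration of the fusion idea only, not the paper's actual method; the document IDs and the three representations (title terms, citations, full text) are hypothetical.

```python
# Minimal sketch of overlap-based "polyrepresentation" fusion. The claim in
# the abstract is that documents retrieved by several cognitively different
# representations at once (the cognitive overlap) carry less uncertainty,
# i.e. are more likely to be relevant.

def cognitive_overlap(result_sets):
    """Rank documents by how many distinct representations retrieved them."""
    counts = {}
    for retrieved in result_sets:
        for doc in retrieved:
            counts[doc] = counts.get(doc, 0) + 1
    # Higher overlap first; ties broken by document id for determinism.
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))

# Three hypothetical representations of the same information space:
by_title     = {"d1", "d2", "d3"}
by_citations = {"d2", "d3", "d4"}
by_fulltext  = {"d3", "d5"}

ranking = cognitive_overlap([by_title, by_citations, by_fulltext])
# "d3" appears in all three representations, so it tops the ranking.
```

This differs from classical data fusion (e.g. merging scores from several ranking functions over one representation) in that the overlap is taken across representations of different cognitive origin, which is the distinction the abstract draws.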
Abstract
One of the emerging roles of management accountants in organizations is the design and operation of their organization's knowledge management system (KMS), which ensures the strategic utilization and management of its knowledge resources. Knowledge-based organizations face identifiable general risks, but those whose primary product is knowledge, knowledge-products organizations (KPOs), additionally face unique risks. The management accountant's role in the management of knowledge is even more critical in such organizations. We review the literature and survey a small convenience sample of knowledge-products organizations to identify the general risks knowledge-based organizations face and the additional risks unique to KPOs. The general risks of managing knowledge include inappropriate corporate information policies, employee turnover, and lack of data transferability. Additional risks unique to KPOs include the short life span (shelf-life) of knowledge products, the challenging nature of knowledge experts, and the vulnerability of intellectual property. The paper includes recommendations for management accountants in KPOs to develop and maintain competitive advantage through their KMS. These include developing enterprise-wide knowledge policies, fostering collaboration and documentation, addressing knowledge security, and evaluating the effectiveness of the KMS.
Abstract
This paper is devoted to a manipulation theory for industrial robots. The proposed knowledge representation model is based on a simple algebraic formalism and is shown to be adequate and suitable for actual applications in the field of assembly robots and manipulators. A FORTRAN system is illustrated which supports the proposed model and is implemented on a LABEN 70 minicomputer used for on‐line control of the SUPERSIGMA multipurpose assembly robot developed at the Milan Polytechnic Artificial Intelligence Project. The experimental work done is reported.
Per Erik Andersson, Katarina Arbin and Christopher Rosenqvist
Abstract
Purpose
The main purpose of this study is to enhance knowledge regarding the early stages of planning for and adopting artificial intelligence (AI) in governmental public procurement. While there are numerous studies on AI and procurement in private companies, there is limited information on AI and public procurement.
Design/methodology/approach
The empirical data consists of information obtained from 18 semi-structured interviews with procurement managers and individuals involved in the development of procurement at governmental agencies. Additionally, a workshop was conducted with the respondents to discuss and validate the study’s findings.
Findings
Findings indicate a generally low level of AI maturity, both in previous research and within the investigated governmental agencies. The perceived benefits of AI primarily revolve around improved operational capabilities, the potential for certain process efficiencies and the ability to enhance monitoring. Various organizational, process, technological and data-management challenges were highlighted. Findings also indicate that the perceived benefits and value created by AI can be viewed on a spectrum from the short term to the long term.
Social implications
The study provides insights into societal values that can be achieved using AI in public procurement.
Originality/value
This study provides a new perspective on AI in public procurement by focusing on governmental agencies. It explores the perceived benefits, interests and challenges associated with AI implementation in public procurement. Furthermore, this study discusses the potential outcomes of incorporating AI in public procurement and the impact it may have on the values created by the public service, both short- and long term.
Brian Vickery and Alina Vickery
Abstract
There is a huge amount of information and data stored in publicly available online databases that consist of large text files accessed by Boolean search techniques. It is widely held that less use is made of these databases than could or should be the case, and that one reason for this is that potential users find it difficult to identify which databases to search, to use the various command languages of the hosts and to construct the Boolean search statements required. This reasoning has stimulated a considerable amount of exploration and development work on the construction of search interfaces, to aid the inexperienced user to gain effective access to these databases. The aim of our paper is to review aspects of the design of such interfaces: to indicate the requirements that must be met if maximum aid is to be offered to the inexperienced searcher; to spell out the knowledge that must be incorporated in an interface if such aid is to be given; to describe some of the solutions that have been implemented in experimental and operational interfaces; and to discuss some of the problems encountered. The paper closes with an extensive bibliography of references relevant to online search aids, going well beyond the items explicitly mentioned in the text. An index to software appears after the bibliography at the end of the paper.
Abstract
It has often been suggested that propositional analysis (PA) can provide a consistent basis for modelling the indexing process. Describes and exemplifies the application of PA to indexing, together with the associated Kintsch/van Dijk model of text comprehension and production. Discusses limitations of the technique.
Bernard J. Jansen and Udo Pooch
Abstract
Much previous research on improving information retrieval applications has focused on developing entirely new systems with advanced searching features. Unfortunately, most users seldom utilize these advanced features. This research explores the use of a software agent that assists the user during the search process. The agent was developed as a separate, stand‐alone component to be integrated with existing information retrieval systems. The performance of an information retrieval system with the integrated agent was evaluated with 30 test subjects. The results indicate that agents drawing both on results from previous user studies and on rapid modeling of user information needs can improve precision. Implications for information retrieval system design and directions for future research are outlined.