Abstract
Many of us have used spreadsheets like Lotus 1-2-3 to do statistical analyses, often using data collected from various sources. For example, to manage book approval plans, subject specialists at our library tabulate the approved, rejected, and firm orders, as well as keep track of encumbrances against the budget. Our collection development officer then collects the data from the subject specialists for analysis.
Abstract
Considers the use of dBASE to combine SaveScreen output with OCLC software to produce custom reports. Discusses the problems with the two report programs, ILLSORT and ILLCOUNT, processing with ILLFILE, transfer into dBASE, reporting by dBASE, and archiving dBASE records.
Abstract
A small academic library in northeastern Ohio, with monograph holdings of about 85,000 titles, serving a student population of about 1,000, recently asked Marsha Hunt of OHIONET to perform a hit rate study on CAT CD450 using a representative sample of their collection to determine if this product might be useful in their retrospective conversion effort. The 100‐title sample included older books on such diverse subjects as religion, Ohio history, and business writing. Marsha found 77 of the 100 titles—a 77% hit rate. Of the 77, 16 were DLC/DLC input, 45 were DLC/member input, 14 were of unknown origin/member input, and 2 were original record/member input. Anyone interested in receiving a list of the titles searched can contact me.
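The category breakdown reported above can be checked with simple arithmetic. The sketch below (a purely illustrative Python snippet, using only the figures already given in the study) tallies the four record categories and recomputes the hit rate:

```python
# Counts of found titles by record origin/input type, as reported in the study.
hits_by_origin = {
    "DLC/DLC input": 16,
    "DLC/member input": 45,
    "unknown origin/member input": 14,
    "original record/member input": 2,
}

sample_size = 100  # the 100-title representative sample

total_hits = sum(hits_by_origin.values())
hit_rate_pct = 100 * total_hits / sample_size

print(total_hits)               # → 77
print(f"{hit_rate_pct:.0f}%")   # → 77%
```

The four categories sum to the 77 found titles, confirming the reported 77% hit rate against the 100-title sample.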
Farzad Sabetzadeh and Eric Tsui
Abstract
Purpose
The purpose of this paper is to introduce a new knowledge quality assessment framework based on interdependencies between content and schema as knowledge resources to enhance the quality of the knowledge that is being generated, disseminated and stored in a collaborative environment.
Design/methodology/approach
The knowledge elaboration approach is based on intervening factors of schematic clustering, applied to a trial wiki bulletin board. Through this schematic intervention, in the form of group creation within a wiki environment, a user-centric mechanism is created to substantiate, compose and narrate the generated contents in a self-organizing way.
Findings
Through this approach, quality in content can be enhanced by means of a favourably manipulated collaboration schema adopted by the knowledge management system (KMS) users instead of applying knowledge mining tools.
Research limitations/implications
With consideration to trust as a significant factor in this study, the verification and referral process may vary for KMS structures that are of larger scale or in low-trust collaborative environments.
Originality/value
This study demonstrates a transition to higher-quality knowledge, with less time spent on original content refinement and composition, by paying due consideration to the interdependencies between knowledge resource content and its schema. Validation is done via a clustered group structure in a specially designed wiki that had been used as a discussion bulletin board on directed topics over an extended period.