New & Noteworthy

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 7 March 2008


Citation

(2008), "New & Noteworthy", Library Hi Tech News, Vol. 25 No. 2/3. https://doi.org/10.1108/lhtn.2008.23925bab.001

Publisher: Emerald Group Publishing Limited

Copyright © 2008, Emerald Group Publishing Limited


New & Noteworthy

Article Type: New & Noteworthy From: Library Hi Tech News, Volume 25, Issue 2/3.

OCLC Acquires EZproxy Authentication and Access Software

EZproxy, the leading software solution for serving library patrons remotely, has been acquired by OCLC from Useful Utilities of Peoria, Arizona. Useful Utilities founder Chris Zagar will join OCLC as a full-time consultant. Mr Zagar will help ensure a smooth transition of EZproxy operations to OCLC, the world's largest library service and research organization, and assist OCLC in developing state-of-the-art authentication services for the cooperative.

Mr Zagar, a librarian at the Maricopa Community Colleges in Arizona, developed EZproxy to provide libraries with a better solution for authenticating remote user access to licensed databases. EZproxy software allows libraries to manage access and authentication configurations through a proxy server so that library users do not have to make any configuration changes to their personal Web browsers. More than 2,400 institutions in over 60 countries have purchased EZproxy software. OCLC will honor EZproxy's current service arrangement for existing and new customers, whereby licensees continue to enjoy access to new releases of EZproxy and technical support at no additional charge.
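At its core, a rewriting proxy of this kind sits between the patron's browser and the vendor's site, so authentication happens once against the library's patron database and the vendor sees requests coming from the library's network. The short Python sketch below illustrates only that idea; the host names, the session check and the licensed-resource list are invented for the example, and this is not EZproxy's actual implementation.

# Minimal sketch of the URL-rewriting idea behind a library proxy server.
# Host names, the session check and the licensed-resource list are
# hypothetical; this is not EZproxy's actual code.
from urllib.parse import urlparse

LICENSED_HOSTS = {"search.example-db.com", "journals.example-press.org"}
PROXY_HOST = "proxy.example-library.edu"

def is_authenticated(session: dict) -> bool:
    """Stand-in for a check against the library's patron database."""
    return session.get("barcode_valid", False)

def rewrite_url(url: str, session: dict) -> str:
    """Send licensed-resource URLs through the proxy; pass others untouched."""
    host = urlparse(url).hostname or ""
    if host in LICENSED_HOSTS and is_authenticated(session):
        # The proxy fetches the page on the user's behalf, so the vendor
        # sees the library's IP address and the patron's browser needs
        # no special configuration.
        return f"https://{PROXY_HOST}/login?url={url}"
    return url

if __name__ == "__main__":
    print(rewrite_url("https://search.example-db.com/article/42",
                      {"barcode_valid": True}))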

OCLC will also continue to develop and support EZproxy, working with commercial vendors to create new connectors to authentication systems and online content resources for libraries. EZproxy version 4.1 is scheduled for release in March 2008.

Additionally, OCLC is planning to connect local instances of EZproxy to WorldCat.org, creating new value for licensees and their users. By surfacing EZproxy in WorldCat.org, end users outside of the library will have better access to library collections and services through WorldCat.

In 1999, Mr Zagar announced the launch of EZproxy with a single posting to a listserv, believing that it would be of use to other community colleges. "When Harvard and MIT were among the first to sign up for EZproxy, I knew I had underestimated the problem of authentication and access to these licensed materials", said Mr Zagar. Since then, his user base has grown to more than 2,400 institutions.

www.oclc.org/ezproxy/

OpenTranslators Federated Search/Metasearch Software Product Released

CARE Affiliates announced at ALA Midwinter 2008, in conjunction with its strategic partners, Index Data and WebFeat, a new product called OpenTranslators. OpenTranslators is intended to reshape the way libraries select and use federated search and metasearch technology.

OpenTranslators will allow libraries to use the federated search interface of their choice to access over 10,000 databases using SRU/SRW/Z39.50. These include licensed databases, free databases, catalogs, and Z39.50, Telnet and proprietary databases. Libraries that already have a Z39.50 client in their OPAC will be able to connect not only to library catalogs but also to thousands of additional databases. Those libraries that are building or already using an open source federated search tool will now be able to expand the world of information that can be accessed. Finally, institutions and organizations building new mashup clients will be able to access and use vast amounts of additional content.
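Because SRU is a plain HTTP protocol, the kind of request such a gateway accepts can be sketched in a few lines of Python. The endpoint URL below is hypothetical; the version, operation, query, maximumRecords and recordSchema parameters are standard SRU 1.1, with the query expressed in CQL.

# Minimal sketch of an SRU searchRetrieve request against a hypothetical
# SRU/SRW/Z39.50 gateway. The base URL is invented; the parameters are
# standard SRU 1.1.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "https://gateway.example.org/sru"  # hypothetical endpoint

def search(cql_query: str, base_url: str = BASE_URL) -> int:
    """Issue a searchRetrieve request and return the reported hit count."""
    params = {
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": cql_query,
        "maximumRecords": "10",
        "recordSchema": "dc",
    }
    with urlopen(f"{base_url}?{urlencode(params)}") as resp:
        tree = ET.parse(resp)
    ns = {"sru": "http://www.loc.gov/zing/srw/"}
    hits = tree.find(".//sru:numberOfRecords", ns)
    return int(hits.text) if hits is not None else 0

Usage is as simple as search('dc.title = "digital preservation"'); any client able to issue such a request, whether an OPAC's Z39.50/SRU client or a home-grown mashup, can sit in front of the gateway.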

Use of the WebFeat translators in this solution provides a unique capability: the technology structures and parses unstructured citations, even for databases that don't natively support such functionality. This means it is possible to sort results by date, title and author and to support citation exports into a variety of formats. In addition, results usage can be tracked in compliance with the COUNTER standards.

In addition, the WebFeat Administrative Console, known for its ease of use, flexibility and speed in managing translators, is included in the solution. It takes what could be a tedious and time-consuming task for library staff and virtually eliminates it. CARE has combined these capabilities with Index Data's expertise and resources to provide an SRU/SRW/Z39.50 gateway to the WebFeat translators. Index Data, long a pioneer in information retrieval technology, designed and built a sophisticated gateway that provides a seamless level of connectivity for end users.

OpenTranslators is a hosted service, so new databases can be added on request and no purchase of servers or software is required.

CARE website: www.care-affiliates.com/care-products.html

ebrary Announces Public Beta of New Java-Based Reader

ebrary has rolled out the first stage of its public beta for its new Java-based Reader. The new ebrary Reader, which will replace the company's current proprietary plug-in, is now available to new ebrary subscription customers on Windows and Linux platforms. All ebrary end-users will have access to the new ebrary Reader in the first quarter of 2008.

The ebrary Reader enables materials to be viewed online and streams pages, providing faster access to large documents. It also gives the ebrary platform all of its rich functionality including ebrary InfoTools, which enables integration between multiple online resources and instant, contextual linking when end-users select words of interest in a document.

Key enhancements and new features include the following:

  1. Better annotating features, with any combination of:

     • multiple highlights and notes per page;

     • resizable and movable notes;

     • highlights with or without notes attached; and

     • color coding of notes and highlights.

  2. Ability to transform text into a hyperlink to a URL of the end-user's choice.

  3. Improved keyboard shortcuts to assist end-users with special accessibility needs.

  4. Transparent updates, eliminating the need for end-user or IT intervention.

  5. Ability to display, print, and copy/paste text from documents in any language.

  6. Improved support for color images.

  7. Better text handling when copying text.

As a provider of e-content services and technology, ebrary helps libraries, publishers, and other organizations disseminate valuable information to end users, while improving end user research and document interaction. The company has developed a flexible e-content platform, which customers may use in a number of different, integrated capacities: ebrary customers may purchase or subscribe to e-books and other content under a variety of pricing and access models, and they may license the ebrary platform to distribute, sell, and market their own content online. All options are delivered using a customizable interface and include the ebrary Reader with InfoTools software, which enable integration with other resources to provide an economical and efficient way to utilize information. ebrary currently offers a growing selection of more than 120,000 e-books and other titles from more than 285 leading publishers and aggregators.

More information about the ebrary Reader is available from ebrary.

Fedora 3.0 Beta 1 Available for Testing

The 12th release of the Fedora software is now available for testing. The first beta version of Fedora 3.0 featuring a Content Model Architecture (CMA), an integrated structure for persisting and delivering the essential characteristics of digital objects in Fedora, is available at: www.fedora-commons.org/. The Fedora CMA plays a central role in the Fedora architecture, in many ways forming the over-arching conceptual framework for future development of Fedora Repositories. The Fedora CMA builds on the Fedora architecture to simplify use while unlocking potential.

Dan Davis, Chief Software Architect, Fedora Commons, explains the CMA in the context of Fedora 3.0, "It's a hybrid. The Fedora CMA handles content models that are used by publishers and others, and is also a computer model that describes an information representation and processing architecture". By combining these viewpoints, Fedora CMA has the potential to provide a way to build an interoperable repository for integrated information access within organizations and to provide durable access to our intellectual works.
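Concretely, a Fedora 3.0 object declares which content model it conforms to with a hasModel assertion in its RELS-EXT datastream. The Python sketch below builds such an assertion as RDF/XML; the fedora-model:hasModel relationship is the standard CMA mechanism, while the PIDs are invented for illustration.

# Minimal sketch: building a RELS-EXT RDF/XML datastream that asserts a
# Fedora 3.0 content-model relationship. The PIDs are invented examples.
import xml.etree.ElementTree as ET

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
MODEL_NS = "info:fedora/fedora-system:def/model#"

def rels_ext(object_pid: str, model_pid: str) -> str:
    """Return RDF/XML stating that object_pid conforms to model_pid."""
    ET.register_namespace("rdf", RDF_NS)
    ET.register_namespace("fedora-model", MODEL_NS)
    rdf = ET.Element(f"{{{RDF_NS}}}RDF")
    desc = ET.SubElement(rdf, f"{{{RDF_NS}}}Description",
                         {f"{{{RDF_NS}}}about": f"info:fedora/{object_pid}"})
    ET.SubElement(desc, f"{{{MODEL_NS}}}hasModel",
                  {f"{{{RDF_NS}}}resource": f"info:fedora/{model_pid}"})
    return ET.tostring(rdf, encoding="unicode")

if __name__ == "__main__":
    print(rels_ext("demo:article-1", "demo:ArticleContentModel"))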

The Fedora community is encouraged to download and experiment with Fedora 3.0 Beta 1. It is particularly important to receive comments while the software is still being developed to help ensure this important update to the Fedora architecture meets the needs of the community. Please contribute observations and comments to fedora-commons-developers@lists.sourceforge.net or fedora-commons-users@lists.sourceforge.net. Fedora 2.2.1 will remain available for all production repository instances.

Overview of New Features in the Fedora 3.0 Beta 1 Release

  • CMA: provides a model-driven approach for persisting and delivering the essential characteristics of digital content in Fedora.

  • API-M LITE: an experimental interface for creating, modifying, and managing digital objects through a Web interface in a REST style.

  • Mulgara support: Fedora now supports the Mulgara semantic triplestore, replacing Kowari.

  • Migration utility: provides an update utility to convert existing collections for CMA compatibility.

  • Relational index simplification: the Fedora schema was simplified, making changes easier without having to reload the database.

  • Dynamic behaviors: objects may be added or removed dynamically from the system, moving system checks into runtime errors.

  • Error reporting: provides improved runtime error details.

  • Owner as a CSV string: enables using a CSV string as ownerID and in XACML policies.

  • Java 6 compatibility: Fedora may optionally be compiled using Java 6.

Fedora Commons website: www.fedora-commons.org/

WALDO Commissions LibLime to Substantially Enhance Koha

LibLime, a leader in open-source solutions for libraries, and WALDO (Westchester Academic Library Directors Organization) announced in January 2008 that 15 academic WALDO libraries have selected Koha ZOOM for their next integrated library system (ILS) and union catalog.

As part of the agreement, WALDO has commissioned LibLime to substantially enhance Koha to meet the requirements of academic libraries. Enhancement specifications were developed jointly by LibLime and WALDO during the course of a scoping study completed in the Spring of 2007 and include over 35 functional areas. LibLime's Koha ZOOM solution will provide WALDO members with a complete integrated library automation solution, including acquisitions and serials modules, as well as a union catalog. Migration of the full membership will begin with a pilot implementation at St. John's University, Queens, New York, in the Spring/Summer of 2008, followed by the migration of the other 14 libraries pending the success of the pilot.

WALDO has incorporated funding of on-going development into its commitment to open source. Participating libraries will initially contribute 20 per cent of operating costs to a special development fund to ensure the ability to keep up with the pace of technology and with the accelerating expectations of academic library users. "Such a commitment is now possible financially because of the lower overall costs of operating on the open source model", Theresa Maylone, University Librarian at St. John's University, explains.

The initial development project and phased migration of the WALDO academic consortium members is part of a broader WALDO/LibLime agreement that provides for migration, support, hosting and development services at reduced prices for all participating WALDO members.

LibLime website: http://liblime.com/

WALDO website: www.waldolib.org

NewGenLib ILS Goes Open Source

The Kesavan Institute of Information and Knowledge Management and Verus Solutions Pvt. Ltd. have announced that their NewGenLib ILS, a next generation library automation and networking solution from India for the developing world, is now an open source system.

NewGenLib Open Source Features:

  1. NewGenLib (www.newgenlib.com) is now open source under the most widely used open source software license, the GNU General Public License (GPL).

  2. The open source binaries and source code can be downloaded from www.sourceforge.net/projects/newgenlib. Installation notes for Linux and Windows are also available at the site, as is the user manual.

  3. Librarians and developers who download the software can post their views, problems, solutions, discussions, etc., on https://sourceforge.net/forum/?group_id=210780.

  4. NewGenLib is web-based and has a multi-tier architecture (www.kiikm.org/newgenlib_architecture.jpg); it uses Java (a Swing-based librarian's GUI) and JBoss (a J2EE-based application server). The default backend is the open source PostgreSQL.

  5. NewGenLib functional modules: acquisitions management (monographs and serials); technical processing; circulation control; system configuration; a desktop reports application; and an end-of-day process (scheduler) application.

  6. NewGenLib is compliant with the MARC-21 format.

  7. Has a MARC editor.

  8. Allows seamless bibliographic and authority data import into cataloging templates.

  9. Form letter templates are configurable using OpenOffice 2.0 as ODT and HTML.

  10. SMTP mail servers can be configured so that emails can be sent from functional modules.

  11. NewGenLib allows creation of institutional open access (OA) repositories compliant with the OAI-PMH (a minimal harvesting sketch follows this list).

  12. NewGenLib servers are SRU/W compliant, supporting the MARC-21 and MODS 3.0 metadata formats. CQL (level 1) with both the Bath and Dublin Core profiles is supported.

  13. NewGenLib is Unicode 3.0 compliant. It is an internationalized application; English and Arabic interfaces are already available.

  14. Is radio frequency identification (RFID) ready.

NewGenLib website: www.newgenlib.com/

SourceForge download site: http://sourceforge.net/projects/newgenlib
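Because a NewGenLib-based repository exposes its records over OAI-PMH (item 11 above), harvesting it needs nothing beyond the Python standard library. The sketch below issues a ListRecords request with the oai_dc metadata prefix; the repository base URL is invented for the example.

# Minimal sketch of an OAI-PMH ListRecords harvest against a hypothetical
# repository. The base URL is invented; the verb and metadataPrefix
# parameters are standard OAI-PMH 2.0.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "http://repository.example.org/oai"  # hypothetical endpoint
NS = {"oai": "http://www.openarchives.org/OAI/2.0/",
      "dc": "http://purl.org/dc/elements/1.1/"}

def harvest_titles(base_url: str = BASE_URL) -> list:
    """Return the dc:title of each record in the first ListRecords page."""
    query = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
    with urlopen(f"{base_url}?{query}") as resp:
        tree = ET.parse(resp)
    return [title.text for title in tree.findall(".//dc:title", NS)]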

Next Generation Cataloging: New Cataloging and Metadata Pilot

OCLC is conducting a pilot project to explore the viability and efficiency of capturing metadata from publishers and vendors upstream and enhancing that metadata in WorldCat, an approach that could provide added value to libraries and publishers by enhancing and delivering data that can work in multiple contexts and systems.

The pilot will begin in January 2008 and involves libraries and the publisher supply chain. Public and academic libraries will be represented in the pilot along with a variety of publishers and vendors. OCLC will announce participants as the project gets under way.

"It is crucial to the future of cataloging to find collaborative ways to take advantage of publisher ONIX metadata, and we must find efficient and centralized ways to store, enhance and normalize the metadata for the benefit of both library and publishing communities", said Renee Register, OCLC Global Product Manager, Cataloging and Metadata Services. "Librarians can and should participate in raising the quality of metadata in the marketplace, where we participate whenever we select and purchase materials".

The next generation cataloging and metadata service pilot follows release of a "Report on the Future of Bibliographic Control" by the Working Group on the Future of Bibliographic Control, formed by the Library of Congress to address changes in how libraries must do their work in the digital information era. The ability to leverage upstream publisher data effectively is central to the Working Group's recommendations.

How the pilot works:

  • The role of publishers and vendors: publisher and vendor pilot partners provide OCLC with title information in ONIX format. OCLC crosswalks the data to MARC for addition to WorldCat and, where possible, enriches the data in automated ways through data mining and data mapping (a simplified sketch of such a crosswalk follows this list). Enriched metadata is returned to publishers and vendors in ONIX format for evaluation of OCLC enhancements.

  • The role of libraries: library pilot partners evaluate the quality of metadata added to WorldCat through this process and provide feedback on its suitability for use in library technical services workflows.

  • The role of other partners: publishing industry partners such as BISG (Book Industry Study Group, Inc.) assist OCLC with publisher industry data standards and terminologies, as well as providing a forum in which to share ideas and results with the industry.

  • Advisory board: an advisory board consisting of leaders from the library and publishing communities is assisting in pilot design and evaluation of results and will advise OCLC on next steps after pilot completion.
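The crosswalk step described in the first item above can be pictured with a small sketch: a handful of ONIX 2.1 elements are read and mapped to rough MARC equivalents. The mapping shown is a deliberately simplified illustration, not OCLC's actual crosswalk tables.

# Simplified illustration of an ONIX-to-MARC crosswalk; not OCLC's actual
# mapping. A few ONIX 2.1 elements are read and returned as a dictionary
# keyed by the MARC tag they roughly correspond to.
import xml.etree.ElementTree as ET

ONIX_SAMPLE = """<Product>
  <ProductIdentifier><ProductIDType>15</ProductIDType>
    <IDValue>9780000000002</IDValue></ProductIdentifier>
  <Title><TitleText>An Example Title</TitleText></Title>
  <Contributor><PersonName>Jane Author</PersonName></Contributor>
  <Publisher><PublisherName>Example Press</PublisherName></Publisher>
</Product>"""

def onix_to_marcish(onix_xml: str) -> dict:
    """Map a few ONIX elements to rough MARC field equivalents."""
    product = ET.fromstring(onix_xml)

    def text(path):
        element = product.find(path)
        return element.text if element is not None else None

    return {
        "020": text("ProductIdentifier/IDValue"),  # ISBN
        "100": text("Contributor/PersonName"),     # main entry (author)
        "245": text("Title/TitleText"),            # title statement
        "260": text("Publisher/PublisherName"),    # publication information
    }

if __name__ == "__main__":
    print(onix_to_marcish(ONIX_SAMPLE))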

The next generation cataloging and metadata services pilot advisory board includes: Paul DeAnna, Cataloging Section, Technical Services Division, National Library of Medicine; Phil Schreur, Head of Cataloging and Metadata Services, Stanford University Libraries; David Williamson, Cataloging in Publication Division, Library of Congress; John Chapman, Metadata Librarian, Technical Services, University of Minnesota Libraries; Michael Norman, Head of Content Access Management, University of Illinois, Urbana-Champaign; Laura Dawson, Consultant to the publishing industry; Nora Rawlinson, Consultant to public libraries on collection development.

More information about the pilot: www.oclc.org/productworks/nextgencataloging.htm

LC Working Group on the Future of Bibliographic Control Final Report Released

The Library of Congress Working Group on the Future of Bibliographic Control submitted its final report to Associate Librarian of Congress for Library Services Deanna Marcum on 9 January.

The working group makes five general recommendations:

  1. Increase the efficiency of bibliographic production for all libraries through cooperation and sharing.

  2. Transfer effort into high-value activity. Examples include providing access to hidden, unique materials held by libraries.

  3. Position technology, recognizing that the World Wide Web is libraries' technology platform as well as the appropriate platform for standards.

  4. Position the library community for the future by adding evaluative, qualitative and quantitative analyses of resources; work to realize the potential provided by the FRBR framework.

  5. Strengthen the library and information science profession through education and through the development of metrics that will inform decision-making now and in the future.

More information on the Library of Congress Working Group on the Future of Bibliographic Control is available at a special public Website, <www.loc.gov/bibliographic-future> [December 2007]. A Webcast of the presentation to Library of Congress staff is available at www.loc.gov/today/cyberlc/feature_wdesc.php?rec=4180 [December 2007]. Comments on the report (both final draft and final versions) may be found by using the tag WoGroFuBiCo in del.icio.us: http://del.icio.us/tag/WoGroFuBiCo

WoGroFuBiCo website and final report: www.loc.gov/bibliographic-future/news/

Best Practices for Use of RFID in Library Applications

The National Information Standards Organization (NISO) has issued RFID in US Libraries, containing recommended practices to facilitate the use of RFID in library applications. The scope of the document is limited to item identification, that is, the implementation of RFID for books and other materials, and specifically excludes its use with regard to the identification of people. RFID in US Libraries (NISO RP-6-2008), which is freely available from the NISO website, was prepared by NISO's RFID Working Group, chaired by Dr Vinod Chachra, CEO of VTLS Inc., and composed of RFID hardware manufacturers, solution providers (software and integration), library RFID users, book jobbers and processors, and related organizations. Members of the Working Group included: Livia Bitner (Baker & Taylor), Brian Green (EDItEUR), Jim Lichtenberg (Book Industry Study Group), Alastair McArthur (Tagsys), Allan McWilliams (Baltimore County Public Library), Louise Schaper (Fayetteville Public Library), Paul Sevcik (3M Library Systems), Paul Simon (Checkpoint Systems, Inc.), and Marty Withrow (OCLC).

"The RFID Working Group took on a very difficult challenge", said Todd Carpenter, NISO Managing Director. "The best outcome would be one that achieves true interoperability while protecting personal privacy, supporting advanced functionality, facilitating security, protecting against vandalism, and allowing the RFID tag to be used in the entire lifecycle of the book and other library materials".

The NISO recommendations for best practices aim to promote procedures that do the following:

  • Allow an RFID tag to be installed at the earliest point and used throughout the lifecycle of the book, from publisher/printer to distributor, jobber, library (shelving, circulating, sorting, reshelving, inventory, and theft deterrence), and interlibrary loan, and continuing on to secondary markets such as secondhand books, returned books, and discarded/recycled books.

  • Allow for true interoperability among libraries, where a tag in one library can be used seamlessly by another, even if the libraries have different suppliers for tags, hardware, and software.

  • Protect the personal privacy of individuals while supporting the functions that allow users to reap the benefits of this technology.

  • Permit the extension of these standards and procedures for global interoperability.

  • Remain relevant and functional with evolving technologies.

RFID in US Libraries: www.niso.org/standards/resources/RP-6-2008.pdf

ARL Releases White Paper on Educational Fair Use

The Association of Research Libraries (ARL) has released a white paper, "Educational Fair Use Today", by Jonathan Band, JD. Band discusses three recent appellate decisions concerning fair use that should give educators and librarians greater confidence and guidance for asserting this important privilege. The paper is freely available for download from the ARL Web site.

In all three decisions discussed in the paper, the courts permitted extensive copying and display in the commercial context because the uses involved repurposing and recontextualization. The reasoning of these opinions could have far-reaching implications in the educational environment.

Band summarizes the three cases (Blanch vs Koons, Perfect 10 vs Amazon.com, and Bill Graham Archives vs Dorling Kindersley) and analyses the significance of the appellate decisions in the educational context.

Educational Fair Use Today: www.arl.org/bm~doc/educationalfairusetoday.pdf

Making the Web Easy for Users with Disabilities: New Nielsen Norman Report

The Nielsen Norman Group has made its 148-page report on online accessibility, Beyond ALT Text: Making the Web Easy to Use for Users With Disabilities (75 Best Practices for Design of Websites and Intranets, Based on Usability Studies with People Who Use Assistive Technology), by Kara Pernice and Jakob Nielsen, available for free download.

The report contains:

  1. Results of usability tests of 19 websites with users with several different types of disabilities who are using a range of assistive technology.

  2. Test data collected mainly in the USA.

  3. Seventy-five detailed design guidelines.

The report is illustrated with 46 screenshots of designs that worked well or that caused difficulties for users with disabilities in the usability tests as well as 23 photos of assistive technology devices. The examples and guidelines are directly based on empirical observation of actual user behavior. The report addresses the usability of websites and intranets. The report should be used together with the standards for technical accessibility of web pages.

Report description: www.nngroup.com/reports/accessibility/

Report (pdf 7 MB): www.nngroup.com/reports/accessibility/beyond_ALT_text.pdf

Framework for Good Digital Collections: Version 3 Released by NISO, IMLS

NISO has announced the release of version 3 of the Framework of Guidance for Building Good Digital Collections. The third version, the development of which was generously supported by the Institute of Museum and Library Services (IMLS), is now available on the NISO website. In addition, a community version of this document is being developed to allow for ongoing contributions from the community of librarians, archivists, curators, and other information professionals. The Framework establishes principles for creating, managing, and preserving digital collections, digital objects, metadata, and projects. It also provides links to relevant standards that support the principles and additional resources.

IMLS, which developed the first version of the Framework in 2000, transferred maintenance with continuing strong support to NISO in September 2003. A NISO advisory group issued the second edition in 2004 and the NISO Framework Working Group was formed in 2006 to create the third edition and oversee the community version. The third edition updates and revises the Framework, with the intention to incorporate it into a website for use by library and museum practitioners. This will encourage community participation in the framework, soliciting feedback, annotations, resources, and discussion.

The Working Group for this revision includes Priscilla Caplan, Florida Center for Library Automation (chair); Grace Agnew, Rutgers; Murtha Baca, Getty Research Institute; Tony Gill, Center for Jewish History; Carl Fleischhauer, Library of Congress; Ingrid Hsieh-Yee, Catholic University of America; Jill Koelling, Northern Arizona University; and Christie Stephenson, American Museum. In addition, outside reviewers contributed their expertise to this updated Framework.

"We believe that the Framework will help museums and libraries create and preserve high quality digital content", said Anne-Imelda Radice, PhD, director of the IMLS. "The Framework will permit users to submit comments and case studies about specific standards and principles through a wiki-style interactive format an invaluable guide for practitioners and educators". IMLS is the primary source of federal support for the nation's 122,000 libraries and 17,500 museums. The Institute's mission is to create strong libraries and museums that connect people to information and ideas (www.imls.gov).

The Framework of Guidance is intended for two audiences: cultural heritage organizations planning and implementing initiatives to create digital collections, and funding organizations that want to encourage the development of good digital collections. It has three purposes:

  • To provide an overview of some of the major components and activities involved in creating good digital collections.

  • To identify existing resources that support the development of sound local practices for creating and managing good digital collections.

  • To encourage community participation in the ongoing development of best practices for digital collection building.

The Framework provides criteria for goodness organized around four core types of entities: collections (organized groups of objects), objects (digital materials), metadata (information about objects and collections), and initiatives (programs or projects to create and manage collections).

Edition three of the framework acknowledges that digital collections increasingly contain born-digital objects, as opposed to digital objects that were derived through the digitization of analogue source materials. It also acknowledges that digital collection development has moved from being an ad hoc "extra" activity to a core service in many cultural heritage institutions.

Framework of Guidance, version 3: www.niso.org/framework/framework3.pdf

IPI Wins Grants for Two Major Image Preservation Research Projects

The DP3 Project: the Digital Print Preservation Portal

The Image Permanence Institute at Rochester Institute of Technology (RIT) has received a $314,215 grant from the IMLS for a major research and development project about the preservation of digital prints.

Inkjet, electrophotographic and dye diffusion thermal transfer materials account for the overwhelming majority of desktop documents and an increasing number of short-run publications and monographs in institutional collections today. Collection care professionals need guidance, first, to determine which objects in their collections have been digitally printed and, second, to understand the nature and preservation needs of such materials.

"Virtually all forms of individual scholarly communication and artistic image creation now depend on only a few technologies for producing hard-copy output", says James Reilly, director of RIT's Image Permanence Institute. "Because these technologies haven't been systematically studied, a balanced overview of their strengths and weaknesses from the point of view of collection preservation doesn't exist. We have already observed that the newer media are vulnerable to damage in ways that photographic materials or output from older text recording systems were not. We can't assume that what is good for traditional materials will be good for digital materials".

The IMLS funds will support a two-year study of the potentially harmful effects of enclosures and physical handling on digital prints, as well as their vulnerability to damage due to flood. Another component of the project is an in-depth investigation of the stability of digitally printed materials when they are exposed to light, air-borne pollutants, heat and humidity. A $606,000 grant from the Andrew W. Mellon Foundation will fund this portion of the project.

Project results will be published on a unique Web site, The DP3 Project: Digital Print Preservation Portal. The site will contain information and tools to aid in the identification of digital prints and in understanding their chemical and physical nature; it will offer scientifically sound recommendations for storage, display and handling; and it will guide users in assessing the risk of damage to these materials in the event of flood so that they might revise their institutional disaster response plans.

DP3 Project website: www.dp3project.org/

The WebERA (Web Environmental Risk Analysis) Project

IPI has also received a $332,760 grant from IMLS to create a novel Web-based system that will enable collections staff in museums and libraries to efficiently move large volumes of environmental data directly to the Web for automated analysis and reporting.

Museums and libraries are unable to manage environments effectively and efficiently due to lack of staff time and in-house expertise, the difficulty of determining the degree of risk or benefit to collections and the challenge of organizing, maintaining and reporting on mountains of data.

The premise for the two-year research and development project called WebERA, or Web Environmental Risk Analysis, is that environmental risks can be managed and mitigated if they can be identified, quantified and then communicated to museum leadership and facilities managers. The idea of using the Web to store and share environmental data directly is new, but is firmly rooted in the environmental research and development conducted by RIT's Image Permanence Institute over the past 13 years.

Project activities will include programming the WebERA web server application and working with a selected pilot group of 10 museums and five libraries to test the design and function of the WebERA system. Project results will be made available to the preservation community through conference presentations and a Web publication.

More on the WebERA Project: www.imagepermanenceinstitute.org/shtml_sub/weberainfo.asp

The Image Permanence Institute, part of RIT's College of Imaging Arts and Sciences, is a university-based, non-profit research laboratory devoted to scientific research in the preservation of visual and other forms of recorded information.

IPI home page: www.imagepermanenceinstitute.org/

LC's Digital Preservation Network Adds New Partners

In January 2008, 21 states, working in four multistate demonstration projects, joined the Library of Congress's National Digital Information Infrastructure and Preservation Program (NDIIPP) in an initiative to catalyze collaborative efforts to preserve important state government information in digital form. States face formidable challenges in caring for digital records with long-term legal and historical value. A series of Library-sponsored workshops held in 2005 and involving all states revealed that the large majority of states lack the resources to ensure that the information they produce in digital form only, such as legislative records, court case files and executive agency records, is preserved for long-term access. The workshops made clear that much state government digital information, including content useful to Congress and other policymakers, is at risk of loss if it is not saved now.

These partnerships expand the NDIIPP network to include state government agencies. In August 2007, the network added partners from the private sector in an initiative called Preserving Creative America. With these new partners, the NDIIPP network now comprises well over 100 members, including government agencies, educational institutions, research laboratories and commercial entities.

The projects will collect several significant categories of digital information such as geospatial data, legislative records, court case files, Web-based publications and executive agency records. Each project will also work to share tools, services and best practices to help every state make progress in managing its digital heritage. The total amount of the funds being made available to the new partners is $2.25 million.

Following are the lead entities and the focus areas of the projects:

Arizona State Library, Archives and Public Records, Persistent Digital Archives and Library System. Arizona will lead this project to establish a low-cost, highly automated information network that reaches across multiple states. Results will include techniques for ingesting mass quantities of state data as well as developing a strong data management infrastructure. Content will include digital publications, agency records and court records. States working in this project are Arizona, Florida, New York and Wisconsin.

Minnesota Historical Society, a Model Technological and Social Architecture for the Preservation of State Government Digital Information. The project will work with legislatures in several states to explore enhanced access to legislative digital records. This will involve implementing a trustworthy information management system and testing the capacity of different states to adopt the system for their own use. Content will include bills, committee reports, floor proceedings and other legislative materials. States working in this project are Minnesota, California, Kansas, Tennessee, Mississippi, Illinois and Vermont.

North Carolina Center for Geographic Information and Analysis, Multistate Geospatial Content Transfer and Archival Demonstration. Work will focus on replicating large volumes of geospatial data among several states to promote preservation and access. The project will work closely with federal, state and local governments to implement a geographically dispersed content exchange network. Content will include state and local geospatial data. States working in this project are North Carolina, Utah and Kentucky.

Washington State Archives, Multistate Preservation Consortium. The Washington State Archives will use its advanced digital archives framework to implement a centralized regional repository for state and local digital information. Outcomes will include establishment of a cost-effective interstate technological archiving system, as well as efforts to capture and make available larger amounts of at-risk digital information. Content will include vital records, land ownership and use documentation, court records and Web-based state and local government reports. States working in this project are Washington, Colorado, Oregon, Alaska, Idaho, Montana, California and Louisiana.

NDIIPP website: www.digitalpreservation.gov

Technical Report to Koninklijke Bibliotheek on Digital Preservation Now Available

In December 2006 the Koninklijke Bibliotheek (KB), National Library of the Netherlands, commissioned the RAND Corporation/RAND Europe to analyse KB's e-Depot strategy, following up on the positive assessment and advice of an international Evaluation Committee in 2005. The Technical Report "Addressing the uncertain future of preserving the past. Towards a robust strategy for digital archiving and preservation" was presented on 2 November 2007, during the International Conference on Digital Preservation Tools and Trends. The KB has also released a Response to the recommendations of the RAND Report.

The whole text of the Report can be accessed on the KB website at: www.kb.nl/hrd/dd/dd_links_en_publicaties/publicaties/rand_report_e-depot_TR510_3c_Cover.pdf

The Response of the KB can be accessed at: www.kb.nl/hrd/dd/dd_rand/rand_report_response-en.html

Academic Commons Special Issue Devoted to Cyberinfrastructure

Academic Commons, sponsored by the Center of Inquiry in the Liberal Arts at Wabash College, released a December 2007 special issue devoted to Cyberinfrastructure and the Liberal Arts. Edited by David L. Green (Principal at Knowledge Culture), the issue is dedicated to the memory of Roy Rosenzweig (1950-2007), a historian who inspired a generation of fellow historians and others working at the intersection of the humanities and new technologies (http://thanksroy.org/).

Cyberinfrastructure offers the liberal arts new resources and new ways of working with revolutionary computing capabilities, massive data resources and distributed human expertise. This collection of essays, interviews and reviews captures the perspectives of scholars, scientists, information technologists and administrators on the challenges and opportunities cyberinfrastructure presents for the liberal arts and liberal arts colleges. What difference will cyberinfrastructure make and how should we prepare?

Academic Commons seeks to form a community of faculty, academic technologists, librarians, administrators, and other academic professionals who will help create a comprehensive web resource focused on liberal arts education. Academic Commons aims to share knowledge, develop collaborations, and evaluate and disseminate digital tools and innovative practices for teaching and learning with technology. If successful, the Academic Commons site will advance opportunities for collaborative design, open development, and rigorous peer critique of such resources.

Academic Commons website: www.academiccommons.org

Table of Contents for Special issue: www.academiccommons.org/commons/announcement/table-of-contents

E-Science in Research Libraries: New ARL Report to Research Libraries

The ARL Joint Task Force on Library Support for E-Science has released its final report, an "Agenda for Developing E-Science in Research Libraries", which is freely available for download. The report states, "E-science has the potential to be transformational within research libraries by impacting their operations, functions, and possibly even their mission. The [task force] focused its attention on the implications of trends in e-science for research libraries, exploring the dimensions that impact collections, services, research infrastructure, and professional development".

The task force concluded that "ARL's engagement in the issues of E-science is best focused on educational and policy roles, while partnering with other relevant organizations to contribute in strategic areas of technology development and new genres of publication. These types of strategic collaborations will also provide opportunities to re-envision the research library's role and contribution as twenty-first century science takes shape".

The members of the 2006-2007 Joint Task Force on Library Support for E-Science were: chair Wendy Lougee (University of Minnesota), Sayeed Choudhury (Johns Hopkins University), Anna Gold (Massachusetts Institute of Technology), Chuck Humphrey (University of Alberta), Betsy Humphreys (National Library of Medicine), Richard Luce (Emory University), Clifford Lynch (Coalition for Networked Information), James Mullins (Purdue University), Sarah Pritchard (Northwestern University), and Peter Young (National Agricultural Library). The staff liaisons to the task force were Julia Blixrud, Assistant Executive Director, External Relations, ARL, and Neil Rambo, Visiting Program Officer, ARL, and Director, Cyberinfrastructure Initiatives, University of Washington Libraries.

ARL E-science resources: www.arl.org/rtl/escience/eresource.shtml

Report (pdf): www.arl.org/bm~doc/ARL_EScience_final.pdf

2008 Horizon Report on Emerging Technologies Identifies Seven Metatrends

The 2008 Horizon Report, the annual report describing the continuing work of the NMC's Horizon Project, a research-oriented effort that seeks to identify and describe emerging technologies likely to have a large impact on teaching, learning, or creative expression within higher education, is now available for download. The 2008 Horizon Report is the fifth edition in this annual series. Again this year, as in years past, the report reflects an ongoing collaboration between the New Media Consortium and the EDUCAUSE Learning Initiative (ELI), an EDUCAUSE program.

The core of the report describes six areas of emerging technology (grassroots video, collaborative webs, mobile broadband, data mashups, collective intelligence, and social operating systems) that will impact higher education within three adoption horizons over the next one to five years. To identify these areas, the project draws on an ongoing conversation among knowledgeable persons in the fields of business, industry, and education; on published resources, current research and practice; and on the expertise of the NMC and ELI communities.

With the fifth edition, the project looked back over the past five years at the trends covered in the reports, and seven metatrends have been identified:

  1. communication between humans and machines;

  2. the collective sharing and generation of knowledge;

  3. games as pedagogical platforms;

  4. computing in three dimensions;

  5. connecting people via the network;

  6. the shifting of content production to users; and

  7. the evolution of a ubiquitous platform.

Report (pdf): www.nmc.org/pdf/2008-Horizon-Report.pdf

Horizon Project wiki: horizon.nmc.org/wiki/Main_Page

Virtual Worlds, Real Leaders: Gaming and Leadership in the Business World

IBM and Seriosity have released a report summary of research which studies management practices in online games. Together, IBM and Seriosity have done in-depth research to understand how multiplayer online game environments in the virtual world apply to the business world to enhance productivity, innovation and leadership. The report summary is entitled: Virtual Worlds, Real Leaders: Online games put the future of business leadership on display.

IBM partnered with Seriosity Inc., a software company that develops enterprise products and services inspired by online games, to study how leaders operate in these increasingly popular games. Together with experts from Stanford University and MIT, the team captured 50 hours of online game play, surveyed hundreds of gamers, and conducted several interviews of gaming leaders. The objective of the study was twofold:

  • to better understand how successful leaders behave in online games; and

  • to learn what aspects of game environments leaders leverage to be more effective.

The results are fascinating. Among the findings: the transparent environments created in online games make leadership easier to assume; leadership in online games is more temporary and flexible than it is in the business world; and online games give leaders the freedom to fail and to experiment with different approaches and techniques, something that any Fortune 500 company that hopes to innovate needs to understand.

What specific features of game environments should businesses adopt?

  • Incentive structures that motivate workers immediately and longer term.

  • Virtual economies that create a marketplace for information and collaboration.

  • Transparency of performance and capabilities.

  • Recognition for achievements.

  • Visibility into networks of communication across an organization.

"If you want to see what business leadership may look like in three to five years, look at what's happening in online games". Byron Reeves, PhD, the Paul C. Edwards Professor of Communication at Stanford University and Co-founder of Seriosity, Inc.

Seriosity has developed solutions inspired by multiplayer games. Its flagship product, Attent, creates a virtual economy for enterprise collaboration and a solution to information overload. Using Serios, the virtual currency of the Attent ecosystem, the solution enables users to assign values to messages based on importance. Attent also provides a variety of tools that enable everyone to track and analyse communication patterns and information exchanges across the enterprise.

IBM site and report summary: http://domino.watson.ibm.com/comm/www_innovate.nsf/pages/world.gio.gaming.html

Seriosity website, including full research report: www.seriosity.com/leadership.html

New Study Asserts Google Generation to be a Myth

A new study overturns the common assumption that the "Google Generation", youngsters born or brought up in the Internet age, is the most web-literate. The first ever virtual longitudinal study, carried out by the CIBER research team at University College London (UCL), claims that, although young people demonstrate an apparent ease and familiarity with computers, they rely heavily on search engines, view rather than read, and do not possess the critical and analytical skills to assess the information that they find on the web.

The report, Information Behaviour of the Researcher of the Future, authored by Ian Rowlands and Professor David Nicholas from CIBER at UCL, also shows that research-behavior traits commonly associated with younger users, such as impatience in search and navigation and zero tolerance for any delay in satisfying their information needs, are now becoming the norm for all age groups, from younger pupils and undergraduates through to professors.

Commissioned by the British Library and JISC (Joint Information Systems Committee), the study calls for libraries to respond urgently to the changing needs of researchers and other users. Going virtual is critical, and learning what researchers want and need is crucial, if libraries are not to become obsolete, it warns. "Libraries in general are not keeping up with the demands of students and researchers for services that are integrated and consistent with their wider internet experience", says Dr Ian Rowlands, the lead author of the report.

In the absence of a longitudinal study tracking a group of young people through schooling to academic careers, CIBER developed a methodology which has created a unique "virtual longitudinal study" based on the available literature and new primary data about the ways in which the British Library and JISC websites are used. This is the first time that the information-seeking behavior of the virtual scholar has been profiled by age.

Report (pdf 1.67 MB): www.bl.uk/news/pdf/googlegen.pdf

Download full report documentation (from UCL website): www.ucl.ac.uk/slais/research/ciber/downloads/

A recording of the Wednesday 16 January 2008 launch event is available for download: www.bl.uk/onlinegallery/whatson/downloads/files/googlegeneration.mp3 (MP3, 77 min 22 s, 66.3 MB).

Travelers in the Middle East Archive Provides Unprecedented Access to Rare Materials

Rice University has released the Travelers in the Middle East Archive (TIMEA), an open, multimedia digital research collection focused on Western travel to the Middle East, particularly Egypt, between the eighteenth and early twentieth centuries. Researchers will have unprecedented access to the rare materials, all digitally preserved, searchable and centralized in the archive.

With more than 25,000 pages, TIMEA provides images and full transcriptions to preserve the distinct impression of the books and to facilitate search and analysis. Many of these works (travel guidebooks, travel narratives, museum catalogs and scholarly treatments) have not previously been available either online or at a single location.

TIMEA provides access to:

  • Nearly 1,000 images, including stereographs, postcards and book illustrations.

  • More than 150 historical maps representing the Middle East as it was in the nineteenth and early twentieth centuries.

  • Interactive geographical information systems (GIS) maps that serve as an interface to the collection and present detailed information about features such as waterways, elevation and populated places.

  • Successive editions of classic travel guides and major museum collection catalogues.

  • Convenient educational modules that set materials from the collection in historical and geographic context and explore the research process.

TIMEA is able to offer seamless access for researchers by providing a common user interface to digital objects housed in three repositories. Texts, historical maps and images reside in DSpace, an open-source digital repository system. Educational research modules are presented within Connexions (http://cnx.org/), an open-content commons and publishing platform for educational materials. TIMEA also uses Google Maps and ESRI's ArcIMS map server.

The archive also uses interlinking to enable easy access to related materials. For example, links to resources associated with particular place names are available in the texts, allowing readers to locate the site on a map, view photos and drawings of the site, and find other texts that mention it. Links to relevant educational modules are provided within the records for images and texts.

TIMEA is supported by the IMLS, the Computer and Information Technology Institute, the School of Humanities, School of Engineering and Rice University.

http://timea.rice.edu/

eCompile Service Allows Publishers to Deliver Book "Mash-ups" on Demand

At the O'Reilly Tools of Change for Publishing Conference in February, LibreDigital, a division of NewsStand, Inc., announced the availability of its next-generation eCompile Service, a technology enhancement to the LibreDigital Internet Warehouse for Publishers that empowers forward-thinking publishers to provide consumers with book "mash-ups", or custom books made from content compiled from different book titles in publisher portfolios. The updated service allows publishers to instantly re-compile and repurpose their content so readers can order a single book with content from multiple titles, while allowing publishers to protect copyrights.

"Whether it's music, software applications or books, consumers today want instant gratification and access to dynamic content that's customized for their particular needs", said Craig Miller, Vice President and General Manager of LibreDigital. "For book publishers, digital strategies are quickly evolving to include delivery of both on- and off-line content in the format consumers choose. Our enhanced eCompile Service is designed to help publishers do just that while creating new opportunities to re-monetize their content across a variety of digital, print and mobile platforms".

The LibreDigital eCompile Service is designed for publishers looking to compose discrete and "on-the-fly" books from content taken from multiple sources. The newest version is unique in that it makes it easy to compile such items with rights and permissions intact, protecting authors and copyrights.

LibreDigital solutions are used by some of the world's top publishers, including Bloomsbury, HarperCollins and Hachette. Publishers are using the new eCompile service to offer consumers an easy way to custom-order, assemble and print content from some of the most respected books, college textbooks and professional directories on the market. The LibreDigital eCompile Service is available to LibreDigital Warehouse clients.

More information: www.libredigital.com/warehouse

Smashwords: Your eBook, Your Way

Smashwords, a new digital publishing startup, previewed a breakthrough ebook publishing platform for authors and publishers at the O'Reilly Tools of Change publishing conference in February. The company has begun accepting applications for a limited number of beta testers.

Smashwords allows anyone to become a published ebook author in minutes. The site is ideal for full length novels, short fiction, essays, poetry, personal memoirs, non-fiction and screenplays. Authors receive 85 per cent of the net sales proceeds from their works, and retain full control over sampling, pricing and marketing. The site offers authors free viral marketing tools to build readership, such as per cent-based sampling; dedicated pages for author profiles and book profiles; support for embedded YouTube book trailers, author interviews and video blogs; widgets for off-site marketing; reader reviews; and reader "favoriting".

"We plan to do for ebook authors what YouTube did for amateur video producers", said Mark Coker, founder and CEO of Smashwords, based in Los Gatos, Calif. "We make digital publishing simple and profitable for authors and publishers".

For fans of the written word, Smashwords provides an opportunity to discover new voices in literature, poetry and non-fiction. The site offers useful tools for search, discovery, and personal library-building.

It's easy to publish on Smashwords. Authors simply upload the word processing file of their work to the site; assign sampling rights to make up to 99 per cent of the book available as a free sample; and then assign pricing (including the "Radiohead" honor system option where authors let customers decide what to pay). Smashwords automatically converts the book into multiple DRM-free ebook formats (.txt, .rtf, .mobi, .epub, .pdf), making the book available for download or online reading.
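The conversion step in that workflow, one uploaded manuscript fanned out to several DRM-free formats, can be illustrated schematically. The converters in the Python sketch below are deliberately trivial placeholders for plain text and minimal RTF output; this is not Smashwords' actual pipeline, which also produces .mobi, .epub and .pdf.

# Schematic sketch of a one-manuscript-to-many-formats conversion step.
# The converters are trivial placeholders, not Smashwords' actual pipeline.
from pathlib import Path

def to_txt(text: str) -> bytes:
    return text.encode("utf-8")

def to_rtf(text: str) -> bytes:
    # Escape RTF control characters, then mark paragraph breaks.
    body = text.replace("\\", "\\\\").replace("{", "\\{").replace("}", "\\}")
    body = body.replace("\n", "\\par\n")
    return ("{\\rtf1\\ansi " + body + "}").encode("utf-8")

CONVERTERS = {".txt": to_txt, ".rtf": to_rtf}  # a real pipeline adds more formats

def publish(manuscript: Path, out_dir: Path) -> list:
    """Convert one manuscript into every supported output format."""
    text = manuscript.read_text(encoding="utf-8")
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for extension, convert in CONVERTERS.items():
        target = out_dir / (manuscript.stem + extension)
        target.write_bytes(convert(text))
        written.append(target)
    return written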

"The traditional book publishing industry is broken from the perspective of authors, readers and publishers", Coker says. "Most authors are never published. Authors lucky enough to land a book deal rarely sell enough books to earn royalties beyond their initial $5,000 to $10,000 advance. Trade publishers lose money on nearly 80 per cent of the books they publish because of the high costs of production, warehousing, distribution and marketing. Further dampening profitability for authors and publishers is the fact that bookstores often return up to 50 per cent of their unsold inventory for a full refund".

Coker concluded that in today's digital age, there is no reason why authors should not be able to publish anything they want, and that readers, not just publishers, should determine what is worth reading. He also determined that ebooks were the solution to enable low-cost, universal publishing for the benefit of authors and readers around the world. Thus Smashwords was born. "I think ebooks will become increasingly important to the book publishing industry, and will make books more affordable to a worldwide audience", continued Coker. "By digitizing a book, authors and publishers can immortalize their works, making them permanently discoverable to new audiences. For authors and publishers of out of print books, ebooks offer a great way to bring these works back to life".

www.smashwords.com

Twine: A Semantic Web Application

Radar Networks, a pioneer of semantic web technology, announced in late 2007 the invite-beta of Twine, a new service that gives users a smarter way to share, organize, and find information with people they trust.

Twine provides a smarter way for people to leverage and contribute to the combined brainpower of their relationships. "We call this 'knowledge networking'", said Radar Networks Founder and CEO Nova Spivack. "It's the next evolution of collective intelligence on the Web. Unlike social networking and community tools, Twine is not just about who you know, it's about what you know. Twine is the ultimate tool for gathering and sharing knowledge on the Web".

Twine helps people band together to share, organize and find information and knowledge around common interests and goals. Individuals can use Twine to share and keep track of information, regardless of where that information is primarily stored. Groups and teams can use Twine to collaborate and manage knowledge more productively.

Twine is unique because it understands the meaning of information and relationships and automatically helps to organize and connect related items. Using the Semantic Web, natural language processing, and artificial intelligence, Twine automatically enriches information and finds patterns that individuals cannot easily see on their own. Twine transforms any information into Semantic Web content, a richer and ultimately more useful and portable form of knowledge. Users of Twine also can locate information using powerful new patent-pending social and semantic search capabilities so that they can find exactly what they need, from people and groups they trust.
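
Twine's internal model is proprietary, but "Semantic Web content" in general means information expressed as RDF statements that machines can link and query. A minimal sketch using the rdflib Python library follows; the vocabulary and resource names are invented for illustration and do not reflect Twine's actual data model.

    # Minimal, illustrative RDF: describe a saved bookmark as machine-readable
    # triples. The example.org vocabulary is invented; only the Dublin Core
    # terms are a real, widely used standard.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, DCTERMS

    EX = Namespace("http://example.org/knowledge-networking/")

    g = Graph()
    bookmark = URIRef(EX["bookmark/42"])
    g.add((bookmark, RDF.type, EX.Bookmark))
    g.add((bookmark, DCTERMS.title, Literal("Semantic Web roadmap")))
    g.add((bookmark, DCTERMS.creator, Literal("Example User")))
    g.add((bookmark, EX.taggedWith, Literal("semantic-web")))

    print(g.serialize(format="turtle"))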

Twine pools and connects all types of information in one convenient online location, including contacts, email, bookmarks, RSS feeds, documents, photos, videos, news, products, discussions, notes, and anything else. Users can also author information directly in Twine like they do in weblogs and wikis. Twine is designed to become the center of a user's digital life.

"Web 3.0 is best-defined as the coming decade of the Web, during which time semantic technologies will help to transform the Web from a global file-server into something that is more like a worldwide database. By making information more machine-understandable, connected and reusable, the Semantic Web will enable software and websites to grow smarter", said Spivack. "Yahoo! was the leader of Web 1.0. Google is the leader of Web 2.0. We don't yet know who will be the leader of Web 3.0. It's a bold new frontier, but Twine is a strong first step, and we're very excited about it".

Twine is built on Radar Networks' patent-pending platform for Internet-scale social Semantic Web applications and services. The platform is built around W3C open standards for the Semantic Web and enables a range of APIs and features for outside applications and services to connect with Twine.

Individuals can sign up to be invited to the beta testing phase on the Twine homepage, and users will be let into the service in waves.

For more information on Radar Networks and Twine, visit: www.twine.com

ProQuest Acquires WebFeat, Plans Merger with Serials Solutions

ProQuest, a Cambridge Information Group Company, has acquired WebFeat, acclaimed pioneer of federated search, a technology that enables simultaneous search of all an organization's databases. ProQuest plans to merge WebFeat with Serials Solutions, its Seattle-based business unit and developer of e-resource access and management tools for libraries.

Under the leadership of Serials Solutions' general manager, Jane Burke, the strengths of WebFeat's and Serials Solutions' federated search platforms will be combined to create a single, market-leading solution. The new platform will debut in early 2009, providing libraries with more power and efficiency in accessing their data pools. The current search platforms from both Serials Solutions and WebFeat will continue to be supported as this development proceeds.

"With more and more e-resources in collections, librarians are looking hard at the tools that will deliver the greatest level of "discovery" and federated search is one of the most important", said Ms Burke. "Merging Serials Solutions and WebFeat will combine the best of this technology and create a superior tool for access".

Founded by information industry veteran Todd Miller, WebFeat introduced the first federated search technology in 1998 and has continued as a market leader, serving more than 16,500 libraries, conducting more than 172 million database searches annually. Mr Miller, who holds four patents in the field of federated search, will remain with WebFeat briefly as a consultant. While WebFeat staff will become part of Serials Solutions, its customer support and all development activities will continue uninterrupted.

Serials Solutions and WebFeat Customer FAQ: www.serialssolutions.com/downloads/WebFeat-FAQ.pdf

Omeka Version 0.9.0 Released as a Public Beta

The Center for History and New Media (CHNM), George Mason University has announced the public beta release of Omeka, version 0.9.0, which is now available for download. Omeka is a free, open source web platform for publishing collections and exhibitions online. The CHNM is partnering with the Minnesota Historical Society to develop Omeka as a next-generation online display platform for museums, historical societies, scholars, collectors, educators, and more. Designed for cultural institutions, enthusiasts, and educators, Omeka is easy to install and modify and facilitates community-building around collections and exhibits.

Omeka will come loaded with the following features:

  1. Dublin Core metadata structure and standards-based design that is fully accessible and interoperable (see the sketch after this list).

  2. Professional-looking exhibit sites that showcase collections without hiring outside designers.

  3. Theme-switching for changing the look and feel of an exhibit in a few clicks.

  4. Plug-ins for geolocation, bi-lingual sites, and a host of other possibilities.

  5. Web 2.0 Technologies, including:

    • tagging: allow users to add keywords to items in a collection or exhibit;

    • blogging: keep in touch with users through timely postings about collections and events; and

    • syndicating: update users about content with RSS feeds.
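
As a rough illustration of the Dublin Core structure mentioned in item 1 above, the fragment below assembles a minimal record using only the Python standard library; the dc element names are the standard Dublin Core element set, while the sample values are invented.

    # Build a minimal Dublin Core record of the kind a collection item might
    # carry. The dc:* names are the standard element set; values are invented.
    import xml.etree.ElementTree as ET

    DC = "http://purl.org/dc/elements/1.1/"
    ET.register_namespace("dc", DC)

    record = ET.Element("record")
    for name, value in [
        ("title", "Main Street, 1905"),
        ("creator", "Unknown photographer"),
        ("date", "1905"),
        ("subject", "Street scenes"),
    ]:
        ET.SubElement(record, "{%s}%s" % (DC, name)).text = value

    print(ET.tostring(record, encoding="unicode"))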

Development of Omeka is funded by grants from the Institute of Museum and Library Services and the Alfred P. Sloan Foundation.

Omeka home page: http://omeka.org/

Library of Congress Pilot Project Offers Historical Photograph Collections on Flickr

A recent posting from the Library of Congress blog:

"It is so exciting to let people know about the launch of a brand-new pilot project the Library of Congress is undertaking with Flickr, the enormously popular photo-sharing site that has been a Web 2.0 innovator. If all goes according to plan, the project will help address at least two major challenges: how to ensure better and better access to our collections, and how to ensure that we have the best possible information about those collections for the benefit of researchers and posterity. In many senses, we are looking to enhance our metadata (one of those Web 2.0 buzzwords that 90 per cent of our readers could probably explain better than me).

"The project is beginning somewhat modestly, but we hope to learn a lot from it. Out of some 14 million prints, photographs and other visual materials at the Library of Congress, more than 3,000 photos from two of our most popular collections are being made available on our new Flickr page, to include only images for which no copyright restrictions are known to exist".

"The real magic comes when the power of the Flickr community takes over. We want people to tag, comment and make notes on the images, just like any other Flickr photo, which will benefit not only the community but also the collections themselves. For instance, many photos are missing key caption information such as where the photo was taken and who is pictured. If such information is collected via Flickr members, it can potentially enhance the quality of the bibliographic records for the images".

"We're also very excited that, as part of this pilot, Flickr has created a new publication model for publicly held photographic collections called "The Commons". Flickr hopes as do we that the project will eventually capture the imagination and involvement of other public institutions, as well".

"From the Library's perspective, this pilot project is a statement about the power of the Web and user communities to help people better acquire information, knowledge and most importantly wisdom. One of our goals, frankly, is to learn as much as we can about that power simply through the process of making constructive use of it".

More information: www.loc.gov/rr/print/flickr_pilot.html

Library of Congress photos on Flickr FAQs: www.loc.gov/rr/print/flickr_pilot_faq.html

The Commons on Flickr: www.flickr.com/commons

Interactive NSF Site on the Birth and Growth of the Internet

The Internet is now a part of modern life, but how was it created? The National Science Foundation (NSF), an independent US federal agency that supports fundamental research and education across all fields of science and engineering, has created a multimedia/interactive website to show how the technology behind the Internet was created and how NSFNET, a network created to help university researchers in the 1980s, grew to become the Internet we know today.

NSF multimedia/interactive website: www.nsf.gov/news/special_reports/nsf-net/

NSF text-only website: www.nsf.gov/news/special_reports/nsf-net/textonly/

GPO Releases White Paper on Web Harvesting Project

GPO has announced the release of a white paper on the results of the recently completed Web Harvesting pilot project to capture official Environmental Protection Agency (EPA) publications in scope of GPO's information dissemination programs. In addition to the white paper there are four attachments freely available from the Project page, covering:

  • Statement of Work (Attachment 1, PDF);

  • Criteria and Parameters (Attachment 2, PDF);

  • Blue Angel Technologies Rules (Attachment 3, PDF); and

  • Information International Associates, Inc. Rules (Attachment 4, PDF).

The goal of Web publication harvesting is to discover and capture newly identified online publications that are within scope of GPO's information dissemination programs. GPO's publication harvesting activities include planning for and discovering publications, making scope determinations, and capturing and archiving online US Government publications that fall within scope of those programs. GPO is working to accomplish this using increasingly automated technologies being developed in conjunction with the Future Digital System (FDsys).

The white paper reports the results of the pilot, including a summary of the analysis of the work performed, an assessment of lessons learned, and the planned direction and next steps for further development of the harvesting function, to be implemented during Release 2 of GPO's FDsys. The first public release of FDsys is scheduled for November 2008.

As a first step in learning about automated Web publication discovery and harvesting technologies and methodologies, GPO contracted with two private companies on this pilot. They collaborated to develop rules and instructions that would determine whether EPA content discovered was in scope for GPO's dissemination programs. Three separate crawls were conducted on the sites over a six-month period, and harvester rules and instructions were refined and revised between crawls.
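
The pilot's crawlers and rule sets are proprietary to the contractors. Purely to illustrate the basic harvesting pattern (fetch a page, collect links to candidate publications, leave scope decisions to separate rules), a standard-library sketch might look like the following; the starting URL is a placeholder.

    # Illustrative harvesting pattern only: fetch one page and collect links
    # that look like candidate publications (PDFs). The URL is a placeholder;
    # the GPO pilot used the contractors' own crawlers and richer scope rules.
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    START_URL = "http://www.example.gov/publications/"  # placeholder

    class PDFLinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href") or ""
                if href.lower().endswith(".pdf"):
                    self.links.append(urljoin(START_URL, href))

    collector = PDFLinkCollector()
    with urlopen(START_URL) as response:
        collector.feed(response.read().decode("utf-8", errors="replace"))

    for link in collector.links:
        print(link)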

GPO Web Harvesting Pilot Project: www.access.gpo.gov/su_docs/fdlp/harvesting/

Web Harvesting White Paper (pdf): www.access.gpo.gov/su_docs/fdlp/harvesting/whitepaper.pdf

Digital Repositories and Related Services: DRIVER Project Studies and Papers

The EC-funded DRIVER project is leading the way as the largest initiative of its kind in helping to enhance repository development worldwide. Its main objective is to build a virtual, European-scale network of existing institutional repositories, using technology that will manage the physically distributed repositories as one large-scale virtual content source. As part of the DRIVER project, three strategic and coordinated studies have been conducted on digital repositories and related services. They are aimed at repository managers, content providers, research institutions and decision makers: all key stakeholders taking an active part in the creation of the digital repository infrastructure for e-research and e-learning. SURFfoundation is the Dutch partner in the DRIVER project and is responsible for the publication of the studies.

The "European Repository Landscape" is a study on different aspects of the European repository infrastructure. The study presents a complete inventory of the state of digital repositories in the 27 countries of the European Union and provides a basis to contemplate the next steps in driving forward an interoperable infrastructure at a European level.

A key question in the development of institutional repositories is how to make a digital repository and related services work for an institution. This question is addressed in the study "A DRIVER's Guide to Repositories", edited by Kasja Weenink, Leo Waaijers and Karen van Godtsenhoven. It focuses on five issues which are essential to the establishment, development or sustainability of a digital repository. These are covered by the contributions of Alma Swan (Key Perspectives Ltd.), who provides guidelines relevant to business modeling for digital repositories and related services; Wilma Mossink (SURF), who addresses intellectual property rights issues; Vanessa Proudman (Tilburg University), who offers insight into the populating of repositories; and René van Horik (DANS, Data Archiving and Networked Services) and Barbara Sierman (KB, National Library of the Netherlands), who address issues concerning data curation and long-term preservation. This study focuses on inter- and transnational approaches which go beyond local interests.

The "Investigative Study of Standards for Digital Repositories and Related Services" by Muriel Foulonneau and Francis André (CNRS-ISTI) reviews the current standards, protocols and applications in the domain of digital repositories. The authors also explore possible future issues that is to say, which steps need to be taken now in order to comply with future demands.

For more information and freely accessible downloads of the studies, please see: www.driver-community.eu or www.driver-support.eu/en/studies.html

Additionally, on 16 and 17 January 2008, DRIVER II successfully carried out its first Summit in Goettingen, Germany. Approximately 100 invited representatives from the European Community, including representatives of the European Commission, over 20 spokespersons for European repository initiatives, and experts in different repository-related fields from Europe, the USA, Canada and South Africa came together to discuss their experiences and concrete actions with respect to the further building of cross-country repository infrastructures.

Those interested in learning more about DRIVER and the topics discussed at the First DRIVER Summit can now view the presentations online. The presentations are available on the Summit webpage: www.driver-support.eu/multi/DRIVERSummit.php

Blue Ribbon Task Force on a Sustainable Digital Future

International leaders representing a variety of interests and areas of expertise have been named to a distinguished Blue Ribbon Task Force to develop actionable recommendations for the economic sustainability of preserving, and providing persistent access to, digital information for future generations.

The Blue Ribbon Task Force on Sustainable Digital Preservation and Access is co-chaired by Fran Berman, director of the San Diego Supercomputer Center at University of California, San Diego and a pioneer in data cyberinfrastructure; and Brian Lavoie, a research scientist and economist with OCLC, the world's largest library service and research organization.

"This Task Force is uniquely composed of people with economic and technical expertise", said Lucy Nowell, Program Director for the Office of Cyberinfrastructure at NFS, in announcing the members of the task force. "We believe this rare combination will enable the group to explore dimensions of the sustainability challenge that have never been addressed before, and make the societal and institutional cases for supporting data repositories".

The Task Force members include:

  • Francine Berman, Director, San Diego Supercomputer Center and High Performance Computing Endowed Chair, UC San Diego. [co-Chair].

  • Brian Lavoie, Research Scientist, OCLC [co-Chair].

  • Paul Ayris, Director of Library Services, University College London, England.

  • G. Sayeed Choudhury, Associate Dean of Libraries, Johns Hopkins University, Baltimore, Maryland.

  • Elizabeth Cohen, Academy of Motion Picture Arts and Sciences, Stanford University, Stanford, California.

  • Paul Courant, University Librarian, University of Michigan, Ann Arbor, Michigan.

  • Lee Dirks, Director of Scholarly Communications, Microsoft Corp.

  • Amy Friedlander, Director of Programs, Council on Library and Information Resources (CLIR), Washington, DC.

  • Vijay Gurbaxani, Senior Associate Dean, Paul Merage School of Business, University of California at Irvine.

  • Anita Jones, Professor of Engineering and Applied Science, University of Virginia, Charlottesville, Virginia.

  • Ann Kerr, Independent Consultant, AK Consulting, La Jolla, California.

  • Clifford Lynch, Executive Director, Coalition for Networked Information, Washington, DC.

  • Dan Rubinfeld, Professor of Law and Professor of Economics, University of California at Berkeley.

  • Chris Rusbridge, Director, Digital Curation Centre, University of Edinburgh.

  • Roger Schonfeld, Manager of Research, Ithaka, Inc.

  • Abby Smith, Historian and Consulting Analyst to the Library of Congress (based in San Francisco, California).

  • Anne Van Camp, Director, Smithsonian Institution Archives, Washington, DC.

Using its members as a gateway, the Task Force will convene a broad set of international experts from the academic, public and private sectors who will participate in quarterly discussion panels. The group will publish two substantial reports of its findings, including a final report in late 2009 that will offer a set of actionable recommendations for digital preservation, grounded in a general economic framework for meeting those objectives.

With literally every "bit" of information now being digitally processed and stored, our computer-centric society is faced with one of the greatest challenges of our time: how best to preserve and efficiently access these vast amounts of digital data well into the future and do so in an economically sustainable manner.

While the Information Age has created a global network society in which access to digital information via the internet and other means has revolutionized science, education, commerce, government, and other aspects of our lives, this technology has also spawned some unwanted side-effects. Unlike earlier information media, including stone, parchment and paper, minuscule electronic "data banks", often stored on memory sticks, hard drives and magnetic tape, are far more fragile and susceptible to obsolescence and loss.

The challenge is considerable on several fronts, and epic in proportion. For this reason, the Blue Ribbon Task Force on Sustainable Digital Preservation was launched and funded by NSF and the Andrew W. Mellon Foundation in partnership with the Library of Congress, the Joint Information Systems Committee of the United Kingdom, the CLIR, and the National Archives and Records Administration. Their two-year mission: to develop a viable economic sustainability strategy to ensure that today's data will be available for further use, analysis and study.

Simply keeping pace with the rapid change in technology that has made things such as VCRs and cassette tapes curious antiquities in our own lifetimes is just one concern to be addressed by the task force. Establishing economically sustainable infrastructures and global standards to ensure that massive quantities of digital information are efficiently migrated to new media is also critical. Yet another challenge will be managing the storage of such vast amounts of data so that a wealth of scientific, educational and business information is not lost. Last but not least, how will we determine which digital data should be saved?

"NSF and other organizations, both national and international, are funding research programs to address these technical and cyberinfrastructure issues", said Nowell. "This is the only group I know of that is chartered to help us understand the economic issues surrounding sustainable repositories and identify candidate solutions".

The first meeting of the Blue Ribbon Task Force on Sustainable Digital Preservation and Access was held in Washington, DC, 29-30 January 2008. The group will also establish a public Web site to solicit comments and encourage dialogue on the issue of digital preservation.

Preservation in the Age of Large-Scale Digitization: White Paper from CLIR

The digitization of millions of books under programs such as Google Book Search and Microsoft Live Search Books is dramatically expanding our ability to search and find information. The aim of these large-scale projects to make content accessible is interwoven with the question of how one keeps that content, whether digital or print, fit for use over time. A new report from the CLIR addresses this question and offers recommendations.

Preservation in the Age of Large-Scale Digitization by Oya Y. Rieger examines large-scale digital initiatives to identify issues that will influence the availability and usability, over time, of the digital books these projects create. Ms Rieger is interim assistant university librarian for digital library and information technologies at the Cornell University Library.

The paper describes four large-scale projects (Google Book Search, Microsoft Live Search Books, the Open Content Alliance, and the Million Book Project) and their digitization strategies. It then discusses a range of issues affecting the stewardship of the digital collections they create: selection, quality in content creation, technical infrastructure, and organizational infrastructure. The paper also attempts to foresee the likely impacts of large-scale digitization on book collections.

White paper download available at: www.clir.org/pubs/abstract/pub141abst.html

Pathbreaking Series on Digital Media and Learning Available for Free Download

Six new volumes of The MacArthur Series on Digital Media and Learning examine the effect of digital media tools on how people learn, network, communicate, and play, and how growing up with these tools may affect a person's sense of self, how they express themselves, and their ability to learn, exercise judgment, and think systematically. The full text of each volume in the series is available free of charge and open access (OA) thanks to the generous support of the MacArthur Foundation.

The field-defining volumes were compiled by some of the best minds in digital media and learning (Lance Bennett, David Buckingham, Anna Everett, Andrew J. Flanagin, Tara McPherson, Miriam J. Metzger, and Katie Salen) and contain the ideas of 56 authors, an advisory panel of leading experts, and even the voices of the young people themselves.

Lance Bennett points out that the future of democracy is in the hands of the young people of the digital age in Civic Life Online: Learning How Digital Media Can Engage Youth. This volume looks at how online networks might inspire conventional political participation, but also at how creative uses of digital technologies are expanding the boundaries of politics and public issues. Stanford University's Howard Rheingold looks at using participatory media and public voice to encourage civic engagement, Kathryn C. Montgomery from American University looks at the intersection of practice, policy, and the marketplace, and Michael Xenos and Kirsten Foot tackle the generational gap in online politics. As they point out, it's "not your father's internet anymore".

The contributors to Digital Media, Youth, and Credibility, edited by Miriam J. Metzger and Andrew J. Flanagin, look particularly at youth audiences and experiences, considering the implications of wide access and the questionable credibility of information for youth and learning. Specifically, R. David Lankes looks at how we trust the internet and at a new approach to credibility tools, Frances Jacobson Harris looks at the challenges teachers and schools now face, and Matthew S. Eastin outlines a cognitive developmental approach to youth perceptions of credibility.

In The Ecology of Games, noted game designer Katie Salen of Parsons The New School for Design has gathered essays not only from those who study games and learning but also from those who create such worlds. Jim Gee's essay cuts through the debate on "are games good or bad?" to describe the ways in which well-designed games foster learning both within and beyond the play, while Mimi Ito chronicles the cultural history of children's software and the role educational games have played in it. The volume also contains an article on participatory culture by Cory Ondrejka, who as CTO of Linden Lab helped create Second Life, and a case study on collective intelligence gaming by Jane McGonigal, premier puppet master of the new genre of alternate reality games.

Youth, Identity, and Digital Media, edited by David Buckingham, explores how young people use digital media to share ideas and creativity and to participate in networks both small and large, local and global, intimate and anonymous. The contributors look at the emergence of new genres and forms, from SMS and instant messaging to home pages, blogs, and social networking sites. For example, University of California, Berkeley researcher danah boyd uses case studies to look at the influence of social network sites like MySpace and Facebook on the social lives of teenagers.

The range of topics touched on in Tara McPherson's volume Digital Youth, Innovation, and the Unexpected is perhaps the widest of all in the collection. Lest we forget lessons learned from other eras, she includes an essay by Justine Cassell and Meg Cramer of Northwestern on moral panic in the early days of the telegraph and telephone, while Christian Sandvig of Illinois and Oxford evokes the collective imagination applied in the early days of wireless technology and analogizes it to the era of short-wave radio. Sarita Yardi of Georgia Tech describes how Berkeley students use back channels in the classroom to enhance the instructional experience, and Henry Lowood of Stanford traces the roots of a new genre of movie-making that uses game engines: "machine cinema", or machinima.

In Learning Race and Ethnicity, Anna Everett of UC Santa Barbara draws on the work of a diverse group of scholars, including Chela Sandoval and Guisela Latorre from her own campus, Raiford Guins of the University of the West of England, Antonio Lopez of World Bridger Media, Jessie Daniels of Hunter College, Doug Thomas of USC and others, who draw out lessons from Chicana/o activism, Hip Hop, and digital media in Native America, as well as hate speech and racism in online games.

Beginning in 2008, the new International Journal of Learning and Media will continue the investigation of the effects of digital media on young people and learning. Supported by the MacArthur Foundation, the new journal will be published quarterly by The MIT Press in partnership with the Monterey Institute for Technology and Education. Funds also have been provided to support an online community for discussing the articles in the journal and the issues that are central to the emerging field. The series and the journal are part of MacArthur's $50 million digital media and learning initiative that is gathering evidence about the impact of digital media on young people's learning and what it means for education. The initiative is marshaling what is already known about the field and seeding innovation for continued growth.

MacArthur Foundation Series on Digital Media and Learning downloads: www.mitpressjournals.org/loi/dmal

CrossRef Passes 30 Millionth DOI

The CrossRef linking service has announced that it recently registered its 30 millionth DOI. While the majority of CrossRef's Digital Object Identifiers (DOIs) are assigned to online journal articles, there are now over 2.5 million DOI names assigned to other types of publications, including conference proceedings, dissertations, books, datasets, and technical reports. CrossRef's dues-paying membership exceeds 500, with over 2,400 publishers and societies participating in CrossRef linking.

The 30 millionth DOI, doi:10.1103/PhysRevE.76.055201, was registered by The American Physical Society for their journal Physical Review E.
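
A registered DOI resolves through the public DOI proxy to the publisher's landing page for the item; as a minimal sketch, the 30 millionth DOI above can be followed like this.

    # Resolve a DOI through the public DOI proxy and report where it lands.
    from urllib.request import urlopen

    doi = "10.1103/PhysRevE.76.055201"  # the 30 millionth DOI noted above
    with urlopen("http://dx.doi.org/" + doi) as response:
        # The proxy redirects to the publisher's page for the article.
        print(response.geturl())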

CrossRef adds an average of 550,000 new items every month to its DOI registry and linking service. Of the over 6.5 million DOIs created and assigned during the past year, a large number are associated with archival, or back-file, journal articles, as several large publishers have recently undertaken extensive retro-digitization projects. These include the Royal Society, Elsevier, Springer, Sage, Kluwer, Wiley, Blackwell, and the American Association for the Advancement of Science. In 2007 alone, the scholarly journal archive JSTOR added a total of 325,251 DOIs to CrossRef.

Other noteworthy deposit events during the past year:

  • The Office of Scientific and Technical Information of the US Department of Energy deposited more than 86,000 DOE technical reports with CrossRef, the earliest report dating from 1933.

  • SciELO (the Scientific Electronic Library Online), based in Brazil, added over 26,000 DOIs.

  • The Persee Program, established by the Ministry of State Education, Higher Education and Research in France, deposited over 30,000 DOIs, of which over 14,000 were for legacy content.

CrossRef website: www.crossref.org

NISO and UKSG Launch KBART Working Group

NISO and the UK Serials Group (UKSG) have launched the Knowledge Base and Related Tools (KBART) working group. The group will comprise representatives from publishers, libraries, link resolver and ERM vendors, subscription agents and other parties involved in the creation of, provision of data to, and implementation of knowledge bases. These key components of the OpenURL supply chain play a critical role in the delivery of the appropriate copy to end-users of content in a networked environment.
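
For illustration, an OpenURL request encodes a citation as key/value pairs that a link resolver matches against its knowledge base to locate the appropriate copy. In the sketch below the resolver address and citation values are invented, while the keys follow the OpenURL 1.0 KEV convention for journal articles.

    # Build an illustrative OpenURL 1.0 (KEV) request for a journal article.
    # The resolver base URL and citation values are invented; the keys follow
    # the standard KEV journal format.
    from urllib.parse import urlencode

    RESOLVER = "http://resolver.example.edu/openurl"  # placeholder resolver

    citation = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.jtitle": "Journal of Example Studies",
        "rft.issn": "1234-5678",
        "rft.volume": "12",
        "rft.spage": "34",
        "rft.date": "2008",
        "rft.atitle": "An example article",
    }

    print("%s?%s" % (RESOLVER, urlencode(citation)))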

The establishment of the group follows last year's publication of the UKSG-sponsored research report, Link Resolvers and the Serials Supply Chain. The report identified inefficiencies in the supply and manipulation of journal article data that impact the efficacy and potential of OpenURL linking. The KBART working group will progress the report's recommendations; its mandate has been extended beyond the serials supply chain to consider best practice for supply of data pertaining to e-resources in general.

"Knowledge bases are the key to successful OpenURL linking and as such are already an essential facilitator within a sophisticated, valuable technology", comments NISO co-chair Peter McCracken, Director of Research at Serials Solutions. "But they have the potential to achieve so much more if we can smooth out the problems with inaccurate, obsolete data. We plan to create simple guidelines to help data providers understand how they can optimize their contribution to the information supply chain".

"Many content providers are simply unaware of the benefits to them of supplying knowledge bases with accurate data in a timely manner, so a key objective for us is education", adds UKSG co-chair Charlie Rapple, Group Marketing Manager at Publishing Technology. "We'll be taking a back-to-basics approach that helps stakeholders to learn about Ð and to embrace Ð the various technologies that depend on their data. All parties in the information supply chain can benefit from the improvements we hope to make".

For more information about KBART, visit: www.uksg.org/kbart/

NISO website: www.niso.org

UKSG website: www.uksg.org

SERU Becomes NISO Recommended Practice

Slightly more than one year after the Shared E-Resource Understanding (SERU) Working Group was formed, NISO has issued SERU: A Shared Electronic Resource Understanding as part of its Recommended Practice series (NISO-RP-7-2008). The SERU document codifies best practices and is freely available from the NISO website. Publication follows a six-month trial use period, during which time librarians and publishers reported on their experiences using the draft document.

SERU offers publishers and librarians the opportunity to save both the time and the costs associated with a negotiated and signed license agreement by agreeing to operate within a framework of shared understanding and good faith.

Co-chair Judy Luther, President of Informed Strategies, added, "Based on a decade of licensing experience, SERU represents widely adopted practices already in place in North America, and is both library and publisher friendly".

In accordance with plans laid out by the SERU Working Group, which concluded its work with publication of the Recommended Practice, NISO will produce additional materials to help publishers and libraries adopt a SERU approach, maintain a registry of participants, and continue to promote, educate, and plan for regular review and evaluation of SERU.

SERU document: www.niso.org/committees/seru/

NISO website: www.niso.org

COUNTER Develops Two New Reports for Library Consortia

The advent of the SUSHI protocol has greatly facilitated the handling of large volumes of usage data, which is a particular advantage for consortial reporting. For this reason COUNTER, a multi-agency initiative whose objective is to provide a set of internationally accepted, extendible Codes of Practice that allow the usage of online information products and services to be measured more consistently, has developed, with the co-operation of the international library consortium community, two new reports for library consortia that are specified only in XML format. These reports, Consortium Report 1 and Consortium Report 2, are described below:

  • Consortium Report 1: number of successful full-text requests by month (XML only). This report is a single XML file, broken down by consortium member, which contains the full-text usage data for every online journal and book taken by individual consortium members.

  • Consortium Report 2: total searches by month and database (XML only). This report is a single XML file, broken down by consortium member, which contains the search counts for each database taken by individual consortium members.

Vendors are not required to provide the new consortium reports to be compliant with Release 2 of the COUNTER Code of Practice, but are urged to provide them where possible. Not only will this allow vendors to provide an extra, valuable service to their library consortium customers, but it will also enable them to gain more experience with handling usage data in XML format, which is likely to become the preferred format in the future.
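
The authoritative XML schema for these reports is published at the NISO/SUSHI address below. Purely as an illustration of consuming consortium usage data in XML, the sketch that follows parses a simplified, made-up report structure (the element names are placeholders, not the COUNTER schema) and totals full-text requests per member.

    # Illustrative only: the element names below are simplified placeholders,
    # not the real COUNTER XML schema (see the NISO/SUSHI site for that).
    import xml.etree.ElementTree as ET

    SAMPLE_REPORT = """
    <ConsortiumReport>
      <Member name="Library A">
        <Journal title="Journal of Examples"><FullTextRequests>120</FullTextRequests></Journal>
        <Journal title="Example Review"><FullTextRequests>45</FullTextRequests></Journal>
      </Member>
      <Member name="Library B">
        <Journal title="Journal of Examples"><FullTextRequests>60</FullTextRequests></Journal>
      </Member>
    </ConsortiumReport>
    """

    root = ET.fromstring(SAMPLE_REPORT)
    for member in root.iter("Member"):
        total = sum(int(j.findtext("FullTextRequests", "0"))
                    for j in member.iter("Journal"))
        print("%s: %d full-text requests" % (member.get("name"), total))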

Further information on the new XML COUNTER consortium reports may be found on the COUNTER website at: www.projectcounter.org/r2/R2_Appendix_H_Optional_Additional_Usage_Reports.doc and also on the NISO/SUSHI website at: www.niso.org/schemas/sushi/#COUNTER, where the XML schema covering these reports is available.

SUSHI protocol: www.niso.org/committees/SUSHI/SUSHI_comm.html
