New & Noteworthy

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 25 January 2008

Citation

(2008), "New & Noteworthy", Library Hi Tech News, Vol. 25 No. 1. https://doi.org/10.1108/lhtn.2008.23925aab.001

Publisher

Emerald Group Publishing Limited

Copyright © 2008, Emerald Group Publishing Limited


New & Noteworthy

Article Type: New & Noteworthy From: Library Hi Tech News Volume 25, issue 1.

The Really Modern Library

A recent posting from if:book, the blog of the Institute for the Future of the Book:

"We're in the very early stages of devising, in partnership with Peter Brantley and the Digital Library Federation, what could become a major initiative around the question of mass digitization. It's called `The Really Modern Library'."

"Over the course of this month (October 2007), starting Thursday in Los Angeles, we're holding a series of three invited brainstorm sessions (the second in London, the third in New York) with an eclectic assortment of creative thinkers from the arts, publishing, media, design, academic and library worlds to better wrap our minds around the problems and sketch out some concrete ideas for intervention."

"The goal of this project is to shed light on the big questions about future accessibility and usability of analog culture in a digital, networked world."

"We are in the midst of a historic `upload,' a frenetic rush to transfer the vast wealth of analog culture to the digital domain. Mass digitization of print, images, sound and film/video proceeds apace through the efforts of actors public and private, and yet it is still barely understood how the media of the past ought to be preserved, presented and interconnected for the future. How might we bring the records of our culture with us in ways that respect the originals but also take advantage of new media technologies to enhance and reinvent them?"

"Our aim with the Really Modern Library project is not to build a physical or even a virtual library, but to stimulate new thinking about mass digitization and, through the generation of inspiring new designs, interfaces and conceptual models, to spur innovation in publishing, media, libraries, academia and the arts."

"The meeting in October will have two purposes. The first is to deepen and extend our understanding of the goals of the project and how they might best be achieved. The second is to begin outlining plans for a major international design competition calling for proposals, sketches, and prototypes for a hypothetical `really modern library.' This competition will seek entries ranging from the highly particular (for e.g., designs for digital editions of analog works, or new tools and interfaces for handling pre-digital media) to the broadly conceptual (ideas of how to visualize, browse and make use of large networked collections)."

"This project is animated by a strong belief that it is the network, more than the simple conversion of atoms to bits, that constitutes the real paradigm shift inherent in digital communication. Therefore, a central question of the Really Modern Library project and competition will be: how does the digital network change our relationship with analog objects? What does it mean for readers/researchers/learners to be in direct communication in and around pieces of media? What should be the *social* architecture of a really modern library?"

www.futureofthebook.org/blog/archives/2007/10/the_really_modern_library.html

Institute for the Future of the Book: www.futureofthebook.org/

Digital Library Federation: http://diglib.org/

OCA and Boston Library Consortium Partner on Digitization Project

The Boston Library Consortium, Inc. (BLC) announced in October 2007 that it will partner with the Open Content Alliance (OCA) to build a freely accessible library of digital materials from all 19 member institutions. The BLC is the first large-scale consortium to embark on such a self-funded digitization project with the OCA. The BLC's digitization efforts will be based in a new scanning center, the Northeast Regional Scanning Center, unveiled in October at the Boston Public Library.

The Consortium will offer high-resolution, downloadable, reusable files of public domain materials. Using Internet Archive technology, books from all 19 libraries will be scanned at a cost of just 10 cents per page. Collectively, the BLC member libraries provide access to over 34 million volumes. The scanning center at the heart of the BLC/OCA partnership is located at the Boston Public Library.

The BLC is an association of academic and research libraries located in Massachusetts, Connecticut, New Hampshire, and Rhode Island, dedicated to sharing human and information resources to advance the research and learning of its constituency. The members of the BLC are Boston College, Boston Public Library, Boston University, Brandeis University, Brown University, the Marine Biological Laboratory and Woods Hole Oceanographic Institution, MIT, Northeastern University, the State Library of Massachusetts, Tufts University, University of Connecticut, University of Massachusetts Amherst, University of Massachusetts Boston, University of Massachusetts Dartmouth, University of Massachusetts Lowell, University of Massachusetts Medical Center, University of New Hampshire, Wellesley College, and Williams College.

According to Doron Weber, Program Director, Universal Access to Recorded Knowledge, at the Alfred P. Sloan Foundation, "The Alfred P. Sloan Foundation, which has supported the OCA from its inception in 2005, salutes this bold move by the BLC and its 19 member libraries to step up to the plate and embrace the great potential of mass digitization in a truly open, non-profit and non-exclusive basis. Unlike corporate backed efforts by Google, Microsoft, Amazon et al., which all impose different, albeit understandable, levels of restriction to protect their investment, the BLC has shown libraries all across the country the right way to take institutional responsibility and manage this historic transition to a universal digital archive that serves the needs of scholars, researchers and the general public without compromise. Bravo for the BLC and the OCA!"

Open Content Alliance website: www.opencontentalliance.org/

Google and IBM Announce University Initiative to Address Internet-Scale Computing Challenges

In October 2007, Google and IBM announced an initiative to promote new software development methods to help students and researchers address the challenges of Internet-scale applications in the future. The goal of this initiative is to improve computer science students' knowledge of highly parallel computing practices to better address the emerging paradigm of large-scale distributed computing. IBM and Google are teaming up to provide hardware, software and services to augment university curricula and expand research horizons. With their combined resources, the companies hope to lower the financial and logistical barriers for the academic community to explore this emerging model of computing.

The University of Washington was the first to join the initiative. A small number of universities will also pilot the program, including Carnegie Mellon University, Massachusetts Institute of Technology, Stanford University, the University of California at Berkeley and the University of Maryland. In the future, the program will be expanded to include additional researchers, educators and scientists.

Fundamental changes in computer architecture and increases in network capacity are encouraging software developers to take new approaches to computer-science problem solving. For web software such as search, social networking and mobile commerce to run quickly, computational tasks often need to be broken into hundreds or thousands of smaller pieces to run across many servers simultaneously. Parallel programming techniques are also used for complex scientific analysis such as gene sequencing and climate modeling.

"This project combines IBM's historic strengths in scientific, business and secure-transaction computing with Google's complementary expertise in Web computing and massively scaled clusters," said Samuel J. Palmisano, chairman, president and chief executive officer, IBM. "We're aiming to train tomorrow's programmers to write software that can support a tidal wave of global Web growth and trillions of secure transactions every day."

For this project, the two companies have dedicated a large cluster of several hundred computers (a combination of Google machines and IBM BladeCenter and System x servers) that is planned to grow to more than 1,600 processors. Students will access the cluster via the Internet to test their parallel programming course projects. The servers will run open source software including the Linux operating system, Xen virtualization and Apache's Hadoop project, an open source implementation of Google's published computing infrastructure, specifically MapReduce and the Google File System (GFS).
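
To make the programming model concrete, here is a minimal word-count job in the style of Hadoop Streaming, the Hadoop utility that lets mappers and reducers be written as ordinary scripts reading standard input. The scripts are illustrative only and are not drawn from the course materials:

    # mapper.py -- a minimal MapReduce mapper: emit (word, 1) for every
    # word on standard input. Hadoop Streaming runs many copies of this
    # script in parallel, one per input split.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(word + "\t1")

    # reducer.py -- Hadoop sorts the mapper output by key, so all counts
    # for a given word arrive together and can be summed in one pass.
    import sys

    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(current + "\t" + str(total))
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(current + "\t" + str(total))

The framework handles distribution, shuffling, sorting and fault tolerance, so the same two scripts run unchanged whether the input is a few kilobytes or the output of a thousand-machine crawl.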

At the University of Washington, students were able to harness the power of distributed computing to produce complicated programs such as software that scans voluminous Wikipedia edits to identify spam and organizes global news articles by geographic location.

To simplify the development of massively parallel programs, Google and IBM have created the following resources:

  • A cluster of processors running an open source implementation of Google's published computing infrastructure (MapReduce and GFS from Apache's Hadoop project).

  • A Creative Commons licensed university curriculum developed by Google and the University of Washington focusing on massively parallel computing techniques available at: http://code.google.com/edu/content/parallel.html

  • Open source software designed by IBM to help students develop programs for clusters running Hadoop. The software works with Eclipse, an open source development platform. The plugin is currently available at: http://lucene.apache.org/hadoop/

  • Management, monitoring and dynamic resource provisioning of the cluster by IBM using IBM Tivoli systems management software.

  • A website to encourage collaboration among universities in the program. This will be built on Web 2.0 technologies from IBM's Innovation Factory.

For more information on the IBM Academic Initiative, visit www.ibm.com/university

The Future of Scholarly Communication: Building the Infrastructure for Cyberscholarship

A white paper entitled "The Future of Scholarly Communication: Building the Infrastructure for Cyberscholarship", authored by William Y. Arms and Ronald L. Larsen, was released in September 2007 and is freely available for download. The report is the result of a workshop on data-driven science and data-driven scholarship sponsored by the National Science Foundation (NSF) and the Joint Information Systems Committee (JISC) held in Phoenix, Arizona, 17 to 19 April 2007. The invited workshop participants included representatives from Europe and the US with affiliations in government, higher education, industry, and private foundations.

The goal of the workshop was to unite the work of a series of studies and reports that highlighted the ever-growing importance for all academic fields of data and information in digital formats. Studies have looked at digital information in science and in the humanities; at the role of data in Cyberinfrastructure; at repositories for large-scale digital libraries; and at the challenges of archiving and preservation of digital information. The workshop website also includes the text of the position papers that were written in preparation for the workshop, including:

  • Bill Arms, "Repositories for Large-scale Digital Libraries."

  • Fran Berman, Brian E. C. Schottlaender, "The Need for Formalized Trust in Digital Repository Collaborative Infrastructure."

  • Laura E. Campbell, "How Digital Technologies Have Changed the Library of Congress: Inside and Outside."

  • Sayeed Choudhury, "The Relationship between Data and Scholarly Communication."

  • Bas Cordewener, "Institutional Repositories in the Netherlands, a national and international perspective."

  • Gregory Crane, "Repositories, Cyberinfrastructure and the Humanities."

  • Linda Frueh, "Access Tools: Bridging Individuals to Information."

  • Jerry Goldman and Andrew Gruen, "Complexity and scale in audio archives."

  • Babak Hamidzadeh, "Scale: A repository challenge."

  • Ken Hamma, "Professionally Indisposed to Change."

  • Rick Luce, "eDatabase Lessons for an eData World."

  • Janet H. Murray, "Genre creation as cognition and collective knowledge making."

  • Peter Murray-Rust, "Data-driven science: a scientist's view."

  • Michael L. Nelson, "I Don't Know and I Don't Care."

  • Joyce Ray, "Discussion Group on Individual Users."

  • Jürgen Renn, "From Research Challenges of the Humanities to the Epistemic Web (Web 3.0)."

  • David Rosenthal, "Engineering Issues In Preserving Large Databases."

  • Abby Smith, "Thoughts on Scale and Complexity."

  • Beth Stewart, "NEH and Digital Humanities."

  • Eric F. Van de Velde, "Workshop on Data-Driven Science & Scholarship: Organizations."

  • Donald J. Waters, "Doing much more than we have so far attempted."

Workshop website: www.sis.pitt.edu/~repwkshop/

White paper: www.sis.pitt.edu/~repwkshop/NSF-JISC-report.pdf

Dioscuri: The Emulator for Digital Preservation

The Koninklijke Bibliotheek, national library of the Netherlands, and the Nationaal Archief of the Netherlands are proud to present the world's first modular emulator designed for digital preservation: Dioscuri.

Dioscuri is capable of emulating an Intel 8086-based computer platform with support for VGA graphics, screen, keyboard, and storage devices like a virtual floppy drive and hard drive. With these components Dioscuri successfully runs 16-bit operating systems like MS-DOS and applications such as WordPerfect 5.1, DrawPerfect 1.1 and Norton Commander. Furthermore, it is capable of running many nostalgic DOS games and a simple Linux kernel. And when you finally open your long-forgotten WP 5.1 files, you can extract the text from the emulated environment into your current working environment using a simple clipboard feature.

The design of Dioscuri is based on two key features for digital preservation: portability and flexibility. Dioscuri is portable because it is built on top of a virtual layer, called a virtual machine (VM). By using a VM in between the real computer and the emulated one, Dioscuri becomes less dependent on the actual hardware and software it runs on. This approach offers better portability to other platforms, which ensures longevity when a platform fails to survive over time. Dioscuri has been shown to run reliably on PC, Apple and Sun computers without any alteration to the application.

Flexibility is gained by a component-based architecture. Each component, called a module, imitates the functionality of a particular hardware component (e.g. processor, memory, hard disk). In concept, any computer can be emulated by combining these modules. Dioscuri is configured through a user-friendly graphical interface, which stores the settings in an XML file. Both its portability and flexibility make Dioscuri different from any other emulator that exists today and ensure that it is prepared for the future.
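
A rough sketch of what such a component-based design looks like in code may be helpful. The module names and XML layout below are invented for illustration and are not Dioscuri's actual configuration format (Dioscuri itself is written in Java):

    # Hypothetical sketch of a component-based emulator: each hardware
    # part is a module behind one interface, and a machine is assembled
    # from an XML configuration.
    import xml.etree.ElementTree as ET

    class Module:
        def start(self):
            raise NotImplementedError

    class CPU(Module):
        def start(self):
            print("CPU: starting fetch/decode/execute loop")

    class Memory(Module):
        def start(self):
            print("Memory: mapping 1 MiB address space")

    MODULE_TYPES = {"cpu": CPU, "memory": Memory}

    def build_machine(xml_config):
        # Instantiate one module per <module type="..."/> element.
        root = ET.fromstring(xml_config)
        return [MODULE_TYPES[m.get("type")]() for m in root.findall("module")]

    config = '<emulator><module type="cpu"/><module type="memory"/></emulator>'
    for module in build_machine(config):
        module.start()

Because every module hides behind the same interface, swapping an 8086 CPU module for a 286 or Pentium module changes the machine being emulated without touching the rest of the system.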

Development of the emulator started in January 2006 and was led by software company Tessella Support Services plc (www.tessella.com/). Together with emulation proponent Jeff Rothenberg, the PC architecture was examined and translated into a software representation, resulting in a modular emulator. Although developing an emulator is not an easy task, the project made it clear that even with limited resources it is possible to build one. With a total effort of roughly two man-years, Dioscuri has been designed, developed and tested.

Next steps are already in progress. Since July 2007, development of Dioscuri has continued under the umbrella of the European project Planets. Future work will consist of extending Dioscuri with more components to emulate newer x86 computers (286, 386, 486 and Pentium), which will make Dioscuri capable of running operating systems like MS Windows 95/98/2000/XP and Ubuntu Linux.

Dioscuri version 0.2.0 is now available as open source software for any institution or individual that would like to experience their old digital documents again.

Download Dioscuri from: http://dioscuri.sourceforge.net

Preservation in the Age of Large-Scale Digitization: White Paper from CLIR

In November, the Council on Library and Information Resources (CLIR) will issue the final version of Preservation in the Age of Large-Scale Digitization: a White Paper. The paper examines preservation issues relevant to large-scale digitization initiatives such as those being done by Google, Microsoft, and the Open Content Alliance. It was written by Oya Rieger, interim assistant university librarian for digital library and information technologies at Cornell University Library.

The paper identifies issues that will influence the availability and usability, over time, of the digital books being created by large-scale digitizing projects, and considers the relationship of these new resources to existing print collections. It concludes with a set of recommendations for rethinking a preservation strategy.

In issuing this paper, CLIR aimed to stimulate discussion among stakeholders and to generate productive thinking about collaborative approaches to enduring access. To this end, CLIR invited public comments through October 5. The comments and responses to the white paper are now available on the CLIR website.

Initial draft of the white paper (September 2007) and comments available at: www.clir.org/activities/details/mdpres.html

International Task Force to Address Economic Sustainability of Digital Preservation

The National Science Foundation (NSF) and the Andrew W. Mellon Foundation are funding a blue-ribbon task force to address the issue of economic sustainability for digital preservation and persistent access. The Task Force will be co-chaired by Fran Berman, director of the San Diego Supercomputer Center at University of California, San Diego and a pioneer in data cyberinfrastructure; and Brian Lavoie, an economist with strong interests in data preservation, and research scientist with OCLC Programs and Research, OCLC Online Computer Library Center, Inc.

The Blue Ribbon Task Force on Sustainable Digital Preservation and Access will also include support by the Library of Congress, the National Archives and Records Administration, the Council on Library and Information Resources, and the Joint Information Systems Committee of the UK.

"Digital information illuminates our world, and modern life, work, education, and research depend on it," said Christopher L. Greer, program director in NSF's Office of Cyberinfrastructure. "The time to act is now to ensure the digital information is reliably available as an engine for progress in our global knowledge society and to secure our digital heritage for future generations."

"It is impossible to imagine success in the Information Age without the availability of our most valuable digital information when we want it now and in the future," said Dr Berman. "It's critical for our society to have a long-term strategic plan for sustaining digital data and we are excited about the potential of the Task Force to help form that plan."

"Persistent access to digital information over long periods of time is vital for the future progress of research, education, and private enterprise," said Dr Lavoie. "In addition to developing sound technical processes for preserving digital information, we must also ensure that our preservation strategies are economically sustainable. The work of the Task Force will be an important step toward achieving that goal."

Though significant progress has been made to overcome the technical challenges of achieving persistent access to digital resources, the economic challenges remain. Dr Berman and Dr Lavoie will convene an international group of prominent leaders to develop actionable recommendations on economic sustainability of digital information for the science and engineering, cultural heritage, public and private sectors. The Task Force is expected to meet over the next two years and gather testimony from a broad set of thought leaders in preparation for the Task Force's Final Report.

In its final report, the Task Force is charged with developing a comprehensive analysis of current issues, and actionable recommendations for the future to catalyze the development of sustainable resource strategies for the reliable preservation of digital information. During its tenure, the Task Force also will produce a series of articles about the challenges and opportunities of digital information preservation, for both the scholarly community and the public.

NSF Award abstract: www.nsf.gov/awardsearch/showAward.do?AwardNumber=0737721

DROID Wins 2007 Digital Preservation Award

An innovative tool to analyze and identify computer file formats has won the 2007 Digital Preservation Award. The judges chose The National Archives of the UK from a strong shortlist of five contenders, whittled down from the original list of 13. The prestigious award was presented in a special ceremony at The British Museum on 27 September 2007 as part of the 2007 Conservation Awards, sponsored by Sir Paul McCartney.

Identifying file formats is a thorny issue for archivists. Organizations such as the National Archives have an ever-increasing volume of electronic records in their custody, many of which will be crucial for future historians to understand 21st-century Britain. But with rapidly changing technology and an unpredictable hardware base, preserving files is only half of the challenge. There is no guarantee that today's files will be readable or even recognizable using the software of the future.

Digital Record Object Identification (DROID) is a software tool developed by the National Archives to perform automated batch identification of file formats. Developed by its Digital Preservation Department as part of its broader digital preservation activities, DROID is designed to meet the fundamental requirement of any digital repository to be able to identify the precise format of all stored digital objects, and to link that identification to a central registry of technical information about that format and its dependencies.

DROID uses internal and external signatures to identify and report the specific file format versions of digital files. These signatures are stored in an XML signature file, generated from information recorded in the PRONOM technical registry. New and updated signatures are regularly added to PRONOM, and DROID can be configured to automatically download updated signature files from the PRONOM website via web services. DROID is a platform-independent Java application, and includes a documented, public API, for ease of integration with other systems. It can be invoked from two interfaces: (1) A Java Swing GUI or (2) A command line interface.

DROID allows files and folders to be selected from a file system for identification. This file list can be saved at any point. DROID can also be used to identify URIs and streams (command line interface only). After the identification process has been run, the results can be output in XML, CSV or printer-friendly formats.
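
At its simplest, internal-signature identification means matching characteristic byte sequences ("magic numbers") at known positions in a file. The toy sketch below illustrates the principle only; DROID's real signatures, generated from PRONOM, also handle offsets, wildcards and priorities:

    # Toy signature-based format identification: compare a file's leading
    # bytes against a few well-known magic numbers. Not DROID's code.
    SIGNATURES = {
        b"%PDF-": "Portable Document Format (PDF)",
        b"\x89PNG\r\n\x1a\n": "Portable Network Graphics (PNG)",
        b"PK\x03\x04": "ZIP container",
    }

    def identify(path):
        with open(path, "rb") as f:
            header = f.read(16)
        for magic, fmt in SIGNATURES.items():
            if header.startswith(magic):
                return fmt
        return "unknown format"

    print(identify("report.pdf"))  # hypothetical file name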

More information: http://droid.sourceforge.net/

Swets Acquires ScholarlyStats from MPS Technologies

In October 2007, Swets announced that it had acquired exclusive rights to ScholarlyStats from MPS Technologies (MPST). MPST will continue to operate ScholarlyStats for Swets and to develop the service, ensuring continuity for existing customers.

ScholarlyStats is a Web-based portal that eases the burden of collecting, consolidating and analyzing e-journal usage statistics from multiple sources. Supplied in COUNTER compliant format, usage reports may be viewed and downloaded by libraries via a single, intuitive interface, thus freeing their staff to focus on other duties and facilitating more accurate collection decisions.
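
The kind of consolidation involved can be pictured with a short script. The sketch below assumes a deliberately simplified two-column CSV layout (journal title, full-text requests), loosely inspired by a COUNTER JR1 report rather than reproducing its actual format:

    # Toy consolidation of usage reports from several platforms:
    # sum full-text requests per journal title across CSV files.
    import csv
    from collections import Counter

    def load_report(path):
        totals = Counter()
        with open(path, newline="") as f:
            for title, requests in csv.reader(f):
                totals[title] += int(requests)
        return totals

    combined = Counter()
    for report in ("platform_a.csv", "platform_b.csv"):  # hypothetical files
        combined += load_report(report)

    for title, total in combined.most_common(10):
        print(title, total)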

MPST launched ScholarlyStats in 2005 and the product has experienced rapid adoption on a global scale. ScholarlyStats won Library Product of the Year at the 2006 International Information Industry Awards. Working closely with MPST as a Global Sales Partner since the start of 2006, Swets developed a leading channel position, demonstrating its strength in bringing new technologies to the marketplace. Swets plans to incorporate the product fully into its extensive portfolio and to develop its untapped potential.

Although Swets has acquired the product, it will be "business as usual" for the existing customers and business partners of MPST. MPST will serve as an outsourcing partner for Swets and will continue to gather and process the usage statistics. Customers will still access and utilize ScholarlyStats through the same portal, www.scholarlystats.com, and the statistics will continue to be reported in the same format.

ScholarlyStats information: www.scholarlystats.com

MPST website: www.mpstechnologies.com

Swets website: www.swets.com

oSkope Visual Search

oSkope is a new visual browser that lets a person search and organize items from different web services like Amazon, eBay, YouTube or Flickr in a visually intuitive way. oSkope visual search is a free online service developed by oSkope media gmbh in Zurich and Berlin.

oSkope is a search assistant with a highly intuitive visual interface. It enables users to browse quickly through a large number of images and preview information with minimal paging. Registered users can save selected items. In its beta version, oSkope allows searching for products and images on popular web services like Amazon, eBay, YouTube or Flickr, and the intention is to add more services soon.

oSkope website: http://oskope.com/

Improved Yahoo Search Engine Released

Yahoo released a new Yahoo! Search engine in October 2007. The intent of the new search is to reduce the number of steps required by the user to find the information needed, if possible reducing it to a single search.

Features of Yahoo! Search Assist include real-time query suggestions on the search results page, along with related concepts that give users a point-and-click query refinement capability, enabling them to explore a subject area they may be unfamiliar with. Search Assist "automagically" drops down from the search box on the results page when it senses that a user is having difficulty formulating a query, but it only shows up when it is needed or asked for. It then offers real-time suggestions and concepts to explore.
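
The core idea of query suggestion can be reduced to a few lines. The sketch below matches a partial query against a static list of past queries; the real Search Assist of course draws on live query logs, ranking and related-concept extraction:

    # Minimal prefix-based query suggester; the query log is invented.
    PAST_QUERIES = [
        "library of congress",
        "library hi tech news",
        "library 2.0",
        "linux kernel",
    ]

    def suggest(prefix, limit=3):
        prefix = prefix.lower()
        return [q for q in PAST_QUERIES if q.startswith(prefix)][:limit]

    print(suggest("lib"))
    # ['library of congress', 'library hi tech news', 'library 2.0']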

Yahoo tested Search Assist over several months and saw a significant improvement in user satisfaction from those tests. One metric they found was a 61 per cent increase in successful task completion when users had Search Assist as part of their search experience.

Yahoo has also improved its algorithmic results to deliver a better multi-media search experience. When search results include links to videos from YouTube, Metacafe or Yahoo! Video, the user gets an inline video player in addition to the link, so they can watch those videos immediately.

The multi-media improvements include inline Flickr photos too. When a Flickr photo or tag shows up in the user's results, the user sees those photos in addition to getting a link.

Full Description of New Service: www.ysearchblog.com/archives/000489.html

New Features for Google Book Search

As more and more books have been added to the Book Search index, readers have asked for more ways to search, organize and use the growing digital collection. In response, in September 2007 Google launched several new features to make Google Book Search a more useful tool, including:

Create and search your own library: With My Library, Google extended its search functionality to a user's personal book collection. Users can build their own library on Google Book Search, which they can organize, annotate, and search in full text across the books they select. While creating their library, they can also annotate it by adding labels, writing reviews, and rating books. Then they can share their collection with others by sending them a link to their library in Google Book Search. RSS feeds can be set up to alert friends when new books are added to the collection (a short sketch of reading such a feed follows below). Google's new "Popular Passages" feature displays quotations or other excerpts of a book that appear in lots of other books, so that the user can track how authors are quoted.

Select, Clip and Post Text: Google added a feature to let users clip and post selections of text from out-of-copyright books so they can share favorite passages or quotes with others.

Refine your search: In addition to the new features that let users interact with books, Google has added search refinement links on the results pages. These links point the user toward categories of books that match their search and give them a new way to peruse the index.
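
As noted above, the My Library RSS feeds can be read with any feed library. A minimal sketch using the third-party feedparser package follows; the feed URL is a placeholder, not a documented Google endpoint:

    # Watch a (hypothetical) Book Search library feed for new books,
    # using the third-party feedparser package.
    import feedparser

    FEED_URL = "https://books.google.com/..."  # placeholder URL

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        print(entry.title, entry.link)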

Google Book Search: http://books.google.com

Duke University Press Chooses ebrary Platform to Deliver New eBook Collection

Duke University Press has licensed the ebrary platform to host and deliver a new eBook product, the e-Duke Scholarly Books Collection. The new product is due to be fully released in January 2009, with a pilot program taking place during 2008 for a limited number of library partners. Using the ebrary platform, Duke will distribute the new collection directly to the academic library community under a perpetual access model.

"Duke University Press chose the ebrary platform for a number of reasons: It supports multiple business models including subscription and perpetual access, offers rich functionality such as InfoTools, personal bookshelves, and multiple search options, and is highly affordable and reliable," said Steve Cohn, Director of Duke University Press. "Additionally, we are able to easily submit our eBooks into the ebrary platform through the web-ready PDFs supplied to us by the University of Chicago Press's BiblioVault program. Since ebrary's technology is available as a hosted service, we do not have to invest in additional resources to build and maintain the system."

www.ebrary.com

Google Analytics Unveils New Features, Urchin Software

At the eMetrics Summit in Washington, DC in October 2007, Google announced new features for its Google Analytics web service, such as site search reporting and event tracking, and an updated version of Urchin software. This enhanced feature set makes essential Web 2.0 information more accessible and comprehensible, and enables Google Analytics users to better understand their users and to act on reporting information in more sophisticated ways.

Users can enable site search to identify keywords, categories, products, and trends across time and user segments, thereby helping them measure the effectiveness of their websites and their marketing dollars. Site search aggregates data about how searches affect site usage, e-commerce activity, and conversion rates, by tracking internal search patterns. This feature will soon be available worldwide, and it works with Google Custom Search, GSA, Google Mini, and many other non-Google site search products.

With Web 2.0 features also spreading on the Internet, measuring their success is increasingly valuable. Event tracking, which launches in a limited beta at the eMetrics Summit, enables Google Analytics users to measure visitor engagement with a site's interactive elements, such as Ajax, Javascript, Flash movies, page gadgets, downloads, and other multimedia Web 2.0 experiences.

Google Analytics is also introducing Urchin Software, a version of its web analytics offering that runs on the customer's own servers, in limited beta worldwide. Urchin Software is an update to Urchin 5 software, and a free 90-day trial of the beta version of Urchin Software can be requested from an authorized reseller: www.google.com/analytics/support_partner_provided.html.

Urchin Software will be available for a discounted price to users of Urchin 5, and will include many improvements including tools to assist in migrating configurations and data from previous versions of Urchin. Once the new software is out of beta, it can be purchased through an authorized reseller.

To better serve their international customers, Google Analytics has added six new languages, bringing the total number of supported languages to 25. The new languages are Czech, Hungarian, Portuguese (Portugal), Thai, Filipino, and Indonesian.

Google Analytics: www.google.com/analytics/

Google Analytics blog: http://analytics.blogspot.com

Google Announces Content ID Tool for YouTube: Video Identification Beta

Thanks to the continued cooperation of content owners large and small, YouTube is able to develop, test, and implement increasingly effective content management tools. In October, YouTube launched, in beta form, its latest innovation in online video: YouTube Video Identification.

YouTube Video Identification will help copyright holders identify their works on YouTube. YouTube has worked with Google to develop one-of-a-kind technology that can recognize videos based on a variety of factors. As its Beta status indicates, Video Identification is brand-new, cutting-edge stuff, so YouTube will be constantly refining and improving it. Early tests with content companies have shown very promising results. As the system is scaled and refined, YouTube Video Identification will be available to all kinds of copyright holders all over the world, whether they want their content to appear on YouTube or not.

Copyright holders can choose what they want done with their videos: whether to block them, promote them, or, if a copyright holder chooses to partner with YouTube, even create revenue from them, with minimal friction. YouTube Video ID will help carry out that choice.

No technology can anticipate the preferences of a copyright holder. Today, with millions of people and companies cranking out original video, preferences vary widely. Some copyright holders want control over every use of their creation. Many professional artists and media companies post their latest videos without telling YouTube, while some home videomakers don't want their material online at all. Others want their fans to participate in the creative process. The best YouTube can do is cooperate with copyright holders to identify videos that include their content and offer them choices about sharing that content. As copyright holders make their preferences clear to YouTube up front, YouTube will do its best to automate that choice while balancing the rights of users, other copyright holders, and the community as a whole.

More information: www.youtube.com/t/video_id_about

Sharing, Privacy and Trust: New International Research Study from OCLC

OCLC has released the third in a series of reports that scan the information landscape to provide data, analyses and opinions about users' behaviors and expectations in today's networked world.

The new international report, Sharing, Privacy and Trust in Our Networked World examines four primary areas:

  • Web user practices and preferences on their favorite social sites.

  • User attitudes about sharing and receiving information on social spaces, commercial sites and library sites.

  • Information privacy; what matters and what doesn't.

  • US librarian social networking practices and preferences; their views on privacy, policy and social networks for libraries.

"We know relatively little about the possibilities that the emerging social Web will hold for library services," said Cathy De Rosa, Global Vice President of Marketing, OCLC, and principal contributor to the report. "More than a quarter of all Web users across the countries we surveyed are active users of social spaces. As Web users become both the consumers and the creators of the social Web, the implications and possibilities for libraries are significant. The research provides insights into what these online library users will expect."

OCLC commissioned Harris Interactive to administer the online surveys for the report. Over 6,100 respondents, ages 14 to 84, from Canada, France, Germany, Japan, the United Kingdom and the United States, were surveyed. The surveys were conducted in English, German, French and Japanese. OCLC and Harris also surveyed 382 US library directors.

Among the report highlights:

  • The Internet is familiar territory. Eighty-nine per cent of respondents have been online for four years or more and nearly a quarter have been using the Internet for more than 10 years.

  • The Web community has migrated from using the Internet to building it: the Internet's readers are rapidly becoming its authors.

  • More than a quarter of the general public respondents currently participate on some type of social media or social networking site; half of college students use social sites.

  • On social networking sites, 39 per cent have shared information about a book they have read, 57 per cent have shared photos/videos and 14 per cent have shared self-published information.

  • Over half of respondents surveyed feel their personal information on the Internet is kept as private as, or more private than, it was two years ago.

  • Online trust increases with usage. Seventy per cent of social networking users indicate they always, often or sometimes trust who they communicate with on social networking sites.

  • Respondents do not distinguish library websites as more private than other sites they are using.

  • Thirteen per cent of the public feels it is the role of the library to create a social networking site for their communities.

Sharing, Privacy and Trust in Our Networked World is the third in a series of reports that study the information environment and how libraries are addressing the needs of today's information users. The new study follows the 2005 Perceptions of Libraries and Information Resources report, which looks at what users think of libraries in the digital age, and The 2003 OCLC Environmental Scan: Pattern Recognition, the award-winning report that describes issues and trends that are impacting and will impact OCLC and libraries.

Ms De Rosa said she hopes that, like the two earlier reports, the new report will spark discussion and interest in libraries and among library professionals. She will be speaking about the study and its findings at meetings and conferences in the weeks and months to come.

Sharing, Privacy and Trust in Our Networked World is available for download free of charge. Print copies of the 280-page report will also be available for purchase from the same site beginning October 29.

Download the report at: www.oclc.org/reports/sharing/

Students and Information Technology: ECAR Research Study Report

The Educause Center for Applied Research (ECAR) has released the results of a research study, which is a longitudinal extension of the 2004, 2005, and 2006 ECAR studies of students and information technology. The study, which reports noticeable changes from previous years, is based on quantitative data from a spring 2007 survey and interviews with 27,846 freshman, senior, and community college students at 103 higher education institutions. It focuses on what kinds of information technologies these students use, own, and experience; their technology behaviors, preferences, and skills; how IT impacts their experiences in their courses; and their perceptions of the role of IT in the academic experience.

ECAR website: www.educause.edu/ecar

Study: www.educause.edu/ers0706

Web2ForDev2007 Program Material Now Available

The Web2ForDev 2007 Participatory Web for Development Conference was held 24-27 September 2007 in Rome, Italy. The conference was devoted to exploring the ways in which international development stakeholders can take advantage of the technical and organizational opportunities provided by Web 2.0 methods, approaches and applications. The Conference presentations and the lecture videos are freely available for download from the conference website. The conference blog also contains some interesting cases of Web 2.0 applications.

Conference Presentations and lecture videos: www.web2fordev.net/programme.html

Conference blog: blog.web2fordev.net/

Program Material from NKOS Workshop at ECDL 2007

Presentation material from the 6th European Networked Knowledge Organization Systems (NKOS) Workshop, held at the 11th European Conference on Digital Libraries (ECDL) in Budapest, Hungary, on 21 September 2007, is freely available from the workshop website.

The 6th NKOS workshop at ECDL explored the potential of knowledge organization systems, such as classifications, gazetteers, lexical databases, ontologies, taxonomies and thesauri. These attempt to model the underlying semantic structure of a domain for purposes of information retrieval, knowledge discovery, language engineering, and the semantic web. The workshop provided an opportunity to report and discuss projects, research, and development related to networked knowledge organization systems/services in next-generation digital libraries.
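
As a small illustration of how such systems support retrieval, a thesaurus can expand a query with narrower terms before searching. The vocabulary below is invented for the example:

    # Toy thesaurus-driven query expansion; the vocabulary is invented.
    NARROWER = {
        "knowledge organization systems": ["thesauri", "ontologies", "taxonomies"],
        "thesauri": ["subject headings"],
    }

    def expand(term):
        # Return the term plus all transitively narrower terms.
        terms, stack = set(), [term]
        while stack:
            t = stack.pop()
            if t not in terms:
                terms.add(t)
                stack.extend(NARROWER.get(t, []))
        return terms

    print(sorted(expand("knowledge organization systems")))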

Workshop Program Material: www.comp.glam.ac.uk/pages/research/hypermedia/nkos/nkos2007/programme.html

JISC Repositories Research Team Newsletter Released

The Repositories Research Team, part of JISC's Repository Net, has released the Autumn 2007 edition of its newsletter. Sent at the end of the European Conference on Digital Libraries (ECDL) held in Budapest in September, this special issue is packed with useful information about the various digital repository activities in which the JISC Repositories Research Team has been and is involved. Among the items in this issue:

  • ECDL2007.

  • Repository Ecology: ecologically influenced approaches to repository and service interactions.

  • Object Reuse and Exchange (OAI-ORE): a new specification to support digital repository interoperability.

Repositories Research Team Newsletter: www.ukoln.ac.uk/repositories/newsletter/

Establishing a Scottish Higher Education Digital Library: Report Available

The Final Report of an Investigative Study into the establishment of a Scottish Higher Education Digital Library (SHEDL) was launched on Friday 5 October at a meeting in Edinburgh. The study was sponsored by the Scottish Confederation of University and Research Libraries (SCURL), and funded by the Universities of Edinburgh and Glasgow. The Report, and its Executive Summary, are available on the SCURL website.

The Study was carried out for SCURL by John Cox of John Cox Associates. The Report sets out a rationale for greatly improving access to electronic information resources, particularly e-journals, for all Scottish higher education institutions, and supplies a series of options for structure and governance, funding, consultative mechanisms and content acquisition strategies, in order to take SHEDL forward to implementation.

The members of SCURL will now review the Report findings, and consider the next steps to be taken.

SCURL website: http://scurl.ac.uk/

Non-Latin Characters to Be Included In LC/NACO Name Authority Records

The major authority record exchange partners (British Library, Library of Congress, National Library of Medicine, and OCLC, Inc., in consultation with Library and Archives Canada) have agreed to a basic outline that will allow for the addition of references with non-Latin characters to name authority records that make up the LC/NACO Authority File.

While the romanized form will continue to be the authorized heading (authority record 1XX field), NACO contributors will be able to add references in non-Latin scripts following MARC 21's "Model B" for multi-script records. Model B provides for unlinked non-Latin script fields with the same MARC tags used for romanized data, such as authority record 4XX fields. Using Model B for authorities is a departure from the current bibliographic record practice of many Anglo-American libraries where non-Latin characters are exported as 880 fields (Alternate Graphic Representation) using MARC 21's "Model A" for multi-script records.

For the initial implementation period, the use of non-Latin scripts will be limited to those scripts that represent the MARC-8 repertoire of UTF-8 (Japanese, Arabic, Chinese, Korean, Persian, Hebrew, Yiddish, Cyrillic, and Greek). Although the exchange of authority records between the NACO nodes will be in UTF-8, LC's Cataloging Distribution Service will continue to supply the MDS-Authorities weekly subscription product in both UTF-8 and MARC-8 for some period of time. It is expected that the use of non-Latin scripts beyond the MARC-8 repertoire will be implemented in the future.
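
For illustration, a Model B authority record can be mocked up with the pymarc library: the romanized heading stays in the 1XX field, and the non-Latin reference becomes an ordinary 4XX field rather than an 880. The heading is illustrative, and the list-style subfield call follows pymarc's classic API (newer pymarc releases use Subfield objects instead):

    # Sketch of a Model B authority record in pymarc: a romanized 100
    # heading plus an unlinked Cyrillic 400 reference (not an 880 field).
    from pymarc import Record, Field

    rec = Record()
    rec.add_field(Field(tag="100", indicators=["1", " "],
                        subfields=["a", "Chekhov, Anton Pavlovich,", "d", "1860-1904"]))
    rec.add_field(Field(tag="400", indicators=["1", " "],
                        subfields=["a", "Чехов, Антон Павлович,", "d", "1860-1904"]))

    print(rec)  # records are exchanged in UTF-8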

System vendors should be prepared to handle authority records with non-Latin data no earlier than April 2008. Test files will be made available prior to that time. Questions can be addressed to the Cataloging Policy and Support Office at: cpso@loc.gov.

Johns Hopkins Creates New Center to Manage Digital Scholarship

The Johns Hopkins Sheridan Libraries have announced the creation of the Digital Research and Curation Center (DRCC) to manage, preserve, and provide access to the mounting digital scholarship generated by faculty and researchers at the university. No longer limited only to the sciences, the creation of datasets to support teaching and scholarship is becoming increasingly common in the humanities.

"It is critical for the library at a research-intensive university like Hopkins to be on the forefront of capturing this digital scholarship and ensuring that it is usefully organized for and available to both current and future generations of researchers," said Winston Tabb, Sheridan dean of university libraries at Johns Hopkins.

Sayeed Choudhury, who was recently promoted to associate dean of university libraries at Johns Hopkins, is also the Hodson Director of the DRCC. The DRCC builds upon the extensive digital library track record of the former Digital Knowledge Center, established in 1997. "The new center is a key element of the Libraries' digital program, which is looking beyond merely preserving immense digital data sets," he said. "Our librarians and technology specialists are working collaboratively with faculty across a broad range of disciplines to use the data in innovative ways that were not possible in the print world."

Choudhury's DRCC team brings together a unique combination of talent and experience, including programmers, engineers, and scientists. This group of experts works collaboratively with specialists throughout the digital programs division, from the library systems department to library academic computing services, which supports projects such as electronic dissertations and theses, geographical information systems services, and integration of library resources in courseware management systems.

One of the flagship digital initiatives in the humanities is the Roman de la Rose, which enables new approaches to medieval studies through the creation of digital surrogates, transcriptions, and text and image searching. Initially a collaborative effort between the Libraries and Stephen G. Nichols, professor of French and humanities in the Krieger School of Arts and Sciences, the foundation-funded project spans a decade and now includes medieval scholars, librarians, and technical specialists at Johns Hopkins and other research institutions around the world. The result is 20 digitized versions of one of the most popular romances of the Middle Ages and an innovative teaching tool for the global community, which will ultimately make 149 of the known 250 Rose manuscripts available for research and scholarship.

The DRCC is also tackling the data-intensive challenge of astronomical data sets in its work with astronomers at Johns Hopkins and the National Virtual Observatory (NVO). Begun in 2001 by Professor Alexander Szalay in the Krieger School of Arts and Sciences, the NVO collects databases of telescopic images from observatories around the world to give researchers universal access to a complete view of the skies. The center has begun work on creating a digital archive and data set for the NVO, which has potential for new modes of astronomical inquiry that were unimaginable a few years ago.

Digital Research and Curation Center: http://ldp.library.jhu.edu/dkc

Working Group on Bibliographic Control Releases Draft Report

In November 2007, the Working Group on the Future of Bibliographic Control released its draft report on the future of bibliographic description in light of advances in search engine technology, the popularity of the Internet and the influx of electronic information resources.

In November 2006, Deanna Marcum, associate librarian for Library Services at the Library of Congress, convened a group made up of representatives of several organizations (American Association of Law Libraries, American Library Association (ALA), Association of Research Libraries (ARL), Coalition for Networked Information, Medical Library Association, National Federation of Abstracting and Indexing Services, Program for Cooperative Cataloging and Special Libraries Association) and vendors (Google, OCLC and Microsoft) to examine the role of bibliographic control and other descriptive practices in the evolving information and technology environment, and to make recommendations to the Library and to the larger library community.

The group's recommendations emphasized the role of the Library of Congress not as a sole supplier, but rather as an important leader in the cataloging world. "We recognize that you do not have the resources to do everything," said Olivia Madison, representing ARL. "These recommendations are not for the Library of Congress alone but are intended for the entire library and library vendor communities." The group's five high-level recommendations are to:

  • Increase the efficiency of bibliographic production for all libraries through cooperation and sharing of bibliographic records and through use of data produced in the overall supply chain.

  • Transfer effort into high-value activity. In particular, provide greater value for knowledge creation by leveraging access for unique materials held by libraries that are currently hidden and underused.

  • Position technology by recognizing that the World Wide Web is libraries' technology platform as well as the appropriate platform for standards. Recognize that users are not only people but also applications that interact with library data.

  • Position the library community for the future by adding evaluative, qualitative and quantitative analyses of resources. Work to realize the potential of the Functional Requirements for Bibliographic Records framework.

  • Strengthen the library and information science profession through education and through development of metrics that will inform decision-making now and in the future.

"I am very pleased with the approach taken by the working group," Marcum said. "Instead of focusing solely on the Library of Congress, the members of the group looked at the bibliographic ecosystem and thought deeply about the contributions that can and should be made by all of its parts. We are already doing in an experimental way many of the things suggested by the Working Group in its presentation. Once the final report is received, our challenge will be to analyze the recommendations, decide on which ones should be implemented and move beyond pilot projects and tests."

The report was available for public comment through December 15, 2007. The final report will be released by January 9, 2008, in time for the midwinter meeting of the American Library Association.

Working Group's website: www.loc.gov/bibliographic-future/

Symposium on the Future of the ILS Presentations Available

The Lincoln Trail Libraries System-sponsored Symposium on the Future of the Integrated Library System, held in September 2007, was a very successful event that exceeded expected attendance. Those who were unable to attend now have a second chance to hear the presentations: the Lincoln Trail Libraries System recorded the sessions and has placed the podcasts on a website for open access.

Presentations included:

  • The ILS: The Past, Present and Future (Marshall Breeding, Director for Innovative Technologies and Research, Vanderbilt)

  • Cataloging and Metadata: What Does the Future Hold? Issues and Perspectives (Michael Norman, Assistant Professor/Head of Content Access, University of Illinois at Urbana-Champaign Library)

  • Libraries and the Landscape of the Future (Chip Nilges, Vice-President of Business Development, OCLC)

  • Planning for the Future (Rob McGee, RMG Consultants)

  • Integrated Library Systems: A Vendor Perspective (Carl Grant, President, CARE Affiliates, Inc)

  • Open Source: The Good, the Bad and the Wonderful (Elizabeth Garcia, PINES Program Director, Georgia Public Library Service, and Mike Rylander, Vice President, Research and Design, Equinox)

  • The OPAC Sucks (Karen Schneider, Research and Development Consultant, College Center for Library Automation, Florida)

  • Developments in the OPAC World: A Panel Discussion (moderated by Karen Schneider; panelists: Chip Nilges, OCLC; Kate Sheehan, Coordinator of Automation, Danbury (CT) Public Library; Cindi Holt, Information Services Manager, Phoenix Public Library)

  • What the Studies Tell Us about the Future (Jasmine de Gaia, OCLC, and George Needham, Vice President, Member Services, OCLC)

Symposium website: www.lincolntrail.info/ilssymposium2007/intropage.html

eXtensible Catalog Project Receives Funding for the Project's Second Phase

A $749,000 grant from the Andrew W. Mellon Foundation to the University of Rochester's River Campus Libraries will be used toward building and deploying the eXtensible Catalog (XC), a set of open-source software applications libraries can use to share their collections. The grant money will also be used to support broad adoption of the software by the library community. The grant and additional funding from the University and partner institutions make up the $2.8 million needed for the project. The resulting system will allow libraries to simplify user access to all library resources, both digital and non-digital. This is the second grant awarded to the University by the Mellon Foundation for XC development.

Ronald F. Dow, the Andrew H. and Janet Dayton Neilly Dean of River Campus Libraries, said the system will provide library patrons with a richer experience when accessing the libraries' collections by offering them a variety of tools. Users will be able to navigate search results, and add user tags and reviews to documents, among other things. It will provide a platform for local development and experimentation that will ultimately allow libraries to share their collections through a variety of applications, such as Web sites, institutional repositories, and content management systems.

University of Rochester staff will build XC in partnership with the following institutions: the University of Notre Dame, CARLI (Consortium of Academic and Research Libraries in Illinois), Rochester Institute of Technology, Oregon State University, the Georgia PINES Consortium, Cornell University, the University at Buffalo, Ohio State University, and Yale University. Each XC partner institution has committed staff time or monetary contributions toward the development of XC.

A second group of institutions will contribute to the project through the participation of their staff members in XC user research, or by providing advisory support to the University's development team. These institutions include the Library of Congress, OCLC, Inc., North Carolina State University, Darien (CT) Public Library, Ohio State University, and Yale University.

At the University of Rochester, Dow is leading the project, along with David Lindahl, director of digital initiatives; Jennifer Bowen, director of catalog and metadata management; and Nancy Fried Foster, lead anthropologist for the libraries.

eXtensible Catalog Project website: www.eXtensiblecatalog.info

OCLC's Grid Services

In October 2007, Roy Tennant of OCLC hosted a select group of library developers at OCLC headquarters in Dublin, Ohio, to talk about its yet-to-be-clearly-named new "Grid Services" (also referred to as the OCLC Service Grid or the WorldCat Grid). While it is unclear exactly what this is (a service, a product, or a means of collaboration), it is clear that there is plenty of buzz around it.

To borrow from Solvitur ambulando, the new "service" intends to "make many of the webservices that OCLC uses internally available externally to library developers. But not only that, they want to foster a network of people who are using these services, and give them feedback channels to let OCLC know what's working, what isn't, and what webservices we want to see, and to share with each other and the world the ways we find to mash-up these data streams."
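
No endpoints had been published at the time of writing, so any client code is necessarily speculative. The sketch below shows only the general shape of consuming a REST-style web service and doing something with the result; the URL, parameters and response fields are all invented:

    # Entirely hypothetical web-service client: the endpoint and JSON
    # fields are invented, shown only to illustrate the pattern.
    import requests

    resp = requests.get(
        "https://example.org/grid/search",  # placeholder endpoint
        params={"q": "digital preservation", "format": "json"},
    )
    resp.raise_for_status()
    for record in resp.json().get("records", []):
        print(record.get("title"))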

Several basic resources give a sense of what all the talk is about:

Presentation by Robin Murray to the OCLC Members Council in May 2007 (ppt):

www.oclc.org/memberscouncil/meetings/2007/may/murray_robin.ppt

Blogger write-up of the October 2007 meeting from the Solvitur ambulando blog: www.ibiblio.org/bess/?p=88. This appears to be one of the most complete write-ups of the meeting and goes into some detail on what this new "service" will cover.

A write-up of a talk entitled "Leveraging the Power of the Grid" that Roy Tennant gave at Access 2007 in October 2007 in Vancouver, Canada:

http://blogs.talis.com/panlibus/archives/2007/10/roy_tennant_giv.php

Video of the presentation itself: http://video.google.com/videoplay?docid=5967648559348834140&hl=en

Notes from a discussion at librarycampnyc on grid services, moderated by Eric Hellman of OCLC: http://librarycampnyc.wikispaces.com/Grid+Services

Blogger write-up of a November 2007 meeting at OCLC-PICA in the Netherlands from Jakoblog: http://jakoblog.de/2007/11/28/oclc-grid-services-first-insights/

SirsiDynix and Brainware Form Partnership for Next Generation Catalog Development

SirsiDynix announced in November 2007 an original equipment manufacturer (OEM) partnership with Brainware Inc., whose advanced context-based enterprise search technology will be incorporated into SirsiDynix's next-generation search solutions. Brainware technology will provide innovative fuzzy search, fuzzy logic, dynamic categorization and other capabilities that will empower information seekers to discover content from more sources, including libraries' own catalogs, Z39.50 sources, subscription resources, digital collections, crawled Web content, and social networking data.

From these development efforts, SirsiDynix will build the foundation for a range of "user experience" solutions, including next-generation OPAC functionality and support for community/social networking such as user reviews, rankings and tagging. The first release of SirsiDynix's next-generation search solutions is slated for summer 2008.

Features of SirsiDynix's new search solutions will include:

  • Fuzzy, phrase, sentence, paragraph, keyword and exact search capabilities (fuzzy matching is illustrated in the sketch following this list)

  • Fuzzy logic

  • Categorization engine

  • Ability to search range of data sources and formats

  • Stateful, URL-based searching

  • Full-text document searching

  • Seamless integration with SirsiDynix integrated library systems and OPACs: direct interface to SirsiDynix Symphony, Horizon and Unicorn to push MARC record updates from the ILS to ensure highest-performance indexing of relevant MARC and non-MARC data types

  • Look and feel flexibility

  • Single- and multi-byte character support: for libraries with records in a range of languages and whose users search in a range of languages

  • Support on local servers or via SaaS/ASP hosted solution
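"Fuzzy" matching of the sort listed above is commonly built on edit distance: the number of single-character insertions, deletions and substitutions needed to turn one string into another. The sketch below is a generic Python illustration of that idea, not SirsiDynix's or Brainware's actual implementation:

    def edit_distance(a, b):
        """Classic dynamic-programming Levenshtein distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def fuzzy_search(query, titles, max_dist=2):
        """Return titles whose edit distance to the query is within max_dist."""
        return [t for t in titles
                if edit_distance(query.lower(), t.lower()) <= max_dist]

    print(fuzzy_search("Huckleberry Fin", ["Huckleberry Finn", "Moby Dick"]))

A query such as "Huckleberry Fin" matches "Huckleberry Finn" at distance 1, which is how a misspelled search can still return the intended title.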

SirsiDynix website: www.sirsidynix.com

Brainware, Inc. website: www.brainware.com/

Building Connections through the Collections: BiblioCommons

BiblioCommons is an ambitious undertaking, developed with the support of Knowledge Ontario and Provincial Library Services in British Columbia following extensive end-user research through 2006 and technology development through 2007. BiblioCommons is building and delivering groundbreaking new services, transforming online library catalogues from searchable inventory systems into engaging social discovery environments.

BiblioCommons services are now in beta release with charter subscribers the Provincial Library Services Branch of British Columbia and Knowledge Ontario. These services will, for the first time, actively integrate the contribution of social data into the existing usage patterns of library users, and enable the aggregation and sharing of that data across library systems to create a highly interactive discovery environment that exists as a layer on top of the library's existing Integrated Library System.

Hear an interview with Beth Jefferson, the founder of BiblioCommons, at ITConversations: http://itc.conversationsnetwork.org/shows/detail3424.html

BiblioCommons: http://bibliocommons.com/

Route 21: First Online Resource Dedicated to 21st Century Skills Teaching and Learning

Americans increasingly recognize that the US education system can and should do more to prepare young people to succeed in the rapidly evolving 21st century. Skills such as global literacy, problem solving, innovation and creativity have become critical in today's increasingly interconnected workforce and society. To help education leaders and policymakers implement 21st century teaching and learning, the Partnership for 21st Century Skills has launched Route 21, an online, one-stop shop for 21st century skills-related information, resources and tools.

The Partnership for 21st Century Skills is the leading advocacy organization focused on infusing 21st century skills into education. The organization brings together the business community, education leaders and policymakers to define a powerful vision for 21st century education and to ensure every child's success as a citizen and worker in the 21st century. The Partnership encourages schools, districts and states to advocate for the infusion of 21st century skills into education and provides tools and resources to help facilitate and drive change.

Route 21 showcases how 21st century skills can be supported through standards, assessments, professional development, curriculum and instruction and learning environments. The site represents the first comprehensive, go-to online resource for high-quality content, best practices, relevant reports, articles and research to assist practitioners in implementing 21st century teaching practices and learning outcomes, according to the Partnership.

Route 21 harnesses Web 2.0 features to allow users to tag, rank, organize, collect and share Route 21 content based on their personal interests. Individuals will continuously update the site with relevant examples as well as share their reactions and insights on implementing 21st century skills in their state, district or school.

Route 21's content is divided into four major sections: 21st century skills; 21st century support systems; resources for 21st century skills; and 21st century partner states. Electronic white papers (e-papers) provide deeper insights into the importance of the skills and support systems. The extensive, easy-to-browse database offers examples, models and best practices on assessments, standards and professional development that reinforce 21st century skills. Thanks to a partnership with The George Lucas Educational Foundation, publisher of www.edutopia.org/, Edutopia magazine, and Edutopia video documentaries, numerous video examples of 21st century teaching and learning are available.

More information on Route 21: www.21stcenturyskills.org/route21/

Global Faculty E-book Survey Results Now Available from ebrary

In November 2007, ebrary, a leading provider of e-content services and technology, announced that the results of its first Global Faculty E-book Survey, developed by more than 200 librarians and completed by 906 faculty members throughout the world, were publicly available for download at no cost. ebrary plans to periodically repeat the survey to compare how the dynamics of print and electronic resources, usage, and attitudes among faculty members change over time.

The 2007 Global Faculty E-book Survey was designed to better understand faculty experience with electronic resources and printed materials. Learning objectives included usage for research and instruction, perceived strengths and weaknesses, attitudes, and instruction experience and preferences. Key survey findings include the following:

  • Approximately 50 per cent of respondents indicated they prefer using online resources for research, class preparation, and instruction vs 18 per cent who prefer print resources.

  • Eighty-five per cent of respondents viewed information literacy as very necessary, compared to 15 per cent who stated it is somewhat necessary and less than 1 per cent who find it unnecessary.

  • Almost equal numbers of faculty members require students to use electronic resources and print resources for course assignments.

  • Fifty-three per cent of respondents indicated that Google and other search engines are powerful tools for finding information. Twenty-nine per cent indicated Google and other search engines are more useful tools than the print resources provided by the library, compared to 11 per cent who indicated they are more useful than library-provided electronic resources.

Anyone interested in receiving a copy of the survey may register at www.surveymonkey.com/s.aspx?sm=wS8CU8W9N_2fIwRuMq5gNMsw_3d_3d.

Hachette Book Group USA is First to Provide eBook Content in .epub Format

Hachette Book Group USA has announced that it is the first publisher to produce its bestselling content in the International Digital Publishing Forum's (IDPF) new eBook format, ".epub." The new standard will enable Hachette Book Group to produce and send a single digital publication file and ISBN that can be used on any digital platform capable of rendering an .epub file. This new process will lower HBGUSA's eBook production costs and allow it to publish more titles as eBooks.

Hachette Book Group is a member of the IDPF committee that created the .epub standard. With the enhanced efficiency that the new standard brings, Hachette Book Group will steadily increase the number of eBooks it publishes, starting with titles released in December 2007. "Hachette Book Group's leadership in implementing new .epub digital standards will accelerate the growth of eBooks for both booksellers and readers," said Michael Smith, executive director of the International Digital Publishing Forum. "The new eBook file formats that Hachette Book Group has helped engineer provide streamlined publishing, distribution, and end-user reading features that we expect to be widely supported."

".epub" is the file extension of an XML format for reflowable digital books and publications. .epub is composed of three open standards, the Open Publication Structure (OPS), Open Packaging Format and Open Container Format, produced by the IDPF. .epub allows publishers to produce and send a single digital publication file through distribution and offers consumers interoperability between software/hardware for unencrypted reflowable digital books and other publications. The Open eBook Publication Structure or "OEB", originally produced in 1999, is the precursor to OPS.

International Digital Publishing Forum: www.idpf.org/

Microsoft Chooses DAISY NISO Digital Talking Book Standard for XML Translator

Microsoft Corp. and the DAISY Consortium, maintenance agency for the DAISY/NISO standard, have announced a joint standards-based development project that will make it possible for computer users who are blind or print-disabled to make better use of assistive technology in their daily lives. A reference model for other Open XML solution providers, this open technical collaboration project on SourceForge.net will yield a free, downloadable plug-in for Microsoft Office Word that can translate Open XML-based documents into DAISY XML, the foundation of the globally accepted DAISY Standard for reading and publishing navigable multimedia content.

In recent decades, individuals with print disabilities have increasingly accessed information using a wide variety of assistive technologies such as screen readers, large print, refreshable Braille and text-to-speech synthesizers. However, because these individuals cannot visually navigate complex page layouts, they often struggle to keep up with the demands of today's information-rich society.

DAISY Digital Talking Books go far beyond the limits imposed by analog audio books or commercial digital audio books. In a DAISY book, the audio is synchronized with the textual content and images, providing an accessible and enriched multimedia reading and learning experience. The structure within DAISY publications makes it possible to navigate quickly by heading or page number and to use indexes and references, all with correctly ordered, synchronized audio and text. A DAISY book also supports multiple outputs, such as Braille and large print. In addition to clear benefits for the print-disabled community, the Open XML to DAISY XML translator also offers the potential for further innovation in the information-intensive markets of publishing, training and education.
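The synchronization itself is expressed in SMIL files that pair fragments of the text with clips of the recorded narration. The following Python sketch shows the general shape of one synchronization point; the file names, fragment identifiers and timing values are illustrative rather than drawn from the DAISY/NISO specification:

    import xml.etree.ElementTree as ET

    # One <par> element synchronizes a text fragment with the audio clip
    # that narrates it; a <seq> of such pairs plays in document order.
    smil = ET.Element("smil")
    seq = ET.SubElement(ET.SubElement(smil, "body"), "seq")

    par = ET.SubElement(seq, "par")
    ET.SubElement(par, "text", src="book.xml#chapter_1")
    ET.SubElement(par, "audio", src="chapter_1.mp3",
                  clipBegin="0:00:00.000", clipEnd="0:00:07.500")

    print(ET.tostring(smil, encoding="unicode"))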

"In our information age, access to information is a fundamental human right," said George Kerscher, secretary general of the DAISY Consortium. "This is why leading organizations of and for the blind throughout the world are committed to the advancement of the DAISY Standard. The ability to create DAISY content from millions of Open XML-based documents using this translator for Microsoft Office Word will offer substantial and immediate benefits to publishers, governments, corporations, educators and, most important, to everyone who loves to read."

Maintenance agencies such as the DAISY Consortium (DAISY stands for Digital Accessible Information SYstem) monitor a standard's implementation and effectiveness, and look for ways to keep it current and forward-looking. To that end, the DAISY Consortium has begun a requirements-gathering process to determine whether changes to the DAISY/NISO specification are necessary to more effectively meet a growing number of varied user needs. The requirements-gathering process runs from 31 October 2007 through 31 March 2008 and is open to individuals who read material in accessible media, producers, publishers, educators, information providers, distributors, archivists, technology developers and any other groups involved in the use, creation and/or distribution of accessible and/or multimedia content. Interested parties should go to: www.daisy.org/z3986/requirements/.

DAISY Consortium: www.daisy.org/

More information about NISO: www.niso.org

Introducing the Amazon Kindle Portable Reader

Amazon.com has introduced Amazon Kindle, a portable reader that wirelessly downloads books, blogs, magazines and newspapers to a crisp, high-resolution electronic paper display that looks and reads like real paper, even in bright sunlight. More than 90,000 books are now available in the Kindle Store, including 101 of 112 current New York Times Best Sellers and New Releases.

Kindle uses a high-resolution display technology called electronic paper that provides a sharp black and white screen that is as easy to read as printed paper. The screen works using ink, just like books and newspapers, but displays the ink particles electronically. It reflects light like ordinary paper and uses no backlight, eliminating the eyestrain and glare associated with other electronic displays such as computer monitors or PDA screens. At 10.3 ounces, Kindle is lighter and thinner than a typical paperback and fits easily in one hand, yet its built-in memory stores more than 200 titles, and hundreds more with an optional SD memory card. Additionally, a copy of every book purchased is backed up online on Amazon.com so that customers have the option to make room for new titles on their Kindle knowing that Amazon.com is storing their personal library of purchased content.

Kindle is designed for long-form reading, so it is as easy to hold and use as a book. Full-length, vertical page-turning buttons are located on both the right and left sides of Kindle, so both left- and right-handed customers can hold it, turn pages, and position it comfortably with one hand from any position. In addition, Kindle has six adjustable font sizes to suit customers' varying reading preferences. "We've been working on Kindle for more than three years. Our top design objective was for Kindle to disappear in your hands, to get out of the way, so you can enjoy your reading," said Jeff Bezos, Amazon.com Founder and CEO.

The Kindle wireless delivery system, Amazon Whispernet, uses the same nationwide high-speed data network (EVDO) as advanced cell phones. Kindle customers can wirelessly shop the Kindle Store and download or receive new content, all without a PC, Wi-Fi hot spot, or syncing. Books can be downloaded in less than a minute, and magazines, newspapers, and blogs are delivered to subscribers automatically. Amazon pays for the wireless connectivity, so there are no monthly wireless bills, data plans, or service commitments for customers.

The Kindle Store currently offers more than 90,000 books, as well as hundreds of newspapers, magazines and blogs. Kindle customers can select from the most recognized US newspapers, as well as popular magazines and journals, such as The New York Times, Wall Street Journal, Washington Post, Atlantic Monthly, TIME, and Fortune. The Kindle Store also includes top international newspapers from France, Germany, and Ireland, including Le Monde, Frankfurter Allgemeine, and The Irish Times. Subscriptions are auto-delivered wirelessly to Kindle overnight so that the latest edition is waiting for customers when they wake up. All magazines and newspapers include a free two-week trial.

More information: www.amazon.com/kindle

3M and Checkpoint Form Marketing Alliance

In October 2007, 3M and Checkpoint Systems, Inc. announced a global strategic sales and marketing alliance. Under the terms of the alliance, 3M Library Systems will become the exclusive worldwide reseller and service provider for Checkpoint's line of library security and productivity products. Checkpoint will continue to expand its patron-based marketing services portfolio and continue selling those offerings directly to libraries worldwide. The alliance is effective 1 January 2008.

With the addition of Checkpoint security and productivity products, 3M will be able to offer libraries a broader portfolio. Products include: electromagnetic (EM) and radio frequency security systems and accessories, self-service solutions, RFID systems, media storage solutions, personal computer management software, and other library products. In 2008, 3M will also launch a new Web-based library productivity software solution from Checkpoint, called The Library Advocate.

3M website: www.3M.com/us/library

Checkpoint Systems website: www.checkpointsystems.com/

Yale Partners with Kirtas Technologies and Microsoft Live Search Books

In October 2007, Kirtas Technologies, a provider of digital scanning solutions, announced it will provide high-quality digitization services to Yale University Library, in conjunction with the company's agreement with Microsoft Corp. to digitize books for Live Search Books. The project will initially focus on digitization of 100,000 out-of-copyright English-language books that may not be available at other institutions. Beginning in early 2008, the University's collection will gradually become available through Microsoft's Live Search interface (http://books.live.com), enabling students, scholars, and readers to use them anywhere in the world. With approximately 13 million volumes throughout its system, Yale University boasts one of the most extensive and unique academic libraries in the world.

Yale and Microsoft selected Kirtas Technologies for its innovative robotic book-scanning technology and its unique digitization expertise. The Library has successfully worked with Kirtas in a previous digitization project, as well. "As part of our agreement with Yale, we'll be opening a satellite service bureau in New Haven," said Mark Klein, Director of Operations at Kirtas. "And while our New Haven facility will be fully staffed, our production process allows for remote access, which means it will be fully integrated with our Victor operation."

The project will maintain rigorous standards established by the Yale Library and Microsoft for the quality and usability of the digital content, and for the safe and careful handling of the physical books. Yale and Microsoft will work together to identify which of the approximately 13 million volumes held by Yale's 22 libraries will be digitized. Books selected for digitization will remain available for use by students and researchers in their physical form. Digital copies of the books will also be preserved by the Yale Library for use in future academic initiatives and in collaborative scholarly ventures.

Kirtas Technologies: www.kirtastech.com

Yale press release: www.yale.edu/opa/newsr/07-10-30-02.all.html

Microsoft Live Search Books: http://books.live.com

Open Archives Initiative Announces Meeting on ORE Specifications

On 3 March 2008, the Open Archives Initiative (OAI) will hold a public meeting at Johns Hopkins University in Baltimore, MD, to introduce the Object Reuse and Exchange (ORE) specifications. The ORE specifications were developed in response to a significant challenge that has emerged in eScholarship. In contrast to the paper publications of traditional scholarship, or even their digital counterparts, the artifacts of eScholarship are complex aggregations, consisting of multiple resources with varying media types, semantic types, network locations, and intra- and inter-relationships. The future infrastructure of scholarly communication, research, and higher education requires standardized approaches to identify, describe, and exchange these new outputs of scholarship.

The ORE specifications address this challenge with the ORE data model that defines how to associate an identifier, a URI, with aggregations of web resources. By referring to these identifiers, aggregations can then be linked to, cited, and described with metadata, in the same manner as any web resource. The ORE data model also makes it possible to describe the structure and semantics of these aggregations. The ORE specifications define how these descriptions can then be packaged in the XML-based Atom syndication format or in RDF/XML, making them available to a variety of applications.
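As a small illustration of the data model, the following Python sketch (assuming the third-party rdflib package) builds a resource map that assigns a URI to an aggregation and lists its parts using the ORE vocabulary; all of the URIs are invented for the example:

    from rdflib import Graph, Namespace, URIRef

    ORE = Namespace("http://www.openarchives.org/ore/terms/")

    # Invented URIs: a resource map that describes an aggregation.
    rem = URIRef("http://example.org/rem/article-1")
    agg = URIRef("http://example.org/aggregation/article-1")

    g = Graph()
    g.bind("ore", ORE)
    g.add((rem, ORE.describes, agg))
    for part in ("article.pdf", "dataset.csv", "figure1.png"):
        g.add((agg, ORE.aggregates, URIRef("http://example.org/" + part)))

    # RDF/XML is one of the serialization targets named in the specifications.
    print(g.serialize(format="xml"))

Because the aggregation now has a URI of its own, it can be cited, linked and described exactly like any single web resource.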

In addition to their utility in eScholarship, the ORE specifications also apply to everyday web use, where we often encounter aggregations such as multi-page HTML documents and collections of multi-format images on sites like Flickr. OAI-ORE descriptions of these aggregations can be used to improve search engine behavior, provide input for browser-based navigation tools, and develop automated web services to analyze and preserve this information.

The 3 March meeting at Hopkins is intended for information managers and strategists and implementers of networked information systems. It will be led by the two coordinators of OAI-ORE, Carl Lagoze of Cornell University and Herbert Van de Sompel of Los Alamos National Laboratory. Attendees will learn about the ORE data model and its translation to the XML-based Atom syndication format, and will hear the results of initial experiments with the specifications by OAI-ORE community members. There will be ample time for discussion and questions and to meet other members of the OAI-ORE community. A subsequent meeting with similar content will be held in the UK in connection with the Open Repositories 2008 conference.

Detailed information for the meeting is at the registration page: www.regonline.com/oai-ore

(Note: attendees must register in advance and attendance is limited).

ISOC Netherlands and ISOC Belgium Partner in Launch of OpenDoc Society

A new member-based organization, the OpenDoc Society, aims to bring together a global community of users, technologists, and decision makers around the OpenDocument Format (ODF). Working in a pre-competitive way, it will build a community around ODF (ISO/IEC 26300:2006) and related document standards as key technologies for our society and the Internet.

ODF is an OASIS/ISO-standardized, vendor-neutral file format that enables cross-platform collaboration between people and many different types of applications, from office suites to server software. Having such a standard restores to users full ownership of their documents, guaranteeing unhindered access to content now and in the future. At the same time, it contributes to interoperability and innovation across platforms and applications, helping people work more efficiently and removing the dependency on specific software companies and versions of software for access to one's own content. The initiative is not about converting people to use specific software; it promotes all ODF-based technology alike: may the best offering in any given situation win. This pragmatic and positive approach is what makes the OpenDoc Society unique. A growing number of governments, including the Dutch, Belgian, South African, and Danish governments, are moving away from proprietary formats such as .doc, .wpd and .xls and converting to ODF.

On 23 October 2007, the new initiative was launched with a large event at the Royal Library in The Hague, with speakers from several governments, the European Commission, and the OASIS technical committee that produces ODF. Around forty organizations, representing government, industry, civil society, cultural institutions, organizations for people with visual impairments, and open source projects, already support the initiative. ISOC Netherlands and ISOC Belgium actively contributed to the establishment of the new organization.

The founding board of the OpenDoc Society will consist of Bert Bakker, chair (director of the Center for Media and Communication and former member of the Netherlands parliament); Michiel Leenaars, secretary (director of ISOC.nl and manager at the NLnet Foundation); and Bob Goudriaan, treasurer (financial specialist and informal investor). As new local branches around the world are added, an international board will be set up. The organization wants to expand internationally and hopes to play a strategic role in creating awareness and building a community to further the growth of ODF.

More information can be found at: www.opendocsociety.org

Pitt's Libraries and University Press Collaborate on Open Access to Press Titles

The University of Pittsburgh's University Library System (ULS) and University Press have formed a partnership to provide digital editions of press titles as part of the library system's D-Scribe Digital Publishing Program. Thirty-nine books from the Pitt Latin American Series published by the University of Pittsburgh Press are now available online, freely accessible to scholars and students worldwide. Ultimately, most of the Press' titles older than two years will be provided through this open access platform.

For the past decade, the ULS has been building digital collections on the Web under its D-Scribe Digital Publishing Program, making available a wide array of historical documents, images, and texts which can be browsed by collection and are fully searchable. The addition of the University of Pittsburgh Press Digital Editions collection marks the newest in an expanding number of digital collaborations between the ULS and the University Press.

The D-Scribe Digital Publishing Program includes digitized materials drawn from Pitt collections and those of other libraries and cultural institutions in the region, pre-print repositories in several disciplines, the University's mandatory electronic theses and dissertations program, and electronic journals. During the past eight years, sixty separate collections have been digitized and made freely accessible via the World Wide Web. Many of these projects have been carried out with content partners such as Pitt faculty members, other libraries and museums in the area, professional associations and, most recently, the University of Pittsburgh Press, on several professional journals and the new University of Pittsburgh Press Digital Editions. The D-Scribe collections are accessible free of charge on the World Wide Web at www.library.pitt.edu/dscribe/

More titles will be added to the University of Pittsburgh Press Digital Editions each month until most of the current scholarly books published by the Press are available both in print and as digital editions. The collection will eventually include titles from the Pitt Series in Russian and East European Studies, the Pitt-Konstanz Series in the Philosophy and History of Science, the Pittsburgh Series in Composition, Literacy, and Culture, the Security Continuum: Global Politics in the Modern Age, the History of the Urban Environment, back issues of Cuban Studies, and numerous other scholarly titles in history, political science, philosophy, and cultural studies.

The University of Pittsburgh Press Digital Editions may be viewed at http://digital.library.pitt.edu/p/pittpress/ and through direct links from the Press website, www.upress.pitt.edu/

Pitt Digital Research Library technical documentation: http://digital.library.pitt.edu/documentation/

Library of Congress and UNESCO Sign World Digital Library Agreement

In October 2007, Librarian of Congress James H. Billington and UNESCO Assistant Director for Communication and Information Abdul Waheed Khan signed an agreement at UNESCO headquarters in Paris pledging cooperative efforts to build a World Digital Library Web site. The World Digital Library will digitize unique and rare materials from libraries and other cultural institutions around the world and make them available for free on the Internet. These materials will include manuscripts, maps, books, musical scores, sound recordings, films, prints and photographs. The objectives of the World Digital Library include promoting international and intercultural understanding, increasing the quantity and diversity of cultural materials on the Internet, and contributing to education and scholarship.

Under the terms of the agreement, the Library of Congress and UNESCO will cooperate in convening working groups of experts and other stakeholders to develop guidelines and technical specifications for the project, enlist new partners and secure the necessary support for the project from private and public sources. A key aspect of the project is to build digital library capabilities in the developing world, so that all countries and regions of the world can participate and be represented in the World Digital Library.

To test the feasibility of the project, the Library of Congress, UNESCO and five other partner institutions (the Bibliotheca Alexandrina of Alexandria, Egypt; the National Library of Brazil; the National Library of Egypt; the National Library of Russia; and the Russian State Library) have developed a prototype of the World Digital Library. The prototype is being demonstrated to national delegations at the UNESCO General Conference currently underway. The World Digital Library will become available to the public as a full-fledged Web site in late 2008 or early 2009.

The prototype functions in the six UN languages (Arabic, Chinese, English, French, Russian and Spanish) plus Portuguese, and features search and browse functionality by place, time, topic and contributing institution. Input into the design of the prototype was solicited through a consultative process that involved UNESCO, the International Federation of Library Associations and Institutions, and individuals and institutions in more than 40 countries.

World Digital Library website: www.worlddigitallibrary.org

WDL Prototype: www.worlddigitallibrary.org/project/english/prototype.html

BioMed Central Launches Biology Image Library

In October 2007, BioMed Central announced the launch of Biology Image Library, an online resource that provides access to over 11,000 carefully selected biology-related images. The Library is a new subscription-based service offering access to an annotated selection of high-quality, peer-reviewed biological images, movies, illustrations and animations. Subscribers may make royalty-free use of images in the collection for research and educational purposes. All content comes from sources that are peer-reviewed by academic editors prior to publication online.

The Biology Image Library is working to expand its collection of images. Contributors retain rights to their images, and the Biology Image Library will also offer them the option of selling commercial-use rights on their behalf.

Biology Image Library website: www.biologyimagelibrary.com/home

Information for potential contributors: www.biologyimagelibrary.com/contribute

University of Maryland Libraries Announces Digital Collections Portal

The release of the UM Digital Collections Portal marks two and a half years of work in creating a repository that serves the teaching and research mission of the University of Maryland Libraries. Many of the objects are digital versions of items from Maryland's Special Collections (such as A Treasury of World's Fairs Art and Architecture) or are new virtual collections (The Jim Henson Works). Other collections (such as Films@UM) support the teaching mission of the Libraries. This release also marks the integration of electronically available finding aids, ArchivesUM, into the repository architecture, creating a framework for digital objects to be dynamically discovered from finding aids.

The repository is based on the Fedora platform and uses Lucene for indexing and Helix for streaming video. It features almost 2,500 digital objects, with new objects added monthly. Object types currently delivered include full text (both TEI and EAD), video, and images. Objects can be discovered within a collection context or via a search across multiple collections. Cross-collection discovery is achieved through a common metadata scheme and controlled vocabulary, while the scheme also allows individual collections to carry more granular, domain-specific metadata.
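Cross-collection discovery of this kind generally depends on a crosswalk that maps each collection's native fields into the common scheme before indexing, while keeping the richer native record for domain-specific use. The Python sketch below illustrates the pattern; the field names and crosswalks are invented, not UM's actual scheme:

    # Invented crosswalks from two collections' native fields to a common scheme.
    CROSSWALKS = {
        "worlds_fairs": {"architect": "creator", "structure": "title"},
        "films_at_um": {"director": "creator", "film_title": "title"},
    }

    def to_common(record, collection):
        """Map a native record into the shared scheme, keeping the original
        alongside so granular domain-specific metadata is not lost."""
        crosswalk = CROSSWALKS[collection]
        common = {crosswalk[k]: v for k, v in record.items() if k in crosswalk}
        common["collection"] = collection
        common["native"] = record
        return common

    rec = to_common({"director": "Jim Henson", "film_title": "Time Piece"},
                    "films_at_um")
    print(rec["creator"], "|", rec["title"])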

UM Digital Collections Portal: www.lib.umd.edu/digital/

Fedora Commons Integrates its Software Platform with Sun

Fedora Commons, a non-profit organization that provides sustainable technologies to create, manage, publish, share and preserve digital information assets, announced in November 2007 plans to integrate the Fedora Commons software platform with the Sun StorageTek(TM) 5800 System. This collaboration provides a substantial opportunity to advance open systems for durable access to the digital information assets that increasingly form the basis of our intellectual, organizational, scientific and cultural heritage.

The free, open-source Fedora Commons software platform uses a service-oriented architecture that enables the creation of collaborative, integrated information spaces in which any information entity can be linked to any other entity. The StorageTek 5800 system offers advanced data integrity, resilience and failure tolerance compared with other storage system designs, and includes customer-defined arbitrary metadata indexing and search. Customers can scale seamlessly, saving millions of dollars in administrative costs over traditional storage, and be assured of an advanced level of data availability.

During the creation or authoring of intellectual works, changes occur rapidly, but over their lifecycle these works become fixed information assets. New scholarship is built upon earlier works and new science is built upon other research. Without durable access to previous works, research progress cannot be sustained. Unless key digital assets such as datasets and analyses are reliably kept and their authenticity is guaranteed, the scientific method may be compromised and the results may be questioned. The StorageTek 5800 provides a powerful capability to handle fixed unstructured information assets. The Fedora Commons software platform and StorageTek 5800 architecture form a natural synergy for the creation, management, use and care of fixed information. Both systems recognize that, over time, all technologies and formats change.

Fedora Commons website: www.fedora-commons.org

Sun Solutions for Digital Content: http://sun.com/storagetek/

Government of Canada Web Archive Launched

Library and Archives Canada (LAC) launched the "Government of Canada Web Archive" in November 2007. The site can be found at: www.collectionscanada.gc.ca/webarchives/

The LAC Act received Royal Assent on 22 April 2004, allowing LAC to collect and preserve a representative sample of Canadian websites. To meet its new mandate, LAC began harvesting the Web domain of the Federal Government of Canada in December 2005. As resources permit, this harvesting will be undertaken on a semi-annual basis. The harvested website data is stored in the Government of Canada Web Archive (GCWA). Client access to the content of the GCWA is provided through full-text searching by keyword, by department name and by URL. It is also possible to search by specific format type (e.g., *.PDF). By fall 2007, approximately 100 million digital objects (over 4 terabytes) of archived federal government website data will be made accessible via the LAC website.

LAC has implemented this first significant Canadian Web archive through the use of open source tools, developed by the International Internet Preservation Consortium (IIPC), of which LAC is a member. The goal of this organization is to collect, preserve and ensure long-term access to Internet content from around the world through the collaborative development of common tools and techniques for developing Web archives.

IIPC Toolkit with software downloads: www.netpreserve.org/software/downloads.php
