Citation
(2007), "New & noteworthy", Library Hi Tech News, Vol. 24 No. 5. https://doi.org/10.1108/lhtn.2007.23924eab.001
Publisher: Emerald Group Publishing Limited
Copyright © 2007, Emerald Group Publishing Limited
New & noteworthy
British Library Launches Sounds Familiar, an Interactive Spoken English Website
Sounds Familiar is a unique and groundbreaking new interactive website from the British Library, celebrating the UK's many different accents, dialects and vocabularies. Users will be able to hear recordings of people from all over England, Wales, Scotland and Northern Ireland – and children and young adults are being asked to add their own.
Sounds Familiar is the only English language website of its kind. It features 72 recordings of regional accents and dialects from every corner of the UK, some recorded in the 1950s and some almost half a century later, in 1998-1999, making it possible for users to explore how spoken English varies regionally and how accents and dialects have changed over time.
In the recordings, the interviewees discuss a huge array of subjects, ranging from football to farming, school, work and home life, shopping, computers and much more. Sounds Familiar also features a series of interactive sound maps that make it possible to explore specific aspects of language variation and change, and to examine the vocabulary, grammar and sounds of spoken English. The website also includes three case studies, with over 600 audio clips, that give an in-depth look at three very different varieties of English: Received Pronunciation (RP), Geordie dialect and minority ethnic English.
The British Library's vision is to use the website and the new recordings submitted by young speakers to create a comprehensive "sound map of the UK", which will showcase the varied accents and dialects that can be heard nationwide. The voice recordings gathered through the website will be added to the British Library's Sound Archive for the benefit of future generations.
Sounds Familiar: www.bl.uk/learning/langlit/sounds/index.html
Institute for the Future of the Book Releases Sophie
In April 2007, the Institute for the Future of the Book released an alpha version of Sophie, their first piece of software. Sophie is an open-source platform for creating and reading electronic books for the networked environment. It is intended to facilitate the construction of documents that use multimedia and time in ways that are currently difficult, if not impossible, with today's software.
As noted on the project website: Sophie's raison d'être is to enable people to create robust, elegant rich-media, networked documents without recourse to programming. We have word processors and video, audio and photo editors, but no viable options for assembling the parts into a complex whole except tools like Flash, which are expensive, hard to use, and often create documents in closed proprietary file formats. Sophie promises to open up the world of multimedia authoring to a wide range of creative people.
Originally conceived as a standalone multimedia authoring tool, Sophie is now integrated into the Web 2.0 network in some very powerful ways. Sophie documents can be uploaded to a server and then streamed over the net. It is possible to embed remote audio, video and graphic text files in the pages of Sophie documents, meaning that the document that actually needs to be distributed might be only a few hundred kilobytes even if the book itself comprises hundreds of megabytes or even a few gigabytes. Sophie can now browse Open Knowledge Initiative (OKI) repositories from within the application and embed objects from those repositories. Sophie documents also support live dynamic text fields (similar to the Institute's CommentPress experiments on the web), such that a comment written in the margin is displayed immediately in every other copy of that book – anywhere in the world.
Release of Sophie version 1.0 is planned for December 2007.
Sophie Project website: www.sophieproject.org
Overview of Sophie Project: www.futureofthebook.org/sophie/SophieIntro.pdf
Sophie's Technological Pedigree: www.futureofthebook.org/sophie/SophieHistory.pdf
Digital Media and Learning: Confronting the Challenges of the 21st Century
Henry Jenkins, Director of the Comparative Media Studies Program at the Massachusetts Institute of Technology, has co-authored a white paper that outlines a set of skills and strategies that allow students to participate in cultures rather than interact with technologies. The report identifies forms of participatory culture including:
Affiliations – memberships, formal and informal, in online communities centered on various forms of media (such as Friendster, Facebook, message boards, metagaming, game clans, or MySpace).
Expressions – producing new creative forms (such as digital sampling, skinning (applying a personalized visual to interfaces), modding (creating "user modifications" – modifying code to work differently than the programmer intended), fan videomaking, fan fiction writing, zines, mash-ups).
Collaborative problem-solving – working together in teams, formal and informal, to complete tasks and develop new knowledge (such as through Wikipedia, alternative reality gaming, spoiling (sharing details of features, embedded or hidden, with users)).
Circulations – shaping the flow of media (such as podcasting, blogging).
A central goal of the report is to shift the focus of the conversation about the digital divide from questions of technological access to those of opportunities to participate and to develop the cultural competencies and social skills needed for full involvement. The report offers a list of core media literacy skills, including:
Play – the capacity to experiment with one's surroundings as a form of problem-solving.
Appropriation – the ability to meaningfully sample and remix media content.
Judgment – the ability to evaluate the reliability and credibility of different information sources.
Networking – the ability to search for, synthesize, and disseminate information.
Negotiation – the ability to travel across diverse communities, discerning and respecting multiple perspectives, and grasping and following alternative norms.
Download the white paper at: www.digitallearning.macfound.org/site/c.enJLKQNlFiG/b.2029291/k.97E5/Occasional_Papers.htm
Five Weeks to a Social Library: Course Materials Available
Between February and March 2007, the first Five Weeks to a Social Library course was held. Five Weeks to a Social Library is the first free, grassroots, completely online course devoted to teaching librarians about social software and how to use it in their libraries. It was developed to provide a free, comprehensive, and social online learning opportunity for librarians who do not otherwise have access to conferences or continuing education and who would benefit greatly from learning about social software.
The course was taught using a variety of social software tools, so that participants gained experience with the tools while taking part in the class. It made use of synchronous online communication, with one or two weekly Webcasts and many small-group IM chat sessions made available to participants each week. By the end of the course, each student had developed a proposal for implementing a specific social software tool in their library. Course content remains freely viewable by all interested parties, and all live Webcasts were archived for later viewing.
The course covered the following topics:
Blogs
RSS
Wikis
Social Networking Software and Second Life
Flickr
Social Bookmarking Software
Selling Social Software at Your Library
The organizers of the course included: Meredith Farkas (Chair), Michelle Boule, Karen Coombs, Amanda Etches-Johnson, Ellyssa Kroski, and Dorothea Salo. Content presenters included a wide range of library professionals and are noted on the Course Program page: www.sociallibraries.com/course/prelimprogram
The content of this course is licensed under a Creative Commons Attribution-Non-Commercial-Share-Alike license.
Course website: www.sociallibraries.com/course/
libSite.org Recommendation Service Launched
libSite.org, a recommendation service for library-related Websites, was officially launched on 10 April 2007. Anything that is "library-related" is fair game for addition to libSite: a blog, whether personal or institutional; a library website, either new or redesigned; a digital collection; a research site or one devoted to instruction. It can be in a school, a public setting, or even private industry. All submissions are moderated, and the vast majority are welcomed and approved.
libSite.org is built around the premise that library-related projects need and deserve a higher profile, and that current technology allows users to engage with this material in any number of creative ways. For that reason, the site features a blog, a wiki, RSS feeds, and email alerts – the last two configurable down to individual tags. Users can rate sites and add them to a "favorites" page. There is even a libSite widget that people can put on their own sites.
Fundamental to the concept is user involvement. Users can post recommendations ("webCites") and add their own tags. Based on this input, others can then create RSS feeds and email alerts. Everyone can participate, if only to leave constructive comments and rate the work of others.
The site is built on the open-source content management system Drupal. Like all good projects it is a work-in-progress, or as often said nowadays, "permanent beta". Everyone is invited to register, recommend sites, post comments, and add ratings to listed sites.
US Patent & Trademark Office Implementing Social Software for Patent Review
The Community Patent Review project is an initiative of the New York Law School Institute for Information Law and Policy in collaboration with the United States Patent and Trademark Office (USPTO). Community Patent Review aims to improve the quality of issued patents by giving the patent examiner access to better information by means of an open network for community peer review of patent applications.
Designed by dozens of experts in consultative workshops at Harvard, Stanford, New York Law School, University of Michigan and elsewhere, Community Patent Review is a web-based system that exploits network technology to connect innovation experts to patent examiners and the patent examination process. The process has come to be referred to as "peer-to-patent", "open examination" or "open review". The Community Patent Review pilot project focuses on integrating an open peer review process with the USPTO, creating and amalgamating a vetted database of prior art references that, over time, produces better patent grants, and developing a deliberation methodology and technology to allow community rating, ranking of prior art and feedback from patent examiners. Community Patent Review is the first social software project to be directly connected to and have an impact on the legal decision-making process.
The project is currently accepting patent applications for open review. The pilot will run from June 2007 to June 2008. GE, Hewlett-Packard, IBM, Intel, Microsoft, Oracle and Red Hat have already agreed to have their patents examined under this model. Community Patent Review aims to create a blueprint for democratizing policymaking that can be applied not only to patents, but also to agency decision-making across government.
Peer to Patent Project home page: http://dotank.nyls.edu/communitypatent/
Library of Congress Launches Its First-Ever Blog in Celebration of 207th Birthday
The Library of Congress turned 207 years old on 24 April 2007, but with the addition of the first-ever public blog to its award-winning Web site, it quite possibly has never looked younger. Long a pioneer and leading provider of online content, with a Web site at www.loc.gov that makes 22 million digital items available at the click of a mouse and receives 5 billion hits per year, the Library of Congress has launched the blog at www.loc.gov/blog/.
"The Library of Congress has been in the vanguard of providing a wealth of knowledge in digital form, so it is fitting that it would be among the first federal agencies to join the blogosphere," said Librarian of Congress James H. Billington.
The blog will be authored by the Library's director of communications, Matt Raymond, with contributions from Dr. Billington, along with curators and other Library staff. "Given the presence of some 70 million blogs worldwide – and growing exponentially – it is crucial that the Library of Congress be a part of the collective conversation that is taking place", Raymond said. "Birthdays are often an occasion to look backward, but we chose April 24 to look to a future in which the digital world becomes an even more indispensable part of the physical world".
The blog will accept moderated comments from readers, governed by rules of respectful, civil discourse and appropriateness. Those rules are part of a policy the Library of Congress is in the process of adopting to guide the creation of other audience-specific blogs in the near future.
Blogs are among the important born-digital content that is being saved and preserved in perpetuity under the Library's National Digital Information Infrastructure and Preservation Program (www.digitalpreservation.gov).
Library of Congress blog: www.loc.gov/blog/
Linux Partners with LC in Digitization Project Using Open Source Software
The Library of Congress is about to begin an ambitious project to digitize thousands of rare and fragile public domain documents using Linux-based systems and publish the results online in multiple formats. Thanks to a $2 million grant from the Sloan Foundation, "Digitizing American Imprints at the Library of Congress" will begin the task of digitizing these rare materials. Open source software will play an important role in the process.
The main component is Scribe, a book-scanning system that combines hardware and free software. While previous versions were written for both Linux and Windows, the Internet Archive has migrated Scribe entirely to Linux – Windows support has been dropped – and the project now runs on Ubuntu. A Linux-based Scribe workstation at the Library of Congress will hold the material to be scanned in a V-shaped cradle while two cameras take images of it. A human operator performs quality assurance, and then Scribe sends the digital images to the Internet Archive in San Francisco, where they are processed and eventually posted online in various formats. Free software is used at almost every step of the way. The books are stored in the PetaBox, the Internet Archive's massive million-gigabyte storage system.
A good number of the historic materials in question are old, fragile, and in such rough shape that placing them in Scribe's cradle, or even attempting to read them, could irreparably damage them. If scanning the brittle materials demands new software and digitization techniques, the Library of Congress will work in conjunction with the Internet Archive to make the innovations available to the public.
The program is part of larger efforts at both the Library of Congress, to preserve old media and records, and the Internet Archive, which is already scanning public domain materials with its Open Content Alliance, a consortium of about 40 libraries.
Linux.com Article: http://enterprise.linux.com/article.pl?sid=07/03/26/1157212&from=rss
Open Content Alliance: www.opencontentalliance.org/
"Chronicling America": Digital Access to Historic American Newspapers
The Library of Congress and the National Endowment for the Humanities have announced that "Chronicling America: Historic American Newspapers" debuted in March 2007 with more than 226,000 pages of public-domain newspapers from California, Florida, Kentucky, New York, Utah, Virginia and the District of Columbia published between 1900 and 1910. The fully searchable site is available at www.loc.gov/chroniclingamerica/.
Chronicling America is a prototype Website providing access to information about historic newspapers and select digitized newspaper pages, and is produced by the National Digital Newspaper Program (NDNP). NDNP, a partnership between the National Endowment for the Humanities (NEH) and the Library of Congress (LC), is a long-term effort to develop an Internet-based, searchable database of US newspapers with descriptive information and select digitization of historic pages. Supported by NEH, this rich digital resource will be developed and permanently maintained at the Library of Congress. An NEH award program will fund the contribution of content from, eventually, all US states and territories.
Over a period of approximately 20 years, NDNP will create a national, digital resource of historically significant newspapers published between 1836 and 1922 from all US states and territories. Also on the Web site, an accompanying national newspaper directory of bibliographic and holdings information directs users to newspaper titles in all formats. The information in the directory was created through an earlier NEH initiative. The LC will also digitize and contribute to the NDNP database a significant number of newspaper pages drawn from its own collections during the course of this partnership. For the initial launch, the LC contributed more than 90,000 pages from 14 different newspaper titles published in the District of Columbia between 1900 and 1910.
The Newspaper Title Directory is derived from the library catalog records created by state institutions during the NEH-sponsored United States Newspaper Program (www.neh.gov/projects/usnp.html), 1980-2007. Under this program, each institution created machine-readable cataloging (MARC) via the cooperative online serials program (CONSER) for its state collections, contributing bibliographic descriptions and library holdings information to the Newspaper Union List, hosted by the Online Computer Library Center (OCLC). These data, approximately 140,000 bibliographic title entries and 900,000 separate library holdings records, were acquired and converted to MARCXML format for use in the Chronicling America Newspaper Title Directory.
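For readers curious what such a conversion involves in practice, the short sketch below shows one way to turn binary MARC records into a MARCXML collection using the open-source pymarc library. The library choice and file names are assumptions made for illustration; the article does not identify the tools actually used for the conversion.

    # A sketch of converting binary MARC records to MARCXML with pymarc.
    # File names are hypothetical.
    from pymarc import MARCReader, XMLWriter

    with open("newspaper_titles.mrc", "rb") as marc_in:
        writer = XMLWriter(open("newspaper_titles.xml", "wb"))
        for record in MARCReader(marc_in):
            writer.write(record)  # serialize one <record> element
        writer.close()  # emits the closing </collection> tag and closes the file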
Each NDNP participant received an award to select and digitize up to 100,000 newspaper pages representing that state's regional history, geographic coverage, and events of the period 1900-1910. These newspaper materials were digitized to technical specifications designed by the Library of Congress (profiles describing the full set of specifications can be found at www.loc.gov/ndnp/techspecs.html). Chronicling America provides access to these digitized historic materials primarily through a geographic and timeline-based graphical Web interface enhanced with Flash interactivity. Basic and advanced searches are available for both full-text newspaper pages and bibliographic newspaper records. Pages are viewed in JPEG format, dynamically created on request and presented in a Flash-based zoom/magnify and image-navigation application. A low-bandwidth, text-based interface is also available from the home page.
The NDNP repository developed for Chronicling America is based on the Open Archival Information System (OAIS) Reference Model for preservation repository architecture, and is supported by a variety of modular components to enable long-term sustainability of data ingestion, archival management and data dissemination. The various modules include the Flexible Extensible Digital Object Repository Architecture (Fedora) for basic repository architecture, SRU/SRW, the Aware JPEG2000 libraries, Apache Lucene for search and indexing, Apache Cocoon for Web dissemination, and Adobe Flash.
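SRU, one of the modules named above, is an open HTTP-based search protocol, so a searchRetrieve request is simply a URL with well-known parameters. The sketch below issues a generic SRU 1.1 request against a hypothetical endpoint using only the Python standard library; it illustrates the protocol, not Chronicling America's actual interface.

    # A generic SRU 1.1 searchRetrieve request. The endpoint and index
    # names are hypothetical.
    from urllib.parse import urlencode
    from urllib.request import urlopen

    BASE = "http://example.org/sru"  # hypothetical SRU endpoint

    params = {
        "operation": "searchRetrieve",   # standard SRU operation
        "version": "1.1",
        "query": 'dc.title = "herald"',  # CQL query string
        "maximumRecords": "10",
        "recordSchema": "dc",            # request Dublin Core records
    }

    with urlopen(BASE + "?" + urlencode(params)) as resp:
        print(resp.read()[:500])  # raw <searchRetrieveResponse> XML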
Chronicling America: www.loc.gov/chroniclingamerica/
More information on program guidelines, participation, and technical information: www.neh.gov/projects/ndnp.html or www.loc.gov/ndnp/
EThOS, UK Electronic Theses Service To Go Live
EThOS, the Electronic Theses Online Service, is based on a prototype developed by a project team funded by the Joint Information Systems Committee (JISC), the Consortium of Research Libraries in the British Isles (CURL), the British Library and participating partners. A new two-year funded project, "EThOSnet", has now been set up to establish EThOS as a live service. JISC and CURL have announced their support for the new service, which will be run by the British Library on behalf of the UK higher education community and developed in partnership with both the HE community and JISC.
The vision of EThOS is to allow UK theses to take their place online, alongside those in other European countries and elsewhere, ensuring that the UK contributes to and benefits from a much greater level of engagement with research worldwide. For libraries, EThOS offers assurance that electronic theses can be discovered, accessed, managed and preserved, exploiting the strengths of universities and of the British Library to, for example, save both time and space.
EThOSnet is intended to:
create a one-stop shop for resource discovery for UK theses;
provide direct links, free at the point of use, to the full electronic text;
increase the number of e-theses initially available, thus enhancing UK repository content;
extend the EThOS partnership and encourage "early adopters" by presenting the EThOS case and enabling institutions to sign up;
enhance the procedural infrastructure, and upgrade the EThOS Toolkit accordingly, with a view to improving institutional workflows and sharing experience and best practice, in close partnership with registry and academic staff;
address the HE community's concerns, identified by the independent evaluation, regarding the management of third-party rights and the detection of plagiarism;
scale up the EThOS technological infrastructure for the move from prototype to "live" status;
monitor and test relevant technology trends in order to identify those technologies which EThOS may be able to adopt in the future to improve further the management of e-theses and consolidate the embedding of the service within institutional practices.
For further information about EThOS and the EThOSnet project, please email info@ethos.ac.uk
EThOSnet description: www.jisc.ac.uk/whatwedo/programmes/programme_rep_pres/repositories_sue/ethosnet.aspx
Full press release: www.jisc.ac.uk/whatwedo/programmes/programme_rep_pres/ethosnet_announcement_mar07
EThOS project: www.ethos.ac.uk/
Researchers' Use of UK Academic Libraries and Their Services
The Research Information Network (RIN) and the Consortium of Research Libraries (CURL) have released a new study designed to provide an up-to-date and forward-looking view of how researchers interact with academic libraries in the UK. Harnessing empirical data and qualitative insights from over 2,250 researchers and 300 librarians, the RIN and CURL hope that the results will be useful in informing the debate about the future development of academic libraries and the services they provide to researchers.
This is an important moment in the relationship between researchers and research libraries in the UK. The foundations of the relationship are beginning to be tested by shifts in the way that researchers work. The rise of e-research, interdisciplinary work, cross-institution collaborations, and the expectation of massive increases in the quantity of research output in digital form all pose new challenges. These challenges are about how libraries should serve the needs of researchers as users of information sources of many different kinds, but also about how to deal with the information outputs that researchers are creating.
Currently, the majority of researchers think that their institutions' libraries are doing an effective job in providing the information they need to do their work, but it is time to consider the future roles and responsibilities of all those involved in the research cycle – researchers, research institutions and national bodies, as well as libraries – in meeting the challenges that are coming.
In commissioning this study, the RIN and CURL have sought to establish a solid base of evidence on how libraries have been developing their services and strategies, and how researchers have been making use of those services. But they have also sought to look forward, to gain a perspective from both researchers and librarians as to how they envisage library services developing in the future.
Download the report and its appendix at: www.rin.ac.uk/researchers-use-libraries
ACRL Releases Essay on Technology and Change in Academic Libraries
The Association of College and Research Libraries (ACRL), a division of the American Library Association (ALA), has published an essay on technology and change in academic libraries that resulted from an invitational summit held in Chicago on November 2-3, 2006. The summit focused on how technologies and the changing climate for teaching, learning, and scholarship will likely recast the roles, responsibilities and resources of academic libraries over the next decade.
The summit was conducted as an unscripted roundtable facilitated by Robert Zemsky of The Learning Alliance. Attended by 30 leaders who both care about academic libraries and have the ability to look over the horizon in order to imagine an alternative future, the summit included librarians, presidents and provosts, association representatives, and technology innovators and vendors. The time together resulted in a discussion paper that asks key questions and suggests a few answers that should expand the national discussion of how academic libraries can best serve their institutions and the larger nation.
The summit identified three essential actions libraries must take to achieve the necessary transformation and remain vital forces on campus in the years ahead:
Libraries must evolve from an institution perceived primarily as the domain of the book to an institution that users clearly perceive as providing pathways to high-quality information in a variety of media and information sources.
The culture of libraries and their staff must proceed beyond a mindset primarily of ownership and control to one that seeks to provide service and guidance in more useful ways, helping users find and use information that may be available through a range of providers, including libraries themselves, in electronic format.
Libraries must assert their evolving roles in more active ways, both in the context of their institutions and in the increasingly competitive markets for information dissemination and retrieval, descending from what many have regarded as an increasingly isolated perch of presumed privilege to enter the contentious race to advance in the market for information services – what one participant in the roundtable termed "taking it to the streets".
Summit participants suggested that, to remain indispensable, libraries and librarians must come to define and fulfill a reconfigured set of roles for serving their institutions:
Broaden the catalog of resources libraries provide in support of academic inquiry and discovery.
Foster the creation of new academic communities on campus.
Support and manage the institution's intellectual capital.
Become more assertive in helping their institutions define strategic purposes.
Summit participants further suggested possible roles for ACRL:
Convene and facilitate dialogues with leaders of key constituencies to consider the future of libraries in supporting the missions of higher education institutions.
Contribute to national efforts to better understand elements of successful learning, and help advance higher education's performance in the achievement of learning outcomes.
Identify and monitor indices of change in the environment of libraries and information dissemination, as well as metrics to gauge the effectiveness of libraries in serving the changing needs of their institutions.
Provide leadership in helping libraries and librarians make effective use of technology in supporting research and education.
Provide national leadership in communicating the potential and performance of libraries in adopting new paradigms and meeting the changing demands of institutions, faculty, and students.
ACRL seeks to continue the conversation about the changing roles of librarians, libraries, and ACRL. The first response, prepared by Julie Todaro, ACRL Vice-President/President-Elect, is posted with the essay. Those who wish to comment can do so on the ACRLog post about the essay at acrlblog.org/2007/03/15/acrl-summit-report-on-changing-role-of-academic-libraries-now-available/
Essay: www.ala.org/ala/acrl/acrlissues/future/changingroles.htm
More Conference Presentations Available in Video or Podcast
An increasing number of conferences are making presentation materials available in either video files or as podcasts.
Some of the proceedings of the CERN Workshop on Innovations in Scholarly Communication (OAI5) are available from the conference website as video file attachments. The OAI series of workshops is one of the biggest international meetings of technical repository-developers, library Open Access policy formulators, and the funders and researchers that they serve. The program contains a mix of practical tutorials given by experts in the field, presentations from cutting-edge projects and research, posters from the community, and breakout discussion groups.
Examples of the sessions available include:
State of OAI-PMH / Simeon Warner, Cornell University
OAI Object Re-Use and Exchange / Herbert Van de Sompel, LANL
Author identification: national and disciplinary approaches / Leo Waaijers, SURF; Thomas Krichel, Long Island University
MESUR: metrics from scholarly usage of resources / Johan Bollen, LANL
Links to slides and presentation materials are also available.
Conference program with links to files: indico.cern.ch/conferenceOtherViews.py?view=standard&confId=5710
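The OAI-PMH protocol at the heart of these workshops is likewise a simple HTTP-and-XML affair. As a minimal illustration, the loop below harvests record identifiers from a hypothetical repository using the standard ListRecords verb and resumption tokens; it is a sketch, not a production harvester.

    # Minimal OAI-PMH harvesting loop using only the standard library.
    # The repository base URL is hypothetical.
    import xml.etree.ElementTree as ET
    from urllib.parse import urlencode
    from urllib.request import urlopen

    NS = "{http://www.openarchives.org/OAI/2.0/}"
    BASE = "http://example.org/oai"

    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        with urlopen(BASE + "?" + urlencode(params)) as resp:
            tree = ET.parse(resp)
        for header in tree.iter(NS + "header"):
            print(header.findtext(NS + "identifier"))
        # Follow the resumption token, if any, to page through the feed.
        token = tree.findtext(".//" + NS + "resumptionToken")
        if not token:
            break
        params = {"verb": "ListRecords", "resumptionToken": token}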
As part of the 13th ACRL National Conference, the association hosted a Cyber Zed Shed, where attendees could learn how librarians are using new technologies in innovative ways. Cyber Zed Shed presentations are 20 minutes in length, including 5 minutes for audience Q&A. This year, podcasts of the sessions were also produced. The podcasts are hosted by PALINET as part of its Technology Conversations podcast series; John Houser, PALINET Senior Technology Consultant, conducted a brief five-minute interview with each Cyber Zed Shed presenter during the conference.
Cyber Zed Shed sessions included ones on: Effective Use of Innovative Technologies in Library Instruction Sessions: TurningPoint Software; Staff Learning and Sharing Using Squidoo; Using Firefox Extensions to Reveal Library Holdings; and PennTags – A Social Bookmarking Tool.
ACRL Cyber Zed Shed: www.acrl.org/ala/acrl/acrlevents/baltimore/program07/cyberzedshed.htm
Podcasts: www.palinet.org/lts_techupdates_podcasts.aspx#Tech
Study Examines Impact of Open Archiving on Journal Subscriptions
A recent study of librarian purchasing preferences has revealed the factors that could prompt a librarian to substitute open access materials for journal subscriptions. According to the study, commissioned by the Publishing Research Consortium, the length of the embargo period and the presence of peer review are key determinants in a librarian's decision to maintain or cancel journal subscriptions.
This study raises questions about previous claims that librarians will continue to subscribe to journals, even when some or all of the content is freely available on institutional archives.
The study, conducted by Scholarly Information Strategies in July 2006, surveyed over 400 librarians internationally. In addition to collecting data on their general attitudes to open access, it employed conjoint analysis to identify the relative importance of specific decision-making factors such as price, embargo period, article version, and reliability of access. This approach avoids selection bias and produced data models that show the likely impact on subscription or cancellation behavior under different market scenarios. The model outputs can be highly useful for developing products, understanding price sensitivity and examining other practical issues.
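Conjoint analysis, in brief, asks respondents to rate bundles of attributes and then infers the weight ("part-worth") of each attribute from the stated preferences. The toy sketch below illustrates the idea with ordinary least squares; all profiles and ratings are invented for illustration and are not drawn from the PRC study.

    # Toy conjoint analysis: recover attribute part-worths via least squares.
    # All data are invented for illustration.
    import numpy as np

    # Each row is a journal "profile": [short embargo, peer reviewed, low price],
    # coded 1/0. Ratings are one hypothetical librarian's preference scores.
    profiles = np.array([
        [1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1],
        [0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 0],
    ], dtype=float)
    ratings = np.array([9.0, 7.5, 6.0, 8.0, 4.5, 6.5, 4.0, 2.5])

    # Add an intercept column and solve the least-squares problem.
    X = np.hstack([np.ones((len(profiles), 1)), profiles])
    coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)

    for name, w in zip(["baseline", "short embargo", "peer review", "low price"], coef):
        print(f"{name:14s} part-worth: {w:+.2f}")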
"Because most content is delivered to the research community via libraries, it is critical to understand how librarians make decisions," said Bob Campbell, Chairman, Steering Group of the Publishing Research Consortium and President, Blackwell Publishing. "This study will help publishers better analyze and evaluate how alternative acquisition methods might impact how they sell journal subscriptions to librarians".
The full report of the study, "Self-Archiving and Journal Subscriptions: Co-existence or Competition?" can be accessed on the PRC site at: www.publishingresearch.org.uk
New Open Source Journal on Digital Media in the Humanities
The first issue of Digital Humanities Quarterly (DHQ), an open-access, peer-reviewed scholarly journal covering all aspects of digital media in the humanities, published online by the Alliance of Digital Humanities Organizations, is now available.
DHQ is a community experiment in journal publication: developed and published in XML on an open-source platform, under a Creative Commons license. The journal publishes a wide range of peer-reviewed materials, including scholarly articles, editorials, opinion pieces, and reviews, and encourages submissions that exploit the expressive potential of the digital medium. Information about submissions, reviewing, and the journal's mission is available at the DHQ web site. The experiment is funded by the Alliance of Digital Humanities Organizations (ADHO, www.digitalhumanities.org) and the Association for Computers and the Humanities (ACH, www.ach.org).
The Editorial Board intends to showcase the wide variety of materials being submitted, both from traditional digital humanities domains and from important related areas such as new media studies, digital libraries, and digital art. New pieces will be added to a preview section as soon as they are ready for publication, and a quarterly announcement will notify readers when each new issue is complete. An RSS feed is coming soon, and over the next year the editors will also add features such as commenting, searching, and a variety of ways of interacting with the content.
Journal website: www.digitalhumanities.org/dhq/
Stanford Research Program Aims to Overhaul the Internet
Taking a nothing-is-sacred approach to better meet human communications needs, researchers at Stanford University are launching a new program called the Clean Slate Design for the Internet. "How should the Internet look in 15 years?" asks Nick McKeown, an associate professor of electrical engineering and computer science who is leading the effort. "We should be able to answer that question by saying we created exactly what we need, not just that we patched some more holes, made some new tweaks or came up with some more work-arounds. Let's invent the car instead of giving the same horse better hay".
McKeown is a seasoned expert and entrepreneur who has made substantial contributions to developing the router technology at the core of today's Internet. Project co-director and electrical engineering professor Bernd Girod, meanwhile, has been a pioneer of Internet multimedia delivery, both by contributing directly to standards for digital video encoding and by founding and advising startup companies. Also sharing in this vision of fundamentally new ways to engineer a global communications infrastructure are faculty from three engineering departments and the Graduate School of Business who have signed on to conduct research in the program. Supporting them are industrial affiliates including Cisco Systems, Deutsche Telekom Laboratories and NEC.
The research also closely complements two projects under way at the National Science Foundation. The first, called GENI, for Global Environment for Network Innovations (www.geni.net/), aims to build a nationwide programmable platform for research in network architectures. The second, called FIND, for Future Internet Network Design (find.isi.edu/), aims to develop new Internet architectures.
The point of these efforts is not that the Internet is broken – just that it has become ossified in the face of emerging security threats and novel applications.
McKeown and his colleagues already have identified and begun working on four projects that constitute the initial research direction of the program. Some of these efforts are developing prototypes that may presage how a new Internet could work. "We will measure our success in the long term", McKeown says. "We intend to look back in 15 years' time and see significant impact from our program".
Clean Slate Design for the Internet: http://cleanslate.stanford.edu/
OCLC Announces WorldCat Local
OCLC is piloting a new service that will allow libraries to combine the cooperative power of OCLC member libraries worldwide with the ability to customize WorldCat.org as a solution for local discovery and delivery services. The WorldCat Local pilot builds on WorldCat.org. Through a locally branded interface, the service will provide libraries the ability to search the entire WorldCat database and present results beginning with items most accessible to the patron. These might include collections from the home library, collections shared in a consortium, and open access collections.
WorldCat Local will offer the same feature set as WorldCat.org, such as a single search box, relevancy ranking of search results, result sets that bring multiple versions of a work together under one record, faceted browse capability, citation formatting options, cover art, and additional evaluative content.
The WorldCat Local service interoperates with locally maintained services like circulation, resource sharing, and resolution to full text to create a seamless experience for the end user. WorldCat Local will also include future enhancements to WorldCat.org including more than 30 million article citations, and social networking services. The WorldCat Local pilot will test new functionality that allows users to place requests, gain online access, or request an interlibrary loan within WorldCat.org.
Libraries and groups participating in the WorldCat Local pilot include:
University of Washington
Peninsula Library System in California
Libraries in Illinois, including: University of Illinois – Urbana-Champaign; Glenside Public Library District; CCS (Cooperative Computer Services) Consortium; Lincoln Library; Illinois State Museum; Illinois State Library; Hoopeston Public Library; Northeastern Illinois University; Mattoon Public Library; Champaign Central High School; Williamsville Senior High School
The University of Washington Libraries will be first to pilot WorldCat Local with OCLC in April. Other institutions will follow. OCLC will examine results of the pilot to determine a production schedule. OCLC will test interoperability with systems used by participating pilot libraries, including Innovative Interfaces, SirsiDynix, and ExLibris Voyager.
University of Washington Beta WorldCat Local: http://uwashington.worldcat.org/
OCLC website: www.oclc.org/productworks/worldcatlocal.htm
Commentary from Talis: http://blogs.talis.com/panlibus/archives/2007/04/more_on_worldca.php
Biblioblogosphere commentary: www.technorati.com/search/%22worldcat+local%22
In a similar project, building on the report of the UC Bibliographic Services Task Force (BSTF), which affirmed the importance of improved library systems, the University of California Libraries and OCLC have entered into an agreement to explore second-generation library services. Demonstration projects are projected to be in place by the end of 2007.
UC-OCLC Project Website: http://libraries.universityofcalifornia.edu/about/uc_oclc.html
UC-OCLC Project FAQ: http://libraries.universityofcalifornia.edu/about/uc_oclc_faq.html
Stanford University Launches Copyright Renewal Database
An online database that enables people to search copyright-renewal records for books published in the United States between 1923 and 1963 has been launched by Stanford University Libraries and Academic Information Resources (SULAIR). SULAIR developed the Copyright Renewal Database, dubbed the "Copyright Determinator", with a grant from the Hewlett Foundation. The effort built on Project Gutenberg's transcriptions of the Catalog of Copyright Entries, which was published by the US Copyright Office.
Determining the copyright status of books has become a pressing issue as libraries and businesses develop plans to digitize materials and make works in the public domain widely available. In order to appropriately select books for digitization, these organizations need to determine efficiently and with some certainty the copyright status of each work in a large collection. The Determinator supports this process, bringing all 1923-1963 book-renewal records together in a single database and, more significantly, making searchable renewal records that had previously been distributed only in print.
US works published from 1923 to 1963 are the only group of works for which renewal is now a concern. Copyrights for works published before 1923 have expired, so those works are generally in the public domain, and the 1976 Copyright Act made renewal automatic for works published after 1 January 1964. Determining the renewal status of works published between 1923 and 1963 has been a challenge; the Copyright Office received renewals as early as 1950, but only records received by that office after 1977 are available in electronic form. Renewals received between 1950 and 1977 were announced and distributed only in a semi-annual print publication. For the Determinator database, Stanford has converted the print records to machine-readable form and combined them with the electronic renewal records from the Copyright Office.
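The decision logic the database supports can be summarized in a few lines of code. The sketch below encodes the publication-year rules described above as they stood circa 2007; it is illustrative only, not legal advice.

    # US copyright renewal logic for books, circa 2007. Illustrative only.
    def renewal_concern(pub_year: int) -> str:
        """Classify a US-published book by whether renewal status matters."""
        if pub_year < 1923:
            return "public domain (copyright expired)"
        if pub_year <= 1963:
            # Renewal was required; consult the Copyright Renewal Database.
            return "check renewal records"
        # The 1976 Copyright Act made renewal automatic for works
        # published on or after 1 January 1964.
        return "renewal automatic; still under copyright"

    for year in (1900, 1930, 1950, 1970):
        print(year, "->", renewal_concern(year))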
SULAIR continues to refine the database and welcomes feedback. Contact Mimi Calter at mcalter@stanford.edu with questions or comments.
Copyright Renewal Database: http://collections.stanford.edu/copyrightrenewals/bin/page?forward=home
EU Digital Library Expert Group Releases Advisory Report on Copyright Issues
The EU's High Level Expert Group on Digital Libraries – which includes stakeholders from the British Library, the Deutsche Nationalbibliothek, the Federation of European Publishers and Google – has presented an advisory report on copyright issues to the European Commission. In addition, the group has offered recommendations to ensure more open access to scientific research and to improve public–private cooperation. The work of the High Level Group is part of the European Commission's efforts to make Europe's rich cultural and scientific heritage available online. For this purpose, the group advises the Commission on issues regarding digitization, online accessibility and digital preservation of cultural material.
The report suggests a voluntary license scheme to deal with copyright issues hampering the progress of library digitization efforts, and includes a model license that libraries could use to scan orphan works, i.e. works where the copyright owner is not identifiable, and works that are out of print.
Full text of the report: http://ec.europa.eu/information_society/newsroom/cf/document.cfm?action=display&doc_id=295
Model license: http://ec.europa.eu/information_society/newsroom/cf/document.cfm?action=display&doc_id=296
DigitalPreservationEurope (DPE) Announces Research Exchange Programme
Research and practice in digital preservation are patchy, fragmented, and disconnected. Communication between research groups is limited and does not always engage with the needs of practitioners. Exchanges of professional practitioners and researchers provide a valuable way to understand and overcome these barriers: they can facilitate knowledge exchange, capacity building, and innovation.
DigitalPreservationEurope, building on the earlier successful work of ERPANET, works to improve coordination, cooperation and consistency in current digital preservation and curation activities to secure the longevity of digital assets and heritage. DPE, supported by the European Union with funding under the Sixth Framework Programme, recognizes the value of exchange programmes as a mechanism to establish cross-institutional synergies.
DPE hopes that the planned 25 DPE exchanges will propagate knowledge, capacity, and innovation, as well as foster better cooperation among research institutions and industrial partners working to meet pressing challenges in digital preservation. DPEX aims to encourage innovative practice through research collaboration and to build bridges between practitioners and researchers.
Participants in the DPEX programme benefit from contact with experienced preservation professionals, engagement in environments where preservation challenges are encountered on a daily basis, and/or contact with renowned research labs and industrial partners in the area of digital preservation in Europe. DPEX will allow participants to look beyond their specific professional environment.
Exchanges should typically last for four weeks, and the DPEX support of up to 3,500 euros per exchange can be used to partially meet the costs of accommodation, subsistence, and travel. DPEX support cannot, however, be used to meet salary costs, and currently exchanges must involve participants from, and institutions located in, EU Member States.
The first application deadline is 1 June 2007, with selected exchanges announced on 1 July 2007. There will be three further deadlines for applications under what is expected to be the first phase of the DPEX Programme: October 2007, January 2008, and June 2008. The application process can be completed online. Completed applications will be reviewed by an independent review committee of six researchers and practitioners, chaired by Birte Christensen-Dalsgaard, Statsbiblioteket (Denmark), and Andreas Rauber, Vienna University of Technology (Austria).
Full programme details and application: www.digitalpreservationeurope.eu/exchange/