Sunday, December 23, 2007

Establishing a Research Agenda for Scholarly Communication. 5

5. Value and Value Metrics in Scholarly Communications
Determining and measuring the effectiveness of and the value derived from scholarly communication is challenging and often subjective. John Houghton’s work on scholarly communication in Australia seeks to measure the economic and social returns on public-sector investment in research and development, and how those returns might rise with open access to published research findings. Analyses of citations within the published literature yield metrics such as the h-index, the Eigenfactor, and the heavily used impact factor. Extant measures may suffer from being tightly coupled to traditional processes while also inhibiting the application of other measures of value.
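As an illustration of how mechanical these citation-based metrics are, the h-index can be computed in a few lines; the citation counts below are invented for illustration, not drawn from any real author's record.

```python
def h_index(citation_counts):
    """An author's h-index: the largest h such that h of the
    author's papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Invented counts for six papers: h = 3, because the three most-cited
# papers each have at least 3 citations, but no 4 papers have >= 4.
print(h_index([25, 8, 5, 3, 3, 1]))  # 3
```

The simplicity of the calculation is part of the point: the metric encodes a single assumption about value (citation counts) and nothing else.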

In the new digital environment, activities other than traditional or formal publication should be valued in the reward structure for scholarship. To this end, the Modern Language Association provides an example of examining current standards and emerging trends in publication requirements for tenure and promotion.

Illustrative Challenges
Citation analysis relies on a 50-year-old assumption that the number of citations represents value, but in today’s environment this assumption is limiting. Other metrics could reflect the scholarly significance of new discoveries as they are developed and communicated. Effective metrics must be based on resources and practices that truly advance scholarly research. For example, it could be argued that journal articles have become totems to accrue and count for tenure and promotion but are not unique in their ability to advance scholarship and may be losing some effectiveness for this purpose.

"Open notebook science" and “open data” are examples of new research and communication practices that might be advancing scholarly research as much or more than what is possible through scholarly publication. The relationship between the reward system and indicators of the progress of knowledge is more tenuous. What resources and practices truly advance scholarly research? Even where robust indicators of the progress of knowledge exist, their relationship with the current reward system may be tenuous. How can the value and impact of communication practices be assessed and documented? How could these assessments be assimilated into the reward system?

Libraries should adopt a stronger role that more directly advances scholarly research beyond satisfying tenure and promotion practices. A starting point is work reported by King and Harley regarding formal vs. informal scholarly communications. Given that scholars are finding new ways to register, seek comment, refine, evaluate, and certify their work, how can those processes be tracked, recorded, measured, and reported as part of the value-chain in scholarship?

Should informal communications be captured and preserved by libraries, and if so, how? A useful analogy is to consider that presentations, preprints, letters, and other informal communications are the conversations of science, while publications are the minutes. In some disciplines, journals are becoming less important to scholars than their professional meetings and informal networks where their accomplishments are recognized. How can librarians better characterize and measure the contributions of these informal communications, and thereby make wise decisions about organized access to them?

Libraries also need to determine the value of their own services as contributions to the communication of scholarship. Which services, such as institutional repositories, should be evaluated, and what tools and measures exist for this purpose?

Research Possibilities
Identify and evaluate the range of metrics currently used to measure the value and impact of scholarly publishing. Citation analysis and its derivatives are well established, but other measures are being developed, including those that combine usage/readership and citation, such as those from the MESUR project at Los Alamos National Laboratory and UKSG (the UK Serials Group). A literature review collecting these efforts would provide a central reference point.
Explore additional measurements that incorporate new kinds of indicators of value and cover a broader range of communication activities. New measures should address increased research efficiency and productivity, variations between disciplines, and advancement of the process of research.

Other metrics may:
° characterize the value of Open Data and Open Notebook Science (disseminating source data, research methods, and negative experimental or clinical results) in advancing research and knowledge.
° correspond to technology transfer or other uses of new knowledge beyond generating further research, for instance, number of views or number of patents.
° show how informal communications are advancing the process of research.

Explore the relative value, importance, and significance of traditional journal and book publication compared to newer, informal forms of scholarly communication for a sample of representative scholars. This could build on studies by the CIC and Estabrook indicating that, in the humanities, there is some acceptance of digital publications and new forms, while the scholarly monograph remains the standard for promotion and tenure.

Establishing a Research Agenda for Scholarly Communication: A Call for Community Engagement


By ACRL Scholarly Communications Committee

Executive Summary:
The system of scholarly communication – which allows research results and scholarship to be registered, evaluated for quality, disseminated, and preserved – is rapidly evolving. Academic libraries and their parent institutions are adopting strategies, making plans, and taking action to respond to the changing environment and to influence its development. Believing that meaningful research can inform and assist the entire academic community in influencing and managing this evolution, the Association of College and Research Libraries (ACRL) convened an invitational meeting on July 21, 2007, to collectively brainstorm the evidence needed to inform strategic planning for scholarly communication programs.

Influencing a system as complex and dynamic as scholarly communication requires broad and deep understanding. The issues for investigation that emerged at the meeting range from cyberinfrastructure to changing academic organizational models to public policy. This report thematically summarizes and synthesizes the meeting’s rich discussion, framing eight essential research challenges and opportunities. We invite those engaged in creating, supporting, and distributing scholarship to comment and extend the issues and possible research initiatives. Without substantive comment from librarians and their partners, the goal of outlining a community research agenda cannot be considered complete.


Themes and Research Opportunities:
Participants identified eight themes characterizing the changes transforming scholarly communications. Developing a deeper understanding of these challenges through research can enable academic stakeholders to influence and construct scholarly communication systems that optimally support the academic enterprise and the communities it serves.

Theme 1: The Impact and Implications of Cyberinfrastructure

Theme 2: Changing Organizational Models

Theme 3: How Scholars Work

Theme 4: Authorship and Scholarly Publishing

Theme 5: Value and Value Metrics of Scholarly Communications

Theme 6: Adoption of Successful Innovations

Theme 7: Preservation of Critical Materials

Theme 8: Public Policy and Legal Matters

Conclusion and Invitation

Appendix A: Attendee List

Appendix B: ACRL Scholarly Communications Committee Roster 2007-2008


Coverage and Commentary



Released November 2007

Saturday, December 22, 2007

Show Me The Data! Show Me The Data! Show Me The Data!

Show Me The Data
Mike Rossner, Heather Van Epps, and Emma Hill
The Journal of Cell Biology, Vol. 179, No. 6, 1091-1092

The integrity of data, and transparency about their acquisition, are vital to science. The impact factor data that are gathered and sold by Thomson Scientific (formerly the Institute for Scientific Information, or ISI) have a strong influence on the scientific community, affecting decisions on where to publish, whom to promote or hire, the success of grant applications, and even salary bonuses. Yet members of the community seem to have little understanding of how impact factors are determined, and, to our knowledge, no one has independently audited the underlying data to validate their reliability.

Calculations and negotiations
The impact factor for a journal in a particular year is declared to be a measure of the average number of times a paper published in the previous two years was cited during the year in question. For example, the 2006 impact factor is the average number of times a paper published in 2004 or 2005 was cited in 2006. There are, however, some quirks about impact factor calculations that have been pointed out by others, but which we think are worth reiterating here:
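The two-year window described above reduces to a simple ratio; the figures in this sketch are invented, not drawn from any real journal's data.

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """E.g., the 2006 impact factor: citations made in 2006 to items
    published in 2004-05, divided by citable items published in 2004-05."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Invented example: 6,000 citations in 2006 to a journal's 2004-05
# output of 1,200 citable items yields an impact factor of 5.0.
print(impact_factor(6000, 1200))  # 5.0
```

Note that everything hinges on what is counted in the numerator and denominator, which is exactly where the quirks below arise.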


3. Some publishers negotiate with Thomson Scientific to change these designations in their favor. The specifics of these negotiations are not available to the public, but one can't help but wonder what has occurred when a journal experiences a sudden jump in impact factor. For example, Current Biology had an impact factor of 7.00 in 2002 and 11.91 in 2003. [snip]

4. Citations to retracted articles are counted in the impact factor calculation. [snip]

5. Because the impact factor calculation is a mean, it can be badly skewed by a "blockbuster" paper. [snip]

When we asked Thomson Scientific if they would consider providing a median calculation in addition to the mean they already publish, they replied, "It's an interesting suggestion...The median...would typically be much lower than the mean. There are other statistical measures to describe the nature of the citation frequency distribution skewness, but the median is probably not the right choice." Perhaps so, but it can't hurt to provide the community with measures other than the mean, which, by Thomson Scientific's own admission, is a poor reflection of the average number of citations gleaned by most papers.
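The skew the editors describe is easy to demonstrate with invented citation counts in which a single blockbuster paper dominates:

```python
import statistics

# Invented citation counts for ten papers; one blockbuster dominates.
citations = [500, 12, 9, 7, 5, 4, 3, 2, 1, 0]

print(statistics.mean(citations))    # 54.3 -- inflated by the outlier
print(statistics.median(citations))  # 4.5  -- closer to a typical paper
```

Nine of the ten papers here are cited far fewer than 54 times, which is exactly the editors' point about the mean as "a poor reflection of the average number of citations gleaned by most papers."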

6. There are ways of playing the impact factor game, known very well by all journal editors, but played by only some of them. For example, review articles typically garner many citations, as do genome or other "data-heavy" articles [snip]. When asked if they would be willing to provide a calculation for primary research papers only, Thomson Scientific did not respond.

As journal editors, data integrity means that data presented to the public accurately reflect what was actually observed. [snip]

Thomson Scientific makes its data for individual journals available for purchase. With the aim of dissecting the data to determine which topics were being highly cited and which were not, we decided to buy the data for our three journals (The Journal of Experimental Medicine, The Journal of Cell Biology, and The Journal of General Physiology) and for some of our direct competitor journals. Our intention was not to question the integrity of their data.

When we examined the data in the Thomson Scientific database, two things quickly became evident: first, there were numerous incorrect article-type designations. Many articles that we consider "front matter" were included in the denominator. This was true for all the journals we examined. Second, the numbers did not add up. The total number of citations for each journal was substantially lower than the number published on the Thomson Scientific Journal Citation Reports (JCR) website ... . The difference in citation numbers was as high as 19% for a given journal, and the impact factor rankings of several journals were affected when the calculation was done using the purchased data [snip]

Your database or mine?
When queried about the discrepancy, Thomson Scientific explained that they have two separate databases—one for their "Research Group" and one used for the published impact factors (the JCR). [snip]

When we requested the database used to calculate the published impact factors (i.e., including the erroneous records), Thomson Scientific sent us a second database. But these data still did not match the published impact factor data. This database appeared to have been assembled in an ad hoc manner to create a facsimile of the published data that might appease us. It did not.

Opaque data
It became clear that Thomson Scientific could not or (for some as yet unexplained reason) would not sell us the data used to calculate their published impact factor. If an author is unable to produce original data to verify a figure in one of our papers, we revoke the acceptance of the paper. We hope this account will convince some scientists and funding organizations to revoke their acceptance of impact factors as an accurate representation of the quality—or impact—of a paper published in a given journal.

Just as scientists would not accept the findings in a scientific paper without seeing the primary data, so should they not rely on Thomson Scientific's impact factor, which is based on hidden data. As more publication and citation data become available to the public through services like PubMed, PubMed Central, and Google Scholar®, we hope that people will begin to develop their own metrics for assessing scientific quality rather than rely on an ill-defined and manifestly unscientific number.



Monday, December 17, 2007

The Insight Journal: An Open Access, Dynamic Publishing Environment

The Insight Journal is an Open Access on-line publication covering the domain of medical image processing. The unique characteristics of the Insight Journal include:
  • Open access to articles, data, code, and reviews: All submissions are available for free. The Insight Journal advances the idea that the results of scientific research should be made available to the public. Open access enables others to more fully understand the research and to build upon it more easily, allowing the field of medical image analysis to advance more rapidly.

  • Open peer-review that invites discussion between reviewers and authors: The review process is made public and performed online. Every reader is a reviewer and may post comments. Reviewers' comments are posted with the submissions. Authors may respond to reviewers' comments. Authors may submit revisions to the papers in response to reviewers' comments. [snip]

  • Emphasis on reproducible science via automated code compilation and testing: Code submitted to the IJ is verified by an automatic system. Authors are encouraged to post their papers along with the source code, the input data, and the expected output data needed to replicate the results presented in the paper. [snip]

  • Support for continuous revision of articles, code, and reviews: The IJ is a dynamic publication environment. All submissions can be continually reviewed, tested, revised, and extended. [snip]

  • The IJ is volunteer supported: We need your help in the form of submissions, reviews, and ideas! The IJ is a novel concept in the field of medical image analysis, and we need your time, intelligence, and insight to make it work.


Sunday, December 16, 2007

Economics: A Public Peer Reviewed e-Journal

Economics is a new type of academic journal in economics. By involving a large research community in an innovative public peer review process, Economics aims to provide fast access to top-quality papers. Modern communication technologies are used to assemble, for every research issue, the best virtual team from a network of highly motivated researchers from all over the world. Thus, publishing is seen as a cooperative enterprise among authors, editors, referees, and readers. Economics offers open access to all readers and takes the form of an e-journal, i.e. submission, evaluation, and publication are electronic. Economics embodies the following principles:
  • Open Access: Following the principle that knowledge is a public good, all readers have open access to reading and downloading papers. The simple and free access ensures maximum readership and high citation records for published papers.
  • Open Assessment: The traditional peer review process is substantially supplemented by a public peer review process in which the community of active researchers from all over the world has a hand in the evaluation process. Due to interactive peer review and public discussion, Economics provides fast and efficient quality assurance. Within a two-stage publication process, much of the research evaluation takes place after rather than before an article is published. [snip]
  • Submitted papers that have been identified as sufficiently promising for a referee process are made available on the journal’s homepage within three weeks. Thus, the time for new ideas to find their way into the scientific community is substantially reduced.
  • Add-on Services: To foster scientific exchange, Economics embeds forums on special themes, where authors and readers can communicate and possibly conduct joint research. In addition, interested readers can take advantage of alert services announcing new papers in their fields. As far as possible, Economics also provides hyperlinks to the referenced literature.

Style and Contents: Economics aims to cover all the main areas of economics. Inevitably, articles in different areas of economics are addressed to different audiences. Many of the articles submitted to the journal are standard technical pieces, addressed to a purely academic audience. Others concern economic policy and thus are addressed both to economists and to policy makers with some economic background. Yet others are surveys and overviews, often interdisciplinary, addressed to a non-technical audience. To attract this variety of contributions, Economics will contain the following areas, in addition to the standard contributions for a purely academic audience:

  • Policy Papers
    Economics Policy Papers are concerned with the economic analysis of current policy issues. The analysis is rigorous from both theoretical and empirical perspectives, but the articles are written in non-technical language appropriate for a broad spectrum of economic decision makers and participants in the economic policy discussion. The rigorous analysis is contained in the appendix of each article.
  • Surveys and Overviews
    Surveys and Overviews aim to integrate the analysis and lessons from various fields of economics to provide new insights that are not accessible from any particular sub-discipline of economics. The contributions may include survey and review articles, provided that the broad perspective is maintained. They are addressed to a general audience interested in economic issues.
  • New Frontier Papers
    New Frontier Papers contain articles that make potentially fundamental contributions to economic thinking. The papers are meant to change the way we conceptualize economic phenomena or transform the way the profession thinks about important economic issues. Contributions will be reviewed by renowned, top-flight economists, including Nobel prize winners.


Review Process
Economics aims to allow the research evaluation process to be market-driven. The traditional peer review process is substantially supplemented by a "public" peer review process in which the community of active researchers from all over the world has a hand in the evaluation process. This is realized within a two-stage publication process.

First Stage: Publication of the submitted version as Economics Discussion Paper

Access to public peer review (less than three weeks)

The author submits a paper, if possible with hyperlinks to the referenced literature. [snip].

The managing editors check formal and professional standards and pass the paper to an associate editor in the relevant research field ... [snip].

The associate editor decides whether to accept the paper for the further peer review process by answering two questions: (i) does the paper show promise of making a significant contribution, and (ii) does it meet basic scientific standards? [snip]

If these questions are answered in the affirmative, the paper is published on the platform of the Economics Discussion Papers.

Open assessment of the submitted paper (eight weeks)

The associate editor appoints at least two referees to review the paper; their comments are uploaded to the discussion platform within six weeks. The referees’ comments may be anonymous, but referees are encouraged to allow their reports to be attributed. Referees are asked to provide short reports that focus on two questions: (i) Is the contribution of the paper potentially significant? (ii) Is the analysis correct? [snip].

While the referees are reviewing the paper, the corresponding public discussion platform is open for eight weeks (the “discussion period”). On this platform, registered readers may review the paper by uploading comments (anonymously or with the name of the contributor published on the web). In addition, the referees’ comments are posted on the Web site. Authors are asked to respond to these comments. All comments and responses are published and archived in the Economics Discussion Paper section.

All registered readers can rate the comments using the criteria "extremely helpful," "helpful," "not helpful," and "unacceptable" (the results of these ratings are not made public). The editorial board of Economics reserves the right to delete unacceptable comments and exclude their commentators from the community of registered readers. [snip]

During the discussion period, the editorial staff sends the paper to potentially interested researchers (invited readers) to encourage them to post a comment in the discussion platform. [snip]

The author is free to respond to all comments.

Second Stage: Publication of the final version in Economics

Peer review completion

Based on the referee reports, the author replies, and the comments made by registered readers, the associate editor decides whether the paper is accepted or rejected for publication in Economics.


If accepted, the discussion paper or its revised version is published in Economics. The Economics Discussion Paper with all comments is permanently archived and remains accessible to the public for documenting the paper’s history.

Market-based evaluation of published articles

Readers are asked to rate articles in Economics on a scale from one to five ... [snip]

Readers are free to upload comments.

Authors are free to respond to all comments.

Authors are free to upload revised versions at any time.

Statistics on ratings, downloads and citations are collected to compile rankings of papers in the various fields of economics. Based on the rankings, prizes for the most outstanding papers in special fields are awarded each year.
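A minimal sketch of how ratings, downloads, and citations might be combined into a single ranking; the weights and normalization caps here are entirely hypothetical, not the journal's actual method.

```python
def composite_score(avg_rating, downloads, citations,
                    w_rating=0.5, w_downloads=0.2, w_citations=0.3):
    """Blend a 1-5 reader rating with capped, normalized usage counts.
    Weights and caps are invented for illustration."""
    return (w_rating * (avg_rating / 5)
            + w_downloads * min(downloads / 1000, 1.0)
            + w_citations * min(citations / 100, 1.0))

# Two hypothetical papers: B's heavier usage and citation counts
# outweigh A's higher reader rating under these particular weights.
scores = {
    "paper A": composite_score(4.6, 850, 40),
    "paper B": composite_score(3.9, 1200, 75),
}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['paper B', 'paper A']
```

As the example shows, any such ranking is sensitive to the chosen weights, which is why transparency about the formula matters as much as the statistics themselves.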


Saturday, December 15, 2007

ELPUB2008: Open Scholarship: Authority, Community and Sustainability in the Age of Web 2.0

The International Conference on Electronic Publishing is entering its twelfth year and ELPUB2008 marks the first time the conference will be held in North America. The Knowledge Media Design Institute at the University of Toronto is pleased to serve as the host [site] ... [snip].

Call For Papers
Scholarly communications, in particular scholarly publications, are undergoing tremendous changes. Researchers, universities, funding bodies, research libraries, and publishers are responding in different ways, ranging from active experimentation and adaptation to strong resistance.

The ELPUB 2008 conference will focus on key issues in the future of scholarly communications resulting from the intersection of semantic web technologies, the development of cyberinfrastructure for the humanities and the sciences, and new dissemination channels and business models. We welcome a wide variety of papers from members of these communities whose research and experiments are transforming the nature of scholarly communications.

Topics include but are not restricted to:
  • New Publishing models, tools, services and roles
  • New scholarly constructs and discourse methods
  • Innovative business models for scholarly publishing
  • Multilingual and multimodal interfaces
  • Services and technology for specific user communities, media, and content
  • Content search, analysis and retrieval
  • Interoperability, scalability and middleware infrastructure to facilitate awareness and discovery
  • Personalisation technologies (e.g. social tagging, folksonomies, RSS, microformats)
  • Metadata creation, usage and interoperability
  • Semantic web issues
  • Data mining, text harvesting, and dynamic formatting
  • User generated content and its relation to publisher's content
  • Usage and citation impact
  • Security, privacy and copyright issues
  • Digital preservation, content authentication
  • Recommendations, guidelines, interoperability standards

Author Guidelines
Contributions are invited for the following categories:

  • Single papers (abstract minimum of 1,000 and maximum of 1,500 words)
  • Tutorial (abstract minimum of 500 and maximum of 1,500 words)
  • Workshop (abstract maximum of 1,000 words)
  • Poster (abstract maximum of 500 words)
  • Demonstration (abstract maximum of 500 words)

Abstracts must be submitted following the instructions on the conference website.

Key Dates

January 20, 2008: Deadline for submission of abstracts (in all categories)

February 28, 2008: Authors will be notified of the acceptance of submitted papers and workshop proposals.

April 11, 2008: Final papers must be received.

See Website For Detailed Author Instructions

Posters (A1 format) and demonstration materials should be brought by their authors to the conference.

Only abstracts of these contributions will be published in the conference proceedings. Information on requirements for workshop and tutorial proposals will be posted on the website soon.

All submissions are subject to double-blind peer review and acceptance by the international ELPUB Programme Committee.

Accepted full papers will be published in the conference proceedings. Printed proceedings are distributed during the conference.

Electronic versions of the contributions will be archived.


MediaCommons: A Digital Scholarly Network

MediaCommons, a project-in-development with support from the Institute for the Future of the Book (part of the Annenberg Center for Communication at USC) and the MacArthur Foundation, will be a network in which scholars, students, and other interested members of the public can help to shift the focus of scholarship back to the circulation of discourse. This network will be community-driven, responding flexibly to the needs and desires of its users. It will also be multi-nodal, providing access to a wide range of intellectual writing and media production, including forms such as blogs, wikis, and journals, as well as digitally networked scholarly monographs. Larger-scale publishing projects will be developed with an editorial board that will also function as stewards of the larger network.

What you see here now is simply an early stage along the way toward that network. We’re using the site now to test out some possible future features, such as In Media Res, and to solicit proposals for our initial large-scale projects. But we’re also using the site’s blog to plan in public, to generate conversation about what MediaCommons ought to become.

Our hope is that the interpenetration of the different forms of discourse will not simply shift the locus of publishing from print to screen, but will actually transform what it means to “publish,” allowing the author, the publisher, and the reader all to make the process of such discourse just as visible as its product. In so doing, new communities will be able to get involved in academic discourse, and new processes and products will emerge, leading to new forms of digital scholarship and pedagogy. For this reason, we want our readers and our writers intimately involved in MediaCommons not just after its fuller realization, but in its preliminary stages of development. Get involved in the various conversations around the blog, videos, and project proposals. Submit a video yourself, or better yet, a proposal for a larger publishing project. Help us set the agenda for the future of publishing in media studies.

More information about the genesis of MediaCommons is available ... :

Kathleen Fitzpatrick and Avi Santo, Editors

" ... The Times They Are A-Changin'"

In 2001, CERN, the European Organisation for Nuclear Research, in Geneva, Switzerland, served as the venue for the first Workshop on the Open Archives Initiative (OAI). Focused on OAI and 'Peer Review Journals in Europe', the purpose of this workshop was to

mobilise a group of European scientists and librarians who want to play an active role in organising a self-managed system for electronic scholarly communication as a means to address the serials crisis. Such a system should be compliant to the technical standards proposed by the Open Archives Initiative (OAI).


Two years after the workshop, a policy briefing of the European Science Foundation was published. The briefing not only profiled the variety of issues relating to Open Access and the OAI, but also summarized the themes of the first OAI workshop and of the second, held at CERN in mid-October 2002. In addition, the briefing included consensus recommendations from each workshop. While the "participants [of the first workshop] were unanimous in their belief that the certification of scholarly work remains a fundamental part of a system for scholarly communication," they also "believed that the electronic environment allows for novel approaches to accord a stamp of quality to scholarly works." The suggested 'new metrics' that could be extracted from a fully electronic communication system include the level of discussion generated by a paper submitted to a publication system with open peer review and peer commentary features; automated citation indexing beyond the standard Institute for Scientific Information (ISI) print-focused service; and access statistics.


JoVE: Journal of Visualized Experiments

JoVE: What is it?

Journal of Visualized Experiments (JoVE) is an online video-publication for biological research.


JoVE: Addressing Complexity

State-of-the-art life science research has reached a level of complexity that is matched only by the complexity of the living species under investigation. At this stage, an essential requirement to advance basic research and to aid bench-to-bedside translation is the ability to rapidly transfer knowledge within the research community and to the general public. In contrast to the rapid advancement of scientific research itself, scientific communication still relies on traditional print journals, which cannot capture the intricacy of contemporary life science research.

JoVE: Addressing the “Bottleneck” of Reproducibility and Transparency

As every practicing life science researcher knows, it may take weeks or sometimes months and years to learn and apply new experimental techniques. It has become especially difficult to reproduce newly published studies describing the most advanced state-of-the-art techniques. Thus, much time in life science research is devoted to the learning of laboratory techniques and procedures. [snip]

JoVE: Rapid Knowledge Transfer

With participation of scientists from leading research institutions, the Journal of Visualized Experiments (JoVE) was established as a new, open access tool in life science publication and communication. We utilize a video-based approach to scientific publishing to fully capture all dimensions of life science research. [snip]

JoVE: Integrating Time

While promoting the efficiency and performance of life science research, JoVE opens a new frontier in scientific publication. The temporal component of an experiment, the change over time that is integral to many life science experiments, can now be visualized. For the first time, JoVE allows you to publish your experiments in all their dimensions, overcoming the inherent limitations of traditional print journals and adding a whole new quality to the communication of your experimental work and research results.

JoVE: Participate

JoVE is both a scientific journal and a novel tool to advance life science research; we invite you to actively participate and contribute by submitting a video-article visualizing your experiments to JoVE.




Working with the Facebook Generation: Engaging Student Views on Access to Scholarship

CHICAGO & WASHINGTON, DC - The 16th SPARC-ACRL Forum, "Working with the Facebook Generation: Engaging Student Views on Access to Scholarship," will be held at the ALA Midwinter Meeting in Philadelphia on January 12. Co-sponsored by SPARC (Scholarly Publishing and Academic Resources Coalition) and ACRL (Association of College and Research Libraries), the semiannual forum focuses on emerging issues in scholarly communication.

Appreciating, if not understanding, student perspectives on information sharing and access to research will advance library outreach programs. Librarians and students have the power to build valuable bridges of collaboration and guide the larger academic community to reshape scholarly communication.

Tech-savvy students, who live and breathe information sharing, are critical to changing the way scholarly communication is conducted. Not bound by traditional modes of research exchange, students are using all the technologies at their disposal to engage in scholarly discourse - including blogs, wikis and tagging tools. What will they do next? How do they view the future of scholarly exchange?

At the next SPARC-ACRL Forum, graduate students from an array of disciplines, institutions and engaged perspectives will share their approaches to scholarly communication issues. Joined by librarians whose scholarly communication programs have explicit student-focused components, they will explore the importance of outreach and the potential impact of students as current and future key stakeholders. The forum will also showcase the winners of the first Sparky Award for the best short videos on the value of information sharing.

The SPARC-ACRL Forum will be held on Saturday, Jan. 12, 2008, 4:00-6:00 PM, in the Pennsylvania Convention Center, Room 204 A/B.

The event will be also available via SPARC Podcast at a later date.

Speakers include:
  • Andre Brown, PhD student in Physics and Astronomy at the University of Pennsylvania and co-blogger for Biocurious
  • Kimberly Douglas, University Librarian, California Institute of Technology
  • Nelson Pavlosky, Law student at George Mason University and co-founder of Students for Freeculture
  • Stephanie Wang, graduate student in Economics at Princeton University and former National Coordinating Committee member, Universities Allied for Essential Medicines

The forum is followed by the ACRL Scholarly Communication Discussion Group, where there will be an open discussion of key issues that surface at the forum. The Discussion Group will be held Sunday, Jan. 13, from 4:00 to 6:00 PM at the Marriott Philadelphia, room Franklin 11.

For more information, visit the SPARC Web site at

SPARC (Scholarly Publishing and Academic Resources Coalition), with SPARC Europe and SPARC Japan, is an international alliance of more than 800 academic and research libraries working to create a more open system of scholarly communication. SPARC's advocacy, educational and publisher partnership programs encourage expanded dissemination of research. SPARC is on the Web at

ACRL is a division of the American Library Association (ALA), representing more than 13,500 academic and research librarians and interested individuals. ACRL is the only individual membership organization in North America that develops programs, products and services to meet the unique needs of academic and research librarians. Its initiatives enable the higher education community to understand the role that academic libraries play in the teaching, learning and research environments.


Friday, December 14, 2007

Horizon Report 2007: New Scholarship and Emerging Forms of Publication

The New Media Consortium's [NMC] Emerging Technologies Initiative focuses on expanding the boundaries of teaching, learning and creative expression by creatively applying new tools in new contexts. The Horizon Project, the centerpiece of this initiative, charts the landscape of emerging technologies and produces the NMC's annual Horizon Report.

The New Scholarship and Emerging Forms of Publication

The time-honored activities of academic research and scholarly activity have benefited from the explosion of access to research materials and the ability to collaborate at a distance. At the same time, the processes of research, review, publication, and tenure are challenged by the same trends. The proliferation of audience-generated content combined with open-access content models is changing the way we think about scholarship and publication—and the way these activities are conducted.

Both the process and shape of scholarship are changing. Nontraditional forms are emerging that call for new ways of evaluating and disseminating work. Increasingly, scholars are beginning to employ methods unavailable to their counterparts of several years ago, including prepublication releases of their work, distribution through nontraditional channels, dynamic visualization of data and results, and new ways to conduct peer reviews using online collaboration. These new approaches present a new challenge: to protect the integrity of scholarly activity while taking advantage of the opportunity for increased creativity and collaboration.

New forms of scholarship, including fresh models of publication and nontraditional scholarly products, are evolving along with the changing process. Some of these forms are very common—blogs and video clips, for instance—but academia has been slow to recognize and accept them. [snip] Proponents of these new forms argue that they serve a different purpose than traditional writing and research—a purpose that improves, rather than runs counter to, other kinds of scholarly work. [snip]


While significant challenges remain before these emerging forms of scholarship are widely accepted, there are many examples of work that is expanding the boundaries of what we have traditionally thought of as scholarship. In the coming years, as more scholars and researchers make original and worthwhile contributions to their fields using these new forms, methods for evaluating and recognizing those contributions will be developed, and we expect to see them become an accepted form of academic work.

Relevance for Teaching, Learning, and Creative Expression
The real potential of this trend for education is to expand the audience for scholarship and research— not only among those at scholarly institutions, but among the public as well.


Increasingly, we are seeing other technologies being applied to the purposes of collaboration as well. Writers use shared editing tools like Google Docs and wikis and create online books that accept reader comments at the paragraph level, opening up the process of writing itself to collaboration. [snip]


The new scholarship also acknowledges certain complications of traditional methods of publication that arise from the rapid rate of change and discovery of new information in many fields. Emerging forms of the book, including prepublication research and drafts shared online, the incorporation of data visualization tools into online publications, all forms of customized publishing, and the e-book, are ironically causing us to regard the traditional book as an impermanent medium. [snip] A response to that trend is that more and more books are often accompanied by a website, wiki, or other online resource that can communicate new insights as they arise and create and sustain a living community around the concepts entombed in the published material.

A sampling of applications for the new scholarship and emerging forms of publication across disciplines includes the following:
  • Include—and learn from—new voices. Both books and their authors may benefit from the comments of interested students, colleagues, and members of the public, who in turn will benefit from hearing scholars narrate their process. When his 1999 book Code and Other Laws of Cyberspace needed an update, author Lawrence Lessig set up a wiki and invited the public to help him write the second edition, Codev2, now available in both print and electronic formats.
  • [snip]
  • Illustrate and educate using a variety of media. Graphs, photographs, video and audio clips can all be included in an online paper or book. Online textbooks for computer science, history and politics, and other disciplines are available that incorporate illustrations both static and animated, video and audio commentary by experts in the field, and graphs that respond to user input. Combined with new methods of data visualization, mapping, graphing, and charting, online books are becoming powerful interactive tools for learning.

Examples of the New Scholarship and Emerging Forms of Publication
GAM3R 7H30RY by McKenzie Wark ; The Django Book by Adrian Holovaty and Jacob Kaplan-Moss. These two books are online in prepublication format, where readers can add comments that will inform the authors’ work.

N I N E S - N I N E S is a consortium of scholars promoting and exploring new forms of scholarship.

Poetess Archive - The Poetess Archive provides an extensive bibliography and some full texts. Over the next year, the database will be linked to a visualization tool. The accompanying Poetess Archive Journal is an evolving online scholarly peer-reviewed publication that will take advantage of innovative technologies to push the boundaries of research and publication.

Public Library of Science - The Public Library of Science is committed to making the world’s scientific and medical literature a freely available public resource via a new process of peer-reviewed publishing.

Texas Politics - An online textbook developed at the University of Texas at Austin, Texas Politics includes audio and video, commentary, a series of live speakers, and other media as well as traditional text.


Using Wiki in Education - A wiki *and* a published book, Using Wiki in Education explores the ways online publishing can extend the life and usefulness of a scholarly work.

Further Reading
Book 2.0 (Jeffrey R. Young, The Chronicle of Higher Education, July 28, 2006) Reviews some ways educators are exploring new modes of electronic publishing.

The Book as Place (Paula Berinstein, Searcher, November/ December 2006) Describes the networked book as a destination and a center for community as well as reading material: “The book is now a place as well as a thing and you can find its location mapped in cyberspace.”

The Future of Books (Jason Epstein, Technology Review, January 2005) Reviews the writer’s experiences in the world of traditional publishing and looks ahead to the future of publishing.

Giving it Away (Cory Doctorow, Forbes, December 1, 2006) A technology writer explains the value of publishing electronic, free versions of books.

The Institute for the Future of the Book (Retrieved December 20, 2006) Promotes the next generation of the book with conversation, research, and even software.



Wiki

PDF (Entire Report)

Thursday, December 13, 2007

Quality Control in Scholarly Publishing on the Web


When the Web was young, a common complaint was that it was full of junk. Today a marvelous assortment of high-quality information is available on line, often with open access. As a recent JSTOR study indicates, scholars in every discipline use the Web as a major source of information. There is still junk on the Web -- material that is inaccurate, biased, sloppy, bigoted, wrongly attributed, blasphemous, obscene, and simply wrong -- but there is also much that is of the highest quality, often from obscure sources. As scholars and researchers, we are often called upon to separate the high-quality materials from the bad. What are the methods by which quality control is established and what are the indicators that allow a user to recognize the good materials?


This paper is motivated by three interrelated questions:

  • How can readers recognize good quality materials on the Web?

  • How can publishers maintain high standards and let readers know about them?

  • How can librarians select materials that are of good scientific or scholarly quality?


Human Review
The traditional approaches for establishing quality are based on human review, including peer review, editorial control, and selection by librarians. These approaches can be used on the Web, but there are economic barriers. The volume of material is so great that it is feasible to review only a small fraction of materials. Inevitably, most of the Web has not been reviewed for quality, including large amounts of excellent material.

[snip]

Peer Review
Peer review is often considered the gold standard of scholarly publishing, but all that glitters is not gold.


The Journal of the ACM is one of the top journals in theoretical computer science. It has a top-flight editorial board that works on behalf of a well-respected society publisher. Every paper is carefully read by experts who check the accuracy of the material, suggest improvements, and advise the editor-in-chief about the overall quality. With very few exceptions, every paper published in this journal is first rate.

However, many peer-reviewed journals are less exalted than the Journal of the ACM. There are said to be 5,000 peer-reviewed journals in education alone. Inevitably, the quality of the papers in them is uneven. Thirty years ago, as a young faculty member, I was given the advice, "Whatever you do, write a paper. Some journal will publish it." This is even truer today.

One problem with peer review is that many types of research cannot be validated by a reviewer. In the Journal of the ACM, the content is mainly mathematics, and the papers are self-contained. A reviewer can check the accuracy of such a paper by reading it, without consulting external evidence beyond other published sources. This is not possible in experimental areas, including clinical trials and computer systems. Since a reviewer cannot repeat the experiment, the review is little more than a comment on whether the research appears to be well done.


[snip] ACM conference papers go through a lower standard of review than journal articles. Moreover, many of the papers in this conference summarize data or experiments that reviewers could not check for accuracy simply by reading the papers; for a full review, they would need to examine the actual experiment. Finally, the threshold of quality a paper must pass to be accepted is much lower for this small conference than for the ACM's premier journal. This is a decent publication, but it is not gold.

In summary, peer review varies greatly in its effectiveness in establishing accuracy and value of research. For the lowest-quality journals, peer review merely puts a stamp on mediocre work that will never be read. In experimental fields, the best that peer review can do is validate the framework for the research. However, peer review remains the benchmark by which all other approaches to quality are measured.

Incidental Uses of Peer Review
Peer-reviewed journals are often called "primary literature," but this is increasingly becoming a misnomer. Theoretical computer scientists do not use the Journal of the ACM as primary material. They rely on papers that are posted on Web sites or discussed at conferences for their current work. The slow and deliberate process of peer review means that papers in the published journal are a historic record, not the active literature of the field.


Peer review began as a system to establish quality for purposes of publication, but over the years it has become used for administrative functions. In many fields, the principal use of peer-reviewed journals is not to publish research but to provide apparently impartial criteria for universities to use in promoting faculty. This poses a dilemma for academic librarians, a dilemma that applies to both digital and printed materials. Every year libraries spend more money on collecting peer-reviewed journals, yet for many of their patrons these journals are no longer the primary literature.


Seeking Gold
As we look for gold on the Web, often all we have to guide us is internal evidence. We look at the URL on a Web page to see where it comes from, or the quality of production may give us clues. If we are knowledgeable about the subject area, we often can judge the quality ourselves. Internal clues, such as what previous work is referenced, can inform an experienced reader, but such clues are difficult to interpret.


Strategies for Establishing Quality
The Publisher as Creator
Many of the most dependable sites on the Web contain materials that are developed by authors who are employed by the publisher. Readers judge the quality through the reputation of the publisher.


Editorial Control
In the three previous examples, the content was created or selected by the publisher's staff. As an alternative, the publisher can rely on an editorial process whereby experts recommend which works to publish. The editors act as a filter, selecting the materials to publish and often working with authors on the details of their work.


Outsiders sometimes think that peer-reviewed materials are superior to those whose quality is established by editorial control, but this is naive. For instance, the Journal of Electronic Publishing contains some papers that have gone through a peer review and others that were personally selected by the editor. This distinction may be important to some authors, but is irrelevant to almost all readers. Either process can be effective if carried out diligently.


In every example so far ... the author, editor, or publisher has a well-established reputation. The observations about the quality of the materials begin with the reputation of the publisher. How can we trust anything without personal knowledge? Conversely, how can a new Web site establish a reputation for quality?


Caroline Arms of the Library of Congress has suggested that everything depends upon a chain of reputation, beginning with people we respect. As students, we begin with respect for our teachers. They direct us to sources of information that they respect -- books, journals, Web sites, datasets, etc. -- or to professionals, such as librarians. As we develop our own expertise, we add our personal judgments and pass our recommendations on to others. Conversely, if we are disappointed in the quality of materials, we pass this information on to others, sometimes formally but often by word of mouth. Depending on our own reputations, such observations about quality become part of the reputation of the materials.

Volunteer Reviews
Reviews provide a systematic way to extend the chain of reputation. Reviewers, independent of the author and publisher, describe their opinion of the item. The value of the review to the user depends on the reputation of the reviewer, where the review is published, and how well it is done.


The Web lends itself to novel forms of review, which can be called "volunteer review" processes. [snip] In a volunteer review process, anybody can provide a review. The publisher manages the process, but does not select the reviewers. Often the publisher will encourage the readers to review the reviewers. The reputation of the system is established over time, based on readers' experiences.


Automated Methods
The success of volunteer reviews shows that systematic aggregation of the opinions of unknown individuals can give valuable information about quality. This is the concept behind measures that are based on reference patterns. The pioneer is citation analysis [snip] More recently, similar concepts have been applied to the Web with great success in Google's PageRank algorithm. These methods have the same underlying assumption. If an item is referenced by many others, then it is likely to be important in some way. Importance does not guarantee any form of quality, but, in practice, heavily cited journal articles tend to be good scholarship and Web pages that PageRank ranks highly are usually of good quality.
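The reference-pattern assumption described above is the same one PageRank formalizes: a page's score is the sum of shares contributed by the pages that link to it, damped by a random-jump factor. A minimal sketch follows; the tiny link graph and parameter values are illustrative assumptions, not data from any real crawl.

```python
# Minimal PageRank sketch over a hypothetical link graph.
# A page's rank is split evenly among the pages it links to;
# the damping factor d models a reader who occasionally jumps at random.

def pagerank(links, d=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = d * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
            else:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += d * rank[page] / n
        rank = new_rank
    return rank

# The page referenced by the most others ends up with the highest score.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # "c": three pages link to it
```

As the passage notes, a high score signals importance rather than quality as such, but in practice the two correlate strongly.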


Quality Control in the NSDL
The goal of the NSDL is to be comprehensive in its coverage of digital materials that are relevant to science education, broadly defined. To achieve this goal requires hundreds of millions of items from tens or hundreds of thousands of publishers and Web sites. Clearly, the NSDL staff cannot review each of these items individually for quality or even administer a conventional review process. The quality control process that we are developing has the following main themes:
  • Most selection and quality control decisions are made at a collection level, not at an item level.

  • Information about quality will be maintained in a collection-level metadata record, which is stored in a central metadata repository.

  • This metadata is made available to NSDL service providers.

  • User interfaces can display quality information.
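The four themes above might translate into a record structure along the following lines. This is purely an illustrative sketch: the field names, values, and summary function are assumptions for exposition, not the actual NSDL metadata schema.

```python
# Hypothetical collection-level quality record, following the themes above:
# quality is recorded per collection, stored in a central metadata
# repository, and exposed to service providers and user interfaces.

collection_record = {
    "collection_id": "coll-0042",                 # hypothetical identifier
    "title": "Introductory Oceanography Resources",
    "provider": "Example University Library",     # hypothetical provider
    "quality": {
        "selection_basis": "editorial review of the collection as a whole",
        "review_process": "volunteer reviews aggregated quarterly",
        "audience": ["undergraduate", "general public"],
    },
}

def quality_summary(record):
    """What a user interface might display from the central repository."""
    q = record["quality"]
    return f'{record["title"]}: selected by {q["selection_basis"]}'

print(quality_summary(collection_record))
```

The key design point is that the quality judgment attaches to the collection, not to each of its millions of items, which is what makes the approach feasible at NSDL's scale.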

Academic Reputation
How does a scholar or scientist build a reputation outside the traditional peer-reviewed journals? A few people have well-known achievements that do not need to be documented in journal articles. [snip] More often, promotions are based on a mechanical process in which publication in peer-reviewed journals is central. Although true objectivity is manifestly impossible, most universities wish to have an objective process for evaluating faculty, and this is the best that can be done. As the saying goes, "Our dean can't read, but he sure can count."

Meanwhile we have a situation in which a large and growing proportion of the primary and working materials are outside the peer-review system, and a high proportion of the peer-reviewed literature is written to enhance resumes, not to convey scientific and scholarly information. Readers know that good quality information can be found in unconventional places, but publishers and librarians devote little effort to these materials.

The NSDL project is one example of how to avoid over-reliance on peer review. Most of the high quality materials on the Web are not peer-reviewed and much of the peer-reviewed literature is of dubious quality. Publishers and libraries need to approach the challenge of identifying quality with a fresh mind. We need new ways to do things in this new world.

The Journal of Electronic Publishing / August, 2002 / Volume 8, Issue 1 / ISSN 1080-2711


Saturday, December 8, 2007

ARL Publishes Report on Journals' Transition from Print to Electronic Formats

Washington DC—The Association of Research Libraries (ARL) has published "The E-only Tipping Point for Journals: What's Ahead in the Print-to-Electronic Transition Zone," by Richard K. Johnson and Judy Luther. The report examines the issues associated with the migration from dual-format publishing toward electronic-only publication of journals.

Publishers and libraries today find themselves in an extended transition zone between print-only and e-only journals. Both parties are struggling with the demands of dual-format publishing as well as the opportunity costs of keeping electronic journals operating within the bounds of the print publishing process, which are increasingly taxing the status quo for publishers, libraries, authors, and readers. There are suggestions that this transitional phase is especially challenging to small publishers of high-quality titles and places them at a disadvantage in relation to large, resource-rich publishers as they compete for subscribers, authors, and readers. The question of when dual-format journals will complete the transition to single-format (electronic) publishing is taking on increasing urgency.

The persistence of dual-format journals suggests that substantial obstacles need to be surmounted if the transformation to e-only publication is to be complete. This study seeks to create a better understanding of the dynamics of the transition process, both for librarians and for publishers. Neither publishers nor librarians independently control the process and the need to coordinate their activities greatly increases the complexity of the transition.

The report provides a synthetic analysis of librarian and publisher perspectives on the current state of format migration, considering the drivers toward electronic-only publishing and barriers that are slowing change. The authors provide an assessment of likely change in the near term and recommend strategic areas of focus for further work to enable change.

The work is based in large part on interviews conducted between June and August 2007 with two dozen academic librarians and journal publishers. Publishers and librarians were consulted equally in recognition that these changes pose significant issues of coordination. Interviews were conducted with collection officers and others at ARL member libraries and publishing staff of societies and university presses, publishing platform hosts, and publishing production consultants.

By commissioning this work and disseminating its findings, ARL seeks to better comprehend varying perspectives and to enhance broader, deeper understanding of the challenges and decisions faced by publishers and libraries as they navigate the transition that is underway. The report is intended to be of value well beyond the library community, serving publishers and others active in leading these transitions.




Friday, December 7, 2007

PLoS ONE: A Post-Publication-Peer-Reviewed-Open-Access-E-Journal



WHEN? December 2006

WHY? "PLoS ONE is an international, peer-reviewed, open-access, online journal." It "features reports of primary research from all disciplines within science and medicine. By not excluding papers on the basis of subject area, PLoS ONE facilitates the discovery of the connections between papers whether within or between disciplines."

HOW? "Each submission will be assessed by a member of the PLoS ONE Editorial Board before publication. This pre-publication peer review will concentrate on technical rather than subjective concerns and may involve discussion with other members of the Editorial Board and/or the solicitation of formal reports from independent referees. If published, papers will be made available for community-based open peer review involving online annotation, discussion, and rating." The open-source software called TOPAZ is used to create the advanced Web functionality; the core of TOPAZ is Fedora (Flexible Extensible Digital Object Repository Architecture). "Fedora is an Open Source content management application that supports the creation and management of digital objects."

"PLoS ONE will publish all papers that are judged to be rigorous and technically sound. Papers published in PLoS ONE will be available for commenting and debate by the readers, making every paper the start of a scientific conversation." "Judgments about the importance of any particular paper are then made after publication" (

"The Public Library of Science (PLoS) applies the Creative Commons Attribution License (CCAL) to all works … it publish[es]. Under the CCAL, authors retain ownership of the copyright for their article[s], but authors allow anyone to download, reuse, reprint, modify, distribute, and/or copy articles in PLoS journals, so long as the original authors and source are cited. No permission is required from the authors or the publishers" (

PLoS ONE is currently a beta release.

WHO? PLoS ONE is published by the Public Library of Science (PLoS), in partnership with an international advisory board and an extensive editorial board. PLoS is a "nonprofit organization of scientists and physicians committed to making the world's scientific and medical literature a public resource."

CITES: "PLoS One," PLoS E-Newsletter for Institutional Members, November 14, 2006. Available at: (accessed 27 October 2007).

Catriona J. MacCallum, "ONE for All: The Next Step for PLoS," PLoS Biology 4 no. 11 (November 2006): 1875. Available at: (accessed 27 October 2007).

Thursday, December 6, 2007

LAMPSS: Lots of Alternative Models Provide Sensible Solutions. I: Open Peer Review

LAMPSS: Lots of Alternative Models Provide Sensible Solutions

[1] Open Peer Review

During much of its recent history, conventional peer review has been wholly or partially anonymous. In the former arrangement, neither the reviewer nor the author is known to the other; in the latter, the author is identified. Until five years ago, the British Medical Journal (BMJ), the high-impact, general medical journal of the British Medical Association, had "used a closed system of peer review, where the authors do not know who has reviewed their papers ... but the reviewers do, however, know the names of the authors." In announcing a change in its editorial policy, Richard Smith, the BMJ editor, further observes that

Most medical journals use the same system, ... based on custom not evidence. Now we plan to let authors know the identity of reviewers. Soon we are likely to open up the whole system so that anybody interested can see the whole process on the World Wide Web. The change is based on evidence and an ethical argument.

He further notes that "the primary argument against closed peer review is that it seems wrong for somebody making an important judgment on the work of others to do so in secret." In a supportive argument, Smith quotes Drummond Rennie, a deputy editor of JAMA: The Journal of the American Medical Association, stating that identifying the reviewer links "privilege and duty, by reminding the reviewer that with power comes responsibility: that the scientist invested with the mantle of the judge cannot be arbitrary in his or her judgment and must be a constructive critic."

Journals that have implemented open peer review include not only the British Medical Journal, but also select journals published by BioMed Central, as well as Internet Health: Journal of Research, Application, Communication & Ethics, among others.


Ideal Speech Situation

Scientific Publishing as Rhetoric
For some, the fundamental problems of peer review are inherent in the peer review process itself as it is currently implemented. As noted by Sosteric,

"the traditional mode of peer review obscures the problems of reference and the rhetorical dimension of science. The rhetorical process ... [that] is at the heart of science and peer review conveniently disappears with the final publication of the manuscript. In its place is an ideal typical representation (the scientific paper) of the realist assumptions about empirical reference. All the academic world sees is a polished manuscript where the personal involvement of the researcher and reviewers has been systematically eliminated."

As an alternative to conventional peer review, Sosteric, Gross, and others, promote the framework of the 'ideal speech situation', a "theoretical construct that describes the ideal type of interpersonal interaction that should exist in a rhetorical situation” proposed and developed by Jürgen Habermas, the noted German philosopher and sociologist.

Drawing upon Habermas, Gross describes the ideal speech situation in the following terms.

1) The ideal speech situation permits each interlocutor an equal opportunity to initiate speech.
2) There is mutual understanding between interlocutors.
3) There is space for clarification.
4) All interlocutors are equally free to use any speech act.
5) There is equal power over the exchange.

As applied in the context of peer review, Gross notes that ideally "scientific peer review would permit unimpeded authorial initiative, endless rounds of give and take, [and] unchecked openness among authors, editors, and referees."

Earlier Web Usage Statistics as Predictors of Later Citation Impact

Tim Brody, Stevan Harnad

Abstract: The use of citation counts to assess the impact of research articles is well established. However, the citation impact of an article can only be measured several years after it has been published. As research articles are increasingly accessed through the Web, the number of times an article is downloaded can be instantly recorded and counted. One would expect the number of times an article is read to be related both to the number of times it is cited and to how old the article is. This paper analyses how short-term Web usage impact predicts medium-term citation impact. The physics e-print archive (arXiv) is used to test this.

Conclusion: Whereas the significance of citation impact is well established, access of research literature via the Web provides a new metric for measuring the impact of articles - Web download impact. Download impact is useful for at least two reasons:

(1) The portion of download variance that is correlated with citation counts provides an early-days estimate of probable citation impact that can begin to be tracked from the instant an article is made Open Access and that already attains its maximum predictive power after 6 months.

(2) The portion of download variance that is uncorrelated with citation counts provides a second, partly independent estimate of the impact of an article, sensitive to another form of research usage that is not reflected in citations.


Journal of the American Society for Information Science and Technology, Volume 57, Issue 8, pages 1060–1072, June 2006. DOI: 10.1002/asi.20373


Peer Review Reviewed

Workshop Series
"Dialogues Between Organizational Theory and the Social Sciences"
Sponsored by EGOS in cooperation with the Social Science Research Center Berlin

Call for Papers

Peer Review Reviewed:
The International Career of a Quality-control Instrument
and New Challenges

24–25 April 2008, Social Science Research Center Berlin (WZB), Berlin

Organizers: Andreas Knie (WZB), Sigrid Quack (EGOS, Max Planck Institute for the Study of Societies), and Dagmar Simon (WZB and iFQ)

Invited Keynote Speakers: Linda Butler, Australian National University, Canberra; Philippe Laredo, Ecole Nationale des Ponts et Chaussées, Paris and University of Manchester; William H. Starbuck, University of Oregon

Invited Commentators: Lars Engwall, University of Uppsala; Richard Whitley, University of Manchester

National science systems, particularly those in the OECD countries, have come under considerable pressure to change in the last 10 to 15 years. The repeatedly attested impact of science and research on the innovative capacity of national economies has led to intensifying demands from politics and society to legitimate the allocation of funding to different disciplines and academic institutions. Instruments for promoting quality control, some of them new, are being drawn upon and expanded in order to establish evaluation systems that allow for a comparative ranking of higher education and research institutions.

The ubiquitous introduction of evaluation systems has consequences for the organizational structure of the sciences and the humanities. Elements of competition known in other subsystems of society have appeared: sharper differentiation between the universities and research centres outside the universities, new actors such as privately organized institutions of higher learning, and new forms of cooperation and coordination between the various institutional actors. The national actors find themselves confronted with benchmarking processes of the OECD and other organizations that "measure" both the spending on research and development and the output of the university and research systems and make recommendations that usually take little heed of specific national characteristics. Within national systems, universities and research centres compete with each other for a better ranking in order to attract funding and enhance their reputation.

Most of the procedures for evaluating the performance of universities and research organizations still rely to a significant degree on peer judgements. The trust of politicians and various groups in society in this mechanism of self-monitoring, however, is eroding. The rapid ascendancy of systems for evaluating higher education and research therefore comes with a good deal of controversy over evaluation practices — goals, procedures, and criteria — and their appropriateness for the defined tasks of research institutions and universities.

This raises a number of questions about the development of evaluation criteria that are not only comparable across different institutions, disciplines and national systems, but also succeed in capturing these institutions’ increasing differentiation and specialization. Of particular interest is the way in which procedures of peer review are used in broader systems of evaluation, how and by whom reference groups of peers are defined, the degree of formalization and standardization built into evaluation systems and their dependence on national and international researchers’ judgements in different academic institutions, disciplines and countries.

Despite the relevance of the subject, surprisingly little empirical research is available that examines the variety of models in which peer review processes are used for the evaluation of higher education and research institutions. There is also little cross-referencing between debates in the natural and social sciences, not to mention the different research fields within the social sciences.

The aim of the workshop is to fill some of these gaps by inviting papers investigating the role of peer review in evaluation systems on the basis of case studies or by means of comparative analysis. We invite scholars from different fields in the social sciences, and particularly from organizational studies, to engage in a dialogue across disciplinary boundaries.

Paper proposals may address, but are not limited to, the following themes:

• How does the role of peer review in evaluation procedures vary according to different institutional, disciplinary and organizational frameworks?
• How is the use of peer review in evaluation systems changing in different organizations, disciplines and countries? What new processes, if any, can be identified?
• To what extent are internationality and interdisciplinarity changing the demands on peers? To what degree are "transnational" practices emerging?
• Which alternative instruments are in use, and how do they affect the governance of university and research systems?
• What role do different instruments of monitoring play in the performance assessment of organization studies as compared to other research fields in the social sciences?

We invite the submission of extended abstracts for papers related to the above issues (approx. 800 words, describing the theme, theoretical approach and methodology of the paper) together with a brief biographical note.

The deadline for the submission of extended abstracts is November 30, 2007. Submissions should be sent to Sylvia Pichorner.

Authors will be notified of acceptance or otherwise by December 31, 2007. Papers should be submitted to Sylvia Pichorner by March 15, 2008 and will be uploaded to a workshop website. For questions regarding the workshop, please contact Sigrid Quack.