Call for Papers
Peer Review Reviewed:
Organizers: Andreas Knie (WZB), Sigrid Quack (EGOS, Max Planck Institute for the Study of Societies), and Dagmar Simon (WZB and iFQ)
National science systems, particularly those in the OECD countries, have come under considerable pressure to change in the last 10 to 15 years. The repeatedly attested impact of science and research on the innovative capacity of national economies has led to intensifying demands from politics and society to legitimate the allocation of funding to different disciplines and academic institutions. Quality-control instruments, some of them new, are being deployed and expanded to establish evaluation systems that allow for a comparative ranking of higher education and research institutions.
The ubiquitous introduction of evaluation systems has consequences for the organizational structure of the sciences and the humanities. Elements of competition familiar from other subsystems of society have appeared: sharper differentiation between universities and research centres outside the universities, new actors such as privately organized institutions of higher learning, and new forms of cooperation and coordination between the various institutional actors. National actors find themselves confronted with benchmarking processes of the OECD and other organizations that "measure" both spending on research and development and the output of university and research systems, and that make recommendations which usually take little heed of specific national characteristics. Within national systems, universities and research centres compete with each other for a better ranking in order to attract funding and enhance their reputation.
Most of the procedures for evaluating the performance of universities and research organizations still rely to a significant degree on peer judgements. Yet the trust that politicians and various groups in society place in this mechanism of self-monitoring is eroding. The rapid spread of systems for evaluating higher education and research therefore comes with a good deal of controversy over evaluation practices — goals, procedures, and criteria — and their appropriateness for the defined tasks of research institutions and universities.
This raises a number of questions about the development of evaluation criteria that are not only comparable across different institutions, disciplines and national systems, but that also succeed in capturing these institutions’ increasing differentiation and specialization. Of particular interest are the ways in which peer review procedures are embedded in broader systems of evaluation, how and by whom reference groups of peers are defined, and the degree of formalization and standardization built into evaluation systems and their dependence on the judgements of national and international researchers in different academic institutions, disciplines and countries.
Despite the relevance of the subject, surprisingly little empirical research is available that examines the variety of models in which peer review processes are used for the evaluation of higher education and research institutions. There is also little cross-referencing between debates in the natural and social sciences, not to mention the different research fields within the social sciences.
The aim of the workshop is to fill some of these gaps by inviting papers investigating the role of peer review in evaluation systems on the basis of case studies or by means of comparative analysis. We invite scholars from different fields in the social sciences, and particularly from organizational studies, to engage in a dialogue across disciplinary boundaries.
Paper proposals may address, but are not limited to, the following themes:
• How does the role of peer review in evaluation procedures vary according to different institutional, disciplinary and organizational frameworks?
• How is the use of peer review in evaluation systems changing in different organizations, disciplines and countries? What new processes, if any, can be identified?
• To what extent are internationality and interdisciplinarity changing the demands on peers? To what degree are "transnational" practices emerging?
• Which alternative instruments are in use, and how do they affect the governance of university and research systems?
• What role do different instruments of monitoring play in the performance assessment of organization studies as compared to other research fields in the social sciences?
We invite the submission of extended abstracts for papers related to the above issues (approx. 800 words, describing the theme, theoretical approach and methodology of the paper) together with a brief biographical note.
Deadline for the submission of extended abstracts is November 30, 2007. Submissions should be sent to Sylvia Pichorner (email@example.com).
Authors will be notified of the outcome by December 31, 2007. Papers should be submitted to Sylvia Pichorner (firstname.lastname@example.org) by March 15, 2008 and will be uploaded to a workshop website. For questions regarding the workshop, please contact Sigrid Quack (email@example.com).