Friday, May 30, 2008

ThoughtMesh: An Innovative Scholarly Publishing and Discovery Model

ThoughtMesh is an unusual model for publishing and discovering scholarly papers online. It gives readers a tag-based navigation system that uses keywords to connect excerpts of essays published on different Web sites.

Add your essay to the mesh, and ThoughtMesh gives you a traditional navigation menu plus a tag cloud that enables nonlinear access to text excerpts. You can navigate across excerpts both within the original essay and from related essays distributed across the mesh.

So let's say you are reading an essay on modern art. You can pick a single word out of that essay's tag cloud, say Picasso, and view a list of all the sections from that essay that relate to Picasso. Or you can view a list of sections of other articles tagged with Picasso, and jump right to one of those sections. You can also combine tags to narrow your search, such as Picasso + Cubism + 1900.
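Under the hood, combining tags this way amounts to set intersection over tagged excerpts. A minimal sketch in Python (all section titles and tags below are invented examples, not real ThoughtMesh data):

```python
# Each lexia (excerpt) carries a set of tags; combining tags narrows
# the result list by set intersection, as in "Picasso + Cubism + 1900".

lexias = {
    "Early Influences": {"picasso", "barcelona"},
    "The Blue Period": {"picasso", "1900"},
    "Braque and Picasso": {"picasso", "cubism", "1900"},
    "Futurism": {"marinetti", "speed"},
}

def find_lexias(*tags):
    """Return titles of lexias tagged with ALL of the given tags."""
    wanted = set(tags)
    return [title for title, t in lexias.items() if wanted <= t]

print(find_lexias("picasso"))                    # three matching sections
print(find_lexias("picasso", "cubism", "1900"))  # narrowed to one
```

Adding a tag can only shrink the result list, which is what makes tag combination a narrowing operation rather than a search.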

As an author, you can choose to post your essay in a central repository hosted by the Vectors program at USC, the sponsor of this project. Or you can self-archive your essay on your own Web site. [snip]



Innovative Search Options
  • Use tags to find text blocks within the current article

  • Use tags to find related blocks in outside articles

  • Use search-as-you-type lookup to find words in current article

Expandable Navigation Menu

  • Offers more traditional navigation

  • Breaks long essays into easy-to-read screen-sized chunks

  • Can be used interchangeably with tag-based navigation

Automated Tag and HTML Generation

  • Paste in your essay sections and easy-to-use software generates a ThoughtMeshed version

  • Software can auto-generate tags for each text block

  • Or author can assign custom tags

  • Overall tag cloud gives quick sense of article's themes

Meshes (Features For Future Releases)

  • Users can view a map of where the current article fits in the larger mesh.

  • Publications and groups of authors can define and administer their own meshes.

  • Users can choose lexias from only the current mesh, or from all meshes

What's a tag cloud?
A bunch of keywords in a box. Click on one to see text excerpts related to that theme, or click on several to see excerpts tagged with all of those keywords.

What's a lexia?
A text excerpt from a longer essay or Web site, usually a couple of paragraphs.

Lots of blogs and newspapers have tag clouds. How is ThoughtMesh different?
Most of these sites are database-driven collections of text blocks run off a single server. ThoughtMesh's tag registry (or mesh) can connect articles on different servers across the Internet.

Is this the "Semantic Web"?
Yes and no. Like the long-term vision of the Semantic Web, ThoughtMesh treats every page on the Web as a potential "database record" to be searched. Unlike the conventional XML-powered vision of the Semantic Web, however, ThoughtMesh's data are only minimally structured in the page itself; instead, a registry of tags housed on a remote host serves to connect all the individual pages. But it's still a model of distributed publication, since in principle the same pages can be navigated via independently operated registries.
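That registry-of-tags architecture can be sketched in a few lines of Python. The class, method names, and URLs below are hypothetical illustrations of the idea, not ThoughtMesh's actual API:

```python
# Pages live on different servers; a central registry maps each tag to
# (page URL, excerpt anchor) pairs, so navigation crosses server boundaries.
from collections import defaultdict

class TagRegistry:
    def __init__(self):
        self._index = defaultdict(list)

    def register(self, url, excerpt_id, tags):
        """Called when an essay joins the mesh."""
        for tag in tags:
            self._index[tag].append((url, excerpt_id))

    def lookup(self, tag):
        """Return all excerpts across the mesh carrying this tag."""
        return self._index[tag]

registry = TagRegistry()
registry.register("http://example.edu/essay1", "sec2", ["picasso", "cubism"])
registry.register("http://example.org/essay2", "sec5", ["picasso"])

print(registry.lookup("picasso"))  # excerpts from two different servers
```

Because only the tag index is centralized, the same pages could in principle be registered with several independently operated registries, which is what keeps the model distributed.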

So it's like del.icio.us?
Sort of. del.icio.us's global folksonomy of tags is great, but it only indexes entire pages, which is less efficient for finding relevant passages in long academic papers. ThoughtMesh helps trace thematic connections between particular sections of online essays. And ThoughtMesh's tags (and the meshes that connect them) are determined (or at least validated) by the authors of the pages.

Is this "Web 2.0"?
ThoughtMesh exploits participatory media, remote scripting, and lateral navigation. So yeah, you can call it that.


How to Navigate Essays

How to Tag an Essay


Jon Ippolito
Conceptual architect, client-side designer, and client-side engineer

Craig Dietrich
Designer and server-side engineer

John Bell
Telamon.js author and remote scripting contributor

Chirag Mehta
ThoughtMesh uses Mehta's Tagline software


/ ThoughtMesh: Tag Your Writing. Join the conversation / Jon Ippolito & Craig Dietrich / Vectors: Journal of Culture and Technology in a Dynamic Vernacular / Volume 3 Issue 1, Fall 2007 /

White Papers

/ New Criteria for New Media / New Media Department, University of Maine / Promotion and Tenure Guidelines Addendum: Rationale for Redefined Criteria / Version 2.2, January 2007 /

ABSTRACT: An argument for redefining promotion and tenure criteria for faculty in new media departments of today's universities.


ThoughtMesh Author's Statement


ThoughtMesh Forum



Related Work

/ New Age Navigation: Innovative Information Interfaces for Electronic Journals / Gerry McKiernan / The Serials Librarian, Vol. 45(2) / 87-123 / 2003 / DOI: 10.1300/J123v45n02_06 /

ABSTRACT. While it is typical for electronic journals to offer conventional search features similar to those provided by electronic databases, a select number of e-journals have also made available higher-level access options as well. In this article, we review several novel technologies and implementations that creatively exploit the inherent potential of the digital environment to further facilitate use of e-collections. We conclude with speculation on the functionalities of a next-generation e-journal interface that are likely to emerge in the near future.


Wednesday, May 28, 2008


New York Times / May 27, 2008 / SCIENCE / Basics
Curriculum Designed to Unite Art and Science

The battle between the sciences and the humanities has been going on for so long, its early participants have stopped walking and talking, because they’re already dead.

It’s been some 50 years since the physicist-turned-novelist C.P. Snow delivered his famous “Two Cultures” lecture at the University of Cambridge, in which he decried the “gulf of mutual incomprehension,” the “hostility and dislike” that divided the world’s “natural scientists,” its chemists, engineers, physicists and biologists, from its “literary intellectuals,” a group that, by Snow’s reckoning, included pretty much everyone who wasn’t a scientist.


His critique set off a frenzy of hand-wringing that continues to this day, particularly in the United States, as educators, policymakers and other observers bemoan the Balkanization of knowledge, the scientific illiteracy of the general public and the chronic academic turf wars that are all too easily lampooned.
Yet a few scholars of thick dermis and pep-rally vigor believe that the cultural chasm can be bridged and the sciences and the humanities united into a powerful new discipline that would apply the strengths of both mindsets, the quantitative and qualitative, to a wide array of problems. Among the most ambitious of these exercises in fusion thinking is a program under development at Binghamton University in New York called the New Humanities Initiative.

Jointly conceived by David Sloan Wilson, a professor of biology, and Leslie Heywood, a professor of English, the program is intended to build on some of the themes explored in Dr. Wilson’s evolutionary studies program, which has proved enormously popular with science and nonscience majors alike, and which he describes in the recently published “Evolution for Everyone.” In Dr. Wilson’s view, evolutionary biology is a discipline that, to be done right, demands a crossover approach, the capacity to think in narrative and abstract terms simultaneously, so why not use it as a template for emulsifying the two cultures generally?

“There are more similarities than differences between the humanities and the sciences, and some of the stereotypes have to be altered,” Dr. Wilson said. “Darwin, for example, established his entire evolutionary theory on the basis of his observations of natural history, and most of that information was qualitative, not quantitative.” As he and Dr. Heywood envision the program, courses under the New Humanities rubric would be offered campuswide, in any number of departments, including history, literature, philosophy, sociology, law and business. The students would be introduced to basic scientific tools like statistics and experimental design and to liberal arts staples like the importance of analyzing specific texts or documents closely, identifying their animating ideas and comparing them with the texts of other times or other immortal minds.
One goal of the initiative is to demystify science by applying its traditional routines and parlance in nontraditional settings — graphing Jane Austen, as the title of an upcoming book felicitously puts it. [snip]

To illustrate how the New Humanities approach to scholarship might work, Dr. Heywood cited her own recent investigations into the complex symbolism of the wolf, a topic inspired by a pet of hers that was seven-eighths wolf. [snip]

Dr. Heywood began studying the association between wolves and nature, and how people’s attitudes toward one might affect their regard for the other. “In the standard humanities approach, you compile and interpret images of wolves from folkloric history, and you analyze previously published texts about wolves,” and that’s pretty much it, Dr. Heywood said. Seeking a more full-bodied understanding, she delved into the scientific literature, studying wolf ecology, biology and evolution. [snip]


In designing the New Humanities initiative, Dr. Wilson is determined to avoid romanticizing science or presenting it as the ultimate arbiter of meaning, as other would-be integrationists and ardent Darwinists have done.

“You can study music, dance, narrative storytelling and artmaking scientifically, and you can conclude that yes, they’re deeply biologically driven, they’re essential to our species, but there would still be something missing,” he said, “and that thing is an appreciation for the work itself, a true understanding of its meaning in its culture and context.”


George Levine, an emeritus professor of English at Rutgers University, said that on reading the New Humanities proposal, by contrast, he “... was struck by how it absolutely refused the simple dichotomy.”
“There is a kind of basic illiteracy on both sides,” he added, “and I find it a thrilling idea that people might be made to take pleasure in crossing the border.”

Everyone Into The Pool

Chronicle of Higher Education / May 30, 2008

New-Media Scholars' Place in 'the Pool' Could Lead to Tenure


[snip] Re:Poste is one of 600 creative works — games, art, and more — by new-media students and faculty members, most of them on the [University of Maine] Orono campus, described in the Pool, which also contains about 2,000 reviews of those works. Starting in June, the Pool will have a much wider reach, as people in general will be invited to add material to the site, rate others' projects, build on their ideas, and find collaborators for their own projects.

The Pool, as yet little known, could provide a new avenue for new-media scholars to do their jobs. Eventually it could play a role in their tenure and promotion as well. The numbers and influence of such scholars in academe are growing, and they are looking for new ways for their institutions to evaluate them. Books and journal articles alone are a flawed measure of their productivity, new-media professors say, because many of their accomplishments exist only as Web sites, interactive games, or multimedia presentations. The Pool, they suggest, can be one measure for judging their work.
"What we're trying to do is find alternative metrics," says Mr. Ippolito, who conceived of the Pool with Joline J. Blais, an associate professor of new media at Orono. "Sometimes it's not even the quality of what you do, it's how much influence it has."


Graphical Performance

Here's how the Pool works:

Titles of new-media projects are plotted on a two-dimensional graph. People log in and post reviews of projects, rating their appearance, function, and concept on a scale from 1 to 10. As works garner more reviews, they move from left to right on the graph. If reviews become more positive, the works move toward the top.

Accordingly, the most highly regarded and widely reviewed works migrate to the upper right corner of the graph.
The program calculates the ratings and takes into account the credibility of the reviewers. If a reviewer receives a low appearance rating for his own projects, then his assessment of how others' projects look will not be given much weight.
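The credibility weighting described above can be sketched as a weighted average, where a reviewer's rating counts in proportion to how his or her own work scores on the same axis. The linear weighting below is an illustrative assumption, not the Pool's published algorithm:

```python
def weighted_score(reviews):
    """reviews: list of (rating, reviewer_own_score), both on a 1-10 scale.
    A reviewer whose own work rates poorly contributes less weight."""
    total_weight = sum(own / 10.0 for _, own in reviews)
    if total_weight == 0:
        return 0.0
    return sum(r * (own / 10.0) for r, own in reviews) / total_weight

# A highly rated reviewer's 9 outweighs a poorly rated reviewer's 3:
reviews = [(9, 8), (3, 2)]
print(round(weighted_score(reviews), 2))  # 7.8
```

The point of any such scheme is that a flood of low-credibility reviews cannot drag a work's position on the graph as far as a few reviews from well-regarded practitioners.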

The Pool also allows visitors to bore deep into a project via hyperlinks, in many cases viewing its evolution from conception to finish. They can see its creator or creators and read how others rated the project. They can see the works that inspired it and the works it inspired. Basic information about a project is posted by the developers.

Mr. Ippolito and Ms. Blais plan to divide the site according to content. The current database is largely in the Art Pool. A Code Pool is for software code, and a Text Pool will be for written works.

No college is yet using the site as a way to evaluate professors. But Gerard McKiernan, a science-and-technology librarian at Iowa State University, says the Pool, once open to the public, could be a good barometer of a scholar's influence.

"Five hundred heads ... [are] better than two in assessing the value of a work," says Mr. McKiernan, who runs the blog Scholarship 2.0, on alternative Web-based methods for scholarly publishing.



Connecting With Colleagues

Even if the Pool won't be used for decisions on tenure and promotion, Mr. Ippolito says, it will encourage collaboration among scholars.

"Instead of people toiling away at their own lab bench or scholarly archive," he says, "people begin to share ideas and work from each other."
One feature of the Pool allows users to view scholarly connections schematically. By clicking on the name of one scholar, a visitor can view all of the people in the Pool with whom he or she has collaborated, their projects, and, in turn, all of those with whom the collaborators have worked. The data are added by project developers. The visual effect is a computer screen filled with a dizzying array of crisscrossing lines and scholars' names, which becomes difficult to follow as the number of connections multiplies.

Mr. Ippolito, who is also an artist and a curator at the Guggenheim Museum, in New York, is so passionate about sharing among scholars and students that he added to his curriculum vitae, "Taught students to cheat using the Internet."
His point is that for digital culture to thrive, artists and scholars must freely exchange their ideas, software code, and images. It is a philosophy that some academics believe permits the theft of intellectual property.

Even so, the notion of sharing is what attracted Richard J. Rinehart, digital-media director and adjunct curator at the University of California at Berkeley Art Museum, to the Pool. He and his Berkeley colleagues tested and helped to refine the Web site.

Tagged Papers

The Pool is one of two projects to promote scholarly collaboration that Mr. Ippolito has created with colleagues at Still Water, a research arm of Maine's new-media department.

His other project, ThoughtMesh, was created with Craig Dietrich, a new-media researcher and artist who just earned a master's degree in "intermedia" at the University of Iowa.


ThoughtMesh is a Web site that tags open-access scholarly papers with key words. Visitors can jump to passages in papers that contain those words. And they can see others' papers, throughout academe, tagged with the same words. A "cloud" of tagged words hovers above each paper.

Mr. Ippolito says the goal of ThoughtMesh is for scholars to get their work out quickly and identify others who might be able to help them in their research.


Monday, May 26, 2008

Soft Peer Review: Social Software and Distributed Scientific Evaluation

Soft Peer Review: Social Software and Distributed Scientific Evaluation

Dario TARABORELLI / Department of Psychology / University College London / Gower Street / London / WC1 6BT / United Kingdom /


The debate on the prospects of peer-review in the Internet age and the increasing criticism leveled against the dominant role of impact factor indicators are calling for new measurable criteria to assess scientific quality. Usage-based metrics offer a new avenue to scientific quality assessment but face the same risks as first generation search engines that used unreliable metrics (such as raw traffic data) to estimate content quality. In this article I analyze the contribution that social bookmarking systems can provide to the problem of usage-based metrics for scientific evaluation. I suggest that collaboratively aggregated metadata may help fill the gap between traditional citation-based criteria and raw usage factors. I submit that bottom-up, distributed evaluation models such as those afforded by social bookmarking will challenge more traditional quality assessment models in terms of coverage, efficiency and scalability. Services aggregating user-related quality indicators for online scientific content will come to occupy a key function in the scholarly communication system.

D. Taraborelli (2008), Soft peer review. Social software and distributed scientific evaluation, Proceedings of the 8th International Conference on the Design of Cooperative Systems (COOP 08), Carry-Le-Rouet, France, May 20-23, 2008




peer review; rating; impact factor; citation analysis; usage factors; scholarly publishing; social bookmarking; collaborative annotation; online reference managers; social software; web 2.0; tagging; folksonomy

* This paper is based on ideas previously published on a post on the Academic Productivity blog.


PDF of Presentation Slides Available


Saturday, May 24, 2008

(More) Open Metrics: Emerging Impact Measures

Numbers Game Hots Up

Citation metrics have become key numbers for journals, institutions and even individuals, and a host of different models are emerging

/ Tracey Caldwell / Information World Review / February 4 2008 /

The beguiling simplicity of the “impact factor” has made it a figure of supreme importance in research.

Impact Factor

Journal impact factors, or IFs, measure how often science and social science journals are cited by academics. The measurement of the number of times a journal is cited by researchers in the field has become shorthand for the value of that journal; and funding bodies and employers use citation metrics to assess the productivity of institutions, departments and individuals.

Thomson Scientific dominates the citation metrics landscape with its Web of Science-based citation index. Recently, however, it has faced increasing competition from the likes of Scopus and Google Scholar. The existence of realistic alternatives to Thomson Scientific’s index – and which give different results to it – has thrown the debate on citation metrics wide open.

Web of Science


Google Scholar


Citation metrics are also used to produce the H Index (a measure of an individual’s publishing activity) and the G Index (a weighted version of the H Index).
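Both indices are easy to compute from a researcher's per-paper citation counts: the H Index is the largest h such that h papers each have at least h citations, and the G Index is the largest g such that the top g papers together have at least g² citations. A minimal Python sketch with made-up citation numbers:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
    return h

def g_index(citations):
    """Largest g such that the top g papers have >= g*g citations in total."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

papers = [25, 8, 5, 3, 3, 1]
print(h_index(papers))  # 3: only three papers have 4 or more citations
print(g_index(papers))  # 6: the top 6 papers total 45 >= 36 citations
```

The comparison shows why the G Index is called a weighted version of the H Index: a single heavily cited paper (here, 25 citations) raises g but not h.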



G Index

The Web of Science (WoS) is well established with huge coverage. But critics say that WoS is expensive – Google Scholar, by comparison, is free – and its coverage incomplete. They also say that because citation metrics take years to create, WoS cannot identify what is hot right now.

Lagging, Not Leading

But Kuan-Teh Jeang, editor in chief of open access (OA) journal Retrovirology, says that metrics designed to measure “previous” modes of publication are an assessment of publishing impact on a largely “Western and developed audience” and are lagging rather than leading indicators.


“Things that seem to be invisible now might prove to be highly impactful,” says Jeang. [snip]

Matthew Cockerill, publisher at OA publishing house BioMed Central, believes the timeliness of the newer indices is an asset. “Google Scholar is wide-embracing and up to date. Scopus adds every new biomedical journal on an annual basis. And Google Scholar adds on an automated basis ...”

BioMed Central


Thomson Scientific believes that maintaining indexing quality and consistency of citation data is key. “Our focus is on making sure our metrics reflect the scholarly process well,” says Jim Pringle, vice president of product development at Thomson Scientific. He points out that the company supplements its journal citation reports by publishing a hotlist of papers that are emerging as highly cited.

[Jim] Pringle [of Thomson Scientific] says [the company] is watching with interest experiments with other citation metrics from journal ranking body Eigenfactor to download metrics. “With a download or a page view versus a citation in peer-reviewed literature, you are dealing with a different point on the value scale,” he says.


"Eigenfactor: Measuring the Value and Prestige of Scholarly Journals" / Carl Bergstrom / College & Research Libraries News (May 2007) Vol. 68, No. 5

When developing the H Index, Jorge Hirsch, who teaches at the University of California, decided that as citation counts were used for research evaluation in faculty recruiting and promotion, as well as in grant allocations, articles that received large numbers of citations should be considered as significant in such evaluations, even when they were not published in high-impact journals.

Hirsch developed the H Index as a metric that could illustrate research achievement.

Thomson Scientific ... puts health warnings on its metrics. The company says it does not rely on the impact factor alone in assessing the usefulness of a journal, and neither should anyone else.

“It is important that people use the metrics well and use them for the right purpose,” says Pringle.


[Cockerill notes that] “[t]he reality is that people tend to give emphasis to a number and this can be circular so that the decision to submit a piece of research is based on the IF of the journal.

“Evaluation authorities say they don’t attach importance to impact factors but the perception is that IFs are all-important. But people forget about the partial nature of IFs.”



One of the issues with citation metrics is that they do not thoroughly reflect the range of scientific advancement. Research with a more practical application might be cited less in other research. There is scientific value in clinical trials and individual datasets, yet no-one cites results from them.

There have been moves to include sources beyond journal papers, but there is still a way to go. Scopus indexes conference proceedings along with 33 million abstracts, and Thomson Scientific also publishes conference proceedings as “a way to uncover research ideas as they are presented for the first time - often before publication in the journal literature”.

Beyond the Journal

[Niels] Weertman, [Scopus product manager], says: “We want to include other sources, and researchers need to have access to that content. In some disciplines such as science and maths it has been shown that 50-60% of research results are in conference proceedings; while in arts and humanities most research is in a book, not in papers or conference proceedings.”

Citation measures have come under fire but whatever their flaws, they are undeniably relevant.

Research has shown there is a positive relationship between average citations per paper and peer review measures of research performance.


The increasing complexity of the metrics landscape should have at least one beneficial effect: making people think twice before bandying about misleading indicators. More importantly, it will hasten the development of better, more open metrics based on more criteria, with the ultimate effect of improving the rate of scientific advancement.


Measure the Metrics

Cognitive scientist and open access (OA) evangelist Stevan Harnad also welcomes the introduction of metrics to supplement and eventually substitute for panel review, but he believes HEFCE must test and validate many potential metrics against the panel reviews in 2008.

He says: “The candidate metrics must go beyond just ISI journal impact factors, or even article/author citation counts. Non-ISI citation data (such as Google Scholar), download data, co-citations, and many other candidate metrics should be tested and validated against the RAE 2008 panel reviews, discipline by discipline.

“Open access looms large in both the generation and evaluation of metrics. RAE/HEFCE still has not made the link. Once OA self-archiving is mandated UK-wide and worldwide there will be an unprecedentedly rich and diverse set of OA metrics to test and validate.”



Wednesday, May 14, 2008

Student Plagiarism in an Online World: Problems and Solutions

Student Plagiarism in an Online World: Problems and Solutions

Edited By: Tim S. Roberts, Central Queensland University, Australia
ISBN: 978-1-59904-801-7 / Hard Cover / Publisher:
Information Science Reference / Pub Date: December 2007 /
Pages: 320 / List Price: US$180.00 / US$ 132.00 E-Version

Free Access to the Online Version When Your Library Purchases a Print Copy

Description: Twenty years ago, plagiarism was seen as an isolated misdemeanor, restricted to a small group of students. Today it is widely recognized as a ubiquitous, systemic issue, compounded by the accessibility of content in the virtual environment.

Student Plagiarism in an Online World: Problems and Solutions describes the legal and ethical issues surrounding plagiarism, the tools and techniques available to combat the spreading of this problem, and real-life situational examples to further the understanding of the scholars, practitioners, educators, and instructional designers who will find this book an invaluable resource.


Topics Covered:
Alternatives to plagiarism
Assessing textual plagiarism
Assignments that support original work
Blogging and plagiarism
Contract cheating
Contributing factors to online plagiarism
Controlling plagiarism
Educating students
Information revolution
Lecturer attitudes toward plagiarism
Plagiarism and international students
Plagiarism and the community college
Plagiarism as an ethical issue
Plagiarism detection systems
Plagiarism prevention
Plagiarism-related behaviors
Student perspective of plagiarism
Unintentional plagiarism
Writing as a developmental skill


Table of Contents:
Section I: Some Groundwork

Chapter I: Student Plagiarism in an Online World: An Introduction / Tim S. Roberts, Central Queensland University, Australia
Chapter II: A Student Perspective of Plagiarism / Craig Zimitat, Griffith University, Australia
Chapter III: Controlling Plagiarism: A Study of Lecturer Attitudes / Erik J. Eriksson, Umeå University, Sweden; Kirk P. H. Sullivan, Umeå University, Sweden

Section II: Two Particular Case Studies

Chapter IV: Dealing with Plagiarism as an Ethical Issue / Barbara Cogdell, University of Glasgow, UK; Dorothy Aidulis, University of Glasgow, UK
Chapter V: Working Together to Educate Students / Frankie Wilson, Brunel University, UK; Kate Ippolito, Brunel University, UK

Section III: EFL and International Students

Chapter VI: EFL Students: Factors Contributing to Online Plagiarism / Teresa Chen, California State University, USA; Nai-Kuang Teresa Ku, California State University, USA
Chapter VII: International Students: A Conceptual Framework for Dealing with Unintentional Plagiarism / Ursula McGowan, The University of Adelaide, Australia
Chapter VIII: International Students and Plagiarism Detection Systems: Detecting Plagiarism, Copying, or Learning? / Lucas D. Introna, Lancaster University Management School, UK; Niall Hayes, Lancaster University Management School, UK

Section IV: Two Specific Issues

Chapter IX: Plagiarism and the Community College / Teri Thomson Maddox, Jackson State Community College, USA
Chapter X: The Phenomena of Contract Cheating / Thomas Lancaster, Birmingham City University, UK; Robert Clarke, Birmingham City University, UK

Section V: Prevention is Better than Cure

Chapter XI: Prevention is Better than Cure: Addressing Cheating and Plagiarism Based on the IT Student Perspective / Martin Dick, RMIT University, Australia; Judithe Sheard, Monash University, Australia; Maurie Hasen, Monash University, Australia
Chapter XII: Plagiarism, Instruction, and Blogs / Michael Hanrahan, Bates College, USA
Chapter XIII: Minimizing Plagiarism by Redesigning the Learning Environment and Assessment / Madhumita Bhattacharya, Athabasca University, Canada and Massey University, New Zealand; Lone Jorgensen, Massey University, New Zealand
Chapter XIV: Expect Originality! Using Taxonomies to Structure Assignments that Support Original Work / Janet Salmons, Vision2Lead, Inc., USA

Section VI: Two Looks to the Future

Chapter XV: Substantial, Verbatim, Unattributed, Misleading: Applying Criteria to Assess Textual Plagiarism / Wilfried Decoo, Brigham Young University, USA and University of Antwerp, Belgium
Chapter XVI: Students and the Internet: The Dissolution of Boundaries / Jon R. Ramsey, University of California, Santa Barbara, USA