Monday, October 7, 2013

NISO Altmetrics Project > First In-person / [Streamed] Meeting > October 9, 2013 > San Francisco, CA

As previously announced, NISO is undertaking, with a grant from the Alfred P. Sloan Foundation, a two-phase initiative to explore, identify, and advance standards and/or best practices related to a new suite of assessment metrics for the scholarly community. The first phase of the project is intended to surface areas for potential standardization and to collectively prioritize those projects.

The first in-person meeting in support of this work will take place on Wednesday, October 9, 2013 in San Francisco. This one-day meeting will include a short opening keynote on the topic of assessment, lightning talks on related projects, brainstorming to identify topics for discussion, and prioritization of proposed work items.

The meeting is free for all attendees. [snip]. In-person registration for this event closed on Friday, October 4 at 5:00 p.m. (ET). Virtual attendance registration closes on Tuesday, October 8 at 5:00 p.m. (ET).

Virtual attendance registration is available via [snip]. Livestream information and a link will be added to this page on the day of the event.
AGENDA
Note: All times are Pacific Time

October 9, 2013
8:30 a.m.
Welcome
Round the Room Introductions
8:45 a.m.
Introduction: Background and What We Hope to Achieve
Todd Carpenter, Executive Director, NISO
9:15 a.m.
Lightning Talks on Related Projects (5 min each)
Speakers currently signed up are:
  • Euan Adie – Uptake of altmetrics in academic publishing environments
  • Michael Habib – Expectations by researchers
  • Stefanie Haustein – Exploring disciplinary differences in the use of social media in scholarly communication
  • Gregg Gordon – Building trust into altmetrics
  • Heather Piwowar – Altmetrics for Alt-products: approaches and challenges
  • Marcus Banks – Moving beyond the PDF: data sets and visualizations as equal partners
  • Carly Strasser – Altmetrics as part of the services of a large university library system
  • William Gunn – The provenance of altmetrics readership
  • Richard Price – The role of altmetrics in Academia.edu
  • Peter Brantley – Deriving altmetrics from annotation activity

10:45 a.m.
Break
11:00 a.m.
Brainstorming: Identification of Topics for Discussion
Participation by all attendees, including virtual attendees
The exercise will include collecting topics of interest from attendees and posting problems, issues, gaps, challenges, and themes on post-it notes, followed by collective grouping and prioritizing of those themes. Initial themes will have surfaced in the open Google Doc shared with the group prior to the event.
12:00 p.m.
Lunch
1:00 p.m.
Breakout of Discussion Groups
The groups will hold an open discussion of their selected topics and how each plays into a future ecosystem. Depending on the topic, this could include identifying related projects, potential solutions, ongoing pilot projects, and gaps in community activity related to the theme. Each group will come up with three to six action items related to its topic for later prioritization.
2:00 p.m.
Reporting Out of Discussion Groups & All-Attendee Discussion of Reports
Each group will report on its discussions, highlighting necessary actions, gaps or areas where more information is needed.
3:30 p.m.
Wrap up, Meeting Adjourns
7:00 p.m.
Group Dinner

Source and Links Available At:

Sunday, October 6, 2013

ASIS&T Bulletin > Altmetrics: What, Why and Where?


Heather Piwowar, Guest Editor

Introduction

Altmetrics is a hot buzzword. What does it mean? What's behind the buzz? What are the risks and benefits of using alternative metrics of research impact – altmetrics – in our discovery and evaluation systems? How are altmetrics being used now, and where is the field going?

This special section of the Bulletin of the Association for Information Science and Technology focuses on these questions. Essays from seven perspectives highlight the role of altmetrics in a wide variety of settings.

The collection begins with its most general article, one I authored with my ImpactStory co-founder Jason Priem, motivating the role of altmetrics for individual scholars through "The Power of Altmetrics on a CV." The next few papers highlight ways that altmetrics may transform scholarly communication itself. Ross Mounce, a doctoral student and Panton Fellow of the Open Knowledge Foundation, explores the relationship between open access and altmetrics in "OA and Altmetrics: Distinct but Complementary." Juan Pablo Alperin, doctoral student and developer with the Public Knowledge Project, encourages us to "Ask Not What Altmetrics Can Do for You, but What Altmetrics Can Do for Developing Countries." Stacy Konkiel and Dave Scherer, librarians at Indiana University and Purdue, respectively, discuss how altmetrics can empower institutional repositories in "New Opportunities for Repositories in the Age of Altmetrics."

Completing the collection are three more perspectives from the builders of hot altmetrics tools. Jennifer Lin and Martin Fenner, both of PLOS, explore patterns in altmetrics data in "The Many Faces of Article-level Metrics." Jean Liu, blogger, and Euan Adie, founder of Altmetric.com, consider "Five Challenges in Altmetrics: A Toolmaker's Perspective." Finally, Mike Buschman and Andrea Michalek, founders of Plum Analytics, wrap up the collection asking, "Are Alternative Metrics Still Alternative?"

[snip]

We might even consider nontraditional applications of citation metrics to be altmetrics – citations to datasets as first-class research objects, for example. Other examples include citation counts filtered by type of citation, like citations by editorials or citations only from review articles or citations made only in the context of experimental replication. All of these are alternative indicators of impact.
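To make the idea of filtered citation counts concrete, here is a minimal sketch in Python. It assumes a hypothetical citation record carrying a citing-document type and a citation context; the field names and label values are illustrative only, not drawn from any real citation database or API.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class Citation:
        citing_doc_type: str  # e.g. "research-article", "editorial", "review"
        context: str          # e.g. "background", "methods", "replication"
        target: str           # identifier of the cited work or dataset

    def filtered_citation_counts(citations, doc_type=None, context=None):
        # Count citations per cited target, keeping only records that match
        # the requested citing-document type and/or citation context.
        counts = Counter()
        for c in citations:
            if doc_type is not None and c.citing_doc_type != doc_type:
                continue
            if context is not None and c.context != context:
                continue
            counts[c.target] += 1
        return counts

    records = [
        Citation("review", "background", "10.1234/dataset.1"),
        Citation("editorial", "background", "10.1234/paper.2"),
        Citation("research-article", "replication", "10.1234/paper.2"),
    ]

    # Citations made only in the context of experimental replication:
    print(filtered_citation_counts(records, context="replication"))
    # Citations coming only from review articles:
    print(filtered_citation_counts(records, doc_type="review"))

Counts like these are alternative indicators in exactly the sense above: the same underlying citation data, sliced by who is citing and why.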

Altmetrics offer four potential advantages:

  • A more nuanced understanding of impact, showing us which scholarly products are read, discussed, saved and recommended as well as cited.
  • Often more timely data, showing evidence of impact in days instead of years.
  • A window on the impact of web-native scholarly products like datasets, software, blog posts, videos and more.
  • Indications of impacts on diverse audiences including scholars but also practitioners, clinicians, educators and the general public.

Of course, these indicators may not be “alternative” for long. At that point, hopefully we’ll all just call them metrics.

[snip]

Source and Links Available At:

[http://www.asis.org/Bulletin/Apr-13/AprMay13_Piwowar.html]