Impact measurements for the humanities – University Affairs
If you need help quantifying your work, it is best to consult with the specialist librarians at your university first.
With each annual review, I feel increasing pressure to quantify my “impact”, even though that is a ridiculous concept in my field. Should I play this game? Are there ways to subvert it? I don’t use social media, so altmetrics aren’t for me.
– Anonymous (medieval studies)
Response from Dr Editor:
The idea that the value of scholarly work can be quantified through citation counts – let alone by metrics that fail to track monographs – is indeed troubling. And the rising tide of numbers is not limited to bibliometrics such as the h-index or systems like the UK’s REF regime: in Ontario and Alberta, we have seen provincial governments tie university funding to performance measures such as graduation rates and graduates’ post-degree earnings (see CAUT 2020).
As Kathryn Maude convincingly argues, these metrics are problematic not only because they inaccurately capture citation counts for humanities monographs in particular: they can also “make our work less radical”, because the “pressure to publish limits intellectual curiosity and forces research down more conventional paths” (2014, p. 247).
In short, there are good reasons to be concerned about the pressure to demonstrate quantifiable impact. It is appropriate to use whatever power you have within your institution to resist metricization.
For many, however, the annual review or the tenure and promotion file will not be the most effective place to make such arguments. To determine which metrics of influence are appropriate for humanities faculty, I spoke with Arianne Hartsell-Gundy, Duke University’s librarian for literature and theater studies, who, like me, takes a pragmatic approach to impact measurement when working with individual researchers.
Arianne’s first tip is always to consult the subject librarians at your own institution. Librarians at your institution are familiar with your institutional context and are therefore better positioned to support you than any set of general guidelines. That said, here are some starting points that Arianne recommends for making quantitative claims about the quality of your humanities research:
1. “I publish in journals with acceptance rates of X%”
A journal’s influence is too often assumed to be reflected in its Journal Impact Factor (JIF), an indicator of the average number of citations received by articles published in that journal over the preceding two years. As Marc Couture has argued, the JIF is “an inaccurate and unreliable measure”, and the link between a journal’s JIF and the quality of any given article it publishes is “sketchy at best” (2017).
If you are publishing in journals that do not have a JIF, or in journals whose JIFs appear low compared to those of other disciplines, an alternative measure is to cite a journal’s acceptance rate. The MLA Directory of Periodicals is comprehensive for literary studies and also includes journals from other humanities disciplines; if you are not a member of the MLA, you can access the directory through your university library. In the “Submission Details” section, find the number of articles published per year and divide it by the number of manuscripts submitted per year to calculate an acceptance rate.
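The division described above can be sketched in a few lines; note that the figures below are hypothetical examples, not drawn from any real journal’s listing:

```python
# Estimate a journal's acceptance rate from the two figures listed in the
# MLA Directory of Periodicals "Submission Details" section.
# The sample numbers below are hypothetical, not real journal data.

def acceptance_rate(published_per_year: int, submitted_per_year: int) -> float:
    """Return the acceptance rate as a percentage."""
    return 100 * published_per_year / submitted_per_year

# e.g. a journal publishing 24 articles out of 300 annual submissions
rate = acceptance_rate(24, 300)
print(f"Acceptance rate: {rate:.1f}%")  # Acceptance rate: 8.0%
```

A low percentage here can serve the same rhetorical purpose in a review file that a high JIF serves in the sciences: evidence that the venue is selective.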
2. “X libraries hold my book”
WorldCat is more than just a way to find books to request through interlibrary loan. Thanks to its WorldCat Identities feature, you can search for your own name and see a list of each of your books. Although the “editions” listing can be confusing (WorldCat counts softcover and hardcover, as well as physical and electronic copies, as separate editions), you can reliably cite the number of WorldCat member libraries worldwide that hold a copy of your book. Search for your colleagues and you’ll quickly get a sense of whether having your book held by 2,333 WorldCat member libraries is impressive.
3. “People from X countries teach my work”
When your work begins to be taught in university classes, you know you are influencing the profession. The Open Syllabus Project contains over six million syllabi, and you can search your own name and publications to see how often your work has been taught. If your work appears in the Open Syllabus corpus, you can click on each cited title to see a map of the countries in which your work is taught, as well as the names of the specific institutions where you have appeared on a syllabus.
Of course, not everyone who teaches a university course chooses to upload their syllabus to this database, so, as with all database searches, your results are limited by the size of the dataset. The more researchers invest in the principles of openness in research in particular and in academic work in general, the more robust these datasets will become.
Personally, I find the Open Syllabus Project easier to search than a platform like Humanities Commons, which uses a more limited set of metadata for its syllabi; the latter, however, can show you how many people have downloaded your materials, which can be a useful metric to attest to your influence as an educator.
I thank Arianne for sharing these suggestions and resources with me. If you would like help developing a set of metrics that meaningfully quantify your work – or the work of your department – consider starting with the specialist librarians at your university library.