Journal Impact Factor: Uncritical Substitute for Research Quality?

Robin Chin Roemer of the University of Washington and Rachel Borchardt of American University, authors of the book Meaningful Metrics: A 21st-Century Librarian’s Guide to Bibliometrics, Altmetrics, and Research Impact, talk to Inside Higher Ed about different research assessment tools like altmetrics. But here Robin speaks about the traditional Journal Impact Factor.

Journal Impact Factor remains a commonly used metric. What's wrong with it? What do newer approaches have to offer?

Robin: It’s not so much that there’s something intrinsically “wrong” with Impact Factor as that there’s something wrong with how Impact Factor has come to be used by many parts of academia, e.g. as an uncritical substitute for individual research quality or, even worse, researcher quality. When you really look at it, Impact Factor is just another way of saying “materials this journal published over the previous two years have averaged about this many citations this year.” It’s a journal-level metric for comparing the reach and influence of journals, based on a definition and window of impact that is itself only a good fit for certain research areas. Its relevance therefore can’t be generalized across different fields, let alone different disciplines. Yet that’s exactly how it’s commonly wielded, which is both misleading and frustrating to many who are just trying to do excellent work and make a legitimate impact.
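To make the definition Robin paraphrases concrete, here is a minimal sketch of the standard two-year calculation. All figures are hypothetical illustration values, not data for any real journal:

```python
def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Impact Factor for year Y: citations received in Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical example: a journal published 200 citable items in
# 2022-2023, and those items were cited 500 times during 2024.
print(impact_factor(500, 200))  # 2.5
```

Note that the result is a journal-level average: a handful of heavily cited articles can lift the figure for every paper the journal publishes, which is one reason it fails as a proxy for individual research quality.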