This post looks at the most important metrics librarians should consider when evaluating their library collection, and subsequently when making acquisition, renewal or cancellation decisions during the selection management process. While most libraries use metrics to assist these processes, many are not getting the full picture available to them. A combination of new and traditional tools has paved the way to a much more comprehensive view of a library collection via complementary indices, which can not only streamline a collection to make it more relevant to end users, but also save the library time and money.
Using evidence to build a collection
As the library world continues to adopt processes and methodologies from other areas of the business world, the metrics used to measure them are changing too. One particularly awkward figure is return on investment (ROI). More and more, librarians are put in the position of having to validate purchasing decisions with quantitative evidence, supplemented by qualitative feedback from their institution’s faculty about their needs for library resources. Justifying acquisition, renewal and cancellation decisions requires the ability to demonstrate the impact of each decision on the library budget, and to make sure that funds are being used in the most effective way.
Any ROI calculation must be built from relevant and consistent metrics. We have collated a quick-reference list of the most useful metrics for collection management, to help you derive your own ROI indicators.
Here is our top ten list:
Cost – While cost on its own is not a definitive metric, if you don’t have the budget, you can’t buy content. Cost is therefore a bottom-line pressure, but also one that must be combined with other metrics to make any sense: cost has no fixed relationship to value.
Cost per use – This measurement is far more informative than cost alone, as it illustrates the relative cost of each journal or book (or other purchased content type) based on its usage within a collection. It will sometimes uncover titles with very low usage and a correspondingly high cost per use. An extension of this metric is cost per capita, which divides cost by the number of people studying or researching within a given subject area. Comparing budgeted cost per capita against usage per capita gives you the cost of a single use of a resource per person in the target audience for that resource.
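Under the hood this is simple arithmetic. A minimal sketch, with made-up title names and figures standing in for what would normally come from COUNTER usage reports and budget data:

```python
def collection_metrics(cost, uses, audience):
    """Per-title figures: cost per use, cost per capita, usage per capita."""
    return {
        "cost_per_use": cost / uses,          # annual cost / full-text uses
        "cost_per_capita": cost / audience,   # annual cost / people in target audience
        "usage_per_capita": uses / audience,  # uses / people in target audience
    }

# Hypothetical titles for illustration only.
high_use = collection_metrics(cost=2400.00, uses=1200, audience=300)
low_use = collection_metrics(cost=3100.00, uses=62, audience=150)

print(high_use["cost_per_use"])  # 2.0 per use: cheap relative to demand
print(low_use["cost_per_use"])   # 50.0 per use: a cancellation candidate?
```

Titles like the second one, with low usage and a high cost per use, are exactly the ones this metric is designed to surface.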
Impact – The citation-based impact metric attracts a good deal of criticism in the academic community at the moment, but as a top-level indicator of one specific type of quality, it can still be a valuable tool. In this context we apply the metric to the journal as a complete entity, which is how it was designed back in the 1960s (the history of the JIF is summed up nicely in “History of the journal impact factor: Contingencies and consequences”). It shows the average number of citations that a paper in a specific journal attracts. Thomson Reuters owns the calculation methodology for its journal impact factor, which is based on the data held within its Web of Knowledge℠ database. A good alternative is the SCImago Journal Rank, which adds a more sophisticated weighting for citation quality to the equation, similar to Google’s PageRank.
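The standard two-year impact factor is just a ratio: citations received this year to items the journal published in the previous two years, divided by the number of citable items in those two years. A sketch with invented numbers:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year impact factor: this year's citations to last two years'
    items, divided by the number of citable items in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 180 citations in 2013 to its 2011 and 2012 papers,
# of which there were 90 citable items in total.
jif = impact_factor(180, 90)
print(jif)  # 2.0
```

Note that because it is an average, a handful of very highly cited papers can drag the figure up for a journal whose typical paper is cited far less often, which is part of the criticism mentioned above.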
Altmetrics – Altmetrics have become dramatically more popular since the term was first coined in 2010, as researchers, libraries and publishers attempt to make sense of the vast amounts of new data available electronically. Altmetric.com is one of the leading tools, offering a transparent means for librarians to see the extra influence that the content they subscribe to may be having, which can in turn inform the selection management process within the collection. The Altmetric Explorer product offers a complete view of the influence a single article, or an entire journal, has on users across a host of channels on the web, including social network sharing, blogs and media coverage. The journal comparison part of the tool can highlight large differences in the additional online influence of two journals, which might come in handy in a cancellation decision between two similar titles. The Mendeley API and numerous other data sources from the web and online networks are used to deliver data to other altmetric tools such as ImpactStory and ReaderMeter. A list of some of these tools can be found at http://altmetrics.org/tools, and a recent article in Serials Review contains a more substantial list, along with some discussion of the value of altmetrics.
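One way such a comparison can feed into a cancellation decision is a simple weighted tally of mentions per channel. The channel names, weights and counts below are invented for illustration; real figures would come from a tool such as Altmetric Explorer, and the weights are a local policy choice, not a standard:

```python
# Arbitrary example weights: a news story counts for more than a tweet.
weights = {"news": 8, "blogs": 5, "twitter": 1}

# Hypothetical mention counts for two similar journals over the same period.
journal_a = {"news": 2, "blogs": 4, "twitter": 120}
journal_b = {"news": 0, "blogs": 1, "twitter": 45}

def attention_score(mentions, weights):
    """Weighted sum of mentions across channels."""
    return sum(weights[channel] * count for channel, count in mentions.items())

score_a = attention_score(journal_a, weights)  # 2*8 + 4*5 + 120*1 = 156
score_b = attention_score(journal_b, weights)  # 0*8 + 1*5 + 45*1 = 50
print(score_a, score_b)
```

On this toy data, journal A has roughly three times the online attention of journal B, one more data point alongside cost per use when the two titles otherwise look interchangeable.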
Mendeley (yep, altmetrics again) – A top source of altmetric data is the Mendeley platform. Research conducted so far (e.g. “Validating online reference managers for scholarly impact measurement”) shows that Mendeley document readership data correlates strongly with the traditional citation-based impact factor, so it is worth taking note if you are looking for a complementary or alternative measure of both usage and impact. The Mendeley Institutional Edition is a great place to start for librarians looking for an altmetric overlay for usage data.
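The readership-citation relationship reported in that research is usually expressed as a correlation coefficient. A self-contained sketch with fabricated reader and citation counts (real data would come from the Mendeley API and a citation index):

```python
# Fabricated per-article figures, for illustration only.
readers = [10, 25, 40, 55, 80]    # Mendeley readers per article
citations = [2, 6, 9, 14, 21]     # citations per article

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(readers, citations)
print(round(r, 3))  # close to 1: readership tracks citations in this toy data
```

A coefficient near 1 on real data would support using readership as an early, complementary signal of impact, since readers accumulate faster than citations.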
10 metrics you should be using to evaluate your library collection
27 Mar 2013 · Filed under: Selection Management
Part I: Looking for ROI, and the first 5 metrics
You may also like
Part II: 10 metrics you should be using to evaluate your library collection
Read the second post in this 2-parter here.