In our Library, we show value by how well a resource or service meets the needs and goals of the university. In acquisitions, this generally means how well the resources perform.
We used to assess our resources with two data points: usage and cost, which combine into cost-per-use. This database cost X amount of money each time it was used. We basically reviewed the number of full-text downloads over the last few years for each database. We would also tell our University Librarian how much each resource's price had increased from one year to the next.
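That old metric can be sketched in a few lines. The function and figures below are hypothetical, for illustration only, not real subscription data:

```python
# Minimal sketch of the old cost-per-use metric:
# annual subscription cost divided by full-text downloads.

def cost_per_use(annual_cost: float, full_text_downloads: int) -> float:
    """Return the cost incurred each time the resource was used."""
    if full_text_downloads == 0:
        # A resource that was never used is infinitely expensive per use.
        return float("inf")
    return annual_cost / full_text_downloads

# A $12,000 database with 3,000 downloads costs $4.00 per use.
print(cost_per_use(12_000, 3_000))  # → 4.0
```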
That was about it.
But our budget became a lot more complicated, and we had to figure out ways to change how we assessed these resources. For one, we started losing students, whose tuition filled our budget coffers. Second, the university began highlighting a research agenda, which brought about two things:
Administrators began asking for resources to support their new resource centers
Faculty started saying, "If I have to publish more, I need better access to the journals in my field."
We saw a great increase in the number of resource requests. It used to be that we downplayed the resource request form so much that faculty didn't know it was there. (Requesting books had always been easy; requesting continuing resources was much more difficult because of their recurring annual costs.)
This rush of resource requests forced us to rethink the way we handled journal and database subscriptions, which we call continuing resources. First, we needed to figure out which resources to approve; second, we needed a way to pay for them. We put together a diverse committee to create criteria for evaluating resource renewals. (We also created criteria for assessing new requests, but that's a different story.)
Here are the criteria:
Number of searches (last year)
Number of searches (three-year trend)
Number of FT searches (last year)
Number of FT searches (three-year trend)
Cost/Use (last year)
Cost (last year)
Price increase (three-year trend)
You’ll notice that this is basically the same methodology we used before. We collect more data points for usage and pricing, and we look back a few more years. We do this to provide a bit more context and to even out large jumps.
But the big difference is this: we used to weigh resources individually, against themselves. Now we review all of our resources against each other. Basically, we take each data point — say, full-text downloads — and rank the resources from most to least. Then we break those rankings into quartiles: Q1 is the top and Q4 is the bottom.
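The rank-and-quartile step could be sketched like this. The database names and usage figures are made up for illustration:

```python
# Sketch of the comparative ranking: sort databases on one data point
# (full-text downloads here), then split the ranking into quartiles,
# with Q1 at the top and Q4 at the bottom.
import math

usage = {
    "Database A": 9500,
    "Database B": 7200,
    "Database C": 4100,
    "Database D": 2600,
    "Database E": 1800,
    "Database F": 900,
    "Database G": 450,
    "Database H": 120,
}

# Rank from most-used to least-used.
ranked = sorted(usage, key=usage.get, reverse=True)

# Assign a quartile based on position in the ranking.
quartiles = {}
for i, name in enumerate(ranked):
    q = math.ceil((i + 1) / len(ranked) * 4)
    quartiles[name] = f"Q{q}"

print(quartiles)
# Database A and B land in Q1; Database G and H land in Q4.
```

Repeating this for each criterion gives every database a quartile profile, so a title can be, say, Q1 on usage but Q4 on price increase.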
This new method is a world away from our old system of, "this database can be renewed because it seems to have good usage and the price has remained somewhat fair." The new method gives us a lot of context about how each database is doing within our system and against its peers. We can even see how databases interact with the different criteria. Some of the results are surprising.
These changes have given us better insight into the context of successful databases. Context means 1) that database seems expensive, but 2) the data may show its cost-per-use is about middle of the pack. Context could also mean 1) that set of journals has lost a lot of usage, and 2) it may have gone to a very similar set of journals that has gained a lot of usage. Finally, context could mean 1) a set of journals has seen usage increases in the past year, but 2) it's still performing worse than three years ago.
We've been working with this new system for only a year, but it already gives us a lot more insight into which databases we want to renew and which we are so happy with that we want to renew them on a three-year license. Most of all, with this new system we've been able to show anyone who asks how we calculate value for our continuing resources.