Using JUSP COUNTER JR1 usage data to review DMU Library’s ScienceDirect journal titles

I have previously blogged about my work in helping DMU Library to create a framework to move towards a more data-driven process for online resources renewals. The context for this work is to ensure that DMU Library is making well informed, evidence-based decisions relating to its electronic resources subscriptions. It is vital for DMU Library to continually maximise the value of its annual investment in online resources and provide access to relevant electronic content to support learning, teaching and research across the university.

Subscribed vs Non-Subscribed titles

One electronic resource that I concentrated on in the summer of 2016 was Elsevier’s ScienceDirect full text journal collection. Like many UK HE institutions, DMU Library subscribes to ScienceDirect via the national Jisc/NESLi agreement. This is a long-standing subscription cost that constitutes a major spend from the library’s annual online resources budget. The construction of the ScienceDirect e-journal agreement means that DMU Library connects to content in two ways:

  • Subscribed (or core) titles. These are journals in the collection that have been selected by library subject teams to best support learning, teaching and research at the institution.
  • Non-Subscribed (or non-core) titles via ScienceDirect Freedom Collection content. These are non-core publications that the library purchases as top-up content to the Subscribed titles in the collection.

As the Jisc/NESLi agreement with ScienceDirect was to be re-negotiated for 2017, July 2016 was a suitable time for the library to re-evaluate the value and relevancy of its ScienceDirect Subscribed titles. The existing Elsevier agreement allowed UK universities (should they wish) to substitute Subscribed journals for Non-Subscribed journals in the collection, as long as the institution’s contractual spend in the deal was maintained. If the ScienceDirect journal usage data for DMU highlighted that individual titles did not continue to support academic faculty interests, then the library would look to act to increase the value of its Subscribed titles before the new 2017 journal agreement was in place. The review also made sense when looking at maintaining post-cancellation access rights for DMU Subscribed titles should the library decide not to renew the Elsevier agreement.

After several days of planning, the objectives for the ScienceDirect review were agreed. These were:

  • Identify Subscribed ScienceDirect journal titles with low usage with a view to replacing them with high use Non-Subscribed titles. Any title substitutions would have to be made in line with the institution maintaining its annual contractual spend with the provider.
  • Base the review on COUNTER JR1 usage data exported from the JUSP service and 2016 journal list prices from Elsevier. This usage and cost data would then be combined to provide a cost-per-use (CPU) metric for each reviewed Subscribed and Non-Subscribed title. As libraries can mark “core” titles in the JUSP service (via access to KB+), DMU Subscribed titles could be easily filtered for display in the exported COUNTER file.
  • Usage and cost analysis to be collated by the library’s Content Delivery Team and disseminated to subject librarians in Academic Liaison for evaluation.
  • Outcomes from the evaluation process to be captured by Content Delivery and candidates for substitution communicated back to the provider.

JUSP allows libraries to mark up “core” titles in their electronic journal collections

Ready, steady, go…

I have mapped out the ScienceDirect review workflow in more detail below:

  1. Exported Elsevier JR1 COUNTER usage data from JUSP to Microsoft Excel. The statistics covered the period from Jan 2012 to Jun 2016. As DMU Library had already identified existing Subscribed (or core) titles in the JUSP service via KB+, I was able to filter the usage data to display Subscribed titles only and then sorted the data from low to high usage.
  2. To ensure the review was more effective and up-to-date, the Elsevier usage data was refined further to cover usage between Jan 2015 – Jun 2016. The data was re-sorted: Subscribed titles from low to high usage, Non-Subscribed titles from high to low usage.
  3. Received a breakdown of 2016 journal costs from Elsevier to aid cost analysis in the review – costs for Subscribed titles in the agreement, and list prices for the top 25 most used Non-Subscribed titles.
  4. Two Excel files to work with – usage data exported from JUSP and journal costs from Elsevier. Used the VLOOKUP function in Excel to merge the two datasets into one spreadsheet, matching on journal title ISSNs.
  5. Created a CPU metric for current Subscribed titles and the top 25 most used Non-Subscribed titles. Created proposed lists of Subscribed titles with high CPU (to be removed) and Non-Subscribed titles with low CPU (for addition). Analysis sent to subject librarians in Academic Liaison for evaluation.
  6. Based on the CPU metric, subject teams selected proposed Subscribed titles for substitution. Proposed titles sent to Elsevier. Elsevier ratified the substitutions and made the relevant changes to DMU Library’s Subscribed journals list for 2017.
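Steps 4 and 5 (the VLOOKUP merge on ISSN and the CPU calculation) could equally be scripted outside Excel. Here is a minimal Python sketch of that join; the ISSNs, titles, download counts and prices are invented purely for illustration, not DMU data:

```python
# Hypothetical JR1 usage figures exported from JUSP, keyed on ISSN
usage = {
    "0000-0001": {"title": "Journal A", "downloads": 1200},
    "0000-0002": {"title": "Journal B", "downloads": 45},
    "0000-0003": {"title": "Journal C", "downloads": 300},
}

# Hypothetical list prices supplied by the publisher, keyed on ISSN
prices = {"0000-0001": 1500.00, "0000-0002": 900.00, "0000-0003": 600.00}

# Equivalent of the VLOOKUP match on ISSN: join usage and cost per title
merged = []
for issn, record in usage.items():
    price = prices.get(issn)  # None if this ISSN has no price row
    cpu = round(price / record["downloads"], 2) if price else None
    merged.append({"issn": issn, "title": record["title"],
                   "downloads": record["downloads"], "cpu": cpu})

# Sort from low to high usage, as in step 1 of the workflow
merged.sort(key=lambda row: row["downloads"])

for row in merged:
    print(f'{row["title"]}: {row["downloads"]} downloads, £{row["cpu"]} per use')
```

A dictionary lookup keyed on ISSN sidesteps the classic VLOOKUP pitfall of the lookup column needing to be the left-most column in the range.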

CPU: what constitutes good value?

CPU provides a basic appraisal of library user activity in line with the financial investment made by the library. CPU may well represent a starting point for library staff discussion with regards the value of an online resource or collection. CPU does require context though, and one way to potentially achieve this is to set up some usage indicators. I blogged about creating usage indicators to help shape resource renewal or cancellation in my Aug 2016 post asking “How do academic libraries value their electronic resources?“.

Whilst not strictly applied by subject teams at DMU in this example of a journals review, the general usage indicators below could be applied by libraries when approaching this type of journal substitution work. Obviously, other factors may need to be considered, but such indicators do provide a framework for a move towards a more data-driven decision-making process:

  • CPU < £1: excellent value, automatically retain title.
  • CPU between £1-£5: good to fair value, recommend retaining title.
  • CPU between £5-£10: fair to poor value, investigate reasons for low use, potential substitution.
  • CPU > £10: automatic substitution.
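Bands like these are simple to encode once agreed. The Python sketch below maps a CPU figure to the indicative action above; the function name is my own invention and the thresholds are the illustrative ones from the list, not a DMU policy:

```python
def cpu_action(cpu: float) -> str:
    """Map a cost-per-use figure (in £) to an indicative renewal action.

    Thresholds follow the example bands above and are illustrative only.
    """
    if cpu < 1:
        return "excellent value: automatically retain title"
    elif cpu <= 5:
        return "good to fair value: recommend retaining title"
    elif cpu <= 10:
        return "fair to poor value: investigate low use, potential substitution"
    else:
        return "automatic substitution"
```

Note that boundary values (a CPU of exactly £5 or £10) fall into the lower band here; the original list leaves them ambiguous, so a library adopting bands like these would need to decide where boundaries sit.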

Outcomes of the review

After the review was completed, and the revised Subscribed titles list was approved by Elsevier and Jisc, DMU Library had removed approximately 20 low use, high CPU Subscribed titles and replaced them with a similar number of high use, low CPU Non-Subscribed titles for the new 2017 agreement. When analysing DMU’s revised Subscribed titles list, Jisc reported (based on 2015 full year COUNTER usage) that the substituted titles would increase the proportion of DMU core title usage in the Elsevier deal from around 18% to 29%.

I found the ScienceDirect review process to be a worthwhile activity. The process was not without its challenges, but it did provide an opportunity for library staff collaboration and discussion. It also gave the directorate a chance to engage with useful library user activity data and make online resource selection decisions that will hopefully provide more relevant content and support for students and staff at the institution.

Posted in DMU, Libraries, Library, Usage stats

Reflections on #Digifest17

This week I attended my third Jisc Digifest conference. The free-to-attend event is hosted annually at the ICC in Birmingham, and is organised by Jisc to celebrate all things “digital” in the education sector. Here are a few of my personal reflections on the #Digifest17 conference.


My #Digifest17 delegate badge and lanyard.

Jisc’s Digifest conference did not disappoint once again – the 2017 event contained a plethora of interesting and insightful talks, workshops, demos and debates. Choosing which talks to attend was easy using the interactive Digifest app on my iPad – you could access the event programme via the app and create your own personal schedule for the two days of the conference.

I think the two main themes for me at #Digifest17 were “visualisation” and “digital literacies”. A number of the Digifest talks covered these topics when highlighting online resources or services.

Digital literacies

The “Building digital expertise in your organisation” discussion featured a number of case studies highlighting different institutional approaches to develop staff digital capabilities. I especially found North Lindsey College’s “DPD Go!” programme to be a fascinating example of creating a framework for increasing digital literacies and core competencies for staff – the idea of an “app club” for library staff is one which I think will resonate with colleagues I work with at DMU. The discussion promoted the need to embed digital capabilities in the workplace and for staff to engage with this process – digital skills can raise staff confidence (staff like to be rewarded and celebrate individual achievements) and directly add value to the learning experience of students. These ideas echoed similar outcomes in the recent 2017 NMC Horizon report. There was a sense though that organisations had to be persistent and consistent in their approach, and use “multi-pronged” tactics to support staff development and training in this area.

Whilst the “Building digital expertise in your organisation” session focused mainly on the development of staff digital skills, Rafe Hallet’s “Surfing in the Shallows” presentation highlighted students’ experiences with using online resources in HE. The talk debated the erosion of old reading and academic practices (a nostalgia for the concept of the “lone scholar”?) in light of new digital tech and resources. Does this mean that the (perceived) scattered, distracted reading processes of current students are in some way associated with the formats of this new digital tech? The talk went on to suggest that knowledge consumption is much more visual in the 21st century and that students employ increasingly creative and non-linear methods to absorb learning online. New digital resources are responding to new forms of reading, and it is important for universities to find a balance between digital play and academic rigour.

Some of the insights in Rafe Hallet’s talk were also promoted in Sara Perry’s “Digital jamming” workshop. Sara Perry uses digital tools in her work with students at the University of York to embrace group creativity and foster collaboration. The workshop was a highly interactive 10 minutes which saw conference attendees drawing stick figures and using these to create an online “meme” for Digifest17.


Stick figure family created during the “Digital jamming” workshop (courtesy of @beccihutchins).


Two sessions which took place on the second day of Digifest17 highlighted the importance of visualising content or data in digital services. The new UK Medical Heritage Library (UKMHL) collection was promoted during the “Historical Texts: visualising digital collections” talk. The UKMHL collection is a new open access digital resource which includes diverse methods for visualising archive content. The resource uses date histograms, image walls and sunburst visualisations for students to discover and engage with online content. Rafe Hallet used the UKMHL collection as a good example of a digital resource experimenting with new forms of visualised knowledge for students. Digital content is layered for students to find, participate in and co-create.

The session on “Business intelligence for higher education” demonstrated student datasets being analysed, and used, to form interactive dashboards in the Heidi Plus project. The dashboards in the Jisc/HESA project used Tableau software to present the data in a highly visual way. The project also represented examples of “agile” project management. As I work with usage data in my day-to-day work at DMU Library, I was interested to hear about how the dashboards were created and if any potential efficiencies were achieved as part of the project work. Improving data visualisation for library colleagues is an important part of my library work, and it was beneficial to hear how universities involved in the Heidi Plus project observed the work as continuing professional development for staff (around data manipulation, data visualisation and creating a shared language for planning projects).

Social media

I think the most valuable session for me during Digifest17 was Eric Stoller’s “Part Deux: why educators can’t live without social media”. The value was partly down to timing (I am due to co-present on a social media discussion in May 2017 for a Learning at Work week at DMU Library), but a number of the themes in Stoller’s presentation struck a chord. Stoller covered various points in his discussion – the value of social media (for staff and students) with regards networking (connecting with others via a shared interest), research, career development and engagement with learners. Stoller also spoke about the balance between “professional” and “social” spaces on social media. Social media is constantly evolving and is fluid in the way it can potentially blur lines between work space and private life. Stoller shared his belief that it was important for educators and educational establishments to be adaptive and progressive – Stoller suggested this organisational mindset and ethos should be set by leaders at institutions. The key function, to engage with students and learners, must remain even if social media tools come and go.


I found Digifest17 to be an enjoyable and enthralling event. As in previous years, it was an excellent vehicle for current awareness of digital trends, resources and services in the HE and FE sectors. As I tweeted right after the event, your head will be buzzing for days afterwards and you will want to take ideas back to your library/college/workplace and share these insights with colleagues.


Twitter praise!

Roll on Digifest18!


Posted in #Digifest, Jisc, Libraries, Social Media

How do academic libraries value their electronic resources?

UK academic libraries spend millions on electronic resources annually to support learning, teaching and research at their institutions. Demonstrating the value of these online resources presents a number of challenges for libraries, especially when it comes to attributing value to e-resources in a consistent and meaningful way. The concept of resource or content “value” is subjective, and will be applied differently between HE institutions, but also between different subject areas.

Usage statistics can be a starting point for discussion with regards the value of e-resources. The financial imperative for HE libraries to maximise value for money from the annual investment made in online resources has never been more important. Well-informed resource decision-making is key for libraries, so usage statistics play a crucial role for libraries in adopting a more data-driven approach when analysing user activity across their online collections. There is a mass of data out there though for libraries to work with and manipulate, so it is helpful for library staff to know what questions they want to ask of the data to inform their decision-making processes and the criteria they set out for e-resource evaluations.

Drivers for a more data-driven approach to e-resource renewals

DMU Library’s Content Delivery Team ran a “How does the library value its electronic resources?” workshop for DMU Library staff in April 2016 (I was one of the co-presenters on the session). I also attended a UKSG “Usage stats for decision-making” event in June 2016. Both events raised discussion points around how libraries currently measure resource usage and apply value to their online content. The UKSG event featured a panel of presenters from UK academic libraries, and as an information professional who deals with usage statistics on a daily basis at DMU Library, I was fascinated to hear some of the different approaches to collating usage data in UK libraries. Whilst listening to the individual presentations, it became clear that academic libraries review usage statistics in diverse ways. Approaches seemed to depend on factors such as the size of the library budget (and who is responsible for ensuring the money is well spent), the subjects taught at the particular institution, the number of library staff working with usage stats, and the technical expertise of those staff when it comes to presenting usage data. Despite these differences, there are a number of common “drivers” that libraries face when it comes to applying value in a more objective, evidence-based manner. In no particular order of importance, these drivers include:

  • Compiling usage metrics and collating the subsequent feedback from different stakeholders can be time-consuming activities for library staff. The need to automate processes where possible is crucial to “free up” staff time to produce clear and concise data analysis
  • The need for libraries to make financial savings across increasingly stretched budgets
  • Improving online resource provision for under-invested subject areas
  • Higher student expectations of university library services and provision of online content during their studies
  • Anecdotal or arbitrary (subjective) decision-making processes leading to retention of low use, non-relevant library resources
  • Different usage analysis is often needed for different audiences (subject teams, academic staff in faculties, management and users). Applying value in a consistent way can be a challenge if different stakeholders are evaluating usage across different contexts

Status = under review? 

The key to tackling these challenges seems to be in asking “what stories do you want the usage data to tell?”. Identifying what criteria need to be included in resource analysis is crucial. These criteria will obviously vary from institution to institution, but also from subject to subject. Cost per use (CPU) seems to be the “go-to” measure of value for e-resources used by HE libraries, and I think there was consensus at the UKSG workshop that this is not going to change any time soon, as CPU gives a basic and simple calculation of user activity in line with the financial investment made by the library year on year.

One of the presenters at the UKSG workshop, Anna Franca from King’s College London, highlighted the potential need for usage “formulas” or “indicators” to help institutions establish more data-driven analysis for e-resource renewal or cancellation decisions. The indicators may provide more meaningful and helpful insight when reviewing CPU (or any types of usage data a library may use), and potentially provide libraries with more time to assess under-used resources on their radar and gauge the potential reasons behind this poor usage. Usage indicators will differ between institutions, and will need to fit alongside analysis of annual resource subscription costs, but the following example (for a full text resource) could be employed to shape resource renewal or cancellation discussions between library/faculty/management staff:

  • CPU below £1 per download = automatic renewal
  • CPU between £3 to £5 per download = “under review” status
  • CPU over £5 per download = cancellation (or substitution of content) recommended

The indicators above are a guide to the kind of banding a library could apply to implement a more data-driven approach to resource decision-making. I appreciate that for any renewal or cancellation decision, additional context will always be required to provide a comprehensive overview of usage and relevancy of an e-resource to a particular university faculty or department. Some online resources may be deemed specialist or niche, and will be aimed at, and used by, a smaller cohort of students. The CPU indicators may therefore have to be shifted to take this “specialist” tag into consideration (for example, £3 to £5 per download may represent very good value for a particular specialist database). The UKSG usage stats workshop highlighted some other factors around resource and subject profiles that library staff may need to take into account when reviewing e-resource usage activity:

  • Fluctuating inflation/currency exchange rates over time. These may skew resource CPU figures for individual years
  • Changes to teaching and learning at the institution over time – subjects will have varying “characteristics” that define that area of study (for example, differences in online publication formats and frequencies)
  • Publisher problems – inaccessible resources or poorly designed user interfaces
  • (Lack of) promotion and marketing of the e-resource
  • Unique content versus duplicated content (via alternative library databases or collections). This “overlap analysis” is an increasing part of my day-to-day library work
  • Digital rights management (DRM) limitations when looking at e-book usage metrics
  • Potential interlibrary loans costs incurred if an e-resource subscription is cancelled
  • Open access versus paid-for content

The future?

After listening to the different library presenters at the UKSG usage stats workshop, and discussing themes around usage statistics with some of the other workshop attendees, it became clear to me that DMU Library could benefit in a number of ways by progressing towards a more data-driven framework for e-resources renewals. I was keen to take what I had seen and listened to at the UKSG event back to DMU and converse with colleagues in the Content Delivery Team and the wider department. I believe a revised approach to usage data would create extra lead time for library staff to prioritise the analysis of under-used databases and collections purchased by the directorate. Content Delivery currently presents usage and cost analysis for all subscribed e-resources renewals that feature in the library’s portfolio. This can be time-consuming for Content Delivery staff to assemble and present, especially when several e-resource subscriptions expire at the same time. It would be wiser to concentrate on, and allow more evaluation time for, those e-resources that suffer low usage. Library subject teams could comprehensively examine factors that lie behind an e-resource’s poor use (this could mirror some of the additional context around e-resource usage I discussed earlier in this post). Action plans could be devised and set in motion to try and increase usage for the e-resource over a period of time (e.g. via targeted promotion to students and staff). These actions could then be reviewed over time to see if they had any effect. The outcome of the e-resource appraisal may result in the outright cancellation of the e-resource (depending on the immediate need to save money) but may also prompt libraries to check the viability of alternative (but more relevant?) online products on the market to replace or act as a substitute for existing content.

Going forward, I can certainly see value in creating an “under-review” status for e-resources that display high cost per use in line with usage indicators or banding set by the library. The Content Delivery Team are still at an early stage of working on this framework proposal. The April 2016 in-house DMU event on “how do we value our electronic resources?” reinforced the concept that there are many variables in reviewing the impact of, and attributing value to, e-resources in different subject areas. That said, I also came away believing that an increased evidence-based focus for e-resource selection and retention would help strengthen the collaboration between Content Delivery (the producers of usage data) and Academic Liaison Teams (the assessors of the usage data). This improved liaison between library teams can only help the department in delivering one of its key objectives to “maximise the usage and impact of the information resources available to users” at DMU.

Posted in DMU, Libraries, Library, UKSG, Usage stats

Using LibAnswers to manage e-resources user enquiries at DMU Library

The Content Delivery Team at DMU Library & Learning Services have recently started to trial Springshare’s LibAnswers service, in the hope it will assist the library to manage its e-resource troubleshooting activities more effectively and seamlessly.

Before using LibAnswers, Microsoft Outlook email was used to contact DMU students and staff who reported database/e-journal/e-book access problems to the library. Content Delivery’s E-Resources Mailbox email account was created several years ago and allowed library staff to forward user emails that highlighted access difficulties to online library content. The forwarding of these emails was usually completed by staff working in the library’s Just Ask Team (if initial “triage” of the access problem from the helpdesk could not fix it) or by DMU subject librarians in teaching/1-2-1 sessions with students/staff. Once the E-Resources Mailbox had received a new email message, Content Delivery staff would investigate the access problem and take action as appropriate.

It soon became clear that there were limitations in using Outlook to manage troubleshooting enquiries efficiently. A number of Content Delivery staff had access to the E-Resources Mailbox. Efforts were made to keep the mailbox uncluttered, but this became an onerous task to keep up with, especially during busy times of term. E-resource enquiries can often be complex and may require several different stages of investigation by different library staff before they are resolved. The library may also need to liaise with content and systems providers to log errors and request support. Tracking the status of these enquiries in Outlook could be difficult and time-consuming (especially if the library was waiting for external providers to communicate progress). The team did experiment with using the “Categories” and “Flags” options in Outlook to try and better organise the administration of responses, but building this into already set workflows was a challenge.

So, the team decided to look at LibAnswers as a potential alternative to Outlook. DMU Library had already purchased LibGuides and LibCal from Springshare, so there was a degree of familiarity with aspects of this service (look of the site, functionality etc.). The team had already created an e-resources troubleshooting LibGuide, for example. A colleague from Content Delivery and I created LibAnswers accounts and started to see how we could utilise the resource to better manage e-resource troubleshooting.

The E-Resources Mailbox address had become somewhat familiar to library staff who regularly reported student/staff access issues, so we wanted to keep this account if at all possible to minimise confusion. LibAnswers allows us to redirect email traffic from the E-Resources Mailbox in Outlook into the LibAnswers dashboard – individual enquiries in LibAnswers are called “tickets”, and each ticket is assigned an ID. Once logged into the LibAnswers site, we can view these tickets and work on them as needed.

We have been trialling the LibAnswers service for several weeks now. Early evaluation of the product has been promising, even if we haven’t had time to review/test every single part of the admin portal. Features/options we currently like are:

  • Ability for library staff to “claim” tickets when they start work on an enquiry (and also potentially “unclaim” at a later date if the ticket is passed on to another member of staff). This “claiming” of the support ticket is then highlighted in the central dashboard, so other colleagues who may be logged in to the dashboard concurrently can see which tickets are being currently worked on, and others which require potential attention.
  • Allow other team members to check and answer LibAnswers tickets in times of staff absences (illness, holidays). This means DMU students and staff can be contacted in a timely manner.
  • Assign or transfer the ticket to relevant library staff if added expertise is needed.
  • Add an internal note to support tickets if certain actions are required (e.g. removal of login details from a library access point). These internal notes are not communicated to the user making the enquiry, but stay logged in the dashboard for library staff to view.
  • Easy-to-view history of replies in any one support ticket (the most up-to-date correspondence between library and enquirer is displayed at the top of the ticket).
  • Set different ticket categories dependent upon the status of the support call (new, open, pending, closed).
  • Closing a support ticket removes the enquiry from the current log of open/pending calls in the dashboard, but it remains easily accessible in the “Answers” tab if further analysis/comment is required at a later date.
  • Having an alert display when a new ticket/new reply is received.

Problems we have faced so far are:

  • Documents or attachments added to a LibAnswers ticket can only be opened by those who have a LibAnswers account.
  • Replies/correspondence from users or staff who copy (Cc) the mailbox show up as new (separate) support tickets in the dashboard (there is a “merge” option in the LibAnswers admin site which we are looking at as a potential way around this).
  • If an already created message is resent, there is no log in the system of who this was sent to.

Features we hope to add in the future:

  • Look at using the reusable answers option to save time when replying to tickets that cover already reported or similar access problems.
  • Add category tags to ticket replies to further refine our processes and ability to filter the dashboard (e.g. e-books, e-journals, databases, authentication issues).
  • Use the popular links section to add pre-set URLs which we may need to refer to in ticket replies (DMU Library webpage, Subject Guides, E-Resources troubleshooting LibGuide). This will save time when constructing replies to support tickets.
  • Create a generic Content Delivery signature which we can add to all ticket replies.

We will continue to work with LibAnswers and see if we can fully integrate this service into our daily troubleshooting activities. Watch this space!

Posted in DMU, Libraries, Library, Troubleshooting

Library e-resources usage & cost analysis – creating a template spreadsheet

Part of my work in the DMU Library Content Delivery Team is to head up the capture, collation and dissemination of library e-resource usage stats. The library buys many different types of online resources, from many different vendors, and all of these resources need to be continually evaluated and assessed to see if they are returning on the financial investment DMU Library has made in purchasing them for DMU students and staff.

DMU Library continues to make extensive use of JUSP. JUSP saves time in journal usage stats workflows by removing the need for library staff to manually access and download journal usage reports from individual publisher admin sites. JUSP acts as a one-stop-shop for library staff to view and export COUNTER journal usage stats from different vendors, in various types of usage report formats (from specific core title usage to more general usage trends over time).

As part of DMU Library workflows, Content Delivery staff create resource usage/cost analysis documents for subject staff to review and evaluate for upcoming renewals. This assists DMU subject librarians to make prompt and effective renewal/cancellation decisions for the content the library purchases. This analysis contains raw usage data (e-journal downloads, e-book section requests or database metrics depending on the type of resource) from previous years of subscription and resource costs paid by the library during those years of subscription. To save time and create some sense of uniformity to the analysis created, the library harvests and displays annual usage based on the DMU financial (and SCONUL reporting) year – August to July.
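The August–July roll-up described above is simple to express in code. As a Python sketch of the underlying logic (the month-to-year rule is the only substance here; the monthly figures are invented for illustration):

```python
from datetime import date

def reporting_year(d: date) -> str:
    """Map a date to the DMU financial/SCONUL reporting year (August-July)."""
    start = d.year if d.month >= 8 else d.year - 1
    return f"{start}/{start + 1}"

# Hypothetical monthly download counts, rolled up into annual totals
monthly = {date(2015, 7, 1): 100, date(2015, 8, 1): 120,
           date(2016, 7, 1): 150, date(2016, 8, 1): 90}

annual = {}
for month, downloads in monthly.items():
    year = reporting_year(month)
    annual[year] = annual.get(year, 0) + downloads

# Jul 2015 counts towards 2014/2015; Aug 2015 to Jul 2016 form 2015/2016
print(annual)
```

The same rule can be written as a spreadsheet formula, but encoding it once in a function avoids the off-by-one-month errors that creep in when the calculation is repeated by hand across files.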

Once the raw data is uploaded to the analysis file, Content Delivery staff then attempt to visually represent the data in some way. This representation is usually in the form of graphs, charts or tables. This visualisation of usage and cost data makes it easier for library subject or management staff to spot and interpret usage and cost trends, which in turn, should better inform their resource renewal decisions.

Content Delivery are constantly looking for ways to reduce the time it takes library staff to create these cost and usage analysis files. The files are often quite large, and creating the “visual” aspects of the analysis (charts, graphs) can be time-consuming and require an advanced understanding of Excel. If a number of renewal files need to be created simultaneously, especially around periods in the year when a number of resources expire at the same time, it is difficult to create documents quickly and disseminate to subject librarians in a timely manner (even if we are using added-value services like JUSP to organise metrics). Content Delivery strive to give library subject colleagues more time to digest the analysis we create, but sometimes this is difficult to fulfil due to the number of files being worked on at any one time.

At a recent JUSP Community Advisory Group meeting, librarians in the group spoke about the creation of usage “template” files, and how valuable it would be to have different examples of these templates hosted in the Community Area of the JUSP site. This would then showcase how different HE libraries process and evaluate their resource usage stats. This gave me inspiration to look at DMU processes and see if I could create an analysis file template myself, based on earlier streamlining of usage stats processes by Content Delivery staff. My hope was that DMU Library staff would be able to upload raw data to a template file, and then use set Excel formulas within the file to create usage/cost analysis at the touch of a button. As long as the raw data was entered into the correct cells in the file, the formulas should create successful data outputs.

I created tabs in my template Excel file. There would be tabs in which to “dump” raw usage stats from specific DMU subscription years (2011-12, 2012-13, 2013-14 etc). I then had a tab for subscription costs (these would have to be transferred from a separate file we keep to record e-resource subscription allocations and costs). The final tab, marked “Usage & cost analysis”, was where the analysis calculations would occur. I set formulas in the analysis tab to create automatic data outputs and visualisations.

Tabs were created to upload annual raw data and subscription costs.

The data outputs in the template are not ground-breaking – I wanted them to be basic and clear to review (in a table format), showing % increase/decrease in use and cost, and also calculating an annual cost per download figure. I also inserted two visual representations of the data contained in the table – a column chart to show cost per download for the years selected, and a line chart to map total usage against total subscription costs. Again, as long as the correct data is entered in the correct cells, the formulas should do the rest and automatically configure the graphs/charts.
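
The table outputs boil down to three simple calculations: cost per download, year-on-year change in usage, and year-on-year change in cost. A minimal sketch of the arithmetic, using hypothetical usage and cost figures (not real DMU data):

```python
# Hypothetical annual totals: full-text downloads and subscription cost (GBP)
years = {
    "2012-13": {"usage": 9200, "cost": 4100.00},
    "2013-14": {"usage": 10350, "cost": 4250.00},
}

prev = None
for year, data in years.items():
    cpd = data["cost"] / data["usage"]  # cost per download
    line = f"{year}: cost/download £{cpd:.2f}"
    if prev:
        # percentage change on the previous reporting year
        usage_change = (data["usage"] - prev["usage"]) / prev["usage"] * 100
        cost_change = (data["cost"] - prev["cost"]) / prev["cost"] * 100
        line += f", usage {usage_change:+.1f}%, cost {cost_change:+.1f}%"
    print(line)
    prev = data
```

In the Excel template itself the same figures come from ordinary cell formulas (cost per download as a simple division of the cost cell by the usage cell, and percentage change as `(new - old) / old` formatted as a percentage); the Python above just illustrates the arithmetic.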

Excel formulas create figures for analysis and drive visualisations of usage and cost data.

You can see the usage stats template I have created by logging on to JUSP (your institution has to be signed up to the service) and visiting the “Community Area” section of the site. The template contains dummy usage data to show how the template works, but hopefully, you will be able to remove the data and add your own, and the outputs should still work (as long as the raw usage data totals and sub costs are allocated to the correct cells within each tab!).

DMU Library have also recently invested in ProQuest’s Intota assessment tool to manage library online content, so I am hoping this service will provide more online tools for Content Delivery to use to effectively interrogate and review resource usage and costs.

Please do let me know what you think of the usage template. I hope to see more of these types of templates appear on JUSP in the future – it would be great to see how other libraries analyse usage stats and subscription costs and highlight best practice in this area.


P.S. I want to say a big thank you to a former DMU Library colleague, Chris Voss. A lot of Chris’s hard work earlier in 2014 has gone into the creation of this current usage template.

Posted in DMU, Libraries, Library, Usage stats | 1 Comment

SmartSpaces Awareness Campaign

During February and March 2014, DMU Library and Learning Services hosted two awareness weeks to help raise the profile of a DMU green initiative called SmartSpaces. The SmartSpaces project aims to save energy use (electricity, gas and water consumption) in several Leicester public buildings through the use of IT. The project encourages engagement with building users via a SmartSpaces website and online discussion forum. The website measures current energy use in participating buildings by displaying the data in the form of a spectrum of “smiley” (or not so smiley) faces. The faces are colour-coded to visualise current levels of energy use. A sad red face highlights increased energy use, a yellow face means neutral performance, while a smiley green face indicates reduced energy consumption.

SmartSpaces smiley face logo.

As the Kimberlin Library is one of the DMU buildings which features in the SmartSpaces project, the library’s Green Impact Team arranged two awareness weeks a month apart to try and promote the initiative to library users. The Green Impact Team also produced a SmartSpaces LibGuide to coincide with these promotional activities. The first awareness week was hosted the week commencing 3rd Feb 2014. Green Impact Team staff set up a SmartSpaces display in the Learning Zone on the ground floor of Kimberlin Library. The display featured the smiley faces spectrum on display boards, posters marketing the SmartSpaces website, red, yellow and green balloons and a plasma screen presentation with slides highlighting the library’s involvement in the project.

Kimberlin Library SmartSpaces display

The display was then staffed over set lunchtime sessions during the week. Library staff would hand out SmartSpaces flyers and discuss the project with interested users. A library laptop was also on hand, allowing staff to show the SmartSpaces website and online forum. The library’s Green Impact Team worked closely with DMU’s Sustainability Team during the preparation for the awareness weeks. The Sustainability Team kindly sent over some SmartSpaces smiley face cakes which were handed out to library users in the building.

SmartSpaces smiley face cakes!

The second awareness week ran from 3rd-7th March 2014. The Green Impact Team repeated the SmartSpaces display in Kimberlin Library’s Learning Zone, but made some changes to the organisation of the publicity display after the first awareness week was reviewed. A single lunchtime session on 4th March 2014 was staffed (rather than having several lunchtime sessions throughout the week), and the plasma screen presentation was removed entirely. The library also tied in its marketing of SmartSpaces with Fairtrade Fortnight. The Fairtrade promotion was already up and running around DMU campus in the first week of March 2014, so it seemed like a good way to increase awareness of two green initiatives at the same time. The theme for this year’s Fairtrade Fortnight was fairtrade bananas. Bananas were given out at the display to library users in Kimberlin’s Learning Zone. Members of staff from both the library and Sustainability teams also dressed up in banana costumes in an attempt to gain the maximum attention of building users – what troopers!

Both SmartSpaces awareness weeks were promoted via DMU Library social media pages, such as @LibraryDMU and Facebook. DMU Sustainability and SmartSpaces Twitter feeds also joined in with this marketing, using hashtags such as #smartspaces and #dmu to widen exposure of the tweets:

SmartSpaces was promoted via library social media pages

During the run up to the first awareness week, the library ran a survey of its users to see if they had already heard about SmartSpaces from posters up around DMU campus. DMU’s Sustainability Team held an official campus-wide launch for SmartSpaces on 16th Jan 2014 in the Queens Building, and several big SmartSpaces posters were put up in participating DMU buildings (including the Kimberlin Library). The library’s Green Impact Team was joined by a project assistant from the IESD faculty who assisted with the creation and carrying out of the survey. The survey was then repeated during the second awareness week in March 2014, to see if the marketing display and events hosted by the library during the two publicity weeks had helped to increase users’ awareness of SmartSpaces in the library and around DMU campus. The Green Impact Team is currently digesting these figures in its evaluation of the SmartSpaces awareness campaign and related project work.

Posted in DMU, Library | Leave a comment

Library Camp 2013

I haven’t blogged for a while, but feel invigorated to do so once again after attending Library Camp 2013 last Saturday (30th Nov 2013). The event was hosted in the very impressive new Library of Birmingham building (brilliant and chaotic all in one – it has a glass elevator for goodness sake!), but the real bonus was to spend the day with some mighty fine people, all interested in discussing and debating issues central to libraries/librarians/info professionals in the UK. The atmosphere was vibrant and eclectic – it was fantastic to see so many people join together in their own time on a Saturday (lots travelling many miles early in the morning or the night before) to share their experiences, views and feedback on so many diverse library themes. Attendees pitched sessions on catalogues, social media, management styles, the future of public libraries, open access repositories etc. I could go on, so please check the Library Camp wiki for more info about the sessions that were run (Willy Wonka did not pitch, in spite of the elevator)…

A glass elevator. In a library?

I think the essence of Library Camp (apart from the cake; “librarians love the CAKE!”) is the spirit of debate, whether face-to-face discussion in the sessions, quietly chatting over tea and cake in one of many “nooks and crannies” at the venue, or in the online conversations/communication taking place on Twitter, Facebook and other social media in the run up to, during and after the event. Views can be leftfield, challenging and tackle the status quo, but as there is no set agenda to the event, this type of discussion fuels the day. The attendees drive the content. The first session I sat in on was about social media and libraries. This is a topic I have a great interest in, being a “prolific” user of social media myself (stop posting pictures of yourself eating ice cream I hear you cry!), and having responsibility for contributing posts to a number of DMU Library’s official social media pages. You can also learn lots of new stuff at Library Camp (the photo above is now on the Wikimedia Commons site after I heard about this platform at the event!). A later session concentrated on teaching and libraries. This is not something I have direct experience of, but it was valuable to sit in the session and listen to attendees’ personal views, responses and anecdotes. This stuff can be gold for personal development.

I deliberately wanted to keep this blog post short (as I said, visit the Library Camp website, follow what was tweeted via the event’s #libcampuk13 hashtag or read others’ blog posts for more insight into the “flavour” of the day), but will end by saying that I heartily recommend attending next year’s event, even if you only have a passing personal interest in libraries. It is very much worth it…

Posted in DMU, Library, Personal, Social Media | Leave a comment

Troubleshooting LibGuide

Over the past few weeks, DMU Library’s Content Delivery (CD) Team have been working on constructing a LibGuide to assist DMU Library staff with online resources troubleshooting. The objective of the team was to better help library staff recognise online resources problems (e.g. problems affecting library online databases, eJournals and eBooks) that users may report via an information desk, the library’s Just Ask email service or over the phone. The team not only wanted to help staff identify such problems, but aimed to provide general troubleshooting guidance in a “one-stop-shop” portal – guidance which staff could access quickly to improve library interaction & communication with users.

There seemed to be an appetite amongst library staff for such an online guide. Content Delivery hosted team days for library colleagues in late November 2012, and the idea of a troubleshooting guide gained a positive reaction from attendees. I also gave a presentation on online resources troubleshooting with a colleague at a COMPI (Content Management Planning & Innovation) Team away-day in December 2012, and again the feedback from team members about the creation of a CD LibGuide was upbeat.

So, after the Christmas / New Year holiday, the team began the prep to create the guide. The LibGuides structure was already in place as DMU subject librarians had built new individual subject pages for library users to consult during 2012. Content Delivery referred to these existing guides as a starting point – looking at the types of information other subject guides featured and covered. LibGuides allows you to create separate page tabs within your online guide; each page can then feature boxes of content or information for users to view, read and interact with.

The first job for the team was to attempt to “categorise” online resources problems. This was a challenging task, as there are no general “hard and fast” rules in this area it seems – most access problems may be a mix of errors or systems not working together (authentication, security settings etc). I have previously blogged on this online resources “definition” difficulty. In the end, the team came up with five general resource problem “types”:

  • Resource error message
  • Subscription problem
  • Authentication error (e.g. DMU Single Sign On)
  • Mobile or tablet access difficulty
  • “User” education – clicking on the wrong link etc
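
The five problem types above amount to a simple triage lookup. A sketch of the idea in code – purely illustrative, with hypothetical symptom keywords that are not taken from the actual LibGuide:

```python
# Illustrative triage: map symptom keywords from a user's report to one of
# the five general problem types (keywords are hypothetical examples)
PROBLEM_TYPES = {
    "resource error message": ["404", "server error", "page not found"],
    "subscription problem": ["paywall", "purchase", "pay-per-view"],
    "authentication error": ["single sign on", "login", "password"],
    "mobile/tablet access difficulty": ["app", "tablet", "phone"],
    "user education": ["wrong link", "can't find"],
}

def triage(report):
    """Return the likely problem type for a user's free-text report."""
    text = report.lower()
    for problem_type, keywords in PROBLEM_TYPES.items():
        if any(kw in text for kw in keywords):
            return problem_type
    return "unclassified"

print(triage("I hit a paywall asking me to purchase the article"))
# → subscription problem
```

In practice the categorisation is done by library staff consulting the guide rather than by software, but the lookup captures how a reported symptom points towards one of the five types.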

The next part was to try and provide troubleshooting info for library staff to pass on and advise users at frontline library service points (info desks, roving) in an engaging and easy-to-read format. This may be general technical guidance staff could impart to users to try and fix an access problem, or key questions to ask users to help diagnose resource errors more effectively. 

The CD Team decided to create separate page tabs for each of the resource problem types (as mentioned above). These individual page tabs give an outline of the resource problem “type” in brief bullet points (e.g. a subscription problem usually displays a pay-wall message asking for the user to pay for access), then go on to inform staff how they might logically go about potentially fixing the problem for the user (e.g. check that the user is entering the correct Single Sign On login).

The LibGuides format also allowed CD to provide bitesize “snippets” of technical information in small boxes which feature on the LibGuide page tabs (e.g. Clearing Your Internet Cache). This “box” approach seems to de-clutter the pages quite nicely, especially where a page is text-heavy, and filter information to staff in a clear and concise way. LibGuide boxes can also be re-used and copied to other pages within the same LibGuide – this saved a lot of admin time for CD staff when creating the page. A number of the information boxes appear more than once in the guide (e.g. linking to the @LibraryDMU Twitter feed), so the ability to immediately re-use content was a time saving bonus for the team.

The final two page tabs on the troubleshooting LibGuide were dedicated to what the CD Team can do to help if the user is still having access problems, and future work CD are putting into place to try and improve the way resource problems are dealt with in the library. We also set up a “feedback” box for library staff to engage with and provide comments on the use and value of the troubleshooting LibGuide. As the LibGuide is quite easy to edit, the CD team are very interested to hear about additional content staff want to appear on the guide.

I would be interested to hear how other libraries have tackled the issue of troubleshooting online resources access problems. Have you created a LibGuide? Something else for staff to use? Please do feel free to leave comments or tweet me with your feedback / views!

The troubleshooting LibGuide URL is



Posted in DMU, Library, Troubleshooting | Leave a comment

DMU Library Online 2012

I blogged in July 2012 about attending a DMU in-house introductory training course on project management. I was keen to start to apply some of the knowledge gained from this course, so began to think how I could integrate this learning into some of my own library project work. One project in particular seemed a “good fit” for this integration – organising DMU Library’s annual eResources Roadshow.

Event Working Group

For the 2012 roadshow, a small working group of library staff convened to start to plan and prep for the event. A project proposal was drafted to overview the objectives of the one-day event. These aims included promotion of online library resources and raising the profile of the library within DMU as a whole. As unofficial “chair” of the working party, I provided group members with some material at the first meeting to provide a starting point for discussion around the 2012 show. Alongside some observations and recommendations taken from the 2011 roadshow, I sketched a draft “roadshow network diagram” (by hand) – this diagram was taken from a model highlighted at the DMU project management training workshop:

The working group used a network diagram to inform their planning for the DMU Library event.

The group found this network diagram useful as it displayed important components involved in the planning for the roadshow. The groundwork was split into four separate stages – location, exhibitors, promotion / publicity and event feedback. These components were set against a timeline at the foot of the page, and provided group members with a visual representation of the tasks and milestones which would need to be completed in the run-up to the event.

Venue Location & Name Change

The 2012 event would be DMU Library’s 5th annual showcase of some of the online resources it purchases for the benefit of its stakeholders. Past library roadshows had taken place in different locations in Kimberlin Library (e.g. Learning Zone). The working group discussed the venue and format of the roadshow and decided to change some aspects of the event. Firstly, the group wished to incorporate some breakout workshops and talks alongside the traditional publisher exhibition at the event. This was to add a different dimension to the event, freshen up the format and allow publishers and library staff to run sessions on particular services, resources or topics that may interest attendees. With this in mind, the working group also recommended a location change for the event – moving from the Kimberlin Library Learning Zone to another library building called The Greenhouse. The Greenhouse was deemed to be a potentially more effective ‘space’ for holding the breakout talks programme alongside the publisher exhibition.

The working party also reviewed the name of the event. It was decided that a name change was needed – “eResources” was a tad too vague for users and its meaning could be lost in translation; the “roadshow” element was just not true as the event was a one-off each year and we were not going to be running the event at any other locations! So, after some close discussion and liaison with library management, the event was renamed as “DMU Library Online 2012”. In renaming the event, the show also had to have a wider scope to incorporate all library online services, not just online databases, indexes and full text journal collections.


It was agreed to host the event on Weds 21st Nov 2012. The running time of the show would be from 11am to 4pm. The working group met a number of times between July – Nov 2012 to organise, update and keep the event preparation on target. This approach seemed to work very well. As the working group consisted of library staff from different teams, the collaboration and liaison between teams was much more effective and seamless, and most of the communication flowed via the working group representatives. It was also valuable to call on expertise within the working group, whether this be specific subject or resource knowledge, logistics / risk assessment know-how or previous experience of hosting library events.

To help manage the event preparation, each member of the working group was tasked with numerous individual responsibilities. This included catering, A/V set up, creating publicity material, social media promotion and exhibitor liaison. The update meetings were a chance for the group to come together regularly to review progress and highlight any problems / changes to the preparation schedule. Dealing with ‘unknown’ factors seems to be of major importance when managing a project – there are some things you may not be able to directly predict, but you can leave “wiggle room” when setting out your project mandate.

The Day Arrives…

Some of the publisher freebies on show at DMU Library Online

It was an early start for the working group on Weds 21st Nov 2012. We had access to The Greenhouse from 9am on the day of the event. This meant we had a couple of hours to arrange the venue as we had planned – setting up the publisher exhibition, library stands and room for breakout talks. The one factor we could not control was the weather – it literally rained non-stop from 9am until 4pm (the blue sky only appearing when we were packing up the show and waving goodbye to exhibitors on their journeys home!).

A number of publishers exhibited at DMU Library Online 2012.

Even in light of the bad weather the event itself was a success, with many of the changes to the 2012 event working well. Feedback from several attendees was very positive, and all of the publishers who exhibited at the event felt it was a useful and enjoyable show. The working group’s planning and coordination was excellent, and the collaboration between library staff in the run-up to the event, and on the day itself, was fantastic. It was great to see DMU Library promoted as “one” service in one place at one time. So much library work is team-based within individual departments, it was very effective for staff to get together and be on hand to talk to DMU students and staff under the guise of a single directorate.

Talks on the library’s new reference management product were extremely popular with students and staff, highlighting that the event promotion and publicity worked well. Some of the more general breakout talks were not so well-attended – this may have been down to the bad weather, but also publicity for the talks being too vague and the content of the workshops having too much of a staff-focus.


The DMU Library Online working group are currently reviewing this year’s event. I am sure this evaluation of what worked well and what could have worked better will provide some useful recommendations for how the library continues to promote its online resources and services to DMU students and staff in the future. Targeted promotion to different faculties seems to be a good place to start, with promotion centred around specific resources and products. This may be the legacy of the DMU Library Online event, providing a template for staff to use, edit and implement at future library events. The DMU Library Online working group was a major success and it was great to see the eResources Roadshow evolve into a bigger, more productive event. I think I also applied learning obtained from the project management course I attended at the outset of the Library Online planning – using this learning definitely improved, streamlined and enhanced the organisation and preparation involved.

Posted in DMU, Library | Tagged , | Leave a comment

Online library resources: usability & user interaction

I recently read Bohyun Kim’s interesting blog on research “flow” and serendipity in digital library collections, and a number of Bohyun’s comments immediately struck a chord with me. I have previously blogged about some of the challenges which can arise around eResources troubleshooting for library staff, and I think issues surrounding resource usability, navigation and functionality can affect all types of library user (students and staff).

I deal on a daily basis with online resource admin and configuration and regularly liaise with students, academic / library staff and content providers. Library users are (increasingly) demanding easy and seamless 24/7 online access to electronic content, using a multitude of different devices to do so (PCs, tablets, smart phones etc). These devices have a diverse range of internet browsers, security settings and display features which may potentially affect (block?) how this content is consumed, managed or downloaded. Students want “one-click” access to content – as Bohyun suggests, users do not want to be consistently confronted with browser pop-up security messages or numerous platform login welcome screens after already submitting valid credentials. Unfortunately for libraries and their users, “one-click” access appears far from easy to achieve in reality with some content providers!

Publishers own content, and protect their content with authentication mechanisms (Athens, Shibboleth etc) which libraries and users need to set up and comply with to obtain access. These authentication mechanisms can sometimes be very complex, and rather than allow access to content (which is their primary function), can potentially act as barriers for users – proving to be one step too far for many disgruntled library stakeholders. The barrier or blockage to online content upsets the research “flow” which Bohyun refers to in her blog post. There is also a financial context to this as well though, for libraries and publishers to consider. Barriers to online content can damage a user’s experience of using library resources, potentially meaning that a user may not return to a particular subscribed system or product. Non-returning users may, in turn, affect overall usage of a resource. Low usage of a resource may lead to cancellation of said resource, as maximum value for money is not being achieved (these evaluations are especially key in the current economic climate).

I think it is important for libraries to keep the pressure on publishers and providers to keep improving, streamlining and simplifying their authentication mechanisms where possible and appropriate. Longstanding access barriers are no good to anyone – especially university students and staff. It is these stakeholders who need to continue to be the main focus for libraries when delivering access to online resources.


Posted in DMU, Library, MashDMU | 8 Comments