San José State University MLIS E-Portfolio

Erica Krimmel, May 2014

Core Competency N: Evaluation

“Evaluate programs and services based on measurable criteria.”

Programs and services are an integral part of libraries, as well as of other types of information organizations. In public libraries, programs and services may include the reference desk, technological support, interlibrary loan, children’s storytime, and more. Academic and special libraries have similar offerings, tailored to different target audiences. Beyond the library, information organizations such as data management providers define specific programs and services to meet their business goals and serve the needs of their customers. Although the focus here is on libraries, for-profit companies and nonprofit libraries alike rely on evaluation to support and improve their programs and services.

Evaluation, in turn, must depend on measurable criteria because they provide an objective baseline and serve as a common denominator for comparing programs at different institutions. Measurable criteria also acknowledge the multifaceted nature of programs and services. For example, Fitzpatrick, Moore, and Land (2008) conducted an evaluation of the University of Massachusetts Amherst W.E.B. Du Bois Library’s reference services. The library had recently undergone a physical and operational remodel in which reference services became more centralized and direct, and the researchers wanted to know whether this restructured service model met staff and student expectations. Measuring “satisfied expectations” would be impossible without defined criteria, so they chose to focus on user perceptions, user experience, and staff experience.

In order to measure these criteria, the researchers developed a mixed-methods strategy using surveys, focus groups, reference desk transcripts, and tally questions. Each of these methods itself focused on measurable criteria: for instance, the number of students who replied “yes” to the survey question, “Did you get the answer you hoped for?” Although the specific questions asked and variables measured will vary from case to case, using standard research methods like the ones noted here helps ensure that the final evaluation rests on objective, comparable measures. Beyond these methods, clearly defined goals and objectives can also serve as excellent measurable criteria: either they are accomplished or they are not.
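
To illustrate how a tallied survey response becomes a measurable criterion, here is a minimal sketch in Python; the question text echoes the example above, but the response counts are hypothetical and are not data from the Fitzpatrick, Moore, and Land (2008) study.

```python
# A minimal, illustrative sketch: turning tallied survey responses into a
# measurable criterion. The response counts below are hypothetical.

responses = {
    "Did you get the answer you hoped for?": {"yes": 132, "no": 18, "unsure": 10},
}

for question, tally in responses.items():
    total = sum(tally.values())
    satisfaction_rate = tally["yes"] / total if total else 0.0
    print(question)
    print(f"  {tally['yes']} of {total} respondents answered 'yes' "
          f"({satisfaction_rate:.0%})")
```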

Services (and, to a lesser extent, programs) extend into the digital realm, where measurable criteria continue to enable meaningful evaluation. Digital library services include online reference help, catalog search tools, online account access, and other website features. In evaluating these services, principles of interface and interaction design offer a ready-made domain of measurable criteria. In his book Designing for Interaction, Saffer (2010) recommends heuristic, or hands-on, evaluation while keeping an eye out for roadblocks to user success (p. 184). For example, to evaluate a library’s online chat reference service, a heuristic evaluator might pretend to be a patron, go through the reference chat process herself, and make notes on challenges like “unclear whether the librarian was paying attention” or “went through multiple steps to get to a suggested resource, only to discover it wasn’t the one desired.” These heuristic observations become measurable criteria when framed in the context of interaction design principles: the unclear attention is an issue of “Status Indicator,” and the undesired resource problem could be solved with “Feedforward,” a way to preview results before the user makes navigation decisions.
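
As a rough sketch of how such heuristic notes can be turned into countable, comparable criteria, the following Python snippet records hypothetical findings from a walkthrough like the one described above and tallies them by design principle; the observations are invented for illustration and are not taken from Saffer (2010).

```python
# Illustrative only: hypothetical heuristic-evaluation notes from a walkthrough
# of an online chat reference service, each mapped to the interaction design
# principle that frames it as a measurable criterion.

findings = [
    {"observation": "Unclear whether the librarian was still paying attention",
     "principle": "Status Indicator"},
    {"observation": "Multiple steps to reach a suggested resource that turned "
                    "out to be the wrong one",
     "principle": "Feedforward"},
]

# Tally roadblocks by principle; the counts can then be compared across
# evaluation sessions or across services.
by_principle: dict[str, int] = {}
for finding in findings:
    by_principle[finding["principle"]] = by_principle.get(finding["principle"], 0) + 1

for principle, count in sorted(by_principle.items()):
    print(f"{principle}: {count} roadblock(s) observed")
```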

The field of interaction design offers a rich body of measurable criteria to draw upon, making it an important resource when evaluating library and information services. Tidwell (2011) presents an entire book of collected design patterns, such as “Feature, Search, Browse” and “Thumbnail Grid,” which help designers create original interfaces while maintaining a consistent framework that supports user expectations. Again, evaluating existing digital services against best-practice patterns like these is an effective assessment method.

Aside from being based on measurable criteria, evaluation must be ongoing. As Fitzpatrick, Moore, and Land (2008) say about library reference services, “any approach needs constant evaluation” (p. 238). The Institute of Museum and Library Services (IMLS) agrees, advocating strongly for “evidence-based knowledge of the value of innovative museum and library services.” To this end, IMLS provides a compilation of evaluation resources, including definitions, toolkits, methodologies, case studies, and professional groups. The effort required to maintain constant evaluation is high, even with support from professional organizations like IMLS; however, well-done evaluation highlights success, points out areas for improvement, and strengthens an organization’s ability to support its programs and services.

Applications

EVIDENCE 1. LIBR 204 – Information Organizations and Management gave me comprehensive experience with program evaluation through a team project to develop a strategic plan for a library of our choice. One of my teammates had volunteered at the U.C. Hastings College of the Law library, which was conveniently (for us) undergoing some structural turmoil. We decided to work with the Hastings library circulation department to analyze its current situation and make recommendations for its future.

The first part of our strategic plan included commentary on Hastings’ mission, vision, and values statements, as well as an environmental scan and SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. The library was facing major budget shortfalls and needed to dramatically streamline its operations. Through our research for the environmental scan and SWOT analysis, we saw an opportunity for Hastings not only to make its circulation operations much more cost-efficient, but also to improve its existing services. In part two, we developed and presented five strategic goals, complete with actionable objectives and assessment measures.

This semester-long project allowed me to experience different aspects of the evaluation process, from identifying problems, to conducting contextual research, to designing goals and the pathways to reach them. I enjoyed working with my four teammates because we brought diverse skills and backgrounds to the table, and we managed to make this diversity a team strength rather than a weakness. Although we divided the research and writing by topic (I researched and wrote about circulation technology), we kept each other informed of our findings and made all evaluation decisions as a group. The final strategic plan not only benefited our learning but was also a genuinely useful document for the circulation department at the Hastings library.

EVIDENCE 2. Qualitative research methods are important tools for evaluation because they can translate less tangible information into measurable results. I learned about various qualitative research methods in LIBR 285 – Research Methods, where one of our course assignments was to design an eight-question survey about social media adoption in public libraries. Survey design requires a fine balance between asking many broad questions to match the breadth of the study and asking fewer, more specific questions to acquire meaningful measurements and insight.

I worked in a team with three other classmates on this assignment. We began by writing operational definitions and determining the specific aspects of social media adoption in public libraries that we wanted to research: platforms, purposes, frequency, staffing, interaction, and perceived effectiveness. From here we branched out, and each of us wrote three questions. Reconvening as a team, we compared our individual questions, refined the ones everyone agreed were important, and discussed what was still missing. In the end, we created one question for each aspect above, one demographic question, and one free-response question.

Our final product was the survey, an attached cover letter, and our design rationale. By going through this design process, I came to better understand surveys and other qualitative research tools, and how they can help transform emotions and other less tangible data into something that can be assessed against measurable criteria.

EVIDENCE 3. In LIBR 251 – Web Usability, I learned about many different design principles, and how to apply them to different types of online interfaces. The assignment presented here, Library Website Design Evaluation, asked us to choose two principles of interaction design and for each, find one library website that exemplified the principle and another that blatantly disregarded it.

I first chose the concept of “feedforward,” which says that users should be able to predict what will happen when they choose a navigation option. I applauded the Belvedere-Tiburon Library for incorporating this into its website with hover descriptions of each link. The Plumas County Library, on the other hand, requires users to click on a link just to see where it takes them. My other principle was Tesler’s Law of the Conservation of Complexity, which states that although a certain amount of complexity is inherent, it should be pushed into the background whenever possible (Saffer, 2010). The San Francisco Public Library presents an excellent example of Tesler’s Law in its historical photograph search, where users can access the same information by searching or browsing, and where the browsing interface displays levels of complexity only as needed. The Carlsbad City Library, on the other hand, presents its users with a chaotic level of complexity as soon as they access its website.

Both feedforward and Tesler’s Law are examples of design-based measurable criteria, though in this assignment they were measured only in a presence/absence format. By focusing on principles such as these, I was able to evaluate specific aspects of these four library websites and the services they provide.
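
A compact way to picture this presence/absence scoring is sketched below in Python; the site names match the assignment, but the True/False/None values are simply an illustrative encoding of my written findings, with None marking a principle a site was not evaluated against.

```python
# Illustrative encoding of presence/absence scoring against design principles.
# None means the site was not evaluated against that principle.

criteria = ["Feedforward", "Tesler's Law"]

sites = {
    "Belvedere-Tiburon Library":    {"Feedforward": True,  "Tesler's Law": None},
    "Plumas County Library":        {"Feedforward": False, "Tesler's Law": None},
    "San Francisco Public Library": {"Feedforward": None,  "Tesler's Law": True},
    "Carlsbad City Library":        {"Feedforward": None,  "Tesler's Law": False},
}

for site, scores in sites.items():
    marks = ", ".join(
        f"{criterion}: {'present' if scores[criterion] else 'absent'}"
        for criterion in criteria
        if scores[criterion] is not None
    )
    print(f"{site} -> {marks}")
```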

EVIDENCE 4. Finally, some library services are delivered through discrete products, such as LibGuides designed to present discipline-specific resources to patrons. For LIBR 220 – Maps & Geographic Information Systems, I evaluated a geospatial LibGuide from the University of Waterloo; its “Open Data Guide” is an online resource that comments on and directs users to other geospatial data resources. In my report, I outline the contents of the LibGuide both analytically and as a visual diagram. I also commend features that support the LibGuide’s core concept (geospatial studies students need access to raw data) and suggest improvements for areas that detract from it; for instance, the LibGuide offers very little guidance on using the resources it links to.

Although products like this LibGuide cannot always be evaluated against meaningful measurable criteria themselves, they are still important to the evaluation of the program or service they originated from. For instance, evaluating the Open Data LibGuide provides context for an evaluation of the University of Waterloo’s reference services program. If evaluators were to look at the guide’s site visit statistics and find that the average visitor spends only 1.5 minutes on the site, that could be a signal for the library to enhance the guide’s content. Further evaluation and research could focus on the needs of geospatial students, or on other facets of the problem. In any case, the LibGuide product could drive program evaluation and improvement.
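
As a sketch of how such a usage statistic might feed back into evaluation, the short Python example below computes an average time on page from hypothetical visit durations and checks it against the 1.5-minute figure used above; both the data and the threshold are invented for illustration.

```python
# Illustrative only: hypothetical visit durations (in minutes) for a LibGuide,
# checked against a threshold that signals the content may need enhancement.

visit_durations = [0.5, 2.0, 1.0, 2.5, 0.8, 1.2]  # minutes per visit
THRESHOLD_MINUTES = 1.5

average = sum(visit_durations) / len(visit_durations)
print(f"Average time on guide: {average:.1f} minutes")

if average < THRESHOLD_MINUTES:
    print("Signal: visitors leave quickly; the guide's content may need enhancement.")
else:
    print("Visitors appear to engage with the guide's content.")
```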

Conclusion

Evaluation based on measurable criteria is a necessary component of any successful program or service. Through my coursework at SLIS, I have prepared myself to evaluate real-world programs and services by becoming familiar with the underlying research methods and the discipline-specific best-practice principles that lead to measurable criteria, and by recognizing the relationship between products and the programs that create them.

References

Fitzpatrick, E. B., Moore, A. C., & Land, B. W. (2008). Reference librarians at the reference desk in a learning commons: A mixed methods evaluation. Journal of Academic Librarianship, 34(3), 231-238.

Institute of Museum and Library Services. (n.d.). Research: Evaluation resources. Retrieved from http://www.imls.gov/research/evaluation_resources.aspx

Saffer, D. (2010). Designing for Interaction. Berkeley, CA: New Riders.

Tidwell, J. (2011). Designing Interfaces. Sebastopol, CA: O’Reilly.