Category Archives: Assessment

Information Literacy Symposium at Huston-Tillotson

On Friday, January 22nd, the Teaching and Learning Services Department participated in the first-ever Information Literacy Symposium held at Huston-Tillotson University in East Austin. The Symposium was coordinated by Patricia Wilkins, Library Director; Ana Roeschley, Public Services Librarian; and Stephanie Pierce, Technical Services Librarian, and it brought together librarians from public, school, and academic libraries.

The all-day symposium was a great opportunity to exchange ideas, share teaching strategies, brainstorm about potential partnerships, and get updates on the ways our libraries and services have evolved in response to changing curricula and student learning.

Four sessions were offered throughout the day, and TLS gave a panel presentation titled “Measuring Learning from Classroom to Program” about the different ways we integrate assessment into our information literacy instruction and how we address the challenges we encounter. You’ll find our slides, along with the supporting documentation we referred to, in this shared folder; please feel free to adapt them, but we would love it if you would credit the UT Libraries somewhere in your adaptation and let us know if you do!

Our sincere thanks to Ana and the rest of the staff at the Downs-Jones Library at HTU for organizing this Symposium.

Spring CATs, or Easy Assessment of Student Learning

Resource: Bowles-Terry, M., & Kvenild, C. (2015). Classroom assessment techniques for librarians. Chicago, IL: Association of College and Research Libraries.

One of the most common questions I’m asked as Learning & Assessment Librarian is how to quickly and effectively assess learning in the classroom. I always feel like my answers are unsatisfying, but the reality is that there is no perfect way to do this. When I went to Assessment Immersion a few years back, much attention was given to Angelo and Cross’s Classroom Assessment Techniques, a giant tome full of assessment examples that I believe was referred to at one point as an “assessment bible.” While I agree that it’s a great resource (though perhaps not at the biblical level), it is also huge and sometimes daunting, and not all of the techniques in the book lend themselves to the kind of one-shot teaching we often find ourselves engaging in. All of this to say, I was excited to get my hands on the recently published “Classroom Assessment Techniques for Librarians.” Inspired by Angelo and Cross, the authors tailor various classroom assessment techniques (CATs… meow!) to the kinds of outcomes and learning situations that librarians often engage with.

Their model is to simplify CAT usage by breaking it down into three steps:

1) Plan. (Choose a session and a technique.)

2) Do it! (Explain to students that you’re going to be checking their understanding during the session, tell them why, provide clear instructions, and execute your plan.)

3) Respond. (This is the “closing the loop” part. Read and interpret student responses and address what you learn by letting students know what difference that information makes. An example of this is sending a follow-up email to the instructor detailing changes you’ve made to the course guide based on students’ understanding. You should also think about changes you might make to your instruction based on what you learned, and make specific notes for the next time you work with that class.)

The book is broken down into chapters based on the kinds of skills being assessed, and includes examples of CATs being used in various class types and levels. For this RIOT, I’ll give examples of a few that I’m going to try out this semester, and we can talk about things that you have tried/want to try, challenges and possible solutions, and anything else related to CATs (purring, claws, etc.).

Assessing Prior Knowledge and Understanding

I used to send specific pre-assessment questions to classes to gauge where students were, but eventually learned that first-year students (with the possible exception of honors classes) are almost always going to be all over the place. “CATs for Librarians” includes an example of using pre-assessment in a way I haven’t tried before: asking questions before or at the beginning of class to find out about students’ conceptions of how information is available on the Web. Their example questions, answered on a Likert scale ranging from agree to disagree, are as follows (p. 8):

  • “Google indexes everything on the Web”
  • “Information is free”

I love the idea of using this pre-assessment not only to find out more about students’ beliefs, but to set the tone for a session and let students know that we’re not just going to talk about where to click on the Libraries website. I can see this being a way to introduce multiple threshold concepts, and I’m excited to try it out. I’ll probably use a Google form linked from classes’ Subjects Plus pages and have students respond as they enter the classroom and get settled.
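If the responses pile up, even a tiny script can make closing the loop faster. Below is a minimal sketch in Python that tallies Likert answers from a Google Form’s results exported as CSV; the file name (pre_assessment.csv) and the column headers are hypothetical stand-ins for whatever your actual form produces.

    import csv
    from collections import Counter

    # Hypothetical questions from the pre-assessment form; the column
    # headers in the CSV export must match your actual form questions.
    QUESTIONS = [
        "Google indexes everything on the Web",
        "Information is free",
    ]

    def tally_responses(path):
        """Count Likert answers (Agree ... Disagree) for each question."""
        tallies = {q: Counter() for q in QUESTIONS}
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                for q in QUESTIONS:
                    answer = row.get(q, "").strip()
                    if answer:
                        tallies[q][answer] += 1
        return tallies

    if __name__ == "__main__":
        for question, counts in tally_responses("pre_assessment.csv").items():
            print(question)
            for answer, n in counts.most_common():
                print(f"  {answer}: {n}")

A glance at the tallies right before (or right after) class is usually enough to decide which misconceptions to address first.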

Assessing Skill in Synthesis and Creative Thinking

After going through an explanation of what peer review is, I often wonder how much of my diatribe the students absorbed. This semester, I’ll try the “One-Sentence Summary” CAT (p. 40). For this technique, students are asked “Who does what to whom, when, where, how, and why?” about a particular concept; in this case, “How do scholarly articles get published?” Bowles-Terry and Kvenild suggest that this technique works particularly well for difficult new ideas and threshold concepts, and they offer examples using each of the frames.

A few notes on analysis

Bowles-Terry and Kvenild include the always-useful reminder that assessment is not the same as research: your goal is to see what your students learned, not to draw sweeping conclusions that can be applied in other settings. Do make sure to set aside time to close the loop, but don’t feel like you have to spend hours carefully categorizing each student response. For some of the higher-level skills (like the “One-Sentence Summary” example above), it might be useful to score responses with a simple rubric, or even a yes/no checklist. Here’s a very rough “rubric bank” that I sometimes pull from to assess relevant CATs; feel free to use it if it’s helpful to you, but don’t get caught up in “doing it right.” Even if you don’t have time to use a rubric for analysis, you can learn a lot by sorting through student responses and thinking about how to respond (to students themselves and in your own teaching).
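To make the “simple rubric, or even a yes/no checklist” idea concrete, here is a rough sketch in Python of a checklist pass over one-sentence summaries. The criteria and keyword matching are entirely made up for illustration; the point is that even crude scoring surfaces patterns quickly.

    # Illustrative yes/no checklist for one-sentence summaries answering
    # "How do scholarly articles get published?"; adapt the criteria to
    # your own outcomes. The keyword matching is deliberately crude.
    CHECKLIST = {
        "mentions peer review": lambda s: "peer" in s.lower(),
        "mentions an editor or journal": lambda s: any(
            w in s.lower() for w in ("editor", "journal")
        ),
        "mentions revision or time to publication": lambda s: any(
            w in s.lower() for w in ("revis", "month", "time")
        ),
    }

    def score(summary):
        """Return a {criterion: True/False} checklist for one response."""
        return {label: check(summary) for label, check in CHECKLIST.items()}

    responses = [
        "Researchers submit articles to journal editors, who send them out "
        "for peer review, and revisions can take months before publication.",
        "Articles get published when someone writes them.",
    ]

    for r in responses:
        print(score(r))

Tallying how many responses pass each criterion tells you at a glance which parts of the publishing cycle stuck and which need another pass next session.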

What CATs are you going to try this semester? What has worked well for you in the past, and what have you learned about your students using CATs?

P.S. We have a copy of “Classroom Assessment Techniques for Librarians” in our collection, and TLS also has an office copy that I’m happy to share. There are many more ideas than I was able to address in this post, and I highly recommend browsing it. If you want to see the “assessment bible” itself, we have collection copies and an office copy of that one, too. :)

TLS Tips: Reflection as Assessment

At the end of the semester, I recommend scheduling time to reflect and perform a self-assessment of your teaching over the fall. If you incorporated any in-class assessments into your practice this fall, now is the perfect time to sit down with all the data you collected and look for patterns. Even if you didn’t get a chance to do any formal assessment, think about your high points in the classroom and the challenges you encountered. Then ask yourself some questions.

Can you trace any identified patterns to something specific you did or didn’t do as part of your practice this semester? How do you think your particular successes and failures affected student learning? Most importantly, try to think of a concrete change you can implement next semester to address the challenges you faced this fall.

Don’t dwell on what went poorly, but use what you learn through your self-assessment to set goals for the next semester. You may want to take a moment to write down your reflections and goals to make them feel more concrete. Revisit them at the end of the semester, and see if you made any progress.

An example from my fall:

At the beginning of the semester, I tried to incorporate a simple “End of Class Questions” form into my instruction sessions by including a link in the Course Guide and setting aside time at the end of class for students to answer the questions. While I didn’t do a good job of prioritizing this exercise as I became busier, the few results I did get line up with my own reflections. While overall satisfied, students in one class suggested having the class search for something specific to make the session more interesting. I agree! In that particular class, students hadn’t yet been given their assignment when they met with me. Reflecting on my semester as a whole, I realized that several of my challenges (students who didn’t know their assignments or why they were at the library, poorly timed instruction sessions, etc.) could be traced back to a lack of strong, clear communication with the faculty members or TAs I was planning with.

In the future, I will build more time into my planning process to ensure that instruction sessions are scheduled at the point of need. Neglecting to do so likely reduces student learning, because research concepts never get tied to immediate student interests (such as getting a good grade on an upcoming assignment). Next semester I will also try to focus on fewer learning outcomes so that I have time for quick assessments during sessions.

What did you learn through self-assessment?

Discussion: A Major Professional Shift, or Comparing the Framework with the Standards

Cindy led our discussion of this week’s RIOT on the new ACRL Framework for Information Literacy for Higher Education, in advance of an in-depth discussion open to all Libraries staff coming up this January. Our conversation centered around a few themes:

1. What’s the difference between the new ACRL Framework for Information Literacy for Higher Education (Framework) and the previous ACRL Information Literacy Competency Standards for Higher Education (2000) (Standards)?

The first clue is in the titles. The Standards offered prescriptive standards for information literacy and specific learning outcomes connected to each standard. This model was similar to the education standards models used in some social sciences and STEM disciplines for accreditation.

The new Framework instead offers a series of frames through which to see central concepts in information literacy.

2. What do we think about the definition of Information Literacy?

Information literacy is a spectrum of abilities, practices, and habits of mind that extends and deepens learning through engagement with the information ecosystem. It includes:

  • understanding essential concepts about that ecosystem;
  • engaging in creative inquiry and critical reflection to develop questions and to find, evaluate, and manage information through an iterative process;
  • creating new knowledge through ethical participation in communities of learning, scholarship, and civic purpose; and
  • adopting a strategic view of the interests, biases, and assumptions present in the information ecosystem.

The Good:

  • It’s focusing on critical thinking!
  • This definition is how we think about information literacy.
  • This seems like what students should be learning in college.
  • This definition makes clear to faculty that we have an expertise: a broad understanding of the information landscape beyond a single specific field.

The Bad:

  • It could be hard to use this with faculty.
  • The previous definition seemed more concrete: information literacy is a set of abilities requiring individuals to “recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information” (Standards).
  • One criticism is that we’re trying to make students into “little librarians.”

The Questions:

  • If we were to take this to an administrator or professor, they would likely think this is what faculty are already doing in their classes. How do we divide that labor? (We have specific outcomes akin to the previous standards.)

3. Do the frames resonate with us?

The Framework is built around six frames, presented alphabetically:

  • Authority Is Constructed and Contextual
  • Information Creation as a Process
  • Information Has Value
  • Research as Inquiry
  • Scholarship Is a Conversation
  • Searching Is Strategic

Background: These six frames were initially called “threshold concepts” (Cindy referenced Meghan’s post about threshold concepts), and after pushback the Framework now identifies them simply as frames. In the description of each frame, the document describes the differences between how experts and novices understand the concept. For example, for “Authority Is Constructed and Contextual,” the Framework describes: “Experts understand the need to determine the validity of the information created by different authorities and to acknowledge biases that privilege some sources of authority over others, especially in terms of others’ worldviews, gender, sexual orientation, and cultural orientations. An understanding of this concept enables novice learners to critically examine all evidence—be it a Wikipedia article or a peer-reviewed conference proceeding—and ask relevant questions about origins, context, and suitability for the current information need.” This framing seemed helpful in placing the novice and the expert on the same journey.

Uses: The frames can help us reflect on how many of these pieces we are putting into one class session, whether we are working at a novice or expert level, and whether that level fits our student group.

Politics: In some fields, the first frame might seem political; in others, straightforward. Are we taking a risk here?

4. How can we apply these frames?

Background: The Framework includes for each frame a set of Knowledge Practices, or specific descriptions of what a learner in this frame can do, and a set of Dispositions, or how learners in this frame might feel motivated or where they might ask questions. We described the Knowledge Practices as the practical steps and the Dispositions as the affective influences.

Uses: The frames seem easier to scaffold across a departmental curriculum, while the Standards, with their own fixed learning outcomes, were more static.

The frames also open up classrooms to critical thinking: while the Standards’ Learning Outcomes focused on tools, now we can teach critical thinking and learn tools along the way.

In the spring, look for a workshop from TLS on this professional shift – we are already doing this work, and now we get to see how deep it is and think in a different way about what we are doing.

Discussion: “Does Library Instruction Make You Smarter?”

Since we didn’t actually have Michele’s RIOT back in the summer, it was good to finally get to this topic. The busy start to the fall semester puts assessment and instructional effectiveness on all of our minds. A lively discussion followed Michele’s presentation.

All were chagrined that the studies didn’t show a clear correlation between student success and library instruction (LI), and we wondered about what kinds of instructional interventions do show a correlation with student success. Real-world guest lecturers? Recitation? What?

The tendency of education researchers to work with giant datasets instead of qualitative research leads to literature that doesn’t adequately characterize students’ motivation, an important part of student success. Assessing students’ research papers would show the effect of LI, but one first needs to think about the audience for this information. Creating a controlled teaching environment could help us know what’s effective, but that’s neither practicable nor desirable. Besides, there are so many variables that it’s better to collect information on the perceived value of LI, and on whether LI changes students’ behaviors.

From here the conversation shifted to library anxiety and creating a culture that takes away the imposter-syndrome discomfort students may feel when they ask for help. Outward-facing attitudes benefit LI, and outreach and campus partnerships can help to create a culture of comfortable help-seeking.


Discussion: Grab bag of conference ideas

I was lucky enough to attend both of the conferences that Krystal mentions. It’s great to be able to send two delegates from our institution: we were able to split up, see more talks between the two of us, and bring back twice as much to share with our colleagues here in the Libraries.

In RIOT, our conversation led us to think about how we can present our own assessment findings in a more public way – to students and to faculty. We are all excited at the idea of translating the data we gather into infographics, and we tossed around the idea of looking into easy-to-use software, like Piktochart, that could help us out.

We also talked about our own struggles with reaching graduate students. The ideas Krystal brought back inspired us to think about what we can do in our changing space here in PCL to more meaningfully engage with graduate students. A few initiatives are already in place, including a photo show of research at the Benson, so we would like to translate ideas the branch libraries have come up with over here in the new Learning Commons.

Lastly, in response to our conversation about wanting to focus less on teaching skills (tool-based searching) and more on concepts, Janelle promised to share with us what sounds like a great infographic that explains the cycle of information succinctly.

RIOT: Grab bag of conference ideas

I recently attended two different conferences in the Pacific Northwest: Library Instruction West (LIW) and ARL’s Library Assessment Conference (LAC). I encountered tons of great ideas and inspiring work at both conferences, and was reminded of how intertwined instruction and assessment really are. While I could probably prattle on for way too long about things I’m interested in trying, I chose three ideas to bring to the group for discussion. I had trouble choosing sessions to discuss that fit a particular theme, so I just went with a grab bag approach, figuring we can talk about whatever catches y’all’s attention the most.

1) Rubric Assessment Project at Claremont Colleges Library

Poster
Preprint
Libguide with project info

I saw a presentation on this project at LIW and have linked to a poster that was presented at LAC. Librarians at Claremont undertook this research project through the Assessment in Action program in order to determine what impact librarian intervention in first-year courses has on IL performance in student work. They assessed student papers produced in First-Year Experience programs (similar to Signature Courses) using a broad IL rubric, and analyzed the results to see if different levels of librarian involvement in the courses impacted student performance. They found that librarians did positively influence student performance at the first three levels of involvement, but a fourth, higher level of involvement had no impact.

I think my favorite aspect of this study is how they are using the results to communicate with their constituents, like in this infographic: http://libraries.claremont.edu/informationliteracy/images/FYS_Infographic.jpg. I like the idea of using data to communicate our value on campus.

Questions:

  • What research questions do we have regarding the impact of our instruction?

  • What would be convincing evidence to our faculty?

2) Assessment in Space Designed for Experimentation: The University of Washington Libraries Research Commons

Lauren Ray and Katharine Macy (University of Washington)

See abstract here: http://libraryassessment.org/bm~doc/2014-program-abstracts.pdf

At LAC, I attended a lightning talk on assessing the University of Washington Libraries Research Commons, described in the program abstract as “a space intended to meet collaborative needs, foster interdisciplinary connections and provide a sandbox for innovating and testing new library space designs, service models and programs.” While I was inspired by the way their assessment plan focused on student learning in the space rather than just satisfaction and use, I was also really excited to learn about the programming they do in the space, which is targeted at creating interdisciplinary connections between scholars.

Their Scholars’ Studio series (http://commons.lib.washington.edu/scholarsstudio) consists of 5-minute lightning talks delivered by grad students and postdocs doing research related to the interdisciplinary theme of the evening. Example themes include “predictions” and “Pacific Northwest.” The talks are followed by a Q&A and a reception. Before each event, they also provide students with guidance on distilling their research into a short talk and presenting to an interdisciplinary audience.

The presentation also covered Collaborating with Strangers workshops (http://commons.lib.washington.edu/news-events/colab) in which students, faculty and researchers sign up to connect with one another in 3-minute speed meetings – like speed dating for research. Each session is organized around a particular interdisciplinary topic, such as food research, so that participants can connect with others who have similar interests.

In one-on-one interviews with past graduate student presenters from the Scholars’ Studio series, librarians learned that the program helped participants rethink their research, consider how other disciplines would approach what they do, and talk about their research more concisely. I thought that these were both interesting ideas to consider as we think about ways to include graduate student services in our Learning Commons plans.

Questions:

  • Could we adapt these ideas to fit in our Learning Commons plans?

  • How can we ensure that we assess student and researcher learning in the new spaces and programs we’re designing?

3) Teaching “Format as Process” in an Era of Web-Scale Discovery

Kevin Seeber (Colorado State University-Pueblo)

Slide deck: http://kevinseeber.com/liw2014.pdf

In this session at LIW, the presenter focused on how he changed from teaching skills based on database navigation to teaching the publishing processes that lead to different formats of information. He stated that “instruction tied to static interfaces, locations, or appearances is not sustainable” because of rapid developments in technology and delivery systems. I liked an activity he uses to start instruction sessions: he gives out cards with different formats on them (scholarly journal article, blog post, etc.) and has students arrange them from least to most editing, research, time to publication, and other attributes, then uses that as a launching point for discussion of different formats and their uses. This seems like a nice way to discuss source evaluation, as it gives students skills that last beyond university library-based access to information and sets them up for critical reflection that extends beyond the source itself to examine funding models and how sources are distributed.

I often find myself trying to spend less time discussing interfaces and the like, and I’m planning on challenging myself to cut down even more on time spent doing demos this fall. I also thought that this was a good example of the pending new ACRL Framework being put into action.

Questions:

  • What ideas do you have for teaching “Format as Process” and other threshold concepts while still making sure students know how to access the materials in question? How can we work together to develop strategies?

  • Now that we’ve had scoUT for a while, how do you see it affecting (or not) students’ understanding of information formats?

RIOT: Does Library Instruction Make You Smarter?

All across UT (and higher education in general), people are attempting to assess student learning and articulate the value of their programs to student success, measured by retention, on-time graduation, GPA, post-college success, and more. While we are successfully measuring the impact of our sessions on student learning (for at least some of our programs, we know students are achieving our learning outcomes), we haven’t measured whether what they are learning translates to more general success in or after college. Since Megan Oakleaf’s Value of Academic Libraries Review and Report in 2010, I have been wondering just what impact one-shot instruction sessions have on student success, whether that is defined as GPA, retention, or on-time graduation. I am clearly not the only one wondering this, so I put together this post as an attempt to answer that question.

In 2007, Joseph Matthews published the book “Library Assessment in Higher Education,” which I haven’t read yet but have read about many times. He looked at studies up to 2007 and found that they are pretty evenly split between those finding a correlation between library instruction and GPA or retention and those finding no correlation. I found a few more articles published since 2007 that represent what has been happening since his book came out. This list is by no means comprehensive, but the articles illustrate the state of the research on the question and the ways people are approaching it.

Vance, Jason M., Rachel Kirk, and Justin G. Gardner. “Measuring the Impact of Library Instruction on Freshman Success and Persistence: A Quantitative Analysis.” Communications in Information Literacy 6.1 (2012): 49–58.

Librarians from Middle Tennessee State University attempted to find out whether one-shots for freshmen impacted their GPAs and/or their likelihood of returning for a second year (retention). To do so, they gathered information about the one-shot classes they were offering to freshmen over a two-year period, noting that these were introductory rather than research-intensive classes. They also gathered information about high school GPA, family income, ACT scores, race, gender, and major (all variables that have been correlated with retention). The result was that they could not find a direct connection between library instruction and student retention, although library instruction does appear to have a “small measurable correlation with student performance” (which, in turn, is tied to success and persistence). There were a lot of issues with the study that the authors themselves point out, including the fact that the students counted as having attended instruction sessions may not have – they were enrolled in the courses that came in, but they may have skipped.

Wong, Shun Han Rebekah, and Dianne Cmor. “Measuring Association Between Library Instruction and Graduation GPA.” College & Research Libraries 72.5 (2011): 464–473.

Librarians from Hong Kong Baptist University looked at the correlation between GPA and library workshop attendance for 8,000+ students who graduated between 2007 and 2009. They found that GPAs were positively correlated with increased workshop offerings: in programs that offered 5 workshops, GPAs were highest; in those that offered 3 or 4, GPAs were positively affected; and in those that offered 1 or 2, there was no positive correlation. Workshops, in this case, were a mix of required and voluntary, stand-alone and course-integrated. One issue with this study (and many others) is that it shows only correlation, not causation.

Bowles-Terry, Melissa. “Library Instruction and Academic Success: A Mixed-Methods Assessment of a Library Instruction Program.” Evidence Based Library and Information Practice 7.1 (2012): 82–95.  

This study from the University of Wyoming used a mixed-methods approach, with qualitative data provided by focus groups with 15 graduating seniors and quantitative data provided by transcripts for about 4,500 students. The interesting thing about this study is that it provided some evidence for the idea that scaffolded information literacy instruction is most effective for student success. Students in the focus groups said the ideal form of instruction was a session their freshman year and then at least one more when they were farther along in their majors, focused on doing research in their discipline. Transcript analysis showed a correlation (not causation) between GPA at graduation and receiving upper-division library instruction. Once again, the authors identified issues, such as not knowing whether students in the transcript analysis actually attended sessions or skipped that day, and the fact that the analysis only showed correlation.

So what is the answer to our question? A definitive “we don’t know.” And where does that leave us as we struggle to demonstrate our value to the teaching & learning mission of UT? It is clear that researchers in libraries are attempting to answer the question of whether what we do in library instruction is transferable and positively impacts students’ retention, graduation, and academic success. It is also clear that we can’t definitively say it does. On the plus side, I didn’t find anything saying it harmed students.

Questions for discussion:

  • How do you articulate the value of library instruction to the students you work with?  To the faculty?
  • Is there something we could or should be doing here in the Libraries to attempt to answer the question?
  • Does the fact that we don’t know affect your plans for library instruction provision?
  • Does the fact that we don’t know (beyond anecdotal evidence from our faculty) even matter?


Discussion: Want to Improve Your Teaching? Be Organized.

AJ kicked off the meeting by discussing the article “Teaching Clearly Can Be a Deceptively Simple Way to Improve Learning” by Dan Berrett, published in the November 22, 2013 issue of the Chronicle of Higher Education. The article discussed how teaching clearly is basic to improving student learning, a conclusion drawn from an analysis of three studies that looked at how professors’ organization and clarity are connected to deeper student learning.

The group then talked about different strategies we use in our attempts to explain things clearly and be organized in our teaching.  The strategies included:

  • When you explain a concept, have the students reflect it back or explain it to you.  This not only serves as a check for student understanding, but improves the chances of students who initially didn’t understand now “getting it” since it has been explained in more than one way.
  • At the beginning of class, tell the students your plan and goals for the class.  Write the goals on the whiteboard or project them on the screen if possible.  Check back in along the way so they see how they are accomplishing those goals.
  • At the beginning of class, ask students to tell you what they need to know in order to do their assignment.  Structure the class around their stated needs.
  • Give yourself time markers when you plan the class so you know how long different sections and activities should take and you don’t end up rushing through parts. Be sure to build in some flexibility, too, and be prepared to sacrifice some content if students end up needing more time on a concept than you initially planned.
  • Give students time markers. For example, tell them how long they have for an active learning activity and then give them a one-minute warning before the end of that activity so they can wrap up.
  • Use a variety of examples and illustrations to explain a point, recognizing that students have different backgrounds and different approaches to learning.
  • One example of how to explain the difference between formats is to show students a journal article, magazine article, newspaper article, and blog post and ask them to tell you which is which, how they know, and possibly when different types of information might be useful to their research.
  • Watch other people teach so you don’t get stale in your own teaching.  This is a way to find new ideas to organize your classes and explain difficult concepts.

We also discussed time constraints, a problem everyone faces with one-shots. Because of this constraint, it is hard to build repetition (so that you explain the same concept in more than one way), formative assessment (to check on student understanding as you go), and even summative assessment (to check on understanding at the end of class so you can follow up later and change things next time) into one-shots. However, it isn’t impossible, and we discussed some useful approaches, such as asking students to post resources they find during active learning into a Google Doc you can review right away, or taking a few minutes at the end of class to have them write three things they learned or the muddiest point. Krystal mentioned that LIS has a book called “Classroom Assessment Techniques” on our shelf that anyone is welcome to borrow, and she is also available to consult with anyone who wants to build assessment into their class.

One outcome of this RIOT is that we decided to start each one with a 15-minute discussion of things we are doing in the classroom, in order to learn from each other and get new ideas. These will be captured in the blog posts and categorized as active learning, assessment, and/or “in the classroom” so we can easily find them again. In addition, people want to observe LIS teaching, so we will make that happen in the spring.