Tag Archives: assessment

TLS Tips: Reflection as Assessment

At the end of the semester, I recommend scheduling time to reflect and perform a self-assessment of your teaching throughout the fall. If you incorporated any in-class assessments into your practice this fall, now is the perfect time to sit down with all the data you collected and look for patterns. Even if you didn’t get a chance to do any formal assessment, think about your high points in the classroom and the challenges you encountered. Ask yourself some questions.

Can you trace any identified patterns to something specific you did or didn’t do as part of your practice this semester? How do you think your particular successes and failures affected student learning? Most importantly, try to think of a concrete change you can implement next semester to address the challenges you faced this fall.

Don’t dwell on what went poorly; instead, use what you learn through your self-assessment to set goals for the next semester. You may want to take a moment to write down your reflections and goals to make them feel more concrete. Revisit them at the end of next semester to see whether you made progress.

An example from my fall:

At the beginning of the semester, I tried to incorporate a simple “End of Class Questions” form into my instruction sessions by including a link in the Course Guide and setting aside time at the end of class for students to answer the questions. While I didn’t do a good job of prioritizing this exercise as I became busier, the few results I did get line up with my own reflections. While overall satisfied, students in one class suggested having the class search for something specific to make the session more interesting. I agree! In that particular class, students hadn’t yet been given their assignment when they met with me. Reflecting on my semester as a whole, I realized that several of my challenges (students who didn’t know their assignments or why they were at the library, poorly timed instruction sessions, etc.) could be traced back to a lack of strong, clear communication with the faculty members or TAs I was planning with.

In the future, I will build more time into my planning process to ensure that instruction sessions are scheduled at the point of need. Neglecting to do so likely weakens student learning, because research concepts never get tied to immediate student interests (such as getting a good grade on an upcoming assignment). Next semester I will also try to focus on fewer learning outcomes so that I have time for quick assessments during sessions.

What did you learn through self-assessment?

RIOT: Does Library Instruction Make You Smarter?

All across UT (and higher education in general), people are attempting to assess student learning and articulate the value of their programs to student success, measured by retention, on-time graduation, GPA, post-college success and more. While we are successfully measuring the impact of our sessions on student learning (meaning we know students are achieving our learning outcomes in sessions for at least some of our programs), we haven’t measured whether what they are learning translates to more general success in or after college. Since Megan Oakleaf’s Value of Academic Libraries Review and Report in 2010, I have been wondering just what impact one-shot instruction sessions have on student success, whether that is defined as GPA, retention or on-time graduation. I am clearly not the only one wondering this, so I put together this post as an attempt to answer that question.

In 2007, Joseph Matthews published the book “Library Assessment in Higher Education,” which I haven’t read yet but have read about many times. He looked at studies up to 2007 and found them pretty evenly split between those finding a correlation between library instruction and GPA or retention and those finding no correlation. I found a few more articles published since 2007 that represent what has been happening since his book came out. This list is by no means comprehensive, but the articles illustrate the state of the research on the question and the ways people are approaching it.

Vance, Jason M., Rachel Kirk, and Justin G. Gardner. “Measuring the Impact of Library Instruction on Freshman Success and Persistence: A Quantitative Analysis.” Communications in Information Literacy 6.1 (2012): 49–58.

Librarians from Middle Tennessee State University attempted to find out whether one-shots for freshmen affected their GPAs and/or their likelihood of returning for a second year (retention). To do so, they gathered information about the one-shot classes they offered to freshmen over a two-year period, noting that these were introductory rather than research-intensive classes. They also gathered information about high school GPA, family income, ACT scores, race, gender, and major (all variables that have been correlated with retention). They could not find a direct connection between library instruction and student retention, although library instruction does appear to have a “small measurable correlation with student performance” (which, in turn, is tied to success and persistence). The authors themselves point out a number of issues with the study, including the fact that the students they counted as having attended instruction sessions may not have attended at all: they were enrolled in the courses that came in, but they may have skipped that day.

Wong, Shun Han Rebekah, and Dianne Cmor. “Measuring Association Between Library Instruction and Graduation GPA.” College & Research Libraries 72.5 (2011): 464–473.

Librarians from Hong Kong Baptist University looked at the correlation between GPA and library workshop attendance for 8,000+ students who graduated between 2007 and 2009. They found that GPAs were positively associated with the number of workshops offered: in programs that offered 5 workshops, GPAs were highest; in those that offered 3 or 4, GPAs were positively affected; and in those that offered 1 or 2, there was no positive correlation. Workshops, in this case, were a mix of required and voluntary, stand-alone and course-integrated. One issue with this study (and many others) is that it shows only correlation, not causation.

Bowles-Terry, Melissa. “Library Instruction and Academic Success: A Mixed-Methods Assessment of a Library Instruction Program.” Evidence Based Library and Information Practice 7.1 (2012): 82–95.  

This study from the University of Wyoming used a mixed-methods approach, with qualitative data provided by focus groups with 15 graduating seniors and quantitative data provided by transcripts for about 4,500 students. The interesting thing about this study is that it provided some evidence for the idea that scaffolded information literacy instruction is most effective for student success. Students in the focus groups said the ideal form of instruction was a session in their freshman year and then at least one more when they were farther along in their majors, focused on doing research in their discipline. Transcript analysis showed a correlation (not causation) between GPA at graduation and receiving upper-division library instruction. Once again, the authors identified issues, such as not knowing whether students in the transcript analysis actually attended sessions or skipped that day, and the fact that the analysis showed only correlation.

So what is the answer to our question? A definitive “we don’t know.” And where does that leave us as we struggle to demonstrate our value to the teaching & learning mission of UT? It is clear that researchers in libraries are attempting to answer the question of whether what we do in library instruction is transferable and positively impacts students’ retention, graduation and academic success. It is also clear that we can’t definitively say it does. On the plus side, I didn’t find anything saying it harmed students.

Questions for discussion:

  • How do you articulate the value of library instruction to the students you work with?  To the faculty?
  • Is there something we could or should be doing here in the Libraries to attempt to answer the question?
  • Does the fact that we don’t know affect your plans for library instruction provision?
  • Does the fact that we don’t know (beyond anecdotal evidence from our faculty) even matter?


Discussion: Want to Improve your Teaching? Be Organized.

AJ kicked off the meeting by discussing the article “Teaching Clearly Can Be a Deceptively Simple Way to Improve Learning” by Dan Berrett, published in the November 22, 2013 issue of the Chronicle of Higher Education. The article discussed how teaching clearly is fundamental to improving student learning, a conclusion drawn from an analysis of three studies that looked at how professors’ organization and clarity are connected to deeper student learning.

The group then talked about different strategies we use in our attempts to explain things clearly and be organized in our teaching.  The strategies included:

  • When you explain a concept, have the students reflect it back or explain it to you.  This not only serves as a check for student understanding, but improves the chances of students who initially didn’t understand now “getting it” since it has been explained in more than one way.
  • At the beginning of class, tell the students your plan and goals for the class.  Write the goals on the whiteboard or project them on the screen if possible.  Check back in along the way so they see how they are accomplishing those goals.
  • At the beginning of class, ask students to tell you what they need to know in order to do their assignment.  Structure the class around their stated needs.
  • Give yourself time markers when you plan the class so you know how long different sections and activities should take and you don’t end up rushing through parts. Be sure to build in some flexibility, too, and be prepared to sacrifice some content if students end up needing more time on a concept than you initially planned.
  • Give students time markers. For example, tell them how long they have for an active learning activity and then give them a one-minute warning before the end of that activity so they can wrap up.
  • Use a variety of examples and illustrations to explain a point, recognizing that students have different backgrounds and different approaches to learning.
  • One example of how to explain the difference between formats is to show them a journal article, magazine article, newspaper article, and blog post and ask them to tell you which is which, how they know and possibly when different types of information might be useful to their research.
  • Watch other people teach so you don’t get stale in your own teaching.  This is a way to find new ideas to organize your classes and explain difficult concepts.

We also discussed time constraints, a problem everyone faces with one-shots. Because of this constraint, it is hard to build repetition (explaining the same concept in more than one way), formative assessment (checking on student understanding as you go), and even summative assessment (checking on understanding at the end of class so you can follow up later and change things next time) into one-shots. However, it isn’t impossible, and we discussed some useful approaches, such as asking students to post resources they find during active learning into a Google Doc you can review right away, or taking a few minutes at the end of class to have them write three things they learned or the muddiest point. Krystal mentioned that LIS has a book called “Classroom Assessment Techniques” on our shelf that anyone is welcome to borrow, and she is also available to consult with anyone who wants to build assessment into their class.

One outcome of this RIOT is that we decided to start each one with a 15-minute discussion of things we are doing in the classroom in order to learn from each other and get new ideas. These will be captured in the blog posts and categorized as active learning, assessment and/or “in the classroom” so we can easily find them again. In addition, people want to observe LIS teaching, so we will make that happen in the spring.


Lesson + Study = Lesson Study! Researching learning in the college classroom

Sometimes you find a RIOT topic, and sometimes a RIOT topic finds you. I was jumping around from article to article last week when Michele sent out the latest issue of LOEX and this article caught my eye (perhaps because I was hungry): From Prix Fixe to A la Carte: Using Lesson Study to Collaborate with Faculty in Customizing Information Literacy. It describes using a research method called “lesson study” to collaboratively design and assess a one-shot information literacy session for a first-year composition course at the University of Wisconsin-Eau Claire. I ended up reading more about lesson study and found the concept really intriguing. This entry focuses on the above article, as well as one that informed it.

What is lesson study?
Lesson study originated in Japan and is a way of systematically designing and assessing a lesson plan so that the results contribute to shared knowledge about teaching and learning. In lesson study, a small team of teachers works together to plan, teach, observe, analyze, and refine individual lessons (Cerbin and Kopp). In Japan, the resulting lesson plans and studies are published and disseminated for other teachers to build from and use, so that “in essence Japanese lesson study is a broad-based, teacher-led system for improvement of teaching and learning” (Cerbin and Kopp). The Cerbin and Kopp article I keep quoting proposes a model for using lesson study in the college classroom. Instructors at the University of Wisconsin-La Crosse (where Cerbin and Kopp teach) have been using the lesson study method since 2006 and have a really great site that documents the process and shares all of their results. This is the context from which the LOEX article I found originated, as the project has spread to multiple UW campuses and across departments.

Planning
To embark upon a lesson study, a small team of teachers comes together to select a course, topic, and goals for student learning. Often, these teams are composed of instructors teaching the same course, but they can also be interdisciplinary teams (hello, librarians!) working toward common goals. In the Jennings et al. study, 4 librarians collaborated with 3 composition faculty. Their initial list of goals was huge, and they reported that the process of sitting down with faculty to discuss the meaning of their goals, how they could be taught and assessed, whether they could reasonably be taught in a one-shot, and how they fit into the overall curriculum was one of the most valuable aspects of the lesson study. Eventually, they whittled the list down to two outcomes, leaving the other concepts to be addressed in preceding classroom activities or discussions.
The next step is to plan the lesson. One of the main goals of lesson study is to design lessons that make the process of student learning visible, so that it can be observed. Once the lesson is planned, teams must design the study of the lesson. This involves deciding what data they will collect to assess student learning and thinking, and what observation guidelines team members will use when they observe the lesson being taught. To me, this focus on how students learn is what makes lesson study different from other forms of assessment.

“…the primary focus of lesson study is not what students learn, but rather how students learn from the lesson. To investigate how students learn, teams focus on student thinking during the lesson, how they make sense of the material, what kinds of difficulties they have, how they answer questions, how their thinking changes during the lesson and so forth.” Cerbin and Kopp

Teaching and Observing
Once the lesson and study have both been planned, one team member teaches the lesson while the others observe and collect data. Data collection may involve field notes, checklists, rubrics, etc. Cerbin and Kopp note that lesson studies promote observation of students rather than of the teacher’s performance, and that the collaboratively planned lesson (not the teacher or the students) is what is being judged.* This takes the heat off individuals and “helps pave the way for public knowledge building.”

Analysis
After the first go-round, the team debriefs to talk about their experiences, analyze data for evidence of student learning, and discuss possible changes based on what they found. The revised lesson can then be used in another class, continuing the cycle of evidence-based improvements.

Documentation and Sharing
The idea behind lesson studies is that they will produce valuable knowledge to be shared with others. Teams extensively document both the lesson portion and the study portion so that they can be disseminated and shared.

Why is it valuable?
Some of the reasons Cerbin and Kopp value lesson study include:

  • it encourages scaffolded, reflective design and assessment
  • it can help build a shared language for teaching and learning among instructors
  • it offers an evidence-based approach to teaching improvement
  • it provides a framework for investigating teaching and learning in the classroom

Jennings et al. reported that using lesson study to design the composition one-shot began a process of continual improvement and engendered a “culture of collaboration” among members of the lesson study team. It also led to further library involvement in integrating information literacy into curricular revision in the department. Eventually, it led to similar studies in courses in other departments. They reported that in a science lesson study group, the process revealed that faculty assumed their students came to them with much more robust information literacy skills than they actually did. Through the process of planning the lesson study, “Faculty began to recognize and internalize the idea that if they wanted their students to use information in the sciences effectively and appropriately, it was incumbent upon them to integrate information literacy into the curriculum rather than assuming the students were gaining these skills elsewhere” (Jennings et al.). Hooray!

I think this could be a great way to promote collaboration and to work in some meaningful assessment. It is difficult to assess the actual process of student learning, and I think this is one angle from which to approach it.

*Yes, this method requires IRB approval.

Questions to think about:

  • Where might a lesson study fit in our teaching?
  • Who on campus might we partner with?
  • How would we share our findings?

References:

Cerbin, W. & Kopp, B. (2006). Lesson Study as a Model for Building Pedagogical Knowledge and Improving Teaching. International Journal of Teaching and Learning in Higher Education, 18(3). Retrieved from http://www.isetl.org/ijtlhe/pdf/IJTLHE110.pdf

Jennings, E., Kishel, H., & Markgraf, J. (2012). From Prix Fixe to A la Carte: Using Lesson Study to Collaborate with Faculty in Customizing Information Literacy. LOEX Quarterly, 38(4). Retrieved from http://commons.emich.edu/loexquarterly/vol38/iss4/4


That’s What I Want… I think: Assessing User Needs for Maximum Library Outreach

Twitter, Facebook, Flash tutorials, instant messaging, iPhone applications – all of these new technologies have captured the attention of librarians in the hope that they’ll help us connect with our users in a more meaningful and timely way. But is the time invested in brainstorming, developing, and launching these applications worth it? Do our users care? And more importantly, are we even on the same page, technologically? Much has been made of the millennial generation being technologically savvy early adopters of cool new tools, but is that the case?

A study by Char Booth, a UT-Austin iSchool alum, published in March 2009, attempted to analyze precisely that. In “Informing Innovation: Tracking Student Interest in Emerging Library Technologies at Ohio University (A Research Report),” Booth describes the environmental scan and assessment process used to get inside students’ perceptions of these library-tailored emerging technologies, and to make sure the library wasn’t just using technology for technology’s sake. While I didn’t read the entire research report, she has received a lot of well-deserved attention for her research and recently gave two interviews about her experience. In a recent interview with ALA Tech Source’s Daniel Freeman, she notes that slow adoption of their video-chat service prompted the library to reflect on its technology implementation process. She writes,

This experience led us to question whether we were creating new tools in a somewhat off-the-cuff way, using our assumptions about how students were interacting with technology without much actual investigation or needs assessments. We were starting to run the risk of becoming stretched too thin – the more services we added, the more we asked of our coworkers in terms of learning and staffing new tools, and the lower the potential service quality we were providing in general.

When I begin to think about implementing new technologies, I’m not sure I ask “What’s the need?” or “What’s the point?” I usually think, “Ooh, let’s try that!” Granted, I don’t think this is necessarily a bad way of going about trying new things and keeping oneself apprised of new technologies, but when staff time, maintenance, and training are part of the mix, it stands to reason that we should assess the situation before jumping in.

In order to gauge the environmental climate at Ohio University, Booth used web-based surveys, which yielded a rather high response rate and tried to get at two different “use cultures”: technology use and library use. Respondents reflected what seems to be a pretty good representation of the Ohio University student population, although it seems more females than males finished the surveys. Most respondents were between 17 and 22 years old.

However, many of their findings differed from what they expected. For instance:

  • Graduate students ended up spending more time online
  • Older students used more Web 2.0 technologies (based on Ohio’s student population!)
  • Younger students used more web-based applications (IM, Facebook, YouTube)

Using the data, one could conclude that lower-level undergraduates may not yet see the benefits of the libraries (especially in terms of efficient library technologies) because they may not yet understand or recognize their own research needs. Booth explains this as “library predisposition”:

“…returning undergraduates and graduate students simply seemed more receptive to library products that offered them new ways of researching and/or interacting with librarians, regardless of whether they currently used the platforms in question. Whether this comes from greater information needs or more experience using libraries over the course of their lives and/or academic careers, either way it confirms the considerable outreach challenge of marketing not only new services to younger users, but the idea of libraries in the first place.

A positive discovery in terms of age and library technology receptivity was that despite their lower levels of library receptivity and awareness, undergraduates seemed eager to learn more about what was available to them – we received literally hundreds of open-ended comments communicating that students had no idea the library had so much to offer across the board, and encouraging us to promote better visibility.”

So in the end, at Ohio University, it’s possible that it was simply an issue of mis-marketing. Older users may have been using technologies that the library wasn’t promoting as heavily, while younger users didn’t necessarily care about the technologies that were being marketed. This is why Booth emphasizes knowing your user population and customizing services to meet their needs. Of course, customization takes effort, both in studying your user population and in maintaining a culture where you can keep up with rapid changes in users’ needs and perceptions (though we may find after doing a user survey that our users’ needs don’t actually change as rapidly as we expected!). As Booth notes, for example, technology usage in one service may directly affect usage in other areas:

“This is also important as older social tools become outmoded by newer platforms – for example, many are now finding that apps like Twitter and Facebook are reducing the number of people who are signed into IM clients, meaning that it is likely time for many to reevaluate their IM reference services and consider whether another communication platform would have better potential impact. It’s all about building informed flexibility into the way you evaluate and implement new services.”

Takeaways:

  • Know your users before you embark on a wave of technology implementation and customization
  • The findings may be generalizable, but obviously different colleges/universities will have different cultures. Thus, what might be right for you may not be right for some.
  • Old school, web-based surveys still work; no need to go all Rochester.
  • Ask what the library offers and see what students know; working backwards could help.

Questions:

  • What are some assumptions we have about UT students?
  • Could we see ways in which this would benefit the UT population?
  • Would we want to do something like this?
  • What would we want to gain from the experience?

Other sources

  • Watch a webcast of Char’s presentation at ACRL; it goes into more depth about the data and technologies that succeeded & failed at OU, and it’s less than half an hour.