All posts by rb

Science Instruction Librarian at the Life Science Library, University of Texas at Austin. Librarian for Nutritional Sciences, Nursing, and Human Development & Family Sciences.

RIOT–May 17, 2016

Instruction for graduate students

Janelle and I will discuss our experiences with instruction for graduate students. This sort of sharing is important, since there isn’t a lot of how-to literature out there for guidance (though we can highlight a couple of articles). The discussion will address the differences between instruction for undergraduates and instruction for graduate students, focusing especially on systematic reviews, scholarly communication, and data management.

 

We hope this RIOT will be like a Reddit AMA on instruction for graduate students. Please submit your questions by leaving a comment.

CC Kati Fleming, July 4, 2013. https://commons.wikimedia.org/wiki/File:Horned_Lark,_Eremophila_alpestris,_nestlings_begging,_baby_birds,_gape_colors,_leaping_in_nest_Alberta_Canada_(1).jpg

“…how we teach in the classroom can be as important as what we teach…”

In July 2015, David Gooblar wrote a pithy Chronicle Vitae column, the crux of which is that we should sometimes “model stupidity” for our students. Gooblar cites a couple of other short pieces, notably “The Importance of Stupidity in Scientific Research,” that support the idea that students need instructors to pull back the curtain on their learning process.

“Modeling stupidity,” as Matthew Fleenor writes in a 2010 article for Faculty Focus, “is one of the best ways we can provide an example to our students. It’s important for them to understand that learning involves seeking out the gaps in our knowledge.”

One commenter on Gooblar’s column prefers to call this “modeling curiosity,” but points out the particular pitfalls of exposing ignorance in the classroom for female instructors. That’s almost certainly an issue for instructors of color as well. And it may well pose concerns for instruction librarians, who are already regarded by students as “guest speakers” instead of as experts. Yet clearly, students need to know how instructors recognize and deal with their own ignorance.

Probably we’ve all had an “I don’t know” experience in one-on-one reference encounters, and have pursued answers/solutions with some measure of poise. But “I don’t know” in front of a classroom full of students is a whole ‘nother ball of wax.

How do instructors get comfortable with saying “I don’t know,” and what action plan follows?

Can instruction librarians model stupidity in the often limited time we have with students?

Is there a cost to authority if an instructor models stupidity?

Is it possible to provide faculty with examples of opportunities to model stupidity with respect to literature searching and information resources?


Fleenor, Matthew. (2010). “Responding to student questions when you don’t know the answer.” Faculty Focus. http://www.facultyfocus.com/articles/teaching-and-learning/responding-to-student-questions-when-you-dont-know-the-answer/

Gooblar, David. (2015). “Modeling the behavior we expect in class.” Chronicle Vitae, Pedagogy Unbound. https://chroniclevitae.com/news/1067-modeling-the-behavior-we-expect-in-class

Moore, Katherine. (2015). Comment on “Modeling the behavior we expect in class.” Chronicle Vitae, Pedagogy Unbound. https://chroniclevitae.com/news/1067-modeling-the-behavior-we-expect-in-class

Schwartz, Martin. (2008). “The importance of stupidity in scientific research.” Journal of Cell Science 121(11): 1771. http://jcs.biologists.org/content/121/11/1771.full

 

DISCUSSION: “Does Library Instruction Make You Smarter?”

Since we didn’t actually have Michele’s RIOT back in the summer, it was good to finally get to this topic. The busy start to the fall semester puts assessment and instructional effectiveness on all of our minds. A lively discussion followed Michele’s presentation.

All were chagrined that the studies didn’t show any correlation between student success and library instruction (LI), and we wondered what kinds of instructional interventions do show a correlation with student success. Real-world guest lecturers? Recitation? What?

The tendency for education researchers to work with giant datasets instead of qualitative research leads to literature that doesn’t adequately characterize students’ motivation, an important part of student success. Assessing students’ research papers would show the effect of LI, but one first needs to think about the audience for this information. Creating a controlled teaching environment could help us know what’s effective, but that’s neither practicable nor desirable. Besides, there are so many variables that it’s better to collect information on the perceived value of LI, and on whether LI changes students’ behaviors.

From here the conversation shifted to library anxiety and creating a culture that takes away the imposter-syndrome discomfort students may feel when they ask for help. Outward-facing attitudes benefit LI, and outreach and campus partnerships can help to create a culture of comfortable help-seeking.

 

RIOT: Guides

I was quite pleased to see the 2014 ACRL IS Innovation Award-winning library research guide, Library DIY. I love it. I’ve always liked the idea of a step-by-step guide or a flowchart guide, and in fact I’ve played around with developing a dichotomous key for common library research tasks.

There’s nothing wrong with our setup, but I often think that students, especially first-year students, may not be so good at knowing the steps of what they need to do, and therefore not so good at articulating what help they need. And many libraries’ setups, including ours, rely on the users being able to identify the steps of their process and then go looking for that specific help.

The beauty of Library DIY is that the standard steps are articulated for the users. This is lovely teaching, I think—providing answers while showing users what the steps are at the same time. It’s also great for instructors, if they get a chance to check it out: it shows them what their students need help learning to do, and the level at which their students are thinking about the research process. It could even be helpful as a guide to instructors as they design assignments.

I have several thoughts and questions about this for our discussion.

Do we have a definitive rubric for guides? If we brainstorm to construct one, what are some criteria? Here’s a start:
1) Teachability—how well it teaches; how many different audiences it teaches (students on which levels, instructors, library staffers who aren’t in user services…)
2) Ease of use
3) Comprehensiveness of topics covered
4) Timelessness—will it be long-lived, or will content units become dated?
5) Is it interactive? Is it an info dump?
6) Frustration level?
7) Goals—do different guides have the same goals or different ones?
8) Placement and findability—Is it in a place on the site where users are likely to look for this information? Could it pop up if users spend a designated amount of time doing or not doing something? How should we describe/tag it, for users who want to search the library website?
9) Macro or micro level of assistance/coverage? General or specific?

What formats are being used for guides? What are pros and cons of different formats? There’s a reason why they use dichotomous keys in plant biology, for example, but there may be no analogous work to do in, say, social work. Certain guide formats may be a more natural fit for certain subject areas. What are some Helping Tools for common tasks in other disciplines?
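
To make the dichotomous-key idea concrete, here’s a toy Python sketch of one way such a guide could be structured; the questions and endpoints are invented for illustration and aren’t from Library DIY or any actual guide.

# A dichotomous key as a nested decision tree: each node asks one yes/no
# question, and each leaf is a string telling the user what to do next.
KEY = {
    "question": "Do you already have a specific source in mind?",
    "yes": {
        "question": "Is it a book?",
        "yes": "Search the library catalog by title.",
        "no": "Look up the journal title in the e-journal finder.",
    },
    "no": {
        "question": "Do you need scholarly sources?",
        "yes": "Start with a subject database for your discipline.",
        "no": "Start with a news or general-interest database.",
    },
}

def walk(node):
    """Ask questions until we reach an endpoint (a plain string)."""
    while isinstance(node, dict):
        answer = input(node["question"] + " (y/n) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    print(node)

walk(KEY)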
*****
Wikipedia. 2014. Dichotomous key. Retrieved 02/10/14. http://en.wikipedia.org/wiki/Dichotomous_key

American Library Association. 2014. Farkas, Hofer, Molinelli and Willson-St. Clair win 2014 ACRL IS Innovation Award. ALA News. http://www.ala.org/news/press-releases/2014/02/farkas-hofer-molinelli-and-willson-st-clair-win-2014-acrl-innovation-award

Farkas, Meredith. 2012. In Practice: The DIY Patron. American Libraries 43(11/12):29.

“When It Comes To Brain Injury, Authors Say NFL Is In A ‘League Of Denial'”

“If 10 percent of mothers come to believe that football is dangerous—to the point of brain damage, effectively—that’s the end of football as we know it.”

NPR ran this story about brain injury in NFL players, interviewing the authors of a book that is the basis for a forthcoming documentary. The story opens with the death of a celebrated football player and what happened when a pathologist who studied the player’s brain sent his findings of brain injury to the NFL. Here are the money paragraphs:

“He thought that well, this is information that the National Football League would probably like to have,” Fainaru says. “He says he thought [the NFL] would give him a big wet kiss and describe him as a hero.”

That’s not what happened. Instead, the NFL formed its own committee to research brain trauma. The league sent its findings to the medical journal Neurosurgery, says Fainaru-Wada. “They publish in that journal repeatedly over the period of several years, papers that really minimize the dangers of concussions. They talk about [how] there doesn’t appear to be any problem with players returning to play. They even go so far as to suggest that professional football players do not suffer from repetitive hits to the head in football games.”

You can find this stuff easily. The PubMed search
(“Neurosurgery”[Journal]) AND national football league
takes you to the articles, several of which bear the affiliation “Mild Traumatic Brain Injury Committee, National Football League.”
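
If you’d like to script that search, here’s a minimal Python sketch (mine, not from the original post) that runs the same query against NCBI’s E-utilities API; the retmax of 50 is an arbitrary cap.

# Run the PubMed search above via NCBI's E-utilities esearch endpoint.
import json
import urllib.parse
import urllib.request

query = '("Neurosurgery"[Journal]) AND national football league'
params = urllib.parse.urlencode(
    {"db": "pubmed", "term": query, "retmode": "json", "retmax": 50}
)
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print(result["count"], "matching articles")
print(result["idlist"])  # PMIDs; paste one into PubMed to see the record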

I think this could be fertile ground for an evaluation exercise. I’d like to talk about/brainstorm around it.

Clean vs Messy

Eyre, Jason. (2012). “Context and learning: the value and limits of library-based information literacy teaching.” Health Information and Libraries Journal 29(4): 344-348.

This article discusses the findings of a project that used social media to support the academic progress of social work students who were off campus in work placements. The author’s reflections deal with the students’ use, or lack thereof, of academic resources, including library information resources of obvious relevance to the students’ casework. The main point of Eyre’s article that I want us to consider is that while there exists literature that is clearly germane and professionally useful, the realities of on-the-ground practice often militate against literature searching.

“When asked where they went for information to inform their decision-making in a workplace setting, the predominant answer from students was that they spoke to another person… Fewer students cited printed sources of information and when they did they were more commonly agency-based procedures/protocols, case records or broader policy documents. By contrast, formal research publications were cited far less frequently.” p.345

Eyre discusses a “theory/practice dichotomy” and students’ struggles applying research to practice, which they recognized as something that a professional should be expected to do. He points to the “clean” information-seeking environment of the university and how it doesn’t adequately model the often “messy” info-seeking situations on the job.

“…librarians and others are perhaps inadvertently reinforcing the dichotomy between the realm of the university and the real-world realm of the workplace. Even the most relevant, authentic task remains divorced from the realities experienced by the students on placement… The fact remains that these skills are taught in the university, in the context of assessment and formal, ‘clean’ learning outcomes, and away from the ‘messy, real world’ of social work practice.” p.346

We need to consider that a lot of the sources that our learners may most commonly use in their practice will not be the formal, authoritative things we teach and preach about. This comes up again in the abstract of an article I’m waiting to read—“Information needs of clinicians and non-clinicians in the Emergency Department: a qualitative study” (Ayatollahi, Bath and Goodacre, 2013)—where the authors interviewed ER staff and found that “Patient information was considered the most important type of information, and verbal communication was the most frequently used source of information.”
Some thinking on the transferability of what we teach is in order. What disciplines are characterized by “messy” on-the-ground practice? What sorts of info resources—policy documents? electronic health records? other?—are basic to those disciplines?
Teaching should include discussion of what one might use when: formal research for the “clean” times of reflection and assessing one’s professional practice, and “action” info resources for the messy day-to-day practice.

Teaching About the Literature

Here’s a scenario:

A local professor has published an article that has led to controversy. Other professors, both from your institution and elsewhere, have published a flurry of responses to this article. A cursory Google search shows that it’s even appeared in several news outlets.
You face a class full of undergraduates. How can you help them make sense of/think critically about this controversy?
Well, initial critical thinking relates to IL Competency Standard #1—figuring out what to ask. But further critical thought must be fed with evidence.
In the shift from BI to ILI, librarians have somewhat moved away from teaching specific tools and techniques to teaching more big-picture stuff. That’s not a bad thing, but check out what Bodemer (2012, p.337) says about what professors want:

First, while faculty may say that they want students to find good, scholarly sources for their papers, what they ultimately want is for students to learn how to find such sources. Moreover, though never put in such broad terms, they also want students to exclude sources.

I’m going to say that students in such cases need to be taught specifics—not just how to find what they specifically need, but how to think about and explore the literature. In the case of a controversy like this, the novice researcher needs context. A controversial study like this, with findings that are at variance with the previous literature, needs careful critiquing, in a systematic way that isn’t just savaging it on PC grounds.
I don’t see anywhere in the IL Standards where Knowing About The Literature is explicitly mentioned. But in a situation like this, it’s exactly what students need to know. What are the norms of literature in this discipline? What’s the shape of the literature? Some of this can be known without extensive reading. Some of this might be known in a quantified way, via thorough searching.
How many other studies are on the same/similar topics as Regnerus’s? What proportion of social science articles on this topic have this philosophical or methodological approach? What is Regnerus’s methodology anyway? After reading criticisms of Regnerus’s article, students might need to know, for example, how to read his article to identify its methodology and find search terms they can use to find other Social Science Research articles using that methodology.
Teaching students about the literature—faculty I know aren’t doing it. I don’t typically get to do it. How about y’all? How do we work it in, and how do we make the case to faculty that we should be teaching this?

Bodemer, Brett B. (2012). The importance of search as intertextual practice for undergraduate research. College & Research Libraries 73(4): 336-348.
Brown, Matthew. (2012). Social scientists defend Mark Regnerus’ controversial study on same-sex parenting. Deseret News.
Olson, Walter. (2012). The Regnerus Gay-Parenting Study: More Red Flags. The Huffington Post.
Osborne, Cynthia. (2012). Further comments on the papers by Marks and Regnerus. Social Science Research 41(4): 779-783.
Regnerus, Mark. (2012). How different are the adult children of parents who have same-sex relationships? Findings from the New Family Structures Study. Social Science Research 41(4): 752-770.

My PollEverywhere experience

Last fall, I got to teach in the labs of a lower-division nutrition course. The students were mostly non-majors, and freshmen. The lab met for three hours in a computer classroom. TAs were present during the labs, but did not assist in the ILI, which was preparation for a graded literature searching assignment in which students would find a popular article on one of four nutrition topics, and then find either a research article or a review article on the same topic as the popular article.

In the lab, students did “cold turkey” searching and answered questions using the internet feature of PollEverywhere. Students were directed to use UT Libraries databases to find articles on sustainability and the locavore movement. Student participation was quite gratifying.

  1. Which LIBRARY DATABASE did you use for your search? (Class 1, 28/30; Class 2, 30/30; Class 3, 31/29)
  2. What SEARCH STRATEGY did you use [Ex: (women OR female*) AND (osteoporo* OR bone loss)]? (Class 1, 26/30; Class 2, 27/30; Class 3, 31/29)
  3. Give the article title and publication title of the POPULAR article you found. (Class 1, 17/30; Class 2, 26/30; Class 3, 19/29)
  4. Give the article title and publication title of the SCHOLARLY article you found. (question skipped)

I adjusted the content covered and time spent on each concept in response to the students’ poll answers, alternating lecture/discussion with exercise questions. I believe that being able to see what the students knew and what they were confused about helped me make the IL session much more valuable to the students. I used the free version of PollEverywhere, which lets you have 40 responses per poll. For larger classes, you can game the system by having Poll A, Poll B, etc. Or, $65.00 a month gets you 250 responses per poll, and the service appears to be month-to-month. I recommend wide deployment of PollEverywhere or a similar technology. According to the ECAR study, 55% of undergraduates own a smartphone and 87% own a laptop. With the internet option for poll response, students needn’t be on the hook for texting charges.

Let’s spend $65.00!

  1. What’s the most successful/useful feedback tool or technique you’ve used in a class?
  2. If we built the ideal feedback tool, what would it look like?

ECAR National Study of Undergraduate Students and Information Technology, 2011 Report, http://www.educause.edu/2011StudentStudy


Good-enough searching

I read a post on Scientific American’s blogs about the myths of organic farming versus conventional agriculture. It’s a pretty hot-button topic, and as expected, there were many comments, though of a more elevated nature than those I typically see after articles in a daily paper.

It struck me, not for the first time, that the debate that took place in the comments would muddy the waters for anyone not particularly versed in this topic, and embroil them in an ever-spiraling pursuit of yet more articles, yet more data, yet more pros and cons. How would a newbie know when she could stop information-gathering?

I read a 2011 article by Jill Newby (heh) of the University of Arizona on the information-seeking strategies of graduate students in interdisciplinary programs. Its relevance has to do with these students’ need to familiarize themselves with fields that are not their major disciplines, and their difficulties in knowing when enough is enough.

Newby describes a model for a course to prepare grad students for interdisciplinary info-seeking that mentions the importance of “chaining” (“following references from one source to other relevant information sources,” p.225) but glosses over the “when to stop” problem. However, chaining from her bibliography led me to skim a 2007 article on “satisficing” information needs. Satisficing is defined as “an information competency whereby individuals assess how much information is good enough to satisfy their information need” (p.75), which means that individuals had best be good at determining the nature and extent of information needed, IL Standard #1.

But satisficing turns out to be more about a cost-benefit calculation than being actually satisfied that one has what one needs. It’s more of a due-diligence situation. Prabha et al. list three stopping rules (p.77): satiated searcher, disgusted searcher, and combination searcher (Kraft, D.H. and Lee, T. 1979. Stopping rules and their effect on expected search length. Information Processing & Management 15(1): 47-58).
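
As a thought experiment, here’s a toy Python sketch of the three stopping rules as summarized above; it’s my own illustration, not anything from Prabha et al. or Kraft and Lee, and the numbers are invented.

# Toy model of the stopping rules: a "satiated" searcher stops on finding
# enough relevant documents, a "disgusted" searcher stops after examining
# too many, and a "combination" searcher stops at whichever limit hits first.
def search(documents, needed, patience):
    """documents: sequence of booleans, True meaning a relevant hit."""
    found = 0
    for examined, is_relevant in enumerate(documents, start=1):
        found += is_relevant
        if found >= needed:
            return f"satiated after examining {examined} documents"
        if examined >= patience:
            return f"disgusted after examining {examined} documents"
    return "ran out of documents before stopping"

# A combination searcher who wants 5 good sources but quits after 20 tries.
# Make patience huge for a purely satiated searcher, or needed huge for a
# purely disgusted one.
print(search([True, False, False, True, False] * 10, needed=5, patience=20))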

We all have encountered students who have satisficed at a stage where we would say they need to look at more, and more appropriate, information. Prabha et al. ran focus groups with students and faculty, and identified quantitative and qualitative criteria for stopping searching. I was particularly struck by the qualitative criterion of the same info being confirmed in several consulted sources, cited by both students and faculty, and the qualitative criterion of “representative sample of research was identified,” cited by faculty only. It seems to me that students would be less likely to confidently identify a representative sample of research.

For discussion: why searchers stop searching, when searchers don’t/can’t stop searching, and searchers’ question-definition skills.

Because most searchers don’t want to read another book!

**************

Newby, Jill. 2011. Entering unfamiliar territory: building an information literacy course for graduate students in interdisciplinary areas. Reference & User Services Quarterly 50(3): 224-229.

Prabha, Chandra, Lynn S. Connaway, Lawrence Olszewski & Lillie R. Jenkins. 2007. What is enough? Satisficing information needs. Journal of Documentation 63(1): 74-89.

ALL RIOT: New metrics for academic library engagement

Gibson, Craig and Christopher Dixon. (2011). New metrics for academic library engagement. Paper presented at ACRL 2011. http://www.ala.org/ala/mgrps/divs/acrl/events/national/2011/papers/new_metrics.pdf

These authors composed a draft definition of library engagement, then tested it by interviewing library administrators about their libraries. They defined engagement as:

Sustained, strategic positioning of the academic library to create collaborative, reciprocal relationships with identified partners in order to advance institutional, community, and societal goals; to solve institutional-level and community-level problems; to create new knowledge, new products and services; and to effect qualitatively different roles for academic libraries themselves through impact, integration, and outreach to their varied constituencies. (p.342)

Though they mention Return On Investment as a measure that’s being looked at a lot by academic libraries in these parlous times, Gibson and Dixon point to libraries’ value as more of an academic quality-of-life issue. Noting that there’s a whole lot of strategic planning going on (this crisis of identity brought to you by… the economy), they suggest that academic libraries will want to move beyond “the traditional service role in favor of an ‘engaged partner’ role—both on their own campuses, as well as beyond their institutions.” (p.341)

On the basis of their interviews, they recognized some deficiencies in their draft definition of engagement, particularly with regard to reciprocity in partnerships and to proving success. Nevertheless, they were able to create five categories of potential metrics with qualitative and quantitative indicators. I reproduce their table (p.346) here for discussion.