
TLS TIPS: Evaluating Sources in the Classroom

This semester I started teaching source evaluation differently and wanted to share this approach in case it can be adapted for use in anyone else’s classroom.

Using the Assignment in Your Classroom

Step 1:  Split the room into groups.   This works really well in PCL 1.124 because they’re already facing each other at tables.  Tell them the first thing they have to do is assign a recorder and a presenter.  This gets their attention.

Step 2:  Explain the exercise and pass it out on a half sheet of paper.  It is interesting to see the types of things students write down, some of which, as they will learn by the end of the exercise, aren’t really helpful (for example, “if it is an .edu you can use it but if it is a .com you shouldn’t”).  Here is the exercise:

[Image: evaluating sources exercise handout]

Step 3:  Have a student explain the exercise back to you.   This way students hear it two ways and it ensures they understand what they are supposed to be doing.  I didn’t do this the first time and they didn’t really get it, but I didn’t know that until they were reporting out.

Step 4:  Assign each group a source.  I pre-pick the sources and put links to them on the SubjectsPlus course guide.

Step 5:  Give them about 7 minutes to do the exercise and then have each group report out one criterion.  As you add each criterion to the board, ask questions and hold a discussion.

My experiences with it

This has worked well in every class in which I’ve used it (all freshman classes, though).  Sometimes I mix up the source types and other times I’ll stick to one or two types.  I tie the types of sources I use to the assignment and learning outcomes for the session.  It can be used as a viewpoint evaluation exercise, a web evaluation exercise, a scholarly-versus-popular exercise, or a more general source evaluation exercise.

I always do this at the beginning of the class, after I’ve introduced myself, gotten them logged on, and told them the goals (LOs) and agenda for the class.  It works nicely as an icebreaker, but more importantly, it lays the groundwork for weaving source evaluation into the discussion of tools.  When they are doing their own searching during class, they can refer to the criteria list they generated and apply it to the sources they are finding.

I think you could do this exercise in a classroom with no technology and just hand out print sources.

In the honors classes I’ve taught, they’ve gotten really into it and don’t want to stop talking.  It brings up all sorts of issues they want to know more about, including evaluating (or arguing with each other about) Wikipedia, figuring out how funding may impact a web site, or figuring out which journals are more important than others (not really a freshman thing, but this exercise has led to that question).  Other times it takes a while because they aren’t quite getting it, but they always do eventually, and I see that they have begun to move away from black-and-white criteria (all blogs are bad!  don’t use opinions, etc.).

It is really fun and establishes a nice connection with the students.  If I start with this exercise, students seem to ask more questions during the rest of the class and seek out my help more readily.

While I would love to know how effective this is beyond what I can learn from anecdotal evidence, I only have that anecdotal evidence right now.  I’d be interested to know what other people’s experiences are if they adapt this exercise for use in their own classroom.

“When It Comes To Brain Injury, Authors Say NFL Is In A ‘League Of Denial'”

“If 10 percent of mothers come to believe that football is dangerous—to the point of brain damage, effectively—that’s the end of football as we know it.”

NPR ran this story about brain injury in NFL players, interviewing the authors of a book that is the basis for a forthcoming documentary. The story opened with the death of a celebrated football player and what happened when a pathologist, having decided to study the player’s brain, sent his findings of brain injury to the NFL. Here are the money paragraphs:

“He thought that well, this is information that the National Football League would probably like to have,” Fainaru says. “He says he thought [the NFL] would give him a big wet kiss and describe him as a hero.”

That’s not what happened. Instead, the NFL formed its own committee to research brain trauma. The league sent its findings to the medical journal Neurosurgery, says Fainaru-Wada. “They publish in that journal repeatedly over the period of several years, papers that really minimize the dangers of concussions. They talk about [how] there doesn’t appear to be any problem with players returning to play. They even go so far as to suggest that professional football players do not suffer from repetitive hits to the head in football games.”

You can find this stuff easily. The PubMed search
("Neurosurgery"[Journal]) AND national football league
takes you to the articles, several of which bear the affiliation “Mild Traumatic Brain Injury Committee, National Football League.”
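
If you want to pull the same result set outside the PubMed web interface, here is a minimal sketch using NCBI’s E-utilities esearch endpoint. The endpoint and the db/term/retmode parameters are standard E-utilities; the retmax cap and the printed output are just illustrative assumptions, not part of the original post:

    import json
    import urllib.parse
    import urllib.request

    # The same query quoted above, sent to the E-utilities esearch endpoint
    query = '("Neurosurgery"[Journal]) AND national football league'
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": query,
        "retmode": "json",
        "retmax": 100,  # assumption: cap on how many PMIDs to retrieve
    })
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

    with urllib.request.urlopen(url) as response:
        result = json.load(response)["esearchresult"]

    # Print the hit count and a link to each matching record
    print(result["count"], "matching PubMed records")
    for pmid in result["idlist"]:
        print("https://pubmed.ncbi.nlm.nih.gov/" + pmid + "/")

Skimming the returned records for the committee affiliation is then easy to do by hand or in a follow-up script.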

I think this could be fertile ground for an evaluation exercise. I’d like to talk about/brainstorm around it.

Helping students evaluate

A colleague sent round this interesting blog post the other day:

Anderson, Kent. Improving Peer Review: Let’s Provide An Ingredients List for Our Readers. The Scholarly Kitchen, March 30, 2010.

Anderson wants articles to include more information about the peer review process (reviewers’ credentials, number of revisions, etc.) to help readers distinguish more rigorously reviewed work from less rigorously reviewed work. He writes:

“Here are some potential categories I’d like to see:

  • Number of outside reviewers
  • Degree of blindedness (names and institutions eliminated from the manuscript, for instance)
  • Number of review cycles needed before publication
  • Duration of the peer review portion of editorial review
  • Other review elements included (technical reviews, patent reviews, etc.)
  • Editorial board review
  • Editorial advisers review
  • Statistical review
  • Safety review
  • Ethics and informed consent review”

While it might inform practitioners, would this information help students evaluate material?

What about this sort of information (from Mosby’s Nursing Consult)?

“Levels of Evidence
Studies are ranked according to the following criteria:
Level I    All relevant randomized controlled trials (RCTs)
Level II   At least one well-designed RCT
Level III  Well-designed controlled trials without randomization
Level IV  Well-designed case-controlled or cohort studies
Level V    Descriptive or qualitative studies
Level VI   Single descriptive or qualitative study
Level VII  Authority opinion or expert committee reports”

How useful will such guidelines be for students who lack subject expertise? Debra Rollins’ recent post on ILI-L, on the thread “evaluating resources,” considers who should best deliver this aspect of IL instruction.