Recap by Mitch Cota
The latest DART session began with a working paper by Sam Wineburg and Sarah McGrew, “Lateral Reading: Reading Less and Learning More When Evaluating Digital Information.” It served to open a discussion about how we evaluate sources as individuals with different levels of education and different professional backgrounds. The paper reports a case study of how three groups of readers (history PhDs and faculty, Stanford undergraduates, and professional fact checkers) evaluate sources for trustworthiness. Each group was given two articles to evaluate under a time limit. They were then given a legal case and asked to find out who funded the plaintiffs' legal fees, since the plaintiffs were children. Each group approached the problem with its particular skill set, which led to different outcomes.
The historians approached the problem through a deep evaluation of the articles themselves: a vertical reading. Their specialization in primary documents led them to analyze the articles on their own terms and to shy away from clicking any links that would lead to other sites, that is, from lateral reading. The fact checkers, by contrast, examined each aspect of an article through lateral reading. They opened multiple browser windows to search for more information on the people, organizations, and topics involved, and they followed the links presented throughout the articles.
I am of course simplifying the process, and I would encourage anyone invested in source evaluation to read the article for more detail. The reason for bringing this article to our DART discussion was to examine how we can effectively incorporate lateral reading into literacy instruction within the different UT groups we each support. In the age of linked data, we need to reevaluate the way we evaluate sources. Lateral reading means less reading while giving the researcher a more critical eye when evaluating sources.
Particular attention was paid to the current climate of information consumption and evaluation. When multiple news outlets and content producers are being called into question, it is of the utmost importance that we evolve our discussion of source evaluation so that we can give information consumers the tools they need to succeed. There was consensus about how difficult it is to assess the success of different approaches, and the CRAP test was called into question specifically. Checklists were seen as failing to give students the skills they need to discern quality resources: every checklist introduces new issues that can negate its effectiveness.
So, if we are moving away from the CRAP test and checklists are proving less effective, where does that leave us in the classroom? The gamification of evaluation was discussed as a highly effective way to get students engaged with source evaluation without producing checklist-like results. An apt comparison was drawn between students' ability to effectively evaluate the social media presence of someone they know and the same toolset applied to evaluating sources. This bridge was seen as a future direction that could reframe students' perspective on their own ability to evaluate sources.
The takeaways from the discussion focused on:
- Lateral reading as a necessary skill in source evaluation
- Balancing vertical reading versus lateral reading based on topic, discipline, and professional background
- Encouraging a broader and greater level of critical analysis
- Accepting that experience still plays a large role in source evaluation, so there are limits to what incoming students and researchers can gain
- Focusing on habits of mind and ways of assessing these different skill sets we are providing
- Establishing a way to break down the thought process that has become second nature to those who have fine-tuned their ability to evaluate resources
- Encouraging a general assessment of the “lay of the land” before analyzing an article
- Emphasizing the positive outcomes of “leaving the page” when analyzing online content
This topic affects us all to some extent, whether in our professional library settings or our personal lives. The article is a bit long, but it is definitely worth the read, and it stimulated quite a bit of conversation about what we are doing now and where we could move in the future. I would invite everyone to take a look!
Do you have an article or topic you would like to bring to DART? Feel free to contact Elise Nacca with any ideas and feedback!