Session T-4C

Education

11:55 AM to 12:45 PM | Moderated by Annabel Cholico


Los Caminos de La Vida: The Impact of COVID on Education in Rural Communities
Presenter
  • Karina Flores, Junior, Sociology, McNair Scholar
Mentors
  • Michael Spencer, Social Work/Public Health
  • Santino Camacho, Social Welfare
Session
  • 11:55 AM to 12:45 PM

Los Caminos de La Vida: The Impact of COVID on Education in Rural Communities

In 2020, the COVID-19 pandemic exacerbated educational disparities for students living in rural areas. These included familial financial stresses, which pushed many migrant students in rural communities to prioritize work over school. The pandemic shed light on disparities embedded in rural public-school education systems. The purpose of this study was to examine how the educational trajectories of students in rural communities were affected by the social and economic impacts of COVID-19. To accomplish this purpose, we examined the extent to which familial needs shaped students’ post-high school educational plans, how financial strain influenced their post-graduation choices, and how students practiced resourcefulness and resilience despite economic hardship. In this community-based qualitative research project, we conducted semi-structured interviews with Eastern Washington high school seniors who were 18 years of age or older and used a phenomenological thematic analysis to identify themes related to our research questions. As part of the research, we collaborated with a community advisory committee composed of teachers and recent high school graduates from Eastern Washington communities to develop the project’s research methods and to ensure that the analyses and interpretations of the interviews reflect the students’ experiences. We predicted that students would alter their post-high school paths to accommodate their families’ needs. Anti-racist, strengths-based frameworks were used to make academic support recommendations for students in rural communities. Ultimately, our study can help inform collaboration with community members to find solutions that best support students and encourage them as they navigate pathways after high school graduation.


Investigating the Accuracy of Constructed Response Computer Scoring
Presenter
  • Abigail Jane (Abbie) Gilbert, Senior, Biology (General)
Mentor
  • Jennifer Doherty, Biology
Session
  • 11:55 AM to 12:45 PM

Investigating the Accuracy of Constructed Response Computer Scoring

Constructed response short answer assessments provide greater insight into student understanding than multiple choice evaluation, but involve time-intensive grading. To increase scoring efficiency, we worked with the Automated Analysis of Constructed Responses (AACR) Research Group to use supervised machine learning to generate a computer scoring program for a biology constructed response formative assessment question. However, ensuring accurate and unbiased scoring is necessary before this technology enters classrooms. Because first-generation and minority students are underrepresented in STEM classrooms, and because the assessment rubric may be specific to University of Washington curriculum, I hypothesized decreased scoring accuracy for first-generation and minority student responses and for students not attending the University of Washington. Responses to the constructed response formative assessment question were collected from five institutions, including public universities and community colleges, and were scored by me, by another trained human scorer, and by the scoring program. Previous research found that this question, when human-scored, shows no bias by student demographic (i.e., there is no differential item functioning). Responses were de-identified prior to human scoring, and human scores were reviewed by the supervising researcher before analysis. Using logistic regression and model selection, I analyzed whether the scoring program’s accuracy depended on students’ reasoning level, GPA, university, timing of assessment, first-generation status, race or ethnicity, or gender. My analysis found no significant demographic or institutional bias in the scoring program. However, results did indicate decreased computer scoring accuracy for higher-level reasoning scores (i.e., when students gave more accurate responses to the question).
For this assessment question and scoring program, my results indicate that further training of the program on higher-level responses is needed before scoring bias is eliminated. This bias-analysis research helps ensure that the increased scoring efficiency offered by computer scoring programs does not come with an increase in assessment bias.
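The bias check described in this abstract — modeling whether the computer agrees with a human scorer as a function of a student characteristic via logistic regression — can be sketched as follows. This is a minimal illustration, not the study’s analysis code: the data are synthetic and the group indicator (here standing in for something like first-generation status) is hypothetical.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit y ~ sigmoid(b0 + b1*x) by gradient descent.

    xs: 0/1 group indicators; ys: 0/1 flags for whether the computer
    score agreed with the human score. Returns (intercept, slope)."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y)
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Synthetic data: both groups have the same ~85% human-computer
# agreement rate, so no bias signal should appear.
random.seed(0)
xs = [random.randint(0, 1) for _ in range(400)]
ys = [1 if random.random() < 0.85 else 0 for _ in xs]

b0, b1 = fit_logistic(xs, ys)
odds_ratio = math.exp(b1)  # near 1.0 means group membership does not predict accuracy
```

A fitted odds ratio far from 1.0 (with an appropriate significance test) would flag differential scoring accuracy for that group; the abstract reports no such effect for the demographics examined.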


Effectiveness of Video Autopausing to Elicit Active Responses
Presenter
  • Sheharbano Jafry, Senior, English, Biochemistry, UW Honors Program
Mentor
  • Jennifer Doherty, Biology
Session
  • 11:55 AM to 12:45 PM

Effectiveness of Video Autopausing to Elicit Active Responses

The COVID-19 pandemic has caused many college courses to shift to an online format, and many instructors have altered their teaching methods to maximize student learning. For example, some active-learning instructors have embedded autopause questions, which appear when the video stops at certain points, in recorded lectures. We wanted to understand how autopause questions could change how students approach class questions. We hypothesized that if a professor has the video autopause before it provides the answers to these questions, then students will be more likely to generate their own answers before hearing the correct ones. We investigated this hypothesis over two quarters of Introductory Biology III, in which students completed “lecture follow-along” (LFA) assignments as they watched the videos. In one quarter, students were asked to pause the video and answer the questions. In the second quarter, the video autopaused, and students had to positively affirm that they had answered the question before the video continued. Only the LFA responses, not the autopause questions, were graded. We are investigating differences in student responses to the LFAs between quarters. In our initial scoring of two LFA sets in the summer and autumn quarters, about 50 percent of students copied the instructor’s own answers for about a third of the LFAs, while fewer than 30 percent copied in an earlier set. Examining such responses will inform us about how students approach the questions and whether they initially think about and form their own answers. Our results will give insight into the effectiveness of autopause questions in changing student behavior so that students attempt the questions themselves before hearing the correct answer. If autopause questions can significantly change student behavior in this manner, then this method can be implemented in other online courses to maximize student learning.


Investigating Biases in Physiology Learning Progression Computer Models
Presenter
  • Jill Kazuko (Jill) Kumasaka, Senior, Biology (Physiology), UW Honors Program
Mentor
  • Jennifer Doherty, Biology
Session
  • 11:55 AM to 12:45 PM

Investigating Biases in Physiology Learning Progression Computer Models

Multiple-choice assessments do not adequately gauge students’ understanding of principle-based reasoning in physiology. Open-ended formative assessments provide a more accurate way to assess students’ reasoning; however, large classrooms and short-staffing can limit their implementation. Using machine learning to develop computer scoring models trained to score according to principle-based reasoning rubrics can make categorizing student responses efficient and feasible. However, potential biases in computer scoring models remain uninvestigated. The aim of this study is to investigate biases, specifically with respect to age and access to English in high school, by comparing demographic data with trends in differences between computer and human codes. We collected demographic data and responses to two physiology questions, which were then coded by experts and by the computer model. Using generalized linear models with fixed effects to control for potential discrepancies, we investigated relationships between demographic data and computer scoring accuracy. There were no significant differences in computer model accuracy for our demographics of interest. However, the probability of the computer being accurate depended on the reasoning level of the response as categorized by human coders: the computer had a higher probability of being accurate for lower-level responses. When examining how the computer scored when it was incorrect, we found that it was more likely to score higher-level responses lower than the human categorization, and vice versa for lower-level responses. While we did not identify any biases related to our demographics of interest, we observed a pattern in the computer’s accuracy depending on the reasoning level of the response. Our investigation provides essential feedback for computer model developers. Improving the computer model will more accurately measure students’ reasoning levels and guide physiology education.
Instructors from other disciplines will also benefit from this research, as it proposes a framework for better assessing student understanding.


Using students’ answers to open-ended questions to understand student learning
Presenter
  • Anushka Manish Ladha, Senior, Biology (Molecular, Cellular & Developmental), Microbiology
Mentor
  • Jennifer Doherty, Biology
Session
  • 11:55 AM to 12:45 PM

Using students’ answers to open-ended questions to understand student learning

Undergraduate students in large introductory biology classes often struggle to understand the physiology topics covered in their classes. The fundamental idea of flux is that the rate of passive movement of substances (e.g., air, sucrose, Na+) is proportional to the magnitude of the gradient divided by the resistance to movement. We aimed to examine whether flux helped students develop a more comprehensive understanding of physiology. We analyzed how students used flux reasoning skills, using the respiratory system as a model. We collected student responses to an item about flux during an asthma attack from an introductory physiology class at the University of Washington. Responses were collected both before and after instruction to examine changes in student learning over time. Responses were also collected from an advanced physiology class after instruction to investigate whether undergraduate students develop more sophisticated ideas about physiology over their college careers. We used a constant comparative approach to uncover patterns of reasoning in the responses and organized these patterns into a framework categorizing types of reasoning into a hierarchy of understanding. The framework ranges from Level 1 to Level 4, where each increasing level demonstrates a more sophisticated ability to reason with flux. The highest level of reasoning is understanding that increasing the magnitude of the pressure gradient between the lungs and the atmosphere can offset an increase in resistance to maintain the same rate of flow, while the lowest level avoids flux reasoning entirely and does not answer the question. We found that students’ reasoning levels increased after instruction, such that introductory students matched the performance of students in the advanced class.
This framework can be used by physiology instructors and can additionally serve as a reproducible method of pattern finding and qualitative data analysis for instructors in other disciplines.
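The flux relation this abstract builds on — flow proportional to gradient over resistance — can be sketched with a tiny worked example. The asthma scenario and Level-4 reasoning follow directly from the arithmetic; the proportionality constant and units here are illustrative, not from the study.

```python
def flow_rate(gradient, resistance):
    """General flux model: flow is proportional to the driving gradient
    divided by the resistance (proportionality constant taken as 1)."""
    return gradient / resistance

# Baseline breathing: pressure gradient of 1 against resistance 1
# (arbitrary units).
baseline = flow_rate(1.0, 1.0)      # -> 1.0

# Asthma attack: airway resistance doubles; at the same gradient,
# flow is halved.
constricted = flow_rate(1.0, 2.0)   # -> 0.5

# Level-4 reasoning: doubling the pressure gradient offsets the
# doubled resistance and restores the original rate of flow.
compensated = flow_rate(2.0, 2.0)   # -> 1.0
```

The same relation covers the other substances the abstract mentions (sucrose, Na+), with the gradient and resistance reinterpreted for each system.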


Investigating Patterns in Student Flux Reasoning
Presenter
  • Aida Moghadasi, Senior, Extended Pre-Major
Mentor
  • Jennifer Doherty, Biology
Session
  • 11:55 AM to 12:45 PM

Investigating Patterns in Student Flux Reasoning

The flexibility of general models, such as flux, provides physiology instructors with a great tool to enhance students’ ability to apply physiology concepts to many different scenarios. The general model of flux is that the magnitude of the gradient divided by the magnitude of the resistance is proportional to the rate of flow. Flux can be used to understand the bulk flow of a wide variety of fluids (e.g., air, blood, sap) as well as diffusion and osmosis across physiological systems. In order to design effective curricula to support students’ mastery of flux, it is necessary to understand the types of reasoning about flux that students bring to class. Our study focuses on students’ use of flux in the specific case of air movement from the atmosphere to the lungs while taking a breath. We collected more than 1,000 student responses from five institutions to a question about how people take a bigger breath. We used a constant comparative analytical approach to uncover recurring patterns in the way students approached this problem and the extent to which they used flux in their reasoning. We summarized these patterns into a framework describing four levels of student reasoning: Level 4, the highest level, explains how the air pressure gradient is manipulated to create air flow into the lungs by increasing lung volume and therefore decreasing pressure; Level 3 explains the inverse relationship between volume and pressure OR the concept of a pressure gradient between the lungs and the atmosphere; Level 2 explains that lung volume increases to cause more air flow; Level 1 describes air moving into the lungs as causing a big breath or describes the anatomical steps in taking a big breath. Our framework can be used by physiology instructors to improve their teaching. Moving forward, we will validate our levels of reasoning through oral interviews on the same question.
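The Level-4 chain described above — increasing lung volume lowers lung pressure, which creates the pressure gradient that drives air inward — can be sketched numerically. The inverse volume-pressure step uses Boyle’s law (P·V constant at fixed temperature); the specific volumes and pressures below are illustrative round numbers, not data from the study.

```python
ATMOSPHERIC_PRESSURE = 760.0  # mmHg

def lung_pressure(initial_pressure, initial_volume, new_volume):
    """Boyle's law: P1*V1 = P2*V2, so expanding the lungs lowers
    the pressure inside them."""
    return initial_pressure * initial_volume / new_volume

def inflow_gradient(new_volume, initial_volume=2.4,
                    initial_pressure=ATMOSPHERIC_PRESSURE):
    """Pressure gradient (atmosphere minus lungs) after the chest expands."""
    return ATMOSPHERIC_PRESSURE - lung_pressure(
        initial_pressure, initial_volume, new_volume)

# Expanding lung volume from 2.4 L to 2.5 L drops lung pressure below
# atmospheric, producing a positive gradient that moves air into the lungs.
gradient = inflow_gradient(2.5)  # 760 - 760*2.4/2.5 = 30.4 mmHg
```

A Level-3 response, per the framework, would articulate only one link in this chain (volume-pressure inversion, or the gradient) rather than the full sequence.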


The University of Washington is committed to providing access and accommodation in its services, programs, and activities. To make a request connected to a disability or health condition contact the Office of Undergraduate Research at undergradresearch@uw.edu or the Disability Services Office at least ten days in advance.