Session T-6C
Information Science
2:15 PM to 3:05 PM | Moderated by Charles Kiene
- Presenter
  - Nikki R. Demmel, Senior, Informatics, Psychology
- Mentors
  - Katie Davis, The Information School
  - Caroline Pitt, The Information School
Digital badges are part of a larger informal credentialing system with many uses for tracking and rewarding extracurricular achievement. Scout badges, video game trophies, and badges on educational websites like Khan Academy are all examples of microcredentials that represent one’s experiences and skills. This research is part of a larger six-year study that has explored the role of digital badges in encouraging youth to connect their out-of-school science learning to other aspects of their lives. My work focuses specifically on uncovering the sources of support students receive in their daily lives to pursue science as a career or field of study, and on how digital badges fit into their existing support systems. This study aimed to answer three research questions: 1. What factors influence students’ science identities? 2. What are the primary supports students receive to develop their science identities? 3. How do students perceive the role of badges in supporting their science identities? Using interviews, case studies, and surveys, I assessed students’ relationships with science and the support they received from various sources to continue developing their interest in science. I used qualitative coding methods to identify common themes in each participant’s experience with the badge system. This study’s preliminary results indicate that students did not find the badge system useful in developing their science identities. Data from the participant interviews suggest that this is partly because the badges were not integrated into participants’ existing support systems, such as their friend groups and families. The findings of this study support a central idea of sociocultural learning theory—that students’ interest in topics is sustained through their interactions with others. Insights from this study could inform the approach of programs that encourage students to participate in STEM.
- Presenters
  - Johnny He, Sophomore, Pre-Sciences
  - Harper Zhu, Senior, International Studies, Biochemistry
- Mentors
  - William Kearns, Biomedical Informatics and Medical Education
  - Weichao Yuwen, Nursing and Healthcare Leadership Programs, University of Washington Tacoma
  - Hidy Kong, Computer Science & Engineering
There are 50 million family caregivers caring for their loved ones in the United States. Caregivers experience significant stress and burnout, and need on-demand support with minimal resource investment. With the burgeoning development of artificial intelligence, conversational agents (chatbots) have emerged as a solution for symptom self-management and self-care. Our research team developed Caring for Caregivers Online (COCO), an AI-enhanced platform providing on-demand, empathetic, and tailored caregiving support. One of the key challenges in designing a chatbot is ensuring effective communication. To address this problem, our research team aims to determine best practices for conveying health symptoms and solutions in a conversational agent, improving users' understanding of their health data and increasing their trust in the technology. User testing and surveys are our main tools for addressing this question. After the initial session with the chatbot, users were asked to fill out a post-conversation survey. Based on users' ratings of their symptom intensity and solution effectiveness in the post-conversation survey, the chatbot generates personalized health recommendations. Next, we randomly assigned users to one of two surveys that present the same recommendations in different ways—one with text, the other with a visualization. Users then rated how well they understood the presented recommendations, rated their trust toward the chatbot, and provided additional feedback. The ratings will be analyzed by comparing average scores between the two groups, and qualitative analysis will be employed to evaluate users' feedback. We expect the group presented with visualizations to give higher ratings for both trust and data comprehension. Effective data visualizations inform caregivers of their health progress, which motivates them to continue monitoring their health conditions and to practice the proposed solutions. We further anticipate that visualization will increase caregivers' level of trust in chatbots and help improve their chronic health conditions.
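As an illustration, the planned average-score comparison between the two survey groups could be sketched as follows (all ratings below are hypothetical placeholders, not study data):

```python
# Minimal sketch of a mean-score comparison between the text-only
# group and the visualization group. Ratings are hypothetical
# placeholders on a 1-5 scale, not data from the study.
from statistics import mean

text_group = [3, 4, 2, 3, 4]           # hypothetical trust ratings
visualization_group = [4, 5, 4, 3, 5]  # hypothetical trust ratings

difference = mean(visualization_group) - mean(text_group)
print(f"Mean difference (visualization - text): {difference:.2f}")
```

A positive difference would be consistent with the expectation that visualizations improve trust and comprehension; the study's qualitative feedback analysis would complement this simple comparison.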
- Presenter
  - Isabelle Schlegel, Senior, Anthropology
- Mentor
  - Rachel Moran, The Information School, Center for an Informed Public
The recent measures taken by social media platforms to limit the spread of misinformation by flagging or removing posts and accounts were met with outrage and a doubling-down of posting by conspiracy theorists. The QAnon conspiracy theory, which posits that child trafficking and sexual abuse are occurring at the hands of political and Hollywood elites, has grown into a movement named #SaveTheChildren, which aims to spread awareness, rescue children from trafficking, and obtain justice against abusive elites. This research focuses on the perceived censorship among members of the #SaveTheChildren movement and identifies which forces are deemed responsible for controlling the narrative and/or excluding their narrative. We compiled a data set of social media posts linked to QAnon and #SaveTheChildren from Instagram and Twitter, using qualitative coding methods to find links between popular narratives shared by users and the factors they perceive to be under threat of censorship. We conducted interviews with users who shared content linked to #SaveTheChildren and later experienced flagging, removal, or bans of their content or profiles. Preliminary findings highlight a deep-rooted lack of trust in "mainstream news media,” which drives users to search for alternative knowledge providers. Overwhelmingly, this takes the form of community-constructed knowledge building conducted via social media. This is concerning, as it leads to the sharing of information that is often unverified, emotionally charged, and increasingly conspiratorial in nature. Emergent narratives from our thematic analysis highlight the dominance of visceral images, unverified statistics, and conspiratorial claims. Further, these claims seem to gain traction in communities that did not previously engage in conspiracy theorizing but bought into the movement because of its moral claims. Further analysis is underway that focuses on how narratives of conspiracy interact with broader claims of distrust in news media and perceived censorship by technology platforms.
- Presenter
  - Stephanie Lanxiang Zhang, Junior, Pre-Major (Arts & Sciences)
- Mentors
  - Prerna Juneja, The Information School
  - Tanu Mitra, The Information School
  - Md Momen Bhuiyan, Computer Science & Engineering, Virginia Tech
Many search engines and social media platforms employ personalization algorithms that present users with content based on their previous activity on the platform. While personalization can enhance users’ experience, critics worry that it can also reinforce human biases by constantly feeding users only one side of an issue. Recently, YouTube, the most popular video-sharing platform, was accused of harboring videos promoting misinformation surrounding the 2020 presidential election. What kinds of videos are users exposed to when they search for election misinformation? What is the effect of personalization due to watch history, where the history is built progressively by watching videos that either promote or debunk election fraud? Does YouTube’s up-next algorithm drive users into a rabbit hole of election fraud misinformation? Does users’ partisan bias affect the election misinformation present in search results and recommendations? To answer these questions, we conducted a comprehensive audit study on YouTube by recruiting a diverse group of survey participants. Every participant installed a browser extension that enabled us to collect their personalized search results in response to search queries related to election fraud, along with personalized up-next trails (YouTube’s 10 consecutive up-next videos starting from a seed video that either promoted or debunked election fraud misinformation). The extension also collected unpersonalized search results and up-next trails via an incognito window. By comparing the results from standard and incognito windows, we determined the role of YouTube’s personalization algorithms in exposing users to election misinformation. Overall, our study adds to the growing body of work that examines the role of algorithms in surfacing misinformation.
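As one illustrative way to quantify the gap between personalized and incognito result lists, a set-overlap measure such as Jaccard similarity could be sketched as follows (the metric choice and the video IDs below are hypothetical illustrations, not the study's published method or data):

```python
# Illustrative sketch: measuring overlap between personalized and
# unpersonalized (incognito) search result lists. Video IDs are
# hypothetical placeholders.

def jaccard(a, b):
    """Jaccard similarity between two collections of video IDs."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

personalized = ["v1", "v2", "v3", "v4", "v5"]
incognito = ["v1", "v2", "v6", "v7", "v8"]

# Lower overlap suggests stronger personalization of results.
print(f"Jaccard overlap: {jaccard(personalized, incognito):.2f}")
```

In this toy example only two of eight distinct videos are shared, giving an overlap of 0.25.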
- Presenter
  - Martin Zhang, Senior, Informatics: Data Science
- Mentor
  - Jevin West, The Information School
During the 2020 US election, a torrent of misinformation and disinformation attacked election integrity and challenged Google's ability to deliver reliable information. The goal of this study is to examine Google's role in delivering information about the 2020 US election. We investigate when and where election-related misinformation and disinformation appear on the Google homepage. We collected data as it appeared on the Google Search homepage for a set of pre-defined search queries — some neutral like "how do I vote" and some fraud-related like "ballot harvesting" — across US cities between August 2020 and January 2021. Examining variations across search terms, locations, and time, we found that: (1) the top 3 national stories that appear on the Google Search homepage are often more credible (measured using Ad Fontes Media 6.0 data) than the national stories that appear at lower ranks; (2) Google did not surface any local stories on election-related topics, unlike COVID-related search topics; (3) short-lived advertisements can sometimes serve as an entry point into misleading content. Our findings suggest that although Google does a reasonable job of highlighting more credible election-related search results, there are opportunities to further refine how search engines deliver content that is prone to misinformation.
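As an illustration, the rank-based credibility comparison in finding (1) could be sketched as follows (the scores and ranks below are hypothetical placeholders, not Ad Fontes Media data):

```python
# Minimal sketch of comparing mean credibility of top-3 national
# stories against lower-ranked ones for one query snapshot.
# All (rank, score) pairs are hypothetical placeholders.
from statistics import mean

stories = [(1, 42.0), (2, 40.5), (3, 39.0),
           (4, 31.0), (5, 28.5), (6, 30.0)]

top3 = [score for rank, score in stories if rank <= 3]
lower = [score for rank, score in stories if rank > 3]

print(f"top-3 mean: {mean(top3):.1f}, lower-rank mean: {mean(lower):.1f}")
```

In this toy snapshot the top-3 mean (40.5) exceeds the lower-rank mean (about 29.8), the pattern the finding describes; the actual study aggregates such comparisons across queries, cities, and dates.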
The University of Washington is committed to providing access and accommodation in its services, programs, and activities. To make a request connected to a disability or health condition contact the Office of Undergraduate Research at undergradresearch@uw.edu or the Disability Services Office at least ten days in advance.