Office of Undergraduate Research: 2023 Undergraduate Research Symposium Schedules

Found 18 projects

Poster Presentation 1

11:00 AM to 12:30 PM
Detecting Fluorescent Readout from Molecular Reactions Using a Smartphone
Presenters
  • Zoe Evelyn Mohalakealoha Derauf, Senior, Biology (Molecular, Cellular & Developmental)
  • Derek Zhu, Junior, Pre-Major
Mentors
  • Chris Thachuk, Computer Science & Engineering
  • Jason Hoffman, Computer Science & Engineering
Session
    Poster Session 1
  • MGH 241
  • Easel #75
  • 11:00 AM to 12:30 PM


As diseases like COVID-19 become endemic, it becomes increasingly apparent that access to low-cost, user-conducted tests with high sensitivity and rapid results is necessary to help reduce the spread of disease and mitigate the burden on healthcare and laboratory infrastructure. While paper-based colorimetric tests attempt to fill this gap, they have reduced sensitivity compared to “gold-standard” tests such as RT-qPCR, which typically report results with fluorescent reporters. The goal of this project is to detect fluorophore activity using a smartphone camera and flash, with zero (or as few as possible) modifications. As many people have smartphones and the ability to take a picture, but fewer possess lab skills or access to a lab, we aim to develop a smartphone-based system with the highest sensitivity and lowest barrier to entry that is capable of detecting a fluorescent output. We are experimenting with both biological and technological levers, including combinations of time delay using FRET and long-lasting fluorophores. On the software side, we are investigating whether we can leverage a smartphone’s built-in Bayer filter to better delineate between emission wavelengths of fluorophores, and whether timed flashing and recording methods can detect the biological time delay. So far, we have collected preliminary data on colorimetric readout reactions and shown that the difference between reactant and negative control is apparent at fairly low concentrations (250 µM). We expect that with a simple filter setup, we will be able to excite and detect the fluorescent output from a fluorophore. Further research will aim to simplify the setup, reducing the number of external modifications required for use. When coupled with a diagnostic test, these ideas could potentially bring any test that can be coupled to a fluorescent readout from the lab to the user, increasing accessibility and lowering the costs of such tests.
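The image-analysis idea above can be sketched in a few lines: compare a simple per-channel pixel statistic between a reaction image and a negative control. This is a hypothetical illustration only; the function names, the green-channel statistic, and the threshold are assumptions, not the project's actual pipeline.

```python
# Hypothetical sketch: compare the green-channel share of smartphone pixels
# between a reaction image and a negative control. The statistic and the
# threshold are illustrative assumptions, not the project's real method.

def green_fraction(pixels):
    """Mean fraction of each pixel's intensity in the green channel."""
    total = 0.0
    for r, g, b in pixels:
        s = r + g + b
        total += g / s if s else 0.0
    return total / len(pixels)

def looks_fluorescent(sample, control, threshold=0.05):
    """Flag a positive readout when the sample is greener than the control."""
    return green_fraction(sample) - green_fraction(control) > threshold

# Toy data: sample pixels skew green, control pixels are roughly gray.
sample = [(30, 180, 40)] * 10
control = [(90, 95, 90)] * 10
```

A real pipeline would also correct for exposure and white balance, but the per-channel comparison is the core of the idea.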


Oral Presentation 1

11:30 AM to 1:00 PM
Software-level Enforcement of Privacy Policies
Presenter
  • Theo Gregersen, Sophomore, Computer Science UW Honors Program
Mentor
  • Franziska Roesner, Computer Science & Engineering
Session
    Session O-1J: Technology and Society: Privacy, Misinformation, Consent, and Transparency
  • MGH 288
  • 11:30 AM to 1:00 PM


Software services often depend on storing or processing users' personal data. To promote responsible handling of this information, modern privacy legislation such as the General Data Protection Regulation and the California Consumer Privacy Act imposes strict regulation around demonstrable privacy enforcement for personal data. In addition to privacy legislation, increased social emphasis on accountability for privacy policies and individual user preferences has added requirements to many systems. This landscape creates interest in technical mechanisms for privacy compliance. Traditional privacy methods such as encryption or anonymization are important, but not sufficient, to address the more nuanced aspects of privacy regulations and policies, such as purpose-based data use requirements, fine-grained personal data control, or obligations. An influx of research in both industry and academia seeks to confront this challenge of policy-based privacy enforcement. However, the interdisciplinary nature of privacy, the wide variety of approaches, and the common gap between theory and software development make it difficult to navigate the space. To help, this research project presents a systematization of policy-based privacy enforcement with a focus on practical software mechanisms, implementations, and frequently adopted privacy-by-policy design patterns. It considers deriving software requirements from natural-language requirements, expressing privacy conditions in privacy languages, managing data access with privacy conditions, restricting data flow for privacy, and leveraging logs and audits. Within these domains, the project explores common approaches, mechanisms, and methodologies, and further describes key insights, gaps, and future directions for policy-based privacy enforcement.
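As a concrete illustration of one mechanism in this space, purpose-based access control ties each data field to the purposes for which it may be used. The sketch below is minimal and hypothetical; the field names, purposes, and `access` helper are invented for illustration and are not drawn from the project.

```python
# Illustrative purpose-based access control: a field may only be read
# for purposes its policy explicitly permits. All names are invented.

ALLOWED = {
    "email": {"account_recovery", "security_alerts"},
    "location": {"fraud_detection"},
}

def access(field, purpose):
    """Grant access only when the stated purpose is permitted for the field."""
    return purpose in ALLOWED.get(field, set())
```

Real systems layer this kind of check behind query rewriting or data-flow enforcement, but the purpose lookup is the essential primitive.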


ASL Consent in the Digital Informed Consent Process
Presenter
  • Ben S. Kosa, Junior, Computer Science
Mentor
  • Richard Ladner, Computer Science & Engineering
Session
    Session O-1J: Technology and Society: Privacy, Misinformation, Consent, and Transparency
  • MGH 288
  • 11:30 AM to 1:00 PM


There are an estimated 500,000 people in the U.S. who are deaf and use American Sign Language (ASL). Compared to the general population, deaf people are at greater risk of having chronic health problems and experience significant health disparities and inequities (Sanfacon, Leffers, Miller, Stabbe, DeWindt, Wagner, & Kushalnagar, 2020; Kushalnagar, Reesman, Holcomb, & Ryan, 2019; Kushalnagar & Miller, 2019). The longstanding history of inequitable access to language and education, and a lack of printed information and materials, leave people who are deaf and who use ASL unaware of opportunities to participate in cutting-edge research and clinical trials (Kushalnagar & Miller, 2019; Lesch, Brucher, Chapple, R., & Chapple, K., 2019; Smith & Chin, 2012). An unintended consequence is that Principal Investigators (PIs) neglect to include ASL signers who are deaf in their subject sample pools, and this marginalized population continues to face disparities in both health outcomes and clinical research participation. One barrier is the unavailability of informed consent materials that are accessible in ASL. The current research study conducted by our team at the Center for Deaf Health Equity at Gallaudet University attempts to address the language barrier in the consent process through a careful reconsideration of its traditional English format and the development of an ASL informed consent app. As part of the project, I leveraged existing machine learning methods to develop a way to navigate and sign an informed consent process using ASL. I call this new method of navigation and signature “ASL Interactability.” I found that deaf people who are primarily college educated felt that the process for obtaining ASL consent through an accessible app is just as fluid and easy to understand as traditional English consent.
These findings show the potential of ASL Interactability not only in the informed consent process, but in any other digital application that requires the user to interact (e.g., moving between pages or providing a signature).


Examining Safety Systems and Community Response to Harassment in Social Virtual Reality
Presenter
  • Simona Liao, Graduate, Computer Science & Engineering (BS/MS Program)
Mentor
  • Amy Zhang, Computer Science & Engineering
Session
    Session O-1M: Computing & Machine Learning
  • MGH 238
  • 11:30 AM to 1:00 PM


Although social Virtual Reality (VR) has attracted increasing attention as a new way for people to interact, it faces challenges with harassment, a problem other social platforms, online gaming communities in particular, face as well. The embodied environment social VR provides also brings new forms of harassment compared to social media, requiring effective responses from social VR platforms. We examined the safety features of four popular social VR games, VRChat, Horizon World, Altspace, and RecRoom, to learn their standard safety practices. To understand how social VR communities share and respond to harassment experiences, we collected 134 posts and comments from online communities for these games on Reddit, Twitter, and the Oculus Forum. We used inductive coding to identify themes and trends. We found that the four social VR games have common safety features such as Personal Bubble, Block, and Report, but these features differ in name, effect, and ease of access. This can pose an increased learning curve for players and make them less aware of these functionalities. From the online posts, we found that the most common harassment experiences include hate, unwanted sexual attention, and embodied sexual harassment. The most common response to harassment experiences is suggesting strategies or resources. However, these responses include a mix of positive (e.g., empathetic, supportive), neutral, and negative (e.g., gaslighting) tones. We also found a difference between the most commonly adopted safety feature and the most recommended feature: the former is Personal Bubble and the latter is Block. Based on these findings, we provide design implications to improve safety features and build easier-to-access and better-informed safety systems for social VR games.
This research contributes to developing a more inclusive environment for players from diverse backgrounds and identities by identifying opportunities to provide better safety features and improve safety norms in virtual worlds.


Predictive Modeling for Nanopore Protein Sequencing
Presenter
  • Sammy Yang, Junior, Computer Science
Mentor
  • Jeff Nivala, Computer Science & Engineering, Molecular Engineering and Science
Session
    Session O-1M: Computing & Machine Learning
  • MGH 238
  • 11:30 AM to 1:00 PM


Our research group is exploring the feasibility of using nanopore sensors for protein sequencing; their compact size and ability to facilitate extremely long, uninterrupted reads of protein strands upstage the current procedure of using complex, expensive mass spectrometry (MS) devices. My project predicts the sensor’s raw signal using a carefully tested combination of each amino acid’s volume and charge properties. Using my model to generate predictions for a specific database of proteins, I can compare an unknown raw signal to each of the predicted signals to single out the best-matching (correct) sequence. While the protein space of de novo sequencing is vast (20 raised to the power of the protein sequence length), this method effectively shrinks the protein space to a group of substantive, feasible sequences. Employing the current predictive model on a database of synthetic and natural proteins, and comparing against an unknown protein’s raw signal, I found that, on average, the correct prediction consistently ranked within the 99th percentile of matches among a predicted test set of >20,000 sequences. Advancing single-protein sequencing can revolutionize protein research by enabling the identification of low-abundance proteins. Additionally, the increased sensitivity of the nanopore sensor could shed light on the so-called "human dark proteome," composed of approximately 3,000 human proteins that have not yet been identified despite genetic evidence of their existence.
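The matching idea can be illustrated with a toy sketch: predict a per-residue signal as a weighted combination of amino-acid volume and charge, then rank candidate sequences by distance to an observed signal. The property values, weights, and function names below are illustrative assumptions, not the fitted model from the project.

```python
# Toy illustration of database matching for nanopore protein sequencing.
# Volumes (cubic angstroms) and unit charges are rough textbook-style
# values for four amino acids; the weights are arbitrary.

VOLUME = {"A": 88.6, "K": 168.6, "D": 111.1, "G": 60.1}
CHARGE = {"A": 0.0, "K": 1.0, "D": -1.0, "G": 0.0}

def predict_signal(seq, w_vol=0.01, w_chg=0.5):
    """Predicted per-residue signal as a weighted mix of volume and charge."""
    return [w_vol * VOLUME[a] + w_chg * CHARGE[a] for a in seq]

def distance(sig_a, sig_b):
    """Squared-error distance between two signals of equal length."""
    return sum((x - y) ** 2 for x, y in zip(sig_a, sig_b))

def best_match(observed, database):
    """Return the database sequence whose predicted signal is closest."""
    return min(database, key=lambda s: distance(observed, predict_signal(s)))

db = ["AKDG", "GGGG", "KKKK"]
observed = predict_signal("AKDG")  # stand-in for a measured raw signal
```

Ranking every database entry by this distance is what shrinks the 20^L search space to a short list of feasible candidates.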


Poster Presentation 2

12:45 PM to 2:00 PM
Designing a Toolkit to Bridge Different Communication Channels in Remote Team Collaboration
Presenters
  • Pranati Dani, Junior, Computer Science
  • Shreya Sathyanarayanan, Junior, Computer Science
  • Lin Qiu, Senior, Computer Science
Mentors
  • Amy Zhang, Computer Science & Engineering
  • Ruotong Wang, Computer Science & Engineering
  • Justin Cranshaw, Computer Science & Engineering
Session
    Poster Session 2
  • Balcony
  • Easel #56
  • 12:45 PM to 2:00 PM


Remote collaboration today rarely involves a single communication channel. Instead, teams frequently juggle a myriad of communication tools, such as video conferencing, group chat, and email. Each of these platforms provides different mechanisms for relaying information and media to ultimately meet the needs and goals of the team. While discussions occurring on different platforms are often related, the existing tools used to support each type of communication are disconnected. To research how to bridge this gap and support seamless collaboration and communication across different platforms, we developed a toolkit that connects conversations between three of the most commonly used remote collaboration platforms, Slack, Google Docs, and Zoom, covering both synchronous and asynchronous modes of communication. We iteratively designed and implemented features such as adding information from Slack chat directly to Google Docs notes to build up meeting agendas, and selecting specific snippets of Zoom meetings to be embedded into notes or sent to chat. We also plan to evaluate the effectiveness of our toolkit in helping streamline the transfer of information across different team communication sites and enhancing the remote collaboration experience for teams via subsequent qualitative user studies. Specifically, we will be conducting a week-long field study with existing teams, such as teams from industry, teams working on school projects, research groups, and committees. We will use a combination of experience sampling, diary studies, and post-study interviews to understand their experience. The results from these exploratory user studies will help us answer the following questions: Which aspects of the tools work best for the users? Do the current UI and design make sense for how the user interacts with the toolkit? In which scenarios is the toolkit being used most effectively?
These results will also guide us in designing additional features for the toolkit in the future.
 


Designing a Justice-based Intermediate Computing Curriculum
Presenters
  • Sonia Fereidooni, Graduate, Computer Science & Engineering (BS/MS Program) Mary Gates Scholar
  • Iris Zhou, Senior, Mathematics NASA Space Grant Scholar
  • Anna Batra, Graduate, Computational Linguistics
  • Chongjiu Gao, Senior, Computer Science
  • Suh Young Choi, Senior, Statistics, Classics UW Honors Program, Mary Gates Scholar
  • Audrey (Drey) Kim, Senior, Sociology
Mentor
  • Kevin Lin, Computer Science & Engineering
Session
    Poster Session 2
  • Balcony
  • Easel #55
  • 12:45 PM to 2:00 PM


Justice-centered approaches to equitable computer science (CS) education frame CS learning as a means for advancing peace, antiracism, and social justice rather than war, empire, and corporations. However, most research on justice-centered approaches in CS education focuses on K-12 learning environments. In this position paper, we review justice-centered approaches to CS education, problematize the lack of justice-centered approaches to CS in higher education in particular, and describe a justice-centered approach for undergraduate Data Structures and Algorithms. Our approach emphasizes three components: (1) ethics: critiques the sociopolitical values of data structure and algorithm design as well as the underlying logics of dominant computing culture; (2) identity: draws on culturally responsive-sustaining pedagogies to emphasize student identity as rooted in resistance to the dominant computing culture; and (3) political vision: ensures the rightful presence of political struggles by reauthoring rights to frame CS learning as a force for social justice. Through a case study of this Critical Comparative Data Structures and Algorithms pedagogy, we argue that justice-centered approaches to higher CS education can help all computing students not only learn about the ethical implications of nominally technical concepts, but also develop greater respect for diverse epistemologies, cultures, and experiences surrounding computing that are essential to creating the socially just worlds we need.


GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation in Wearable Augmented Reality
Presenters
  • Jun Wang, Junior, Computer Science
  • Liam Gene Ping Chu, Junior, Applied & Computational Mathematical Sciences (Scientific Computing & Numerical Algorithms)
Mentors
  • Jon Froehlich, Computer Science & Engineering
  • Jaewook Lee, Computer Science & Engineering
Session
    Poster Session 2
  • Balcony
  • Easel #54
  • 12:45 PM to 2:00 PM


Voice assistants (VAs) are transforming how humans interact with technology. While promising, state-of-the-art VAs like Siri and Alexa do not incorporate a user’s spatiotemporal context, such as their surrounding objects or gestures, which results in degraded performance and unnatural dialogue. Since pronoun usage is inherent to everyday speech, we expect future VAs to support ambiguous speech queries. We introduce GazePointAR, a wearable augmented reality (AR) system that resolves ambiguity in speech queries using eye gaze, pointing gestures, conversation history, real-time computer vision, and a large language model (OpenAI’s text-davinci-003). With GazePointAR, a user can ask “what’s over there?” or “how do I solve this math problem?” simply by looking and/or pointing. Upon voice activation, GazePointAR listens for the query, takes a screenshot, narrows the focus by incorporating information from eye gaze, replaces the pronoun in the query with the detected objects and text, and uses a language model to answer the modified query. To assist in this project, Liam and I reviewed relevant literature, brainstormed technical solutions for multimodal integration, constructed user study scenarios, and conducted reflexive thematic coding on qualitative data. To evaluate GazePointAR, we conducted a three-part lab study that compared GazePointAR to two other state-of-the-art query systems (Google Voice Assistant and Google Lens), examined GazePointAR’s pronoun disambiguation on three tasks, and concluded with an open-ended component where users could suggest and try their own queries. Participants appreciated the improved simplicity and human-likeness of context-aware queries; however, they preferred faster response times and better explanations for query results. By combining visual and voice inputs to answer a broader range of questions, GazePointAR provides a foundation for future work on VAs, such as designing a more anthropomorphic VA.
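The pronoun-replacement step could be sketched as follows, assuming a fixed set of ambiguous pronouns and a single detected object label. This is a simplified, hypothetical stand-in for GazePointAR's actual multimodal pipeline; the pronoun list and helper name are invented.

```python
# Hypothetical sketch of pronoun disambiguation: swap the first ambiguous
# pronoun in a spoken query for the label of the gazed-at object.

PRONOUNS = {"this", "that", "it", "there"}

def disambiguate(query, gazed_object):
    """Replace the first ambiguous pronoun with the detected object label."""
    out, replaced = [], False
    for w in query.split():
        core = w.strip("?.,!")
        if not replaced and core.lower() in PRONOUNS:
            out.append(w.replace(core, gazed_object))  # keep punctuation
            replaced = True
        else:
            out.append(w)
    return " ".join(out)
```

The real system must also choose among multiple detected objects and weigh gaze against pointing, which is where the multimodal fusion happens.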


Oral Presentation 2

1:30 PM to 3:00 PM
Determining the Quality of Images for Smartphone Detection of Anemia using Machine Learning
Presenter
  • Hannah Lee, Senior, Applied Mathematics, Computer Science UW Honors Program
Mentors
  • Shwetak Patel, Computer Science & Engineering
  • Jason Hoffman, Computer Science & Engineering
Session
    Session O-2A: Computing for People: Devices and Algorithms
  • MGH 271
  • 1:30 PM to 3:00 PM


Smartphone detection of anemia from patient photos has the potential to provide a non-invasive method of measuring hemoglobin levels, introducing the possibility of increasing the accessibility and cost-effectiveness of current practices. While traditional methods of anemia detection require a complete blood count by a trained healthcare professional, smartphone detection instead relies on the user to take a high-quality picture of their fingernails. However, it currently lacks the ability to provide feedback to the user on the quality of their image. For example, an overexposed image or one with low fingernail visibility can lead to inaccurate predictions of hemoglobin levels. We propose that machine learning classification methods can analyze these patient images to estimate image quality and predict the effectiveness of smartphone detection of anemia for a given image. With various classical machine learning models, we demonstrate and compare the capabilities of each in classifying images of patients’ hands as being of “good” or “bad” quality (or on a more granular numerical scale) when given features of the images. Preliminary results show that a logistic regression model reaches 91.4% accuracy labeling images when compared to empirically assigned labels, and we expect iterative models to achieve improved performance. When completed, this classifier could be used in the field to identify in real time whether a patient's image is of high enough quality to produce an accurate measurement of hemoglobin levels, providing feedback on the phone to adjust or correct the image-taking process.
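To make the classification setup concrete, here is a minimal from-scratch logistic regression trained on toy (exposure, fingernail-visibility) features. The features, labels, and hyperparameters are invented for illustration; they have no relation to the study's real patient data or its 91.4% result.

```python
# Tiny logistic-regression sketch for "good"/"bad" image-quality labels.
# Toy data and hyperparameters are illustrative assumptions only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=200):
    """Plain stochastic gradient descent on the logistic loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    p = sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
    return "good" if p > 0.5 else "bad"

# Toy features: (exposure score, fingernail visibility); label 1 = good.
X = [(0.9, 0.9), (0.8, 0.7), (0.2, 0.3), (0.1, 0.2)]
y = [1, 1, 0, 0]
w, b = train(X, y)
```

A production model would be trained on extracted image features rather than hand-picked scores, but the decision rule is the same.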


Confidence Contours: Uncertainty-aware Annotation for Medical Semantic Segmentation
Presenter
  • Andre Ye, Sophomore, Center for Study of Capable Youth
Mentor
  • Amy Zhang, Computer Science & Engineering
Session
    Session O-2A: Computing for People: Devices and Algorithms
  • MGH 271
  • 1:30 PM to 3:00 PM


Medical image segmentation modeling is a high-stakes task where direct communication and interpretation of uncertainty is crucial for addressing visual ambiguity. Prior work has developed segmentation models utilizing probabilistic or generative mechanisms to infer uncertainty from labels where annotators draw a singular boundary. However, as these annotations cannot directly represent an individual annotator's uncertainty, even specialized models trained on these standard representations produce uncertainty maps that are difficult to interpret. We propose a novel segmentation representation, Confidence Contours, which uses high- and low-confidence "contours" to capture uncertainty directly, and develop a novel annotation system for collecting contours. We collect both standard and Confidence Contours annotations on the Lung Image Dataset Consortium (LIDC) dataset and on FoggyBlob, a synthetic dataset simulating the structural ambiguity of many medical segmentation problems. Our analysis shows that Confidence Contours provide high representative capacity without requiring significantly higher annotator effort. Moreover, general segmentation models trained on Confidence Contours can produce significantly more interpretable uncertainty maps than models with specialized mechanisms for uncertainty, and they can learn Confidence Contours at the same performance level as singular annotations. We conclude with a discussion of how we can infer regions of high and low confidence from existing segmentation datasets. Our data-centric approach crucially brings attention to the importance of human factors in responsible and robust AI, which have often been overlooked in model-centric medical segmentation work. By troubling and rethinking the very way that the ground truth is represented, our work opens up new paths of inquiry towards more human-friendly models; paths which begin from the data.
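One simple way to picture Confidence Contours is as a pair of nested binary masks, where the region inside the low-confidence mask but outside the high-confidence mask is the annotator's uncertainty band. The sketch below, with invented mask values and helper names, illustrates this representation; it is not the paper's implementation.

```python
# Illustrative nesting of Confidence Contours as two binary masks:
# high-confidence pixels are certainly object; the band between the
# low- and high-confidence masks is explicitly uncertain.

def uncertainty_band(low_conf, high_conf):
    """Pixels in the low-confidence mask but not the high-confidence one."""
    return [[bool(lo) and not hi for lo, hi in zip(lrow, hrow)]
            for lrow, hrow in zip(low_conf, high_conf)]

high = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
low  = [[1, 1, 1],
        [0, 1, 1],
        [0, 1, 0]]
band = uncertainty_band(low, high)
```

Unlike a singular boundary, this representation lets a model read uncertainty straight off the labels instead of inferring it.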


Asymmetric Traveling Salesman Problem (ATSP) and the Generalization of Sampling Technique on Arborescences
Presenter
  • Jinghua Sun, Senior, International Studies, Computer Science
Mentor
  • Shayan Oveis Gharan, Computer Science & Engineering
Session
    Session O-2A: Computing for People: Devices and Algorithms
  • MGH 271
  • 1:30 PM to 3:00 PM


The algorithmic design of the traveling salesman problem (TSP) is one of the most famous graph-based problems. With recent developments, one approximation algorithm for the asymmetric case of this classical problem became a milestone in the field due to its novel application of modern continuous optimization techniques to discrete mathematical objects. The purpose of our project is to find the probability distribution that maximizes randomness (the max entropy distribution) over a rooted and directed version of spanning trees (arborescences). Previous work shows that the max entropy distribution over undirected spanning trees is essentially the uniform distribution, which makes spanning tree sampling extremely fast. Our goal is to determine whether the max entropy distribution over arborescences assumes similar convergence behaviors. We hypothesized that, through a convex programming formulation, the max entropy distribution over arborescences would also turn out to be the uniform distribution, due to structural similarities between the two objects. However, our final result asserts that the uniform distribution over the arborescences of a graph does not maximize randomness. Based on this finding, we further compared other behaviors of arborescences against spanning trees, and through the discovery of graphic examples, we found that arborescences essentially fail to possess the concentration properties known for spanning trees. Therefore, our work aims to motivate a generalized explanation for such distinct behaviors of mathematical objects. By extending the probabilistic lens to directed versions of well-studied graph structures, we hope that new techniques based on their properties will lead to future algorithms that factor in the real-world complexities associated with cost or distance asymmetry; one example is the asymmetric cost of traveling between two cities.
In the long run, we hope to develop more robust algorithms with less reliance on ideal mathematical conditions in market operations and data analysis research.
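For readers unfamiliar with the formulation, a max entropy convex program of the kind described above can be written, assuming prescribed edge marginals \(x_e\), as:

```latex
\max_{p \ge 0} \quad -\sum_{A \in \mathcal{A}} p_A \log p_A
\qquad \text{s.t.} \quad \sum_{A \ni e} p_A = x_e \;\; \forall e \in E,
\qquad \sum_{A \in \mathcal{A}} p_A = 1
```

where \(\mathcal{A}\) is the set of arborescences of the graph. This is the standard template for such max entropy programs over combinatorial structures; the project's specific formulation may differ in details.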


Evaluation and Design of Accessible Eyedropper Prototype
Presenter
  • Krish Jain, Junior, Computer Science
Mentors
  • Jerry Cao, Computer Science & Engineering
  • Shwetak Patel, Computer Science & Engineering
Session
    Session O-2A: Computing for People: Devices and Algorithms
  • MGH 271
  • 1:30 PM to 3:00 PM


Ophthalmic drug administration has become increasingly prevalent in recent years, with eyedroppers being used to administer costly medication such as that for glaucoma. There have not been many solutions addressing eyedropper instillation for those with preexisting conditions like arthritis, who often deal with a host of problems when administering drops: producing the force necessary to instill a drop, aiming the drop into the eye, and contamination of the eyedropper tip. We are testing whether accessible eye drop aids can significantly improve eye drop compliance and instillation for the elderly. Solutions to eye drop administration can save money and make the overall process easier for many patients. Existing solutions on the market address the issue of contamination using apparatuses that press onto the lower eyelid, but there is still much to be desired with respect to the force and aim required. Many require gripping or squeezing, motions with which many elderly patients cannot apply much force. I propose several solutions to these problems in the form of eyedropper aids, each making use of a few different methods, including translating the motion, applying the force with different limbs, and even mechanizing the force required. Through a quantitative study, I hope to eventually test these prototypes through an ophthalmology clinic among a wide variety of elderly patients. Assessing these prototypes through both questionnaires and observation, I hope to observe an increase in effectiveness over previously existing apparatuses. We will survey around 100 elderly patients with varying expertise in eye drop instillation, asking whether the tool was more useful and easier to use and how hard it was to assemble, and we will also monitor quantitatively whether the accuracy of drops actually instilled improved.
This work will hopefully save patients money on medication by reducing wastage, allow for better administration of medicine, and ease the process of instillation.


Poster Presentation 3

2:15 PM to 3:30 PM
Virtual Reality Rubber Hand Illusion
Presenters
  • Iman Tanumihardja, Senior, Computer Science (Data Science)
  • Medha Gupta, Freshman, Center for Study of Capable Youth
Mentor
  • Jeffrey Herron, Computer Science & Engineering
Session
    Poster Session 3
  • MGH 206
  • Easel #139
  • 2:15 PM to 3:30 PM


In this study, we present a dexterous implementation of the Rubber Hand Illusion (RHI) in virtual reality (VR). The RHI is a classic perceptual illusion in which a sense of embodiment of a non-self object is elicited by synchronously and congruously stroking both a visible non-self object (i.e., a rubber hand) and the subject’s actual hand, hidden from view. While powerful, the classic RHI experiment is constrained by physical reality. Here, we present a new VR-RHI implementation that integrates Unity’s collider-based physics system and SteamVR’s hand pose estimation algorithm to achieve real-time rendering of real-world collisions. This enables precise visuotactile concordance and thus induction of the RHI over a virtual hand. Data from healthy, right-handed human VR-RHI participants (n=17) demonstrated a strong, bounded, linear correlation between VR render offset and proprioceptive drift up to a certain threshold. We have designed and validated a new gaze drift metric that uses integrated eye-tracking hardware and SDK support for gaze-object collision to allow gaze-based self-localization. Based on preliminary results, we believe using gaze may refine the proprioceptive drift metric by minimizing the required movement of the subject’s body and contralateral hand while self-localizing after RHI induction. In addition, we have implemented a new feature of the experiment to separate the visual and tactile sensations by showing the subject the actual hand location rendered in the virtual environment during the induction. During these trials, the subject is aware of the offset, but preliminary results suggest that we are still able to induce the illusion. Furthermore, we have also implemented a new induction method where we use movement to induce the illusion rather than tactile sensations. Finally, we have improved the experiment protocol by automating data collection and experimental loops so that the experiment can run without a third party.


Developing a Virtual Reality Platform to Study Freezing of Gait in Parkinson's Disease
Presenters
  • Anjali Singh, Junior, Computer Science
  • Tasnim Alam, Junior, Computer Engineering
  • Kianna Roces (Kianna) Bolante, Sophomore, Computer Science UW Honors Program
Mentor
  • Momona Yamagami, Computer Science & Engineering
Session
    Poster Session 3
  • MGH 258
  • Easel #128
  • 2:15 PM to 3:30 PM

Developing a Virtual Reality Platform to Study Freezing of Gait in Parkinson's Disease

Freezing of Gait (FoG) is a disabling symptom of Parkinson's disease that prevents movement of the feet despite one’s intention to walk. Virtual reality (VR) has the potential to simulate real-life environments that trigger FoG while eliminating safety risks. In this project, we extended VR environments that can be used as a rehabilitation tool to assess and treat FoG. To enhance previously developed environments, we added 1) visual cues that enable the person to compensate for FoG, 2) optic flow manipulation that enables researchers to quantify the effect of visual flow (i.e., how fast or slow the world moves with respect to the person) on FoG, and 3) an avatar that enhances the realism of the virtual environment. We observe how the participant interacts with these features within the virtual environment and how this affects the frequency of FoG. We anticipate that these developments will improve usability when deployed in a clinical setting and enhance the realism of the VR environment for the patient.
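The optic flow manipulation above amounts to scaling how fast the virtual world streams past relative to the person's real walking speed. A minimal sketch, with hypothetical parameter names:

```python
def virtual_displacement(physical_velocity_mps, gain, dt_s):
    """Per-frame virtual displacement under an optic flow gain.

    gain = 1.0 reproduces natural visual flow; gain > 1 makes the
    virtual world move faster than the person's real walking speed,
    gain < 1 slower. (Hypothetical sketch, not the project's code.)
    """
    return physical_velocity_mps * gain * dt_s

# A person walking 1.2 m/s rendered with 1.5x optic flow at 90 Hz:
step = virtual_displacement(1.2, 1.5, 1 / 90)
```

Sweeping the gain while recording FoG episodes would let researchers quantify how visual flow speed modulates freezing.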


Oral Presentation 3

3:30 PM to 5:00 PM
Associations between Visual Attention and Developmental Skills: Effects of Age and Low Birth Weight
Presenter
  • Arya Ajwani, Senior, Psychology Mary Gates Scholar
Mentor
  • Frederick Shic, Computer Science & Engineering, Pediatrics, Psychology
Session
    Session O-3A: Language, Cognition, & Identity
  • MGH 271
  • 3:30 PM to 5:00 PM

  • Other Pediatrics mentored projects (25)
Associations between Visual Attention and Developmental Skills: Effects of Age and Low Birth Weight

This project examines atypical developmental patterns of visual attention in infants in relation to Autism Spectrum Disorder (ASD). Research in this area could help identify additional, specific risk groups or factors and facilitate focused research that translates to real-world applications. Specifically, this project examines how cognitive development relates to visual attention to faces versus activities at 12 and 24 months of age among different birth weight groups. Developmental scores will be evaluated using data collected with the Mullen Scales of Early Learning (Mullen), a developmental test measuring cognitive and motor development, and the Vineland Adaptive Behavior Scales (Vineland), a caregiver interview measuring child adaptive skills. Visual attention will be quantified using eye-tracking data that measured the proportion of looking toward faces versus activities in social scenes. Participants were split into two groups, low birth weight and regular birth weight, and were seen by researchers at both 12 and 24 months; Mullen, Vineland, and eye-tracking assessments were conducted at both timepoints. Prior research shows that as infants grow, they focus less on faces and more on the activities they are engaged in. I anticipate similar effects in the eye-tracking data, with increasing age associated with a greater preference for activities over faces. Uniquely, I hypothesize that the relationship between looking at activities and developmental skills will be stronger at 24 months than at 12 months, and that the opposite will be true for looking at faces. We will test these hypotheses with a linear regression model that predicts developmental skills from factorial effects of time point, birth weight, and region of eye-tracking preference. This project seeks to understand the interaction between birth weight, age, and attentiveness to faces versus activities as they relate to developmental skills.
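The planned regression, predicting developmental skills from time point, birth-weight group, and looking preference, could be sketched as an ordinary least-squares fit. The data below are synthetic and the variable names hypothetical; the real analysis may also include interaction terms:

```python
import numpy as np

# Synthetic stand-in for the study's variables (hypothetical values):
rng = np.random.default_rng(0)
n = 40
timepoint = rng.choice([12, 24], n)    # age in months at assessment
low_bw = rng.choice([0, 1], n)         # 1 = low birth weight group
pref = rng.uniform(-1, 1, n)           # activity-minus-face looking preference
# Simulated skill scores with known effects plus noise:
skills = 50 + 0.8 * timepoint - 4 * low_bw + 6 * pref + rng.normal(0, 1, n)

# Design matrix with an intercept column; solve for the coefficients.
X = np.column_stack([np.ones(n), timepoint, low_bw, pref])
beta, *_ = np.linalg.lstsq(X, skills, rcond=None)
```

Here `beta[1:]` recovers the simulated effects of age, birth weight, and looking preference on the skill score.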


Nostalgic Analysis of Tweets During Crisis Events
Presenter
  • Jazminh (JazMinh) Diep, Senior, Computer Science & Software Engineering
Mentor
  • Afra Mashhadi, Computer Science & Engineering, UWB
Session
    Session O-3K: From Moral Reasoning to the Cosmos: Exploring the Intersection of AI, Digital Communities, and Space Analysis
  • MGH 238
  • 3:30 PM to 5:00 PM

  • Other students mentored by Afra Mashhadi (2)
Nostalgic Analysis of Tweets During Crisis Events

Nostalgic content consists of social media posts that refer to past collective memories or events. When crisis events occur, they disrupt everyday life, and in these unprecedented times people turn to social media to express their concerns and feelings. By studying the engagement and interactions of social media users, we can create new ways of understanding nostalgic longing. This research explores the nostalgic activity of tweets during crisis events. We use a pre-trained Natural Language Processing (NLP) classifier that detects whether a tweet is nostalgic by analyzing its language and assigning it to a category. Our classifier achieves 98% accuracy, giving us confidence that tweets labeled nostalgic are correctly identified before further analysis. Once the nostalgic tweets are obtained from the classifier, I perform a deeper analysis using machine learning tools. A descriptive analysis over time provides insight into how people react to events and how the feeling of nostalgia progresses; the pre-crisis, during-crisis, and post-crisis periods are especially significant because they reveal the progression of human behavior. Sentiment analysis is also performed on the data to understand how people feel about particular events; this is a useful method for determining whether the reminiscence at the time a tweet is posted is positive or negative. The analysis has shown that less than 1% of tweets are nostalgic, and their content tends to be more negative than positive. The content of the tweets ranges from informational to political, with reminiscence about the time of the crisis. These results will help us understand human behavior and how it can be leveraged for public assistance during a crisis.
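The pipeline described above, label tweets as nostalgic, then score their sentiment, can be illustrated with a toy keyword baseline. This is a hypothetical simplification for exposition only; the study uses a pre-trained NLP classifier, not keyword matching:

```python
# Hypothetical cue and sentiment lexicons for illustration only.
NOSTALGIA_CUES = {"remember", "miss", "back then", "used to", "childhood"}
POSITIVE = {"love", "great", "happy"}
NEGATIVE = {"sad", "lost", "worse"}

def is_nostalgic(tweet: str) -> bool:
    """Flag a tweet as nostalgic if it contains any cue phrase."""
    text = tweet.lower()
    return any(cue in text for cue in NOSTALGIA_CUES)

def sentiment(tweet: str) -> str:
    """Crude word-count sentiment: positive, negative, or neutral."""
    words = set(tweet.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = [
    "I miss how things used to be before the pandemic, sad times",
    "Case numbers announced at the briefing today",
]
nostalgic = [t for t in tweets if is_nostalgic(t)]
```

A real pipeline would replace both lexicons with learned models, but the two-stage shape, filter for nostalgia, then analyze sentiment over time, is the same.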


Poster Presentation 4

3:45 PM to 5:00 PM
Nucleation Site Analysis of HIV Through Recombinase Polymerase Amplification
Presenter
  • Hugh X. March, Junior, Computer Science Mary Gates Scholar
Mentor
  • Jonathan Posner, Computer Science & Engineering, Mechanical Engineering
Session
    Poster Session 4
  • Commons East
  • Easel #50
  • 3:45 PM to 5:00 PM

  • Other Mechanical Engineering mentored projects (16)
  • Other students mentored by Jonathan Posner (1)
Nucleation Site Analysis of HIV Through Recombinase Polymerase Amplification

As of 2021, there were approximately 38.4 million people living with HIV who require routine viral load testing. Viral load testing returns a quantitative measure of viral concentration and is indicative of antiretroviral therapy efficacy and adherence, with lower viral loads correlated with better health outcomes. Quantitative polymerase chain reaction (qPCR) is the gold standard for measuring viral load, but it is not accessible to many clinics and patients around the world due to its long assay times and its requirement for specialized equipment and highly trained personnel. As a result, qPCR is limited to centralized testing facilities far from the point-of-care (POC), leading to delayed results or loss to follow-up. Our group has addressed this by developing an HIV viral load test using recombinase polymerase amplification (RPA), which has a 20-minute sample-to-answer time and is more appropriate for POC settings. Our test involves the formation of discrete fluorescent nucleation sites which can be counted to estimate the viral load. However, the test fails to accurately quantify higher HIV viral loads (>3,000 cps/rxn) because individual sites merge at these higher copy numbers and become difficult to distinguish. In this project, I address the limited dynamic range of this test by performing RPA between two glass slides and investigating the effects of different slide thicknesses and concentrations of polyethylene glycol (PEG), a crowding agent used in RPA reactions. I perform nucleation site analysis using computer vision techniques to measure nucleation site radius and intensity and study how these factors affect site diffusion and amplification. By analyzing nucleation site behavior, we demonstrate potential for an HIV viral load test with a higher dynamic range and gain a better understanding of RPA nucleation site formation, ultimately helping to improve access to testing and treatment for people living with HIV.
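Counting discrete fluorescent nucleation sites is, at its core, a connected-component problem: threshold the image, then count contiguous bright regions. The real analysis uses computer vision tooling on microscopy images; this is a minimal pure-Python illustration on a toy intensity grid:

```python
def count_sites(image, threshold):
    """Count connected bright regions (4-connectivity) in a 2D intensity grid.

    Minimal flood-fill sketch of nucleation-site counting; a real pipeline
    would also measure each site's radius and intensity.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                count += 1                 # found a new site; flood-fill it
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and not seen[y][x] and image[y][x] >= threshold):
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return count

# Toy frame with two bright sites (values are arbitrary intensities):
frame = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 0, 8],
]
```

This also makes the merging problem concrete: once two sites share adjacent above-threshold pixels, they are counted as one, which is exactly what limits quantification at high copy numbers.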


Decoding Gene Regulation of Immune Cells with Deep Learning
Presenter
  • Nuria Alina (Alina) Chandra, Senior, Computer Science Mary Gates Scholar, UW Honors Program, Washington Research Foundation Fellow
Mentor
  • Sara Mostafavi, Computer Science & Engineering
Session
    Poster Session 4
  • Balcony
  • Easel #69
  • 3:45 PM to 5:00 PM

  • Other students mentored by Sara Mostafavi (1)
Decoding Gene Regulation of Immune Cells with Deep Learning

All somatic cells, from heart cells to immune cells, have the same genetic code. Understanding the regulatory processes that allow the same DNA sequence to code for vastly different gene expression patterns is a longstanding goal of biomedical research. To study the regulation of gene expression we examine chromatin accessibility, a measure of the regions of DNA accessible to transcriptional machinery. It is hypothesized that variation in these accessible regions across different cell states and types enables combinations of Transcription Factors (TFs) to bind and regulate gene expression. This project builds upon the AI-TAC neural network model, which predicts chromatin accessibility as measured by ATAC-seq peaks in 81 mouse immune cell types. The trained AI-TAC model was used to identify sequence patterns within regulatory regions that predict cellular differentiation. TFs function through protein-protein interactions with other bound TFs. My recent work found that AI-TAC is unable to sufficiently learn nonlinear TF interactions. I hypothesize that a model trained with higher-granularity data to predict base-pair-resolution chromatin accessibility will more effectively learn the non-linear interactions between TFs encoded in genomic DNA. I present bpAITAC, a model with a new architecture that predicts raw ATAC-seq reads at base-pair resolution. This model will allow us to identify TF interactions important for regulating accessibility, and these findings will help us better understand immune cell differentiation. Future iterations of this model will be trained on human immune cell data and will be able to identify rare disease-associated gene variants from patient DNA sequences. This will allow us to develop personalized therapeutics to address the disease-related effects of an individual’s genetic variations.
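Sequence-to-accessibility models of this kind typically take DNA as input in one-hot form, a 4 x L matrix with one row per base. The abstract does not describe bpAITAC's exact input pipeline, so the encoder below is a generic, hypothetical sketch of that standard preprocessing step:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA sequence as a 4 x L one-hot matrix (rows: A, C, G, T).

    Generic sketch of the usual input encoding for sequence models;
    not necessarily bpAITAC's actual pipeline.
    """
    seq = seq.upper()
    arr = np.zeros((4, len(seq)), dtype=np.float32)
    for i, base in enumerate(seq):
        if base in BASES:              # unknown bases (e.g. N) stay all-zero
            arr[BASES.index(base), i] = 1.0
    return arr

x = one_hot("ACGTN")
```

A base-pair-resolution model would then map such a matrix to a per-position profile of predicted ATAC-seq reads rather than a single per-region accessibility score.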


