Office of Undergraduate Research: 2024 Undergraduate Research Symposium Schedules

Found 17 projects

Poster Presentation 2

12:45 PM to 2:00 PM
Automatically Decomposing D3 Visualizations
Presenters
  • Soham Shirish Raut, Junior, Computer Science
  • Heer Patel, Senior, Computer Science (Data Science) Mary Gates Scholar
Mentor
  • Leilani Battle, Computer Science & Engineering
Session
    Poster Session 2
  • CSE
  • Easel #172
  • 12:45 PM to 2:00 PM

Automatically Decomposing D3 Visualizations

Data visualizations are critical to understanding large, complex datasets. For example, visualizations help us detect missing and erroneous data values, identify potential relationships between data variables, and review the output of machine learning models. That said, customized visualizations can be difficult to create because they require the use of specialized toolkits such as ggplot2, Vega-Lite, or D3. In this research, we study how people use a browser-based toolkit called D3 to program custom visualizations, with the long-term goal of creating AI assistants to help people code in D3. However, to train rigorous AI models, we need a large input corpus of D3 examples that is accurate and reliable. In this presentation, we share our progress towards building this training corpus. First, we mined hundreds of real-world examples from the web. Then, we analyzed these examples to understand how visualization users take complex D3 programs and break them down into easily understandable parts, which we call components. Currently, we are investigating how these components can be remixed and reused to create D3 visualizations for datasets that a model may not have encountered before. Future research will involve collecting more examples to grow our corpus and training AI to use the corpus to generate documentation and relevant examples that help new visualization users better understand D3.


Designing Trustworthy LLM-Generated Meeting Summaries for Online Meetings
Presenters
  • Pranati Dani, Senior, Computer Science
  • Shreya Sathyanarayanan, Senior, Computer Science
  • Terrie Chen, Recent Graduate, Computer Science
  • Yusuf Shabbir Shahpurwala, Junior, Computer Science
Mentors
  • Amy Zhang, Computer Science & Engineering
  • Ruotong Wang, Computer Science & Engineering
Session
    Poster Session 2
  • CSE
  • Easel #171
  • 12:45 PM to 2:00 PM

Designing Trustworthy LLM-Generated Meeting Summaries for Online Meetings

In the rapidly evolving landscape of remote work, the challenges associated with recalling important information from meetings and missing meetings have increased. One potential solution is to use large language models (LLMs) to summarize meetings so that participants can catch up after meetings are over. To better understand this space, we systematically reviewed 17 existing commercial tools and research prototypes for LLM-generated meeting summaries. The results show that existing solutions fall short of supporting users in verifying and validating the comprehensiveness and accuracy of the generated summary, hindering users from trusting it. To address this, the project aims to design and build a more trustworthy LLM-generated meeting summary tool. Specifically, we propose that LLM-generated summaries should progressively display relevant meeting information based on the importance of the information and the user's goals, and include trustworthiness cues to aid users in making accurate trust judgments of the summary. Our preliminary interviews and a literature review showed that users are more hesitant to trust an AI summary when the information is consequential, such as when they missed the meeting or specific action items. While trustworthiness cues such as quotes or links to raw transcripts can increase users' trust, irrelevant and redundant information erodes it. To further validate these observations, we will conduct a formative interview study: we will show participants mid-fidelity prototypes exemplifying the key design decisions and elicit their feedback on appropriate trustworthiness cues, desired ways to indicate their goals and intentions, and expectations about the importance of different portions of a summary. These empirically supported insights will inform the final design of a trustworthy LLM-generated meeting summary tool, which we plan to implement and evaluate in the next step.


Improving Feature-Based ASL Dictionaries One Feature at a Time
Presenter
  • Ben S. Kosa, Senior, Computer Science Mary Gates Scholar
Mentor
  • Richard Ladner, Computer Science & Engineering
Session
    Poster Session 2
  • CSE
  • Easel #173
  • 12:45 PM to 2:00 PM

Improving Feature-Based ASL Dictionaries One Feature at a Time
Across the world, there are roughly 70 million Deaf and hard of hearing (DHH) people who use over 200 different sign languages. The ability to look up the meaning of unknown words is important for any language, yet there is currently no easy way to look up signs, because sign languages have no common written form: you can't simply type a sign on your keyboard. One common approach to sign language search is feature-based lookup, which allows users to search for a sign by inputting its visual features: handshape, location, palm orientation, and movement. Previous approaches to feature-based lookup have several limitations: they often match features to signs poorly, do not allow users to omit the features they don't know, and have cumbersome search interfaces. The work of Bragg et al. (2015) took a step towards addressing these issues by introducing ASL-Search, a robust search system that compares a user's sign query to a database of labeled queries inputted by previous users, using a topic modeling method called Latent Semantic Analysis (LSA) and cosine similarity; however, their system still requires the user to choose among a large set of 178 features. In part 1 of our study, we found that by dropping certain categories of features and combining or reducing others into similar categories, we could dramatically reduce the number of features needed to search for signs, from 178 to around 45 to 65, without significantly impacting the accuracy of the results returned during search. In reducing the number of features needed, we came up with smaller, intuitive sets of handshapes and movements. In part 2 of our study, we are evaluating how usable our new handshape and movement features are. By making it possible to search for the meaning of signs using fewer features, we hope to make sign language search interfaces usable enough to be adopted in the real world.
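As a concrete illustration of the retrieval idea behind ASL-Search (LSA over feature vectors plus cosine similarity), here is a toy sketch; the signs, features, and database entries below are invented, and the real system uses 178 features and crowdsourced queries:

```python
import numpy as np

# Toy database: rows are past sign queries, columns are visual features
# (handshape, location, movement, ...). All entries here are invented.
features = ["fist", "flat_hand", "at_chin", "at_chest", "circular", "tapping"]
signs = ["MOTHER", "THANK-YOU", "COFFEE"]
db = np.array([
    [0, 1, 1, 0, 0, 1],   # MOTHER: flat hand, at chin, tapping
    [0, 1, 1, 0, 0, 0],   # THANK-YOU: flat hand, at chin
    [1, 0, 0, 1, 1, 0],   # COFFEE: fist, at chest, circular
], dtype=float)

# Latent Semantic Analysis: truncated SVD of the sign-by-feature matrix.
U, s, Vt = np.linalg.svd(db, full_matrices=False)
k = 2                                  # number of latent dimensions
db_latent = U[:, :k] * s[:k]           # signs embedded in latent space

def search(query_vec):
    """Rank signs by cosine similarity to the query in LSA space."""
    q = query_vec @ Vt[:k].T           # project query into latent space
    sims = [row @ q / (np.linalg.norm(row) * np.linalg.norm(q) + 1e-12)
            for row in db_latent]
    return [signs[i] for i in np.argsort(sims)[::-1]]

# A partial query: the user only remembers "flat hand" and "at chin",
# omitting features they are unsure of.
print(search(np.array([0, 1, 1, 0, 0, 0.0])))
```

Because the query and database live in the same latent space, partial queries still rank plausible signs highly, which is what lets users omit features they don't know.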

Predicting Signal Quality for Lunar Rover Communication through Deep Learning Approaches and Channel State Information
Presenter
  • Vibha Sathish Kumar, Junior, Electrical and Computer Engineering
Mentors
  • Joshua Smith, Computer Science & Engineering, Electrical & Computer Engineering
  • Paolo Torrado (patorrad@uw.edu)
Session
    Poster Session 2
  • CSE
  • Easel #181
  • 12:45 PM to 2:00 PM

Predicting Signal Quality for Lunar Rover Communication through Deep Learning Approaches and Channel State Information

NASA is currently developing communication infrastructure for the lunar landscape in preparation for its Artemis missions to the moon. When rovers explore remote areas of the moon where radio signals may not reach, methods are needed both to facilitate communication with base camps and to help the rover reconnect to the network. The goal is to develop a deep learning model that predicts radio signal quality and maximizes communication by autonomously relocating lunar rovers to areas with optimal signal strength. Channel State Information (CSI) data provides insight into how a signal propagates from transmitter to receiver, including the signal magnitude, phase, and ray interactions with the environment. I investigate feature selection methods with different combinations of simulated CSI data to train our Recurrent Neural Network (RNN) deep learning model and analyze the resulting performance. Previous research demonstrates that one way to improve a model's predictions is to utilize information at the hidden layers, the internal layers between input and output. I explore this method and aim to capture patterns over time in our CSI input data with an RNN architecture that predicts the magnitude of the next ray hit. We expect that using additional information at hidden layers will help us understand the relationships in the input data and optimize the model. We anticipate validating the model with real CSI data from physical experiments that replicate signal interaction in a lunar environment. Our work contributes to the development of communication technologies for upcoming lunar explorations. It also provides insight into the role deep learning can play in radio frequency propagation, paving the path for further research in this area.
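The sequence-to-next-magnitude setup described above can be sketched as follows; the CSI values, window width, and network weights are invented stand-ins (the weights are untrained), meant only to show how windows of CSI features feed an RNN cell whose hidden state carries history forward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for simulated CSI features per ray hit:
# [magnitude, phase]. The real feature set is richer (ray interactions etc.).
T = 20
csi = np.stack([np.abs(np.sin(np.linspace(0, 3, T))),    # magnitude
                np.linspace(0, np.pi, T)], axis=1)       # phase

def make_windows(series, width):
    """Build (input window, next magnitude) training pairs."""
    X, y = [], []
    for t in range(len(series) - width):
        X.append(series[t:t + width])
        y.append(series[t + width, 0])   # target: next ray hit's magnitude
    return np.array(X), np.array(y)

X, y = make_windows(csi, width=5)        # X: (15, 5, 2), y: (15,)

# One forward pass of a vanilla RNN cell over a window (untrained weights).
H = 8                                     # hidden size
Wx = rng.normal(scale=0.1, size=(2, H))   # input -> hidden
Wh = rng.normal(scale=0.1, size=(H, H))   # hidden -> hidden
Wo = rng.normal(scale=0.1, size=(H, 1))   # hidden -> output

def rnn_predict(window):
    h = np.zeros(H)
    for x_t in window:                    # unroll over time steps
        h = np.tanh(x_t @ Wx + h @ Wh)    # hidden state carries history
    return (h @ Wo).item()                # predicted next magnitude

print(rnn_predict(X[0]))
```

The intermediate hidden states `h` are exactly the "information at the hidden layers" the abstract refers to; a trained model would fit `Wx`, `Wh`, and `Wo` to the simulated CSI data.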


Oral Presentation 2

1:30 PM to 3:00 PM
Field Programmable Cellular Arrays
Presenter
  • Sri Varshitha (Varshitha) Pinnaka, Senior, Center for Study of Capable Youth UW Honors Program
Mentors
  • Jeff Nivala, Computer Science & Engineering
  • Gwendolin Roote, Computer Science & Engineering, Molecular Engineering and Science
Session
    Session O-2M: Applications of AI for Good
  • CSE 403
  • 1:30 PM to 3:00 PM

Field Programmable Cellular Arrays

The Field Programmable Cellular Arrays (FPCA) project at the Molecular Information Systems Lab (MISL) aims to improve current biocomputing systems by utilizing spatial organizations of cellular components for logical operations. This can open doors for computation within biological systems where artificial computation has never before been possible. The project encompasses three aims: characterizing the properties of signal propagation within E. coli, constructing biological circuit components for spatial signal processing, and optimizing bioprinting methods for circuits. Signal propagation through molecular signaling is employed to communicate the presence or absence of a signal and truth values to specific cells. We are demonstrating logical states of "1," "0," and the absence of a signal, thereby enabling differentiation between a logical "0" and a lack of signal. Two strains of bacterial cells are capable of performing the logic of a traditional "wire" and a NOR gate. Consequently, by arranging strains in spatially organized layouts, we can engineer cellular arrays capable of performing diverse complex logical functions. This research is still in progress, and we are optimizing the NOR gate and wire strains. My role explores bioprinting circuits into hydrogels, and I have built a bioprinter with dual extruders to print biological substances into containing slurries. This required designing, printing, and assembling 3D-printed parts. I am now characterizing the behavior of 3D-printed materials in various containing slurries, which requires testing the ability of different bioprinting inks to encapsulate bacteria, testing various slurry methodologies, and testing interactions between combinations of these materials over space and time. I am also computationally modeling FPCA circuits at various levels of abstraction; this modeling serves the project's broader computational goal of compiling a logic circuit specification into bioprinter GCODE.
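Because NOR is functionally complete, the two strains described above (a wire and a NOR gate) are in principle sufficient for arbitrary logic. A minimal sketch of that claim, with the strains abstracted as Boolean functions:

```python
def nor(a, b):
    """The engineered NOR strain: output high only when both inputs are low."""
    return int(not (a or b))

def wire(a):
    """The 'wire' strain simply propagates its input signal."""
    return a

# NOR is functionally complete: NOT, OR, and AND can all be built from it,
# which is why two strains suffice for arbitrary logic circuits.
def not_(a):
    return nor(a, a)

def or_(a, b):
    return not_(nor(a, b))

def and_(a, b):
    return nor(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NOR:", nor(a, b), "OR:", or_(a, b), "AND:", and_(a, b))
```

In the biological setting the composition happens spatially, by printing the strains in layouts whose signal paths wire gate outputs to gate inputs, rather than by function calls.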


Using Simulator Data to Train Machine Learning Models for Autonomous Road-Following
Presenter
  • Cleah Taryn Winston, Junior, Computer Science
Mentors
  • Byron Boots, Computer Science & Engineering
  • Alexander Spitzer, Computer Science & Engineering
Session
    Session O-2M: Applications of AI for Good
  • CSE 403
  • 1:30 PM to 3:00 PM

Using Simulator Data to Train Machine Learning Models for Autonomous Road-Following

A critical capability of autonomous cars is the ability to follow a road or predefined path. Classical methods often rely on extensive prior mapping with precise GPS positioning; these methods are labor intensive and struggle with changing, unstructured environments. Instead, machine learning (ML) models can be trained to recognize paths and follow directions. In this work, we combine simulated and real-world data to train a neural network policy that drives an autonomous ground vehicle down a hallway while avoiding collisions. Training an ML road-following model consists of three steps: data collection and preprocessing, model training, and model evaluation. While all three steps pose challenges, collecting high-quality real-world data can be expensive and dangerous in road environments. Simulator data is therefore useful, as it can be collected safely and inexpensively. Thus, we study how much the required amount of real-world data can be reduced, with the help of simulator data, to successfully train a road-following robot. We collected simulator data using AirSim to train a convolutional neural network that follows a path in simulation from live environment images. We then fine-tuned the model using real-world data collected from MuSHR cars driving through the hallways of a building. Next, we test the fine-tuned model in the simulator to ensure limited degradation relative to the model trained solely on AirSim data. Finally, we deploy the model on a robotic car in a real-world environment and evaluate its performance against a baseline model trained only on real-world data. We demonstrate that we can successfully train a model in simulation (MSE ≤ 0.01 radians), and we expect the model trained on simulator data plus a reduced amount of real-world data to perform comparably to the model trained solely on real-world data, both in reducing the number of collisions and in minimizing trajectory differences between the expert and the learned controller.
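The pretrain-in-simulation, fine-tune-on-real recipe can be sketched with a deliberately tiny stand-in: a linear steering model and invented data (the actual work trains a convolutional network on camera images):

```python
import numpy as np

rng = np.random.default_rng(1)

def gradient_step(w, X, y, lr):
    """One step of gradient descent on mean-squared steering error."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Invented stand-ins: features -> steering angle. The simulator and the
# real hallway share structure but differ slightly (a domain gap).
true_w_sim  = np.array([0.5, -0.2])
true_w_real = np.array([0.55, -0.25])
X_sim  = rng.normal(size=(500, 2)); y_sim  = X_sim @ true_w_sim   # plentiful
X_real = rng.normal(size=(20, 2));  y_real = X_real @ true_w_real # scarce

# Step 1: pretrain on plentiful simulator data.
w = np.zeros(2)
for _ in range(200):
    w = gradient_step(w, X_sim, y_sim, lr=0.1)

# Step 2: fine-tune on the small real-world dataset.
for _ in range(200):
    w = gradient_step(w, X_real, y_real, lr=0.05)

mse_real = np.mean((X_real @ w - y_real) ** 2)
print(f"post-fine-tuning real-world MSE: {mse_real:.4f}")
```

Starting fine-tuning from the pretrained weights is what lets the small real-world dataset suffice; training from scratch on 20 samples of a harder problem would typically generalize worse.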


Fine-Grained Hallucination Detection and Editing for Language Models
Presenter
  • Abhika Mishra, Senior, Computer Science
Mentors
  • Hannaneh Hajishirzi, Computer Science & Engineering
  • Akari Asai (akari@cs.washington.edu)
Session
    Session O-2P: Large Language Models: Engineering and Social Requirements
  • CSE 305
  • 1:15 PM to 3:00 PM

Fine-Grained Hallucination Detection and Editing for Language Models

Large language models (LMs) are prone to generating diverse factually incorrect statements, widely called hallucinations. Current approaches predominantly focus on coarse-grained automatic hallucination detection or editing, overlooking nuanced error levels. In this project, we propose a novel task, automatic fine-grained hallucination detection, and present a comprehensive taxonomy encompassing six hierarchically defined types of hallucination. To facilitate evaluation, we introduce a new benchmark that includes fine-grained human judgments on the outputs of two LMs across various domains. To run this evaluation, I directly managed the collection of around 400 human annotations, which were analyzed to better understand the hallucinations present in LM outputs. My analysis using this benchmark reveals that ChatGPT and Llama2-Chat exhibit hallucinations in 60% and 75% of their outputs, respectively, and that a majority of these hallucinations fall into categories underexplored in previous work. As an initial step to address this, I trained FAVA, a retrieval-augmented LM, by carefully designing synthetic data generation to detect and correct fine-grained hallucinations. The synthetic data generation pipeline I set up prompts ChatGPT to add noise to a passage by inserting errors one by one; the noisy passage is then post-processed into pairs of erroneous training inputs and edited outputs. On our benchmark, automatic and human evaluations show that FAVA significantly outperforms ChatGPT on fine-grained hallucination detection by a large margin, though considerable room for improvement remains. FAVA's suggested edits also improve the factuality of LM-generated text, resulting in 5-10% FActScore improvements. These results further demonstrate FAVA's strong capabilities in detecting factual errors in LM outputs.
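The noising step (insert an error into a passage, then pair the erroneous input with an edited output) can be mimicked with a rule-based toy. The real pipeline prompts ChatGPT and covers six error types; the passage, entity swap, and edit markup below are invented purely for illustration:

```python
def insert_entity_error(passage, correct, wrong):
    """Toy stand-in for LLM-driven noising: swap one entity to create a
    hallucinated passage, and record the edit that would repair it."""
    noisy = passage.replace(correct, wrong, 1)
    # Mark the inserted error with a (made-up) edit markup, forming the
    # training target that pairs the erroneous input with its correction:
    edited = noisy.replace(
        wrong, f"<delete>{wrong}</delete><mark>{correct}</mark>", 1)
    return noisy, edited

passage = "The Space Needle is located in Seattle."
noisy, edited = insert_entity_error(passage, "Seattle", "Portland")
print(noisy)
print(edited)
```

A model trained on many such (noisy, edited) pairs learns both to detect the inserted error and to emit the correcting edit, which is the behavior the abstract describes for FAVA.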


Computer Vision Datasets Exhibit Cultural and Linguistic Diversity Across Perception
Presenter
  • Andre Ye, Senior, Computer Science, Philosophy UW Honors Program
Mentor
  • Ranjay Krishna, Computer Science & Engineering
Session
    Session O-2P: Large Language Models: Engineering and Social Requirements
  • CSE 305
  • 1:15 PM to 3:00 PM

Computer Vision Datasets Exhibit Cultural and Linguistic Diversity Across Perception

I investigate the influence of cultural and linguistic backgrounds on visual perception and semantic interpretation within computer vision. This study addresses the question: Are there significant variations in the semantic content described by vision-language datasets and models across different languages? Guided by the hypothesis that cultural and linguistic diversities lead to distinct semantic interpretations, I compare multilingual datasets against monolingual counterparts. I developed metrics such as scene graph complexity, embedding space width, and linguistic diversity to quantify semantic variations across languages in both human-annotated and model-generated image captions. The methodology involves using linguistic tools and translation techniques to ensure semantic consistency across languages. Our findings indicate that multilingual captions contain, on average, 21.8% more objects, 24.5% more relations, and 27.1% more attributes than monolingual ones. Furthermore, models trained on diverse linguistic content demonstrate improved generalizability across different linguistic datasets. This study contributes to the understanding of how language and culture impact visual perception in computer vision and advocates for more inclusive dataset compilation and model training strategies.
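The reported gaps can be reproduced in miniature by counting scene-graph elements per caption set; the scene graphs below are invented toy examples, not data from the study:

```python
# Invented toy scene graphs extracted from one image's captions:
# objects, (subject, predicate, object) relations, (object, attribute) pairs.
mono = {
    "objects": ["person", "bike"],
    "relations": [("person", "rides", "bike")],
    "attributes": [("bike", "red"), ("person", "tall")],
}
multi = {
    "objects": ["person", "bike", "street", "helmet"],
    "relations": [("person", "rides", "bike"), ("person", "wears", "helmet")],
    "attributes": [("bike", "red"), ("person", "tall"), ("street", "busy")],
}

def pct_increase(mono_items, multi_items):
    """Relative increase of the multilingual count over the monolingual one."""
    return 100 * (len(multi_items) - len(mono_items)) / len(mono_items)

for key in ("objects", "relations", "attributes"):
    print(key, f"{pct_increase(mono[key], multi[key]):.1f}%")
```

The study's 21.8% / 24.5% / 27.1% figures are averages of exactly this kind of per-image comparison, computed over real multilingual and monolingual caption corpora.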


Poster Presentation 3

2:15 PM to 3:30 PM
Coin Copter: A Coin-Sized Helicopter
Presenters
  • Michael Sabit (Michael) Ibrahim, Senior, Computer Science NASA Space Grant Scholar, UW Honors Program
  • Kevin Hernandez, Senior, Computer Engineering
Mentor
  • Vikram Iyer, Computer Science & Engineering
Session
    Poster Session 3
  • CSE
  • Easel #169
  • 2:15 PM to 3:30 PM

Coin Copter: A Coin-Sized Helicopter

Sub-gram flying robots have transformative potential in applications from search and rescue to precision agriculture and environmental monitoring. A key gap in achieving autonomous flight for these applications is the low lift-to-weight ratio of flapping-wing and quadrotor designs near 1 gram. To close this gap, we propose a helicopter-style design that minimizes size and weight by leveraging the high lift, reliability, and low-voltage operation of sub-gram motors. We take an important step toward this goal by designing a lightweight, microfabricated flybar mechanism and tail-wing rotor to passively stabilize the Coin-Copter, a helicopter-style robot. A 48 mg flybar is folded from a flat carbon fiber laminate into a 3D mechanism that couples the tilting of the flybar to a change in the angle of attack of the rotors. The Coin-Copter's flybar uses a novel flexure joint design instead of the ball-in-socket joints common in larger flybars. This flybar achieved a peak damping ratio of 0.528, an 18.9x improvement over our initial design. Compared to a flybarless rotor with a near-zero damping ratio, our flybar-rotor mechanism can maintain a stable roll and pitch with relative deviations <1°. This research focuses on testing the yaw stability of a near-gram flying robot by incorporating and improving on flybar designs and roll-pitch-yaw test setups, and by writing robot control software that uses pulse width modulation to precisely control the heading of the Coin-Copter.
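A standard way to estimate a damping ratio such as the 0.528 reported above is the logarithmic decrement method, which needs only two successive oscillation peaks. The sketch below uses invented amplitudes, not the Coin-Copter's measured data:

```python
import math

def damping_ratio(peak1, peak2):
    """Estimate the damping ratio of a decaying oscillation from two
    successive peak amplitudes via the logarithmic decrement:
    delta = ln(x1 / x2),  zeta = delta / sqrt(4 * pi^2 + delta^2)."""
    delta = math.log(peak1 / peak2)
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Invented example amplitudes from a roll-angle trace (degrees):
zeta = damping_ratio(10.0, 0.35)
print(f"estimated damping ratio: {zeta:.3f}")
```

Equal successive peaks give a damping ratio of zero (no decay), which matches the near-zero value quoted for the flybarless rotor; faster decay between peaks gives a ratio closer to 1.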


User-in-the-Loop Primitive Tagging/Suggesting for Everyday Objects
Presenters
  • Stanley Yang, Junior, Computer Science
  • Annabelle Carlota (Annabelle) Martin, Sophomore, Computer Science
  • Mingsheng Xu, Senior, Computer Science, Applied & Computational Mathematical Sciences (Scientific Computing & Numerical Algorithms)
Mentors
  • Yuxuan Mei, Computer Science & Engineering
  • Benjamin Jones, Computer Science & Engineering
  • Adriana Schulz, Computer Science & Engineering
Session
    Poster Session 3
  • CSE
  • Easel #170
  • 2:15 PM to 3:30 PM

User-in-the-Loop Primitive Tagging/Suggesting for Everyday Objects

In the context of computer-aided design, researchers have studied how to reconstruct an input geometry in CAD by decomposing it into CAD primitives. Such reconstruction is useful for creating CAD designs for manufacturing applications. We also study object decomposition, but toward a different goal: understanding object affordances and interactability. For example, the handle of a basket can be grasped or hung from a sticky hook, and we recognize this affordance or functionality because the handle has a certain shape (e.g., hook or rod). Prior research has identified eight types of shape primitives that are common in everyday objects, but the existing tagging process requires a high degree of modeling expertise. We aim to create a more automatic and easy-to-use tagging tool. Our proposed research is to develop user-in-the-loop methods for tagging shape primitives given an object geometry, taking advantage of human intuition for how objects function and interact. We start by building an interface where users sketch over the input mesh to indicate the region for fitting and select the type of primitive to be fit. On top of this, we plan to crop the selected mesh data to generate a reduced mesh that encompasses only the area selected by the user. Finally, we utilize differentiable rendering techniques to automatically optimize the shape parameters of user-selected primitives to fit the reduced mesh data. With this tagging tool, we can enable more people without modeling expertise to tag objects. Data generated with this tool can support future research that studies object affordances with learning, as well as improve applications in robotics, product design, and assembly design like FabHacks.
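The final optimization step, fitting a primitive's shape parameters by gradient descent on a differentiable loss, can be sketched in 2D with a circle standing in for a 3D primitive; all data here is invented, and the real system differentiates through rendering rather than raw point distances:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-in for the user's cropped mesh region: noisy points
# sampled near a circle of unknown radius, centered at the origin.
true_r = 1.5
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.stack([true_r * np.cos(theta), true_r * np.sin(theta)], axis=1)
pts += rng.normal(scale=0.01, size=pts.shape)

# Optimize the primitive's shape parameter by gradient descent on a
# differentiable fitting loss: mean squared distance of points to the circle.
r = 0.5                                   # initial guess for the radius
lr = 0.1
for _ in range(100):
    d = np.linalg.norm(pts, axis=1)       # each point's distance from center
    grad = np.mean(2 * (r - d))           # d/dr of mean (r - d)^2
    r -= lr * grad

print(f"fitted radius: {r:.3f} (true: {true_r})")
```

The user's role in the loop is to pick the region and the primitive type; the optimizer only has to recover continuous parameters, which is why a simple gradient descent like this suffices once the selection is made.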


Leveraging AI to Improve STEM Engagement for Black and Latine Youth
Presenters
  • Samira Shirazy, Senior, Human Centered Design & Engineering Louis Stokes Alliance for Minority Participation, NASA Space Grant Scholar
  • Aisha Cora, Senior, Electrical and Computer Engineering
Mentors
  • Vikram Iyer, Computer Science & Engineering
  • Kyle Johnson, Computer Science & Engineering
Session
    Poster Session 3
  • CSE
  • Easel #168
  • 2:15 PM to 3:30 PM

Leveraging AI to Improve STEM Engagement for Black and Latine Youth

Recent studies have shown that pedagogical approaches like hands-on lessons, representative and near-peer mentoring, and culturally responsive teaching increase Science, Technology, Engineering, and Math (STEM) engagement in classrooms, specifically those with underrepresented minority (URM) students. URM students interested in pursuing STEM show increased engagement and confidence from holistic outreach programs; unfortunately, there is a dearth of URM instructors who also have the necessary technical know-how. However, new AI tools based on Large Language Models (LLMs), like ChatGPT-3.5, have been shown to increase the productivity of software developers, with the largest productivity gains going to non-experts. Therefore, we propose a study on the effects and limitations of LLMs as an educational tool for supporting students and instructors of various skill levels, both in facilitating programming classes for URM students and in bringing embedded systems projects to completion. We will deliver 40 hours of culturally relevant Arduino course content to 25-35 URM students. We will allow ChatGPT-3.5 to be used as an educational tool without explicitly telling students to use it, as a means of understanding perceptions of and hesitations around the tool within URM communities. As our lab's previous research has seen a significant increase in productivity and project completion when novice programmers use LLMs, we expect students who choose to use ChatGPT-3.5 to program and complete their projects faster than those who do not, and to develop an implicit understanding of prompt engineering over time. We anticipate that exposure to the tool will cultivate an interest in exploring other AI and LLM opportunities. Lastly, we hope that implementing LLMs within the curriculum will increase the number of available near-peer instructors by aiding content-inexperienced instructors, thus helping to close the digital divide.


A Model System to Detect Virulent S. marcescens Infection Using Novel, Engineered Restriction Endonuclease Mediated DNA Strand Displacement (resDSD) Circuit
Presenter
  • Megan van Meurs, Senior, Bioengineering Mary Gates Scholar, Undergraduate Research Conference Travel Awardee, Washington Research Foundation Fellow
Mentors
  • Jeff Nivala, Computer Science & Engineering
  • Nuttada Panpradist, , University of Texas at Austin
Session
    Poster Session 3
  • CSE
  • Easel #160
  • 2:15 PM to 3:30 PM

A Model System to Detect Virulent S. marcescens Infection Using Novel, Engineered Restriction Endonuclease Mediated DNA Strand Displacement (resDSD) Circuit

Serratia marcescens is an opportunistic pathogen that can infect multiple human organs and is responsible for many healthcare-associated infections. It carries a mortality risk of up to 58%, and early diagnosis is crucial for timely treatment. S. marcescens secretes a unique restriction endonuclease, which has been recognized as a virulence factor and thus can be used as a diagnostic biomarker. To detect this restriction enzyme biomarker, I have designed and investigated a model system using novel restriction endonuclease mediated DNA strand displacement (resDSD), adapted from the enzyme-free DNA strand displacement (DSD) reaction. In a typical DSD circuit, a DNA "invading" strand invades a duplex DNA substrate, replacing the incumbent strand through branch migration to reveal a fluorescent signal. In contrast, my resDSD circuit employs a restriction endonuclease enzyme as the input. In my design, the toehold region is concealed and blocked by a strand that the restriction enzyme can cleave. Once cleaved, the toehold region is exposed, allowing an invading strand to hybridize and initiate the DSD cascade. This study represents the first demonstration of the resDSD system. To validate the concept, I used the commercially available restriction endonuclease BamHI instead of S. marcescens' endonuclease. I will also modify the E. coli 5-alpha competent strain (c2987h) to secrete BamHI in place of S. marcescens. By investigating this innovative resDSD approach, I aim to establish a reliable method for detecting bacteria such as S. marcescens based on their secretion of the restriction endonuclease. Such a diagnostic tool could contribute to early detection and prompt treatment of infections caused by this opportunistic pathogen or similar pathogens in healthcare settings.
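The logic of the resDSD mechanism (blocked toehold, enzymatic cleavage, strand displacement, fluorescence) can be captured as a toy truth-functional model; this is only a schematic of the described behavior, not a kinetic or thermodynamic simulation:

```python
def resdsd_circuit(enzyme_present, invader_present):
    """Toy state model of the resDSD mechanism: the toehold starts blocked;
    the restriction enzyme cleaves the blocking strand; the invading strand
    then binds the exposed toehold and displaces the incumbent strand,
    producing the fluorescent readout."""
    toehold_exposed = enzyme_present              # cleavage reveals the toehold
    displacement = toehold_exposed and invader_present
    fluorescence = displacement                   # readout of the cascade
    return fluorescence

# The invading strands are supplied with the sensor, so fluorescence
# reports the presence of the enzyme biomarker:
print(resdsd_circuit(enzyme_present=True, invader_present=True))
print(resdsd_circuit(enzyme_present=False, invader_present=True))
```

The key design point this captures is that the enzyme, not a DNA strand, is the circuit's input: without cleavage the toehold stays blocked and no displacement (and hence no signal) can occur.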


Oral Presentation 3

3:30 PM to 5:00 PM
Musical Factors on User Experience in Video Games
Presenter
  • Olivia Hui (Olivia) Wang, Senior, Music (Theory), Computer Science
Mentors
  • Steven Tanimoto, Computer Science & Engineering, Music
  • Anne Searcy, Music
Session
    Session O-3M: Computing in the Physical World: Humans, Robots, and Beyond
  • ECE 303
  • 3:30 PM to 5:00 PM

Musical Factors on User Experience in Video Games

When creating video games, developers incorporate auditory components like music and sound effects that influence users' gameplay experience. A game's music is often designed with respect to the game's context or plot, containing melodic and harmonic ideas that are continually developed. Existing research in ludomusicology and human-computer interaction has explored the role of music in these games, but few studies have considered which musical factors are the most easily perceived or most effective for conveying information. My work investigates specific elements of a game's music, how they are perceived by a user, and how they impact the user's decision-making. Participants complete a digital maze in which the music progressively adapts in response to their selected path, but the adaptation method is not explicitly revealed to the user. Actions that bring a user closer to or further from finishing the maze trigger opposing adaptations, though it is left to the user to observe and interpret these adaptations correctly. The adaptation methods include tempo, dynamics, pitch, and layering or texture. By analyzing quantitative data tracked during gameplay and interviewing participants about their experience, I identify which of these auditory changes are most easily perceived by and influential to players. I also discuss emotional responses associated with changes in certain auditory factors. Findings from this work may inform the development of software with effective and meaningful auditory elements for users.
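One way to picture the four adaptation methods is as mappings from maze progress to a musical parameter; the baselines and ranges below are invented for illustration and are not the study's actual settings (moving away from the exit would apply the opposite adaptation):

```python
def adapt_music(progress, method):
    """Toy mapping from maze progress (0 = start, 1 = exit) to a musical
    parameter, one mapping per adaptation method in the study.
    All numeric baselines and ranges are invented."""
    if method == "tempo":        # beats per minute
        return 90 + 60 * progress
    if method == "dynamics":     # normalized loudness, 0..1
        return 0.4 + 0.5 * progress
    if method == "pitch":        # semitones above the starting tonic
        return round(12 * progress)
    if method == "layering":     # number of active instrument layers
        return 1 + int(3 * progress)
    raise ValueError(f"unknown adaptation method: {method}")

for m in ("tempo", "dynamics", "pitch", "layering"):
    print(m, adapt_music(0.0, m), adapt_music(1.0, m))
```

Framing each method as a monotone function of progress makes the experimental question concrete: which of these parameter changes do players actually notice and act on.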


Analyzing Mobility Aid User Challenges and Fabricating Improved Mobility Devices
Presenter
  • Julie Zhang, Freshman, Center for Study of Capable Youth
Mentors
  • Jennifer Mankoff, Computer Science & Engineering
  • Jerry Cao, Computer Science & Engineering
Session
    Session O-3M: Computing in the Physical World: Humans, Robots, and Beyond
  • ECE 303
  • 3:30 PM to 5:00 PM

Analyzing Mobility Aid User Challenges and Fabricating Improved Mobility Devices

Currently, over 6.6 million Americans use walking canes, rollators, and forearm crutches. However, little work has been done to improve the practicality of mobility aids for users. Prior work on modifying these devices has centered on sensing and monitoring user interactions with the mobility device, without changes to the devices' core structure. Our project explores a set of mobility aid modifications spanning aesthetics, comfort, and ergonomics. We conducted over 15 qualitative interviews with mobility aid users, using phenomenological interviewing strategies to better understand user preferences and experiences and to gain feedback on possible adjustments to mobility devices. Through qualitative analysis, creating codes based on patterns observed in the interviews, we identified and compiled unique experiences among mobility aid users into a codebook. We then sought to address these observations using fabrication methods such as 3D printing, laser cutting, and soldering to modify existing mobility devices and develop prototyping materials. Subsequently, we conducted a follow-up design workshop in which users developed modifications and accessory ideas using the tools and templates we provided. Modifications considered included interactivity stickers, physical feedback mechanisms, and improved mobility aid tip designs. Ultimately, we gained feedback for modifications in future mobility aids research and produced guidelines, drawn from our experience working with mobility devices, that can improve community input in accessibility aid research. This work also contributed valuable insights into approaching mobility aid improvements from a human-computer interaction perspective.


Poster Presentation 4

3:45 PM to 5:00 PM
Building a Large Scale Nanopore Signal Classifier for the Human Proteome
Presenter
  • Hisham Bhatti, Senior, Mathematics, Computer Science
Mentors
  • Jeff Nivala, Computer Science & Engineering
  • Melissa Queen (melq@cs.washington.edu)
Session
    Poster Session 4
  • CSE
  • Easel #172
  • 3:45 PM to 5:00 PM

  • Other students mentored by Jeff Nivala (2)
Building a Large Scale Nanopore Signal Classifier for the Human Proteome

The human proteome consists of tens of thousands of proteins produced from sequences translated from the human genome. Further, each of these proteins can be modified post-translationally to create an even larger set of unique proteoforms. With such a massive catalog, a tool that could accurately and inexpensively fingerprint proteins with single-molecule resolution in real-time would have a transformative impact on biology, medicine, and healthcare. To develop such an approach, we are utilizing nanopore sensor technology. Nanopores function by electrically examining proteins at the molecular scale. As a protein molecule passes through the nanopore—a minuscule orifice in a thin membrane—it modifies the ionic current. Each protein induces a unique alteration in the current, producing a distinctive signal pattern, or 'squiggle'. These squiggles effectively act as molecular fingerprints, potentially enabling us to identify and classify different proteins based on their specific current changes as they traverse the nanopore. In this project, we are tasked with building a machine learning model to classify proteins based on their squiggle templates when passed through a nanopore. We found that a classification model based on a standard Convolutional Neural Network (CNN) performed well on simulated data of eight synthetic protein designs, but failed to generalize properly to experimental test data. In contrast, a 1-Nearest Neighbor model significantly outperformed the neural network architecture on the synthetic protein test data. We plan to assess this model's performance on fingerprinting the human proteome through a simulated dataset, with the ultimate goal of sharing our findings with other labs that specialize in developing precision medicine, targeted drugs, and technologies for understanding protein structure, function, and interaction.
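To illustrate the 1-Nearest Neighbor approach the abstract describes, the sketch below matches a noisy signal against a library of squiggle templates by Euclidean distance. This is a minimal, hypothetical illustration, not the project's actual pipeline: it assumes signals have been resampled to a common length, and the function and label names (`classify_1nn`, `protein_A`) are invented for the example.

```python
import numpy as np

def znorm(sig):
    """Z-normalize a signal so matching depends on shape, not scale or offset."""
    sig = np.asarray(sig, dtype=float)
    return (sig - sig.mean()) / (sig.std() + 1e-8)

def classify_1nn(query, templates, labels):
    """Assign the label of the nearest template under Euclidean distance.

    `templates` holds reference squiggles, one per class, all resampled
    to the same length as `query`.
    """
    q = znorm(query)
    dists = [np.linalg.norm(q - znorm(t)) for t in templates]
    return labels[int(np.argmin(dists))]

# Toy example: two synthetic "squiggle" templates and a noisy observation.
t = np.linspace(0, 1, 100)
templates = [np.sin(2 * np.pi * t), np.sign(np.sin(2 * np.pi * t))]
labels = ["protein_A", "protein_B"]
rng = np.random.default_rng(0)
noisy = np.sin(2 * np.pi * t) + 0.1 * rng.normal(size=100)
print(classify_1nn(noisy, templates, labels))  # protein_A
```

Real nanopore reads vary in duration and translocation speed, so a practical matcher would likely use an elastic distance such as dynamic time warping rather than plain Euclidean distance.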


3D Shape Design for Shadow-Based Evasion Attacks on Deep Learning Vision Models
Presenter
  • Meghan Bailey, Senior, Computer Science, Mathematics
Mentor
  • Tadayoshi Kohno, Computer Science & Engineering
Session
    Poster Session 4
  • CSE
  • Easel #171
  • 3:45 PM to 5:00 PM

3D Shape Design for Shadow-Based Evasion Attacks on Deep Learning Vision Models

As deep learning vision models become more prevalent, understanding the adversarial risk associated with them is important for maintaining safety and security. One common adversarial approach, the evasion attack, involves adding perturbations to input data such that it remains correctly classified by humans but is misclassified by machine learning models. Previous methods for physical-world evasion attacks include placing stickers, projecting artificial light sources, and casting shadows to mask the target object. The use of shadows, a naturally occurring phenomenon, is likely to remain undetected by people, and is therefore the focus of this project. Past shadow-based evasion attacks restrict the shadow design to more inconspicuous shapes, like triangles and other simple polygons. By designing a sculpture that can detract attention from the shadows it casts, this project aims to determine whether more complex shapes can be more successful at masking the target object. To compare the effectiveness of the shapes under the black-box setting, we use the same task as previous shadow-based evasion attacks, traffic sign classification, with the LISA and GTSRB datasets. To test the attack method in a simulated environment, we use SketchUp to create various sculpture designs that cast the selected 2D shapes. A model of the sculpture is then tested in a real-world setting, evaluating both general and scheduled attacks in indoor and outdoor environments. Because previous shadow-based evasion attacks are more effective when using polygons with more sides, we expect that complex shapes will result in a higher attack success rate.
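The core mechanic of a shadow-based evasion attack can be sketched digitally: darken the pixels inside a polygon and check whether a black-box classifier's prediction flips. The code below is a simplified illustration under stated assumptions, not the project's method: it handles only convex polygons via half-plane intersection, and `stub_classifier` is an invented stand-in for a real traffic-sign model, which an actual attack would query while searching over shadow shapes.

```python
import numpy as np

def cast_shadow(image, poly, darkening=0.55):
    """Darken pixels inside a convex polygon to simulate a cast shadow.

    `image` is an (H, W) grayscale array in [0, 1]; `poly` lists the
    polygon corners as (row, col) pairs in a consistent winding order.
    A convex polygon is the intersection of the half-planes along its edges.
    """
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    inside = np.ones((h, w), dtype=bool)
    n = len(poly)
    for i in range(n):
        (r1, c1), (r2, c2) = poly[i], poly[(i + 1) % n]
        # Cross product test: keep pixels on the interior side of this edge.
        cross = (r2 - r1) * (cols - c1) - (c2 - c1) * (rows - r1)
        inside &= cross >= 0
    shadowed = image.copy()
    shadowed[inside] *= darkening
    return shadowed

def stub_classifier(img):
    """Hypothetical black-box model: class depends on mean brightness."""
    return int(img.mean() < 0.95)

image = np.ones((16, 16))          # a plain bright "sign"
attacked = cast_shadow(image, [(2, 2), (12, 2), (7, 12)])
print(stub_classifier(image), stub_classifier(attacked))  # prediction flips
```

A real attack in the black-box setting would repeat this with many candidate polygons (or, as in this project, shadows cast by a 3D sculpture) and report the fraction of queries whose label changes as the attack success rate.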


Using Graph Coloring in Cardinality Estimation
Presenter
  • Diandre Miguel B Sabale, Senior, Computer Science
Mentors
  • Dan Suciu, Computer Science & Engineering
  • Kyle Deeds, Computer Science & Engineering
  • Moe Kayali, Computer Science & Engineering
Session
    Poster Session 4
  • CSE
  • Easel #176
  • 3:45 PM to 5:00 PM

Using Graph Coloring in Cardinality Estimation

Graph workloads are challenging for query optimizers in databases because of query features like larger sizes, frequent joins, and fewer filters. Traditional methods see large errors on queries with more joins, while machine learning methods tend to be complex and slower. We propose a framework to improve estimators by using graph colorings to make compact summaries of a data graph, storing important information about node relations. By modelling cardinality estimation as a subgraph matching problem, we can make use of this summary information and traverse the lifted graph to estimate the number of query graph matches. Additionally, we explore optimizations such as node summation and sampling to enable estimation even for larger queries. After evaluating various designs using this framework, we find improvements of up to 100x in cardinality estimation accuracy compared to other recent methods while still maintaining efficient runtimes and memory usage. We discovered that quasi-stable colors, where nodes of one color have similar connections to other colors, yield these improvements when used to build the summary. These findings help improve graph database performance and offer a new application for graph theory.
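The summary construction described above can be sketched with an exact color refinement (quasi-stable colorings relax this by tolerating approximately similar neighborhoods). This is an illustrative simplification, not the authors' implementation: the function names are invented, and a real estimator would also store degree bounds per color and traverse the lifted graph to answer multi-join queries.

```python
from collections import defaultdict

def refine_colors(adjacency, rounds=3):
    """Refine node colors by the multiset of neighboring colors
    (1-WL-style refinement over a dict mapping node -> neighbor list)."""
    colors = {v: 0 for v in adjacency}
    for _ in range(rounds):
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adjacency[v])))
            for v in adjacency
        }
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adjacency}
    return colors

def lifted_graph(adjacency, colors):
    """Compact summary: size of each color class and edge counts between classes."""
    sizes = defaultdict(int)
    edge_counts = defaultdict(int)
    for v in adjacency:
        sizes[colors[v]] += 1
        for u in adjacency[v]:
            edge_counts[(colors[v], colors[u])] += 1
    return dict(sizes), dict(edge_counts)

# Toy data graph: a star with center 0 and leaves 1-3.
adjacency = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
colors = refine_colors(adjacency)
sizes, edge_counts = lifted_graph(adjacency, colors)
```

On the star graph, refinement separates the center from the leaves, and the lifted graph records three edges in each direction between the two color classes; a cardinality estimate for a one-edge query pattern is then read off directly from `edge_counts` instead of scanning the data graph.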



Copyright © 2007–2026 University of Washington. Managed by the Center for Experiential Learning & Diversity, a unit of Undergraduate Academic Affairs.

The University of Washington is committed to providing access and reasonable accommodation in its services, programs, activities, education and employment for individuals with disabilities. For disability accommodations, please visit the Disability Services Office (DSO) website or contact dso@uw.edu.