Office of Undergraduate Research: 2019 Undergraduate Research Symposium Schedules

Found 19 projects

Poster Presentation 1

11:00 AM to 1:00 PM
Virtual Reality in Prison
Presenter
  • Anand Selvan Sekar, Senior, Computer Engineering UW Honors Program
Mentor
  • Aditya Sankar, Computer Science & Engineering
Session
    Poster Session 1
  • MGH 241
  • Easel #134
  • 11:00 AM to 1:00 PM

Virtual Reality in Prison

Our criminal justice system has an extremely high rate of recidivism, i.e., the rate at which those who are released re-enter prison (more than half within one year, more than three-quarters within five years). The prison system is a harsh environment, ineffective at rehabilitating inmates for release in several respects, including education and mental health care. Virtual reality (VR) is an immersive technology with multifarious applications. Inmates at a local prison sought to utilize this technology to solve issues in prison. The inmates, along with UW students and the Reality Lab, collaboratively identified three application domains in which virtual reality is potentially an effective solution. These application areas are (A) hands-on education and vocational training; (B) exposure to day-to-day experiences; and (C) mental health skills and relaxation. We have currently developed prototypes for applications (B) and (C), and are in the process of designing a method to measure their efficacy. We hope that this will provide a foundation for the development of future applications and facilitate deployment in a local prison.


Oral Presentation 1

12:30 PM to 2:15 PM
microSPliT-seq: Single Cell Transcriptomic Sequencing for Bacteria
Presenter
  • Luana Paleologu, Senior, Biology (Molecular, Cellular & Developmental), Microbiology UW Honors Program
Mentors
  • Georg Seelig, Computer Science & Engineering, Electrical Engineering
  • Anna Kuchina, Electrical Engineering
Session
    Session 1C: Molecular Control of the Cell
  • 12:30 PM to 2:15 PM

microSPliT-seq: Single Cell Transcriptomic Sequencing for Bacteria

Recent studies have shown just how important microbiomes are for individual health, population health, and environmental health. Unfortunately, these studies are often limited by the costs of metagenomics. Furthermore, metagenomic data itself is limited in that it provides information on population characteristics but not on the functional contributions of members within the population. Single cell transcriptomic sequencing aims to lessen the latter issue by providing information on the gene expression of each individual cell within a sample. Even so, current single cell sequencing technologies are costly and require specialized equipment. SPLiT-seq is a single cell transcriptomic technology developed by the Seelig lab at the University of Washington that uses split-pool ligation to create uniquely barcoded cDNA for each cell using everyday laboratory bench tools and techniques, and costs only one cent per cell. Currently, SPLiT-seq is well-optimized for mammalian cells. However, using this method on bacteria requires its own set of optimized procedures given the morphological and biochemical differences between eukaryotes and prokaryotes. The aims of this project are to address these biological differences to increase the information obtained from messenger RNA and decrease the amount received from ribosomal RNA, as well as to reduce the number of cells that receive the same cDNA barcodes. By optimizing this single-cell transcriptomic technique for bacteria, future studies involving microbial communities will be able to obtain more robust information on the individuals within those populations.
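The split-pool barcoding idea above can be illustrated with a toy simulation (all parameters are hypothetical, not the lab's actual protocol): each round randomly assigns a well label, and two cells collide only if they match in every round.

```python
import random
from collections import Counter

def split_pool_barcodes(n_cells, n_wells, n_rounds, seed=0):
    """Assign each cell a combinatorial barcode: one random well label per round."""
    rng = random.Random(seed)
    return [tuple(rng.randrange(n_wells) for _ in range(n_rounds))
            for _ in range(n_cells)]

def collision_rate(barcodes):
    """Fraction of cells that share their full barcode with at least one other cell."""
    counts = Counter(barcodes)
    return sum(c for c in counts.values() if c > 1) / len(barcodes)

# Three rounds of 96 wells give 96**3 (~885,000) possible barcodes, so a
# moderate cell load keeps barcode collisions rare.
rate = collision_rate(split_pool_barcodes(n_cells=10_000, n_wells=96, n_rounds=3))
```

More rounds multiply the barcode space, which is how split-pool ligation keeps per-cell barcodes unique without per-cell handling.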


VSEPR Encoding of Peptide Structures for Predicting Binding-Affinity
Presenters
  • Jonathan Taylor (Jonathan) Francis-Landau, Junior, Mathematics
  • Ximing Lu, Junior, Computer Science (Data Science), Statistics Undergraduate Research Conference Travel Awardee
Mentors
  • Mehmet Sarikaya, Applied & Computational Math Sciences, Chemical Engineering, Computer Science & Engineering, Materials Science & Engineering, Oral Health Sciences
  • Siddharth Rath, Computational Molecular Biology, Materials Science & Engineering, Genetically Engineered Materials Science and Engineering Center
Session
    Session 1D: Frontiers in Peptide and Protein Science
  • 12:30 PM to 2:15 PM

VSEPR Encoding of Peptide Structures for Predicting Binding-Affinity

The goal of this project is to encode peptides, i.e., short amino acid sequences, in terms of smaller molecular components such as their VSEPR (Valence Shell Electron Pair Repulsion) features for training interpretable models with reasonable predictability of functionality. This enables us to go beyond the limitations imposed by treating peptides as sequences of letters, thereby enabling a generalized encoding that also works for lipids and other biomolecules of interest in comparable scenarios. Biological processes are rarely disjoint and often complicated, which lends justification to our approach. Current methods for binding affinity prediction, such as one-hot encoding, where letter-based sequences are converted to a binary representation, do not take molecular-level features into account. Combined with a neural network, such a simple encoding is good at predicting affinities of short peptides, e.g., 5-9 amino acids long, but with an increase in length from 9 to 10, the predictability suffers an exponential drop. Several alternatives have been employed in the literature, but they also suffer from the negative impact of distal effects. In the VSEPR approach, encoding peptides in terms of their component functional-group geometries enables us to encode the actual physical length, rather than the number of amino acids. This leads to an overlap between peptides of different lengths, thereby reducing the fall in predictability. In this encoding, we create 5-channel matrices, with the channels corresponding to central-atom connectivity, bond types, bond lengths, bond angles, and lone pairs, which are then fed through a deep residual neural network. The metrics used to evaluate the models are the Pearson correlation, the Spearman rank correlation coefficient, and the area under the receiver operating characteristic curve. With this technique, we were able to consistently predict binding affinities of peptides without an appreciable loss between peptides of lengths 9 and 10.
This method would allow one to create length-invariant encodings, not limited to just peptides, significantly improving the practicality of using such a model. The research is supported by the NSF/DMR-DMREF program under the Materials Genome Initiative.
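The evaluation metrics named above, Pearson and Spearman correlation, can be sketched with a minimal stdlib-only implementation (ties in the rank computation are ignored for brevity; this is an illustration, not the authors' pipeline):

```python
import math

def pearson(x, y):
    """Pearson correlation: covariance normalized by the two standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation: Pearson applied to ranks (no tie handling here)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Perfectly monotone but nonlinear data: Spearman = 1, Pearson < 1.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.0, 4.0, 9.0, 16.0, 25.0]
```

Spearman is just Pearson on ranks, which is why a monotone but nonlinear relationship scores 1.0 under Spearman while Pearson stays below 1.0.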


Mathematical Studies of Data Storage in CD-ROM and DNA
Presenters
  • Iuliia Dmitrieva, Sophomore, Engineering Physics, Lake Wash Tech Coll
  • Dylan Dean, Sophomore, Computer Engineering, Lake Wash Tech Coll
  • Taylour Mills, Sophomore, Aeronautical Engineering, Lake Wash Tech Coll
Mentor
  • Narayani Choudhury, Computer Science & Engineering, Mathematics, Physics, Lake Washington Institute of Technology, Kirkland
Session
    Session 1L: Mathematical Modeling in the Sciences
  • 12:30 PM to 2:15 PM

Mathematical Studies of Data Storage in CD-ROM and DNA

Current data storage elements have reached their threshold capabilities due to extensive data and limiting size requirements. Digital storage in DNA has aroused considerable interest as the next-generation miniaturized high-capacity storage device. Deoxyribonucleic acid (DNA) forms the genetic blueprint of life and is the primary carrier of genetic information in living cells and organisms. Data storage in DNA involves encoding digital binary data into synthesized DNA strands. Here, we employ calculus-based methods to provide a comparative study of the data storage capacities of a conventional CD-ROM and DNA. We use parametric equations to model the spiral structure of a CD-ROM and the double helix of DNA and employ calculus-based methods to study the arc length, curvature, and topological properties of DNA. The data storage densities for binary, base-3, and base-4 encodings in DNA are estimated. The calculated data storage densities are found to be in good agreement with reported estimates. Recent studies demonstrate that magnetic nano-knots can be used for data storage. The topological properties of DNA, including twists, links, and knots, thus provide additional attributes which may in the future be used for data storage.
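The parametric-equation approach above can be sketched numerically: for a circular helix r(t) = (R cos t, R sin t, ct), the arc length over t in [0, T] is sqrt(R² + c²)·T, which numeric integration recovers. The radius and pitch below are illustrative placeholders, not measured DNA parameters.

```python
import math

def helix_arc_length(R, c, T, steps=10_000):
    """Numerically integrate |r'(t)| dt for the helix r(t) = (R cos t, R sin t, c t)."""
    # |r'(t)| = sqrt(R^2 + c^2) is constant for a circular helix, but we
    # integrate numerically as one would for a general parametric curve.
    dt = T / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt  # midpoint rule
        dx, dy, dz = -R * math.sin(t), R * math.cos(t), c
        total += math.sqrt(dx * dx + dy * dy + dz * dz) * dt
    return total

R, c, T = 1.0, 0.1, 2 * math.pi          # one full turn; units are arbitrary
L_numeric = helix_arc_length(R, c, T)
L_exact = math.sqrt(R * R + c * c) * T   # closed-form arc length
```

The same integrand-based setup extends to curvature, κ = R / (R² + c²) for this helix, which is one of the quantities the study compares between the CD-ROM spiral and the DNA double helix.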


Customizable Tactile Maps for the Visually-Impaired
Presenters
  • Jerry Cao, Sophomore, Computer Science Mary Gates Scholar, UW Honors Program
  • Shriya Kurpad, Sophomore, Computer Science
  • Emily R. Warnock, Junior, Computer Science
  • Kathryn J. Lum, Junior, Computer Science
Mentors
  • Jennifer Mankoff, Computer Science & Engineering
  • Megan Hofmann, Computer Science & Engineering
Session
    Session 1M: Healthcare
  • 12:30 PM to 2:15 PM

Customizable Tactile Maps for the Visually-Impaired

This presentation summarizes a solution for helping the visually-impaired navigate new areas. While previous solutions have been relatively successful, many lacked two key features that we hope our solution addresses: affordability and customization for users with compounding disabilities. Our solution consists of two main parts: (1) a user interface created for Fusion 360, a popular 3D-modeling application, built upon an existing framework detailed in Hofmann (2018) called PARTs (Parameterized Abstractions of Reusable Things), and (2) an optimization algorithm to generate maps that are tailored to their users. Through PARTs, we developed different variations of modular map pieces (e.g., roads, buildings, and sidewalks), which increases ease of customization. After the user specifies personal information and preferences through the PARTs UI—such as the width of their finger, their physical limitations, their understanding of braille, and their desired map features—the optimization algorithm selects the best combination of features from the PARTs database for that specific user. At the end of the process, users have a model of a tactile map in Fusion 360 which can be printed with commercially available 3D printers. With 3D printers becoming more affordable, this solution is significantly less cost-prohibitive than other means of generating tactile maps, which required an initial investment upwards of a thousand dollars. Through user studies, we also test how blind users interpret these maps, which helps guide future design improvements. In this presentation, we discuss the efficacy of our solution by comparing it to previous work and detail our plans to improve the system by making the PARTs user interface more accessible and incorporating user feedback about the map itself.
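The selection step described above is not specified in detail, so the following is a purely hypothetical sketch of one way such an optimizer could work: filter parts by the user's tactile and braille constraints, then greedily pick features by utility per unit cost. All part names, fields, and thresholds here are invented for illustration.

```python
def select_parts(parts, user, budget):
    """Pick map features a user can perceive, greedily by utility per cost.

    Each entry of `parts` is a hypothetical tuple:
    (name, min_feature_mm, needs_braille, utility, cost).
    """
    feasible = [p for p in parts
                if p[1] >= user["finger_width_mm"] * 0.5   # crude tactile-resolvability proxy
                and (not p[2] or user["reads_braille"])]
    chosen, spent = [], 0.0
    for p in sorted(feasible, key=lambda p: p[3] / p[4], reverse=True):
        if spent + p[4] <= budget:
            chosen.append(p[0])
            spent += p[4]
    return chosen

parts = [("roads", 4.0, False, 10.0, 2.0),
         ("braille_labels", 6.0, True, 8.0, 3.0),
         ("buildings", 5.0, False, 6.0, 2.0),
         ("fine_texture", 1.0, False, 3.0, 1.0)]
user = {"finger_width_mm": 8.0, "reads_braille": False}
chosen = select_parts(parts, user, budget=5.0)
```

With this user, braille labels are excluded outright and the fine texture is too small to feel, so the greedy pass selects only the roads and buildings.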


Project Sidewalk: A Web-based Crowdsourcing Tool for Collecting Sidewalk Accessibility Data at Scale
Presenter
  • Aileen Zeng, Junior, Computer Science Mary Gates Scholar
Mentor
  • Jon Froehlich, Computer Science & Engineering
Session
    Session 1R: Computer Security, Privacy, Accessibility, and Graphics
  • 12:30 PM to 2:15 PM

Project Sidewalk: A Web-based Crowdsourcing Tool for Collecting Sidewalk Accessibility Data at Scale

We introduce Project Sidewalk, a new web-based tool that enables online crowdworkers to remotely label pedestrian-related accessibility problems by virtually walking through city streets in Google Street View. To train, engage, and sustain users, we apply basic game design principles such as interactive onboarding, mission-based tasks, and progress dashboards. In an 18-month deployment study, 797 online users contributed 205,385 labels and audited 2,941 miles of Washington DC streets. We compare behavioral and labeling quality differences between paid crowdworkers and volunteers, investigate the effects of label type, label severity, and majority vote on accuracy, and analyze common labeling errors. To complement these findings, we report on an interview study with three key stakeholder groups (N=14) soliciting reactions to our tool and methods. Our findings demonstrate the potential of virtually auditing urban accessibility and highlight tradeoffs between scalability and quality compared to traditional approaches.
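One of the aggregation mechanisms studied above, majority vote, can be sketched in a few lines (a simplification: the real analysis also weighs label type and severity, and the label names here are examples):

```python
from collections import Counter

def majority_vote(labels_by_location):
    """Resolve each location's crowdsourced labels to the most common one.

    Ties are broken by insertion order via Counter.most_common; a real
    pipeline would also weight votes by labeler accuracy.
    """
    return {loc: Counter(labels).most_common(1)[0][0]
            for loc, labels in labels_by_location.items()}

votes = {
    "loc_1": ["curb_ramp", "curb_ramp", "missing_ramp"],
    "loc_2": ["surface_problem", "surface_problem", "surface_problem"],
}
resolved = majority_vote(votes)
```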


Synthesizing Programs that Generate Plant Graphics
Presenter
  • Caleb Hansel (Caleb) Winston, Sophomore, Pre-Sciences
Mentor
  • Rastislav Bodik, Computer Science & Engineering
Session
    Session 1R: Computer Security, Privacy, Accessibility, and Graphics
  • 12:30 PM to 2:15 PM

Synthesizing Programs that Generate Plant Graphics

Within the domains of graphic and video game design, there is often a need for tools to quickly develop convincingly realistic models of plants. A common tool applied to this problem is L-systems, a kind of rewriting system that can be used to define rules for iteratively transforming plant models to increasingly fine detail. However, the connection between L-systems and the graphics they generate can sometimes be unintuitive. To enable more intuitive development of plant models, we propose a method for generating models of branching structures from simple specifications of a few given iterations of the model. Our approach involves encoding plant models as bracketed L-systems and applying SMT (Satisfiability Modulo Theories) solvers to solve a form of the inverse L-system problem. Iterations of growth in the form of simple vector graphics are compiled to formal constraints for an L-system that can indefinitely generate further growth iterations. A satisfying system is then found using an SMT solver. This technique allows branching structures to be conveniently developed by providing meaningful specifications.
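The forward direction of a bracketed L-system (the part the project inverts with an SMT solver) is plain parallel string rewriting, sketched here with a classic plant rule:

```python
def lsystem(axiom, rules, iterations):
    """Apply parallel rewriting rules; symbols without a rule are copied through."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# A classic bracketed plant rule: F = draw forward, +/- = turn,
# [ ] = push/pop turtle state (i.e., start/end a branch).
rules = {"F": "F[+F]F[-F]F"}
out1 = lsystem("F", rules, 1)
out2 = lsystem("F", rules, 2)
```

Each iteration replaces every `F` in parallel, so the string (and the branching structure it draws) grows geometrically while brackets stay balanced.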


Analysis of the Susceptibility of Smart Home Interfaces to End User Error
Presenter
  • Mitali Vishwesh Palekar, Senior, Computer Science UW Honors Program
Mentors
  • Franziska Roesner, Computer Science & Engineering
  • Earlence Fernandes, Computer Science & Engineering
Session
    Session 1R: Computer Security, Privacy, Accessibility, and Graphics
  • 12:30 PM to 2:15 PM

Analysis of the Susceptibility of Smart Home Interfaces to End User Error

Trigger-action platforms enable end-users to program their smart homes using simple conditional rules of the form: if condition then action. Although these rules are easy to program, subtleties in their interpretation can cause users to make errors that have consequences ranging from incorrect and undesired functionality to security and privacy violations. Based on prior work, we enumerate a set of nine error classes that users can make, and we empirically study the relationship between these classes and the interface design of eight commercially available trigger-action platforms. Particularly, we examine whether each interface prevents (e.g., via good design) or allows each class of error. Based on this analysis, we develop a framework to classify errors and extract insights that lay a foundation for the design of future trigger-action programming interfaces where certain classes of errors can be mitigated by technical means or by alerting the user to the possibility of an error. For instance, we identify that an analysis of a dataset of functionally-similar trigger-action rules could be used to predict whether certain types of error patterns are about to occur. We believe that this work is a first step towards trigger-action interface designs that significantly mitigate user error.


Greedy Face Meshing: An Efficient Meshing Algorithm for Polygon Rendering in Computer Graphics
Presenter
  • Ryan Raghav Pachauri, Senior, Computer Science
Mentor
  • Kevin Zatloukal, Computer Science & Engineering, Allen School
Session
    Session 1R: Computer Security, Privacy, Accessibility, and Graphics
  • 12:30 PM to 2:15 PM

Greedy Face Meshing: An Efficient Meshing Algorithm for Polygon Rendering in Computer Graphics

In computer graphics, a voxel (volume element) is a point in a 3D world coordinate system (i.e., the coordinate system of a virtual world). In games like Toca Blocks or Minecraft, voxels are used to store the texture of a particular terrain. Sometimes, voxels next to each other have the same texture. When voxels of homogeneous textures form polygons, rendering systems optimize memory storage by storing the polygons' vertices rather than every single voxel in the polygon. The process of choosing polygons that cover the voxels is known as meshing. We refer to these polygons as quads and the collection of quads as a mesh. Current methods for polygon meshing require too much data storage or require a drastic change in the mesh after a small change in the world coordinate system. We propose the Greedy Face Meshing (GFM) algorithm, a linear-time algorithm for meshing voxels into quads. We prove that our algorithm is within a constant factor of the optimal solution (in terms of number of quads) and can update in constant time for a single-voxel change in the world coordinate system. We also show how the GFM algorithm can be implemented using the segment tree data structure. Rendering systems can use the GFM algorithm to mesh polygons, since its storage is no worse than that of any existing algorithm and its updates take constant time.
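The published GFM algorithm is not reproduced here; the following is a simplified greedy quad-meshing sketch on a 2D boolean slice, in the same spirit of growing maximal rectangles over same-texture cells:

```python
def greedy_mesh(grid):
    """Cover all True cells of a 2D grid with axis-aligned rectangles (quads).

    Greedy strategy: at the first uncovered filled cell, grow the quad
    rightward, then downward, as far as the cells stay filled and uncovered.
    Returns quads as (row, col, height, width).
    """
    rows, cols = len(grid), len(grid[0])
    covered = [[False] * cols for _ in range(rows)]
    quads = []
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c] or covered[r][c]:
                continue
            w = 1
            while c + w < cols and grid[r][c + w] and not covered[r][c + w]:
                w += 1
            h = 1
            while (r + h < rows
                   and all(grid[r + h][c + i] and not covered[r + h][c + i]
                           for i in range(w))):
                h += 1
            for i in range(h):          # mark the quad's cells as covered
                for j in range(w):
                    covered[r + i][c + j] = True
            quads.append((r, c, h, w))
    return quads

grid = [[True, True, False],
        [True, True, False],
        [False, False, True]]
quads = greedy_mesh(grid)
```

On this grid the 2x2 block becomes a single quad and the isolated corner voxel becomes its own 1x1 quad, so storage drops from five cells to two quads.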


Poster Presentation 2

1:00 PM to 2:30 PM
A Nanopore-Based Molecular Tagging System Using DNA Barcodes
Presenter
  • Karen Zhang, Senior, Biochemistry UW Honors Program
Mentors
  • Jeff Nivala, Computer Science & Engineering
  • Katie Doroschak, Computer Science & Engineering
Session
    Poster Session 2
  • MGH 241
  • Easel #134
  • 1:00 PM to 2:30 PM

A Nanopore-Based Molecular Tagging System Using DNA Barcodes

Barcoding physical objects with molecular tags holds an advantage over traditional paper or electronic barcodes in that molecular tags are discreet, durable, and difficult to falsify. Here, we developed a DNA tagging system that labels objects to verify their authenticity and trace their origin. We chose DNA as our tagging medium due to its information storage capacity and chemical stability, allowing us to generate a wide variety of unique barcode sequences that can be read by Oxford Nanopore’s MinION sequencing device. The MinION contains an array of thousands of nanopore sensors that are capable of sequencing single strands of DNA. The nanopore sequencing process creates distinct disruptions in the ionic current through the sensors that are indicative of the DNA sequence. However, the DNA basecalling software that processes the raw ionic current is computationally expensive, making it impractical when our goal is to quickly “scan” and identify a tagged sample. Because of this, we designed our barcode sequences to generate unique current patterns that are identified using a simple classification algorithm rather than arduous basecalling. So far, we have synthesized and classified a set of 96 barcodes that can be combined in any combination to create multi-bit tags. In a given tag, each bit is defined by the presence or absence of a particular barcode, and in practice we have assembled and read tags of up to 16 bits. We have also explored increasing bit capacity by independently varying barcode lengths, which adds another dimension to the barcode space. We also tested the durability of our barcodes by drying them onto filter paper and sequencing them 24 hours later, showing that our barcodes can survive in a dehydrated state. Future experiments will aim to lengthen this duration and expose the barcodes to different environments in order to better simulate intended tagging conditions.
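The presence/absence encoding described above maps naturally onto a small decoding sketch (an 8-barcode panel is used for brevity; the real system draws from 96 barcodes, and the panel names here are hypothetical):

```python
def decode_tag(barcode_panel, detected):
    """Recover a presence/absence bit string over an ordered barcode panel."""
    detected = set(detected)
    return "".join("1" if b in detected else "0" for b in barcode_panel)

# A hypothetical 8-barcode panel; each position of the bit string is one barcode.
panel = [f"BC{i:02d}" for i in range(8)]
bits = decode_tag(panel, detected={"BC00", "BC03", "BC07"})
```

Each detected barcode sets its bit to 1, so an 8-barcode panel yields an 8-bit tag and the full 96-barcode set yields up to 96 bits per tag.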


Phased Array Wireless Power Optimization on a Planar Array of Coupled Resonators
Presenter
  • Usman M. (Usman) Khan, Sophomore, Electrical Engineering
Mentor
  • Joshua Smith, Computer Science & Engineering
Session
    Poster Session 2
  • MGH 241
  • Easel #138
  • 1:00 PM to 2:30 PM

Phased Array Wireless Power Optimization on a Planar Array of Coupled Resonators

Wireless power transfer has many applications, from powering biomedical implants to wireless sensors. For more practical use, however, several challenges must be overcome, such as a lack of efficiency and power leakage to other nearby electronics. These issues become especially difficult to tackle with a moving receiver. To combat these problems, an array of magnetically coupled coils was designed. Previous work has shown the capabilities of this system when one coil of the array is supplied with power. In this work, I explore the possible benefits of having two coils in the array driven with power instead, studying the interaction between the different coils. By adjusting parameters such as the phase relationship between the two transmitters’ signals, we aim to optimize power delivery to specific targets and simultaneously minimize leakage to other areas. I tested different configurations of the system in a series of experiments and analyzed measured data to determine which setup is most favorable. Afterwards, I evaluated the efficiency of the configuration compared to the previous single-transmitter case. This provides better insight into how the coils in the array magnetically interact with one another, which will inform future design decisions. It can also eventually lead to better solutions for delivering high power to selected targets within a given space. This can create flexible, efficient, and safe wirelessly charged electronic implants for a variety of biomedical applications, enabling further research in the field and the development of novel solutions.
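Setting aside the actual coupled-resonator physics, the phase-optimization idea can be sketched with a bare phasor model: sweep the relative phase between the two drive signals and keep the one maximizing power at a target. The coupling coefficients below are made up for illustration.

```python
import cmath
import math

def delivered_power(a1, a2, phi):
    """|a1 + a2 * e^{i*phi}|^2: power of two superposed transmitter phasors."""
    return abs(a1 + a2 * cmath.exp(1j * phi)) ** 2

def best_phase(a1, a2, steps=3600):
    """Brute-force sweep of the relative phase between the two drive signals."""
    return max((delivered_power(a1, a2, 2 * math.pi * k / steps),
                2 * math.pi * k / steps)
               for k in range(steps))

# Hypothetical complex coupling coefficients from each transmitter to the receiver.
a1 = 1.0 + 0.0j
a2 = 0.5 * cmath.exp(1j * math.pi / 3)  # transmitter 2 arrives with a built-in lag
p_max, phi_opt = best_phase(a1, a2)
```

The sweep finds the phase that cancels transmitter 2's built-in lag so the two contributions add constructively; the same sweep evaluated at a leakage location would be minimized instead.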


Detecting Post-Translational Protein Modifications Using Nanopore Sensing Technology
Presenter
  • Aerilynn Nha Chi Nguyen, Senior, Biology (Molecular, Cellular & Developmental) Undergraduate Research Conference Travel Awardee
Mentors
  • Jeff Nivala, Computer Science & Engineering
  • Nicolas Cardozo, Computer Science & Engineering, Molecular Engineering and Science
Session
    Poster Session 2
  • MGH 241
  • Easel #135
  • 1:00 PM to 2:30 PM

Detecting Post-Translational Protein Modifications Using Nanopore Sensing Technology

Nanopore sequencing is a “third-generation” sequencing approach in which a constant electric voltage is applied across a nanoscale pore and the changes in the ionic current flow through the pore are measured as single molecules such as RNA or DNA pass through it. It is our goal to expand and adapt this sensing technology to enable single-molecule proteomics. Specifically, being able to characterize protein post-translational modifications at the single-molecule level is important for quantifying protein complexity and understanding how different protein mod-forms contribute to cellular processes such as differentiation and the progression of disease states like cancer. In this project, we modified a model protein to contain a protein kinase A phosphorylation motif with the purpose of demonstrating the ability to discriminate the modified protein from the unmodified with the Oxford Nanopore MinION, a high-throughput nanopore sequencing device. We hypothesize that the observed ionic current pattern will change upon phosphorylation and enable direct quantification of modified peptides. Ultimately, these analyses will inform us of the general ionic current signature that phosphorylated residues generate, which can then be added to our growing library of nanopore signal signatures that are informative of protein sequence and structure at the single-molecule level.


Oral Presentation 2

3:30 PM to 5:15 PM
Adversarial Language Generation with MCTS
Presenter
  • Jize (Tony) Cao, Junior, Computer Science (Data Science), Statistics
Mentors
  • William Agnew, Computer Science & Engineering
  • Pedro Domingos, Computer Science & Engineering
Session
    Session 2B: Machine Learning
  • 3:30 PM to 5:15 PM

Adversarial Language Generation with MCTS

Natural language generation (NLG) aims to generate meaningful and coherent natural language from a machine-representation system. There have been many approaches casting this task as a reinforcement learning (RL) problem. The NLG RL paradigm usually has a generative model that produces a response for a given query and a discriminator model that distinguishes between human-generated dialogues and machine-generated ones, analogous to the human evaluator in the Turing test. Previous research shows that such adversarially trained generators can generate higher-quality sentences than baseline supervised generators. However, the current state of the art focuses on how to fine-tune the generator using the discriminator, rarely incorporating the discriminator into the final language generation. Formally, we define NLG as a planning problem. The agent (model) tries to generate responses given a prior query. Each action in the plan represents an intermediate state of generating a response. At each state, the agent decides which word to generate next. The main issue in this paradigm is the enormous number of possible actions at each step, equal to the total number of words in the vocabulary. To solve this issue, we incorporate an idea from AlphaGo, another great success in RL. AlphaGo uses the Monte Carlo Tree Search (MCTS) algorithm to reduce the search space and avoid the intractable computations that the enormous search space in Go would otherwise require. Inspired by the current NLG RL paradigm and AlphaGo’s success, we propose to incorporate MCTS into NLG RL. The agent estimates initial word values using the generator’s probability distribution and picks the next words through Upper Confidence Bounds for Trees (UCT), a widely used MCTS planning algorithm. Our results approach the current state of the art. My main responsibility was implementing the paradigm and seeking ways to improve it. This work’s main contribution is presenting an effective way to incorporate the discriminator into NLG.
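The UCT rule mentioned above can be sketched independently of any language model: each candidate next word is scored by its average value plus an exploration bonus (the word list and visit statistics below are invented for illustration):

```python
import math

def uct_select(children, c_explore=1.4):
    """Pick the child maximizing UCT = Q/N + c * sqrt(ln(N_parent) / N).

    `children` maps a candidate next word to (total_value, visit_count);
    unvisited children score infinity, so they are explored first.
    """
    n_parent = sum(n for _, n in children.values())
    def score(item):
        word, (q, n) = item
        if n == 0:
            return float("inf")
        return q / n + c_explore * math.sqrt(math.log(n_parent) / n)
    return max(children.items(), key=score)[0]

children = {"the": (6.0, 10), "a": (3.0, 4), "cat": (0.0, 0)}
first = uct_select(children)        # the unvisited word is tried first
children["cat"] = (0.2, 5)          # ...and turns out to have low value
second = uct_select(children)       # so exploitation now favors "a"
```

The bonus term shrinks as a word accumulates visits, which is how UCT trades exploration against the generator's value estimates without enumerating the whole vocabulary.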


A More Biologically Accurate Artificial Neural Network to Learn Environment Models for Reinforcement Learning
Presenter
  • Vinny Murugappan Palaniappan, Senior, Neurobiology, Computer Science UW Honors Program
Mentor
  • Rajesh Rao, Computer Science & Engineering
Session
    Session 2B: Machine Learning
  • 3:30 PM to 5:15 PM

A More Biologically Accurate Artificial Neural Network to Learn Environment Models for Reinforcement Learning

Current artificial neural networks (ANNs) use an archaic view of neurons based on an oversimplification of their biological computations. This has allowed optimized computation through GPUs, leading to the widespread adoption of ANNs in deep learning, but it loses important biological features. In this research we create a new model for artificial neural networks that incorporates more realistic aspects of biological neural networks, such as stochastic vesicle release in neuronal synapses and dendritic computation. It has been shown that animals learn models of the environment when introduced to a new situation, but this type of learning is often not incorporated into reinforcement learning models in AI. The goal is to have the new, biologically realistic ANN learn models of environments in simulation frameworks like OpenAI Gym and AI2Thor so that, given past frames/images and an action taken by the agent/player, the network can predict how the environment will react over time. We compare the performance of this network with that of traditional ANNs (e.g., recurrent neural networks with long short-term memory) to demonstrate the capabilities of the new network. Our results have implications for recent efforts to move toward biologically inspired models of learning in the fields of artificial intelligence and computer vision in robotics. We expect the model-learning algorithms we present to more efficiently learn an environment and select actions to achieve arbitrary goals within that environment. This is different from traditional reinforcement learning models, which aim to complete a single goal and can take a long time to train. The novelty of this work is the increased biological realism without the computational complexity of simulating real neurons, the temporal aspect of neural processing in addition to the spatial aspect, and prediction based on actions instead of pure video prediction.


OsteoApp: Towards Ubiquitous Osteoporosis Screening
Presenter
  • Parker Scott (Parker) Ruth, Senior, Bioengineering, Computer Engineering Mary Gates Scholar, UW Honors Program, Washington Research Foundation Fellow
Mentors
  • Shwetak Patel, Computer Science & Engineering
  • Edward Wang, Electrical Engineering
Session
    Session 2H: Medical Imaging and Devices
  • 3:30 PM to 5:15 PM

OsteoApp: Towards Ubiquitous Osteoporosis Screening

Osteoporosis — a condition characterized by abnormally low bone density — primarily afflicts women over the age of 65 and is estimated to cause almost 9 million annual fractures worldwide. The current gold standard for clinical osteoporosis screening is dual-energy x-ray absorptiometry (DEXA), which can be used in combination with demographic metrics to estimate an individual’s likelihood of fracturing a bone. Early detection of osteoporosis enables preventative dietary, lifestyle, and pharmaceutical interventions to improve patient outcomes. However, DEXA requires access to expensive equipment and specialized facilities. This motivates the need for an inexpensive and ubiquitous osteoporosis screening technology that brings access to osteoporosis screening to individuals in low-resource settings. In this work, I designed, implemented, and evaluated a smartphone application called OsteoApp that attempts to infer bone density indirectly by measuring the resonant properties of bone. Using my smartphone application prototype in parallel with a custom hardware setup, I collected data from retirement community members with known DEXA scan results as well as from a control group of University of Washington students. I analyzed these data to evaluate the feasibility of a smartphone-based osteoporosis screening solution.


Medical Imaging for Realtime Diagnosis on Magic Leap One
Presenters
  • Paul Yoo, Junior, Applied & Computational Mathematical Sciences (Discrete Mathematics & Algorithms)
  • Yingru (Alan) Feng, Senior, Computer Science
Mentor
  • Aditya Sankar, Computer Science & Engineering, Mechanical Engineering
Session
    Session 2H: Medical Imaging and Devices
  • 3:30 PM to 5:15 PM

Medical Imaging for Realtime Diagnosis on Magic Leap One

Medical imaging techniques such as X-ray provide clinicians with extensive information on a patient’s disease or condition. However, clinicians have to look away from the subject to refer to medical images, thereby losing track of their work. Thus, clinicians usually study the images prior to surgery and limit the time spent referring to images during surgery. Furthermore, unlike X-ray, novel imaging methods (such as optical ultrasound) are not taught in medical schools, so untrained clinicians face challenges in interpreting the images. These two limitations restrict clinicians’ ability to fully utilize and adopt advanced medical imaging techniques. In this work, we explore the possibility of using Augmented and Virtual Reality (AR/VR) in the context of medical imaging. Prior applications of AR/VR technology in medicine have been limited to AR-aided training for medical students, telepresence for interaction, and remote therapy. We aim to use AR as a real-time diagnostic and therapeutic tool by augmenting the clinician’s live view with various imaging modalities (such as X-ray, optical ultrasound, and near-infrared). We hypothesize that providing these images in context, and in some cases aligned with the subject, will improve the interpretation of images, resulting in better guidance for diagnosis or surgery. To test this, we are creating an AR-based medical imaging/analysis application that uses techniques such as volumetric rendering and real-time image registration to augment the clinician’s view. Furthermore, clinicians can interact with the images by filtering, slicing, and reducing dimensionality in order to better understand the images and thereby the underlying disease/condition.


Poster Presentation 3

2:30 PM to 4:00 PM
Reducing Tag Identification Time in a Molecular Tagging System
Presenter
  • Aishwarya Mandyam, Senior, Computer Science, Philosophy
Mentors
  • Katie Doroschak, Computer Science & Engineering
  • Luis Ceze, Computer Science & Engineering
  • Jeff Nivala, Computer Science & Engineering
Session
    Poster Session 3
  • MGH 241
  • Easel #127
  • 2:30 PM to 4:00 PM

Reducing Tag Identification Time in a Molecular Tagging System

Labeling objects with DNA-based tags can provide a secure, difficult-to-fake identifier that is particularly useful for objects of high value or those that cannot be physically tagged. In this problem setup, a tag is a bit string, where each bit represents the presence or absence of a DNA strand containing a particular barcode. Our goal is to consistently and accurately identify the tag. These DNA barcodes were designed for use on a MinION nanopore sequencer, which outputs a time-series signal corresponding to the DNA sequence. Ideally, each barcode should generate a dissimilar signal, which makes it easier to distinguish from other barcodes. We designed 96 barcodes that are signal-orthogonal (i.e., the signal output from the MinION was as dissimilar as possible) and detected them using signal processing algorithms. Using this system, I created an error analysis pipeline to ensure that we can identify tags both quickly and accurately. To optimize the time it takes to identify a tag, it was important to minimize the number of sequencing reads we needed to observe on the MinION without sacrificing accuracy. I found that using a subset of the reads produced approximately the same error rate as a full run. Therefore, we can run the MinION for a shorter amount of time and still identify tags at an error rate similar to that of a longer run.
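The read-subsampling result can be illustrated with a toy model: each sequencing read observes one barcode position (sometimes erroneously), and each bit of the tag is called by majority vote over the reads that saw it. The numbers below (an 8-bit tag, a 10% per-read error rate, a 10% subsample) are hypothetical, not the project's measured values:

```python
import random

random.seed(0)

TAG = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical tag: presence/absence of each barcode
ERROR_RATE = 0.1                  # assumed per-read misclassification rate

def simulate_reads(tag, n_reads):
    """Each read reports one random barcode position, flipped with some probability."""
    reads = []
    for _ in range(n_reads):
        pos = random.randrange(len(tag))
        bit = tag[pos]
        if random.random() < ERROR_RATE:
            bit = 1 - bit
        reads.append((pos, bit))
    return reads

def call_tag(reads, n_bits):
    """Majority-vote each bit over the reads that observed it."""
    votes = [[0, 0] for _ in range(n_bits)]
    for pos, bit in reads:
        votes[pos][bit] += 1
    return [1 if ones >= zeros else 0 for zeros, ones in votes]

full = simulate_reads(TAG, 2000)
subset = full[:200]  # a 10% subset, i.e. a much shorter sequencing run
```

With enough reads per bit, the majority vote is robust, so the subset recovers the same tag as the full run; this is the intuition behind trading runtime for a similar error rate.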


Linguistic Knowledge and Transferability of Contextual Word Representations
Presenter
  • Nelson Liu, Senior, Linguistics, Computer Science Goldwater Scholar, Mary Gates Scholar, UW Honors Program, Undergraduate Research Conference Travel Awardee, Washington Research Foundation Fellow
Mentor
  • Noah Smith, Computer Science & Engineering
Session
    Poster Session 3
  • MGH 241
  • Easel #140
  • 2:30 PM to 4:00 PM

Linguistic Knowledge and Transferability of Contextual Word Representations

Contextual word representations derived from large-scale neural language models are successful across a diverse set of natural language processing (NLP) tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer LM, and BERT) with a suite of sixteen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between recurrent neural networks (RNNs) and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. However, language model pretraining on more data gives the best results.
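The probing setup described above, a linear model trained on top of frozen representations, can be sketched in miniature. Everything here is a stand-in (`fake_representation` is hypothetical; the actual study probes ELMo, OpenAI transformer LM, and BERT layers across sixteen tasks), but the key property is preserved: the representations are fixed and only the linear probe is trained.

```python
import random

random.seed(1)

# Stand-in for a frozen contextual representation: the vector for each example
# is fixed and never updated during probe training.
def fake_representation(label):
    base = [1.0, -1.0, 0.5] if label == 1 else [-1.0, 1.0, -0.5]
    return [x + random.gauss(0, 0.3) for x in base]

data = [(fake_representation(y), y) for y in [0, 1] * 50]

# A minimal linear probe trained with the perceptron rule; note that only
# (w, b) change -- the "contextualizer" outputs stay frozen.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(20):
    for x, y in data:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        if pred != y:
            sign = 1 if y == 1 else -1
            w = [wi + sign * xi for wi, xi in zip(w, x)]
            b += sign

accuracy = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
    for x, y in data
) / len(data)
```

High probe accuracy on a task is then read as evidence that the frozen representations already encode the information the task requires.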


Poster Presentation 4

4:00 PM to 6:00 PM
Wireless Sensor Module for End-Effectors
Presenter
  • Ramon Qu, Senior, Informatics
Mentor
  • Tapomayukh Bhattacharjee, Computer Science & Engineering
Session
    Poster Session 4
  • MGH 241
  • Easel #160
  • 4:00 PM to 6:00 PM

Wireless Sensor Module for End-Effectors

At the terminus of every robotic manipulator is an end-effector. Sensors mounted at the end-effector provide egocentric perception, enabling the robot to touch and see the world from a unique viewpoint. Our existing wireless perception module has been able to stream visual (RGBD) and force (haptic) data wirelessly to other devices and is integral to our autonomous feeding robot application. As our robot applications have grown in scope, the demand for more sensors with higher quality and greater frequency has increased as well. This research focuses on identifying the bottleneck in the data transmission speed of multiple sensors and on implementing task-driven data extraction and compression methods. The module compresses the sensor data with optimized processing methods, such as real-time object detection, face detection, and pressure prediction. Additionally, this project involves a hardware redesign that improves on the previous version and aims to make the mounting hardware and sensors easily exchangeable. This research uses the Nvidia Jetson TX2, which can complete more complex tasks with less computing time than the Intel Joule used in the previous version. The new embedded board mainly uses Python to run deep learning instances and uses low-level packages and hardware encoders to interact with sensors and cameras. This project also compares processing speeds across different deep learning frameworks on the Jetson infrastructure, resulting in a faster and more accurate solution. The sensor module becomes a processing node in the robot network, freeing the robot to focus on task-level computation rather than low-level perception calls.
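As a toy illustration of task-driven data reduction (the concrete pipeline on the Jetson TX2 is not specified here), one option is to transmit only the region of interest that a detector flags instead of the full frame. The frame size and ROI coordinates below are made up, and a real detector would supply the box:

```python
# Crop a sub-rectangle out of a 2D frame (a list of rows) so that only the
# region of interest, plus its offset, needs to be transmitted.
def crop_roi(frame, x, y, w, h):
    return [row[x:x + w] for row in frame[y:y + h]]

def payload_size(frame):
    """Number of single-channel pixel values in the payload."""
    return sum(len(row) for row in frame)

frame = [[0] * 640 for _ in range(480)]      # stand-in 640x480 single-channel frame
roi = crop_roi(frame, 200, 150, 64, 64)      # assumed detector output: a 64x64 box

ratio = payload_size(roi) / payload_size(frame)  # fraction of the data actually sent
```

Even before any codec is applied, cropping to a detection this size cuts the payload by roughly two orders of magnitude, which is the kind of saving that relieves a wireless transmission bottleneck.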



Copyright © 2007–2026 University of Washington. Managed by the Center for Experiential Learning & Diversity, a unit of Undergraduate Academic Affairs.

The University of Washington is committed to providing access and reasonable accommodation in its services, programs, activities, education and employment for individuals with disabilities. For disability accommodations, please visit the Disability Services Office (DSO) website or contact dso@uw.edu.