Session T-8D
Math, Computer Science
3:30 PM to 4:15 PM
- Presenters
-
- Sam Chao, Junior, Geography
- Lexine Rene Kagiyama, Senior, Industrial Engineering
- Audrey Slater, Senior, Industrial Engineering
- Ryan Cheng, Junior, Industrial Engineering
- Raeleen Tedjadinata, Senior, Industrial Engineering
- Emma Leigh (Emma) Cozart, Senior, Industrial Engineering
- Kristen M. Leierzapf, Senior, Industrial Engineering
- Mentors
-
- Tom Furness, Industrial Engineering
- Nathan Dreesmann, Biobehavioral Nursing & Health Systems, University of Washington, School of Nursing
- Session
-
- 3:30 PM to 4:15 PM
Rheumatoid Arthritis (RA) is a chronic disease with no known cure. While medications are often effective at managing physical symptoms, RA patients frequently experience high levels of fatigue. Studies have found that fatigue may be managed through meditation, but little is known about virtual reality meditation’s (VRM) potential to alleviate fatigue. The purpose of this study is to examine the feasibility and acceptability of VRM as an alternative non-pharmacologic intervention for fatigue management in RA patients. This study implements a convergent mixed-methods design to collect patient feedback. Four participants diagnosed with RA were recruited from a local rheumatology clinic. Participants used a VRM headset in their own homes over the course of four consecutive weeks. During this time, Patient-Reported Outcomes Measurement Information System (PROMIS) measures of fatigue, pain, depression, anxiety, physical activity, and mood were taken at baseline and at weekly intervals. Semi-structured interviews occurred at baseline and at the conclusion of the study. Interviews were audio-recorded, transcribed, and coded using Atlas.ti (v8). The results are currently pending. We expect that participants will find VRM both feasible and acceptable for fatigue management and will report reduced fatigue levels after using the VR device. While studies have explored the use of VRM in the treatment of anxiety disorders, depression, or PTSD, this is the first study to examine VRM’s use for managing fatigue in participants with RA. Results of this study will inform future clinical trials using VRM and the implementation of VRM into clinical use, and will give a better understanding of the patient’s experience of utilizing VRM for fatigue management.
- Presenter
-
- Caleb Ellington, Senior, Bioengineering, Computer Science Levinson Emerging Scholar, Mary Gates Scholar
- Mentor
-
- Naozumi Hiranuma, Computer Science & Engineering
- Session
-
- 3:30 PM to 4:15 PM
Deep convolutional neural networks (CNNs) have seen widespread application across problems in the life sciences where probabilistic models built on simple assumptions are insufficient. One area where deep learning has seen considerable success is protein structure modeling, where a protein’s tertiary structure is predicted using physicochemical information. State-of-the-art structural prediction methods often yield high-fidelity structures, but some regions (e.g. loop regions) still pose a significant challenge. To augment low-fidelity structures, I propose a novel framework based on a conditional deep generative model for improving residue-residue contact predictions in unreliable local regions (ULRs), implemented as a residual convolutional neural network with high attention to contextual protein information. This work extends Nao Hiranuma’s DeepAccNet, developed in the Baker Lab. My network will supplement existing structural refinement protocols in regions where contacts are poorly predicted. If successful, this will greatly improve the ability of modern protein refinement protocols to recognize more difficult structural motifs.
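As a rough illustration of the residual-network idea behind such a refinement model, the sketch below applies one residual convolution to a toy contact-map patch in NumPy. The kernel, map size, and values are illustrative assumptions, not the actual DeepAccNet architecture:

```python
import numpy as np

def conv2d_same(x, kernel):
    """Naive 3x3 'same' convolution over a single 2D map (stride 1, no channels)."""
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * kernel)
    return out

def residual_block(x, kernel):
    """One residual unit: the convolution learns a correction added to its input."""
    return np.maximum(conv2d_same(x, kernel) + x, 0.0)  # ReLU(conv(x) + x)

# A toy "contact map" patch and an identity-like kernel (illustrative only).
rng = np.random.default_rng(0)
contact_map = rng.random((8, 8))
kernel = np.zeros((3, 3))
kernel[1, 1] = 0.1
refined = residual_block(contact_map, kernel)
print(refined.shape)  # (8, 8)
```

The residual connection means the network only has to learn a correction to the input map, which is what makes deep stacks of such blocks trainable.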
- Presenters
-
- Fran Herr, Junior, Mathematics
- LeGrand Jones II, Senior, Mathematics, Physics: Comprehensive Physics
- Mentors
-
- Bennet Goeckner, Mathematics
- Rowan Rowlands, Mathematics
- Session
-
- 3:30 PM to 4:15 PM
A graph is a collection of vertices and edges. In computer science, graphs are often called networks and form the basis for many data structures and search algorithms. A matching of a graph is a selection of edges that share no common endpoints. The set of all matchings of a graph forms a simplicial complex which we call the matching complex. We are interested in the relationship between a graph and its matching complex and have been exploring whether we can characterize all simplicial complexes that are matching complexes. What structure does the matching complex imply about the graph, and vice versa? We have also been interested in connections between matching complexes and well-known simplicial complexes, in particular two-dimensional Buchsbaum complexes. These have much more structure than simplicial complexes in general, so they lend themselves to interesting questions. How can we characterize all two-dimensional Buchsbaum complexes that are matching complexes? We have also developed an interest in sequences of graphs generated by taking repeated matching complexes. Understanding these sequences would allow us to categorize graphs using the matching operation. Which graphs go to the empty set after a finite number of iterated matchings? For what finite values does this occur? What common structure do these graphs have? Investigating these questions will allow us to categorize graphs and complexes using the matching operation. We hope to make connections between graphs or categories of graphs that would otherwise remain disconnected. In pursuing these questions we are seeking not only answers but also the development of tools that can be applied to further exploration.
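The definition of a matching complex can be made concrete with a short enumeration. The brute-force sketch below (an illustration, not the research code) lists all matchings of a small graph, i.e. the faces of its matching complex:

```python
from itertools import combinations

def matchings(edges):
    """All matchings (sets of pairwise disjoint edges) of a graph,
    i.e. the faces of its matching complex, including the empty face."""
    result = []
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):  # no shared endpoints
                result.append(frozenset(subset))
    return result

# Path graph P4 on vertices 0-1-2-3. Its matching complex has the empty
# face, three vertices (one per edge), and one edge {01, 23}.
P4 = [(0, 1), (1, 2), (2, 3)]
faces = matchings(P4)
print(len(faces))  # 5
```

Iterating this construction (treating each face as a vertex of a new complex) is one way to explore the repeated-matching sequences mentioned above.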
- Presenter
-
- Scarlett Hwang, Senior, Informatics: Data Science
- Mentor
-
- Ott Toomet, The Information School
- Session
-
- 3:30 PM to 4:15 PM
Optical character recognition (OCR) is a widely used method to extract text from page images. While modern software can convert high-quality images of printed text virtually flawlessly, small text, low-quality images, or background noise still cause noticeable problems. In particular, OCR software often confuses letters or letter combinations that look similar, e.g. “in” for “m” or “t” for “i”. While ordinary fuzzy string matching assumes that all characters are equally likely to be swapped, this is clearly not the case for OCR errors. We develop a method to automatically correct OCR-retrieved texts using a Bayesian approach. We proceed in two steps: first, we convert a large corpus of texts into images, add dithering noise, and convert the images back to text using the Tesseract OCR software. Thereafter we compare the original and the converted texts and tabulate the resulting character errors and character bigram errors. The error tables are converted to Bayesian error probabilities. Second, the final error correction proceeds by computing the probability that an observed word in OCR-retrieved text corresponds to a known word in a large corpus of English texts, based on the probabilities calculated in the first step. We report the performance of our algorithm as a function of font type, font size, and noise intensity. Both tools, the text conversion and error tabulation, and the final error correction, are released on GitHub. This method contributes to devising faster, more reliable, and more context-sensitive automatic analysis of printed text, such as processing large quantities of photocopies of official documents that often come in uneven quality.
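The two-step Bayesian idea can be sketched with a toy confusion table. All probabilities, words, and helper names below are illustrative assumptions, not the released tools:

```python
# Toy confusion table: P(observed string | true string) for OCR-style
# confusions like those in the abstract. Values are illustrative.
confusion = {("m", "in"): 0.05, ("i", "t"): 0.03}

def likelihood(true_word, observed):
    """Crude P(observed | true_word): exact match, one tabulated
    substitution, or a small floor probability for anything else."""
    if true_word == observed:
        return 0.9
    for (t, o), p in confusion.items():
        if true_word.replace(t, o, 1) == observed:
            return p
    return 1e-6

def correct(observed, lexicon):
    """Bayes rule, dropping the constant P(observed): pick the
    candidate maximizing prior * likelihood."""
    return max(lexicon, key=lambda w: lexicon[w] * likelihood(w, observed))

# Word priors from a (toy) corpus; "farm" misread as "farin" ("m" -> "in").
lexicon = {"farm": 1e-4, "farin": 1e-9}
print(correct("farin", lexicon))  # farm
```

The point of the first (tabulation) step is precisely to replace the hard-coded `confusion` dictionary with probabilities measured from the dithered round-trip corpus.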
- Presenter
-
- Millicent Li, Senior, Computer Science Mary Gates Scholar, NASA Space Grant Scholar
- Mentor
-
- Shwetak Patel, Computer Science & Engineering
- Session
-
- 3:30 PM to 4:15 PM
During surgeries, constant blood pressure sensing is important to counteract the possibility of hypotension, a dangerous drop in blood pressure. Although monitoring blood pressure with invasive arterial catheters can provide continuous information to the anesthesiologist, the discomfort and health risks of an invasive method limit their use to only a few high-risk surgeries. While non-invasive blood pressure cuffs do exist, they are usually uncomfortable and can only record blood pressure periodically. This motivates the need for a tool to perform continuous, non-invasive blood pressure sensing. Here, we validate the use of facial photoplethysmography (PPG) signals to accurately infer blood pressure. Using our wearable eye face mask mounted with optical sensors, we collect PPG signals while the subject is undergoing surgery. Then, we can calculate blood pressure from the PPG signals and subsequently determine the accuracy of the blood pressure measurements. To infer blood pressure from non-invasive facial PPG signals, we apply temporal deep learning techniques that can model dynamic changes in the cardiovascular system. First, we test potential filtering methods by performing peak detection on noisy PPG data to determine which filtering method cleans the signals the best. Then, we incorporate several machine learning models, including autoencoders, to compress parts of the PPG signals into more featurized components. In the final step, we test the face mask sensor data to find the root mean square error (RMSE) of the predictive model compared to that of the ground truth. We expect that it is possible to infer blood pressure from noisy sensor data, as an alternative to invasive arterial catheters.
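As a minimal sketch of the filtering-and-peak-detection step, the code below smooths a synthetic PPG-like waveform with a moving average (one of many candidate filters) and marks local maxima. The waveform and parameters are illustrative assumptions, not the study's data or filters:

```python
import math

def moving_average(signal, window=5):
    """Simple low-pass filter: one candidate denoising step among several."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def find_peaks(signal):
    """Indices where the filtered signal is a strict local maximum
    (stand-ins for heartbeats in a real PPG trace)."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1]]

# Synthetic noisy "PPG-like" waveform: a slow pulse plus fast jitter.
ppg = [math.sin(2 * math.pi * i / 20) + 0.2 * math.sin(17 * i)
       for i in range(100)]
peaks = find_peaks(moving_average(ppg))
```

Comparing peak locations before and after each candidate filter is one simple way to judge which filter cleans the signal best, as described above.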
- Presenter
-
- Thomas Serrano, Junior, Pre-Sciences
- Mentors
-
- Bryan Martin, Statistics
- Daniel Pollack, Statistics
- Session
-
- 3:30 PM to 4:15 PM
Every minute, Twitter users send hundreds of thousands of tweets, providing a rich resource of publicly available text data. Our goal is to use this data to learn from and imitate the sentence structure of specific accounts. To this end, we develop mRkov, a statistical tool that takes the username of a Twitter account, also known as a handle, as input and outputs fake tweets that mimic the linguistic style of the tweets from that handle. We built mRkov into an R software package as well as an interactive and user-friendly web tool that walks the user through the process of using our software. mRkov first scrapes tweets posted from the input Twitter handle, and then after processing the text and sentiments of the scraped tweets, generates new tweets using Markov chain simulation. Markov chains consist of a sequence of items, where each item is probabilistically sampled dependent only on the preceding item in the chain. By using non-independent sampling, the Markov chain method generates a sample that mimics the true distribution. In this application, the Markov chain is a sequence of words, and the distribution is the sentence structure of the tweets. mRkov also allows users to provide input that influences the sentiment of tweets in order to generate tweets that tend to be more “positive” or “negative” in sentiment. Tools such as mRkov help us better understand patterns of speech and writing. This has many useful applications, including identifying if multiple accounts are coming from the same source or writer, analyzing and comparing how the style and sentiment of different accounts change over time, and detecting bots or other fake accounts.
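A word-level Markov chain of the kind described can be sketched in a few lines; the corpus, seed, and function names below are illustrative and not the mRkov package itself:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Tabulate, for each word, the words that follow it in the training text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=8, seed=0):
    """Sample a word sequence: each next word depends only on the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the word never had a successor in training
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the data is big the data is noisy the model is simple"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Because followers are stored with repetition, frequent transitions are sampled proportionally more often, which is how the generated text mimics the source's sentence structure.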
- Presenter
-
- Tucker Reed Stewart, Senior, Computer Science and Systems Mary Gates Scholar
- Mentors
-
- Juhua Hu, Institute of Technology (Tacoma Campus)
- Anderson Nascimento (andclay@uw.edu)
- Session
-
- 3:30 PM to 4:15 PM
For network administration and maintenance, it is critical to anticipate when networks will receive peak volumes of traffic so that adequate resources can be allocated to service requests made to servers. In the event that sufficient resources are not allocated to servers, they can become prone to failure and security breaches. However, popular forecasting models such as ARIMA, a statistical model that forecasts a value based on a linear combination of previously observed values, and Recurrent Neural Networks forecast time series data generally, and thus fall short at predicting peak volumes in the series. In this project, we aim to study how time series decomposition can be used to improve prediction when peak volumes occur in time series. More often than not, time series are a combination of different features, which may include but are not limited to 1) trend, the general movement of the traffic volume, 2) seasonality, patterns repeated over some time period (e.g. daily or monthly), and 3) noise, the random changes in the data. Considering that the fluctuation of seasonality can be harmful for trend prediction, we apply the Fourier Transform to extract seasonalities and study how forecasting these components independently can improve both general time series forecasting and peak volume prediction.
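The Fourier-based seasonality extraction can be sketched with NumPy's FFT on a synthetic traffic series; the period, coefficients, and function name are illustrative assumptions, not the project's implementation:

```python
import numpy as np

def split_seasonality(series, keep=1):
    """Keep the `keep` strongest nonzero frequencies as the 'seasonal'
    component; the remainder approximates trend plus noise."""
    spectrum = np.fft.rfft(series)
    mags = np.abs(spectrum)
    mags[0] = 0.0                      # ignore the DC (mean) term
    top = np.argsort(mags)[-keep:]     # dominant seasonal frequencies
    seasonal_spec = np.zeros_like(spectrum)
    seasonal_spec[top] = spectrum[top]
    seasonal = np.fft.irfft(seasonal_spec, n=len(series))
    return seasonal, series - seasonal

# Synthetic traffic: a gentle linear trend plus a daily-style cycle
# with period 24 (hours), both chosen for illustration.
t = np.arange(240)
series = 0.01 * t + 3 * np.sin(2 * np.pi * t / 24)
seasonal, remainder = split_seasonality(series, keep=1)
```

Once separated, the smooth `remainder` (trend) and the periodic `seasonal` part can be forecast independently and recombined, which is the decomposition strategy described above.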
The University of Washington is committed to providing access and accommodation in its services, programs, and activities. To make a request connected to a disability or health condition contact the Office of Undergraduate Research at undergradresearch@uw.edu or the Disability Services Office at least ten days in advance.