Session L-1B

Computer Vision, Robotics, Virtual Reality and Computer Simulations

9:30 AM to 11:00 AM | Moderated by Narayani Choudhury


Improving Mobile User Experience Using Data Preloading Via Parallel Network Calls
Presenter
  • Dias Mashikov, Sophomore, Computer Science, Seattle Central College
Mentor
  • Arlene Ford, Computer Science & Engineering, Seattle Colleges
Session
  • 9:30 AM to 11:00 AM

Mobile applications are a large part of people's daily lives, and user experience is an essential factor in an app's success. Ads for mobile apps show smooth screen transitions, but in an actual app we see loading indicators the first time we open it. This research aims to eliminate loading delays in screen-to-screen transitions in mobile applications. Our method for achieving a smooth app experience is data preloading using parallel network calls issued right at the start of the application. A mobile app takes 2 to 3 seconds to start up; in that window, we can issue parallel network calls to load all necessary data while the app launches. Thus, by the time the app is running, we already have all the essential data to present, without just-in-time network calls. We test the validity of this method using a demo app built on Flutter, a framework for cross-platform mobile applications, with a NodeJS server providing the endpoints for the parallel network calls and tracking the time needed to execute each API call. Findings on the effectiveness of this method in improving mobile user experience are relevant to industrial mobile app development for apps with large active user bases.
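The preloading pattern itself is language-agnostic. A minimal Python sketch of the idea (the demo app uses Flutter/Dart with a NodeJS backend; the endpoint URLs here are hypothetical) issues all launch-time requests in parallel and records the elapsed time:

```python
# Launch-time data preloading via parallel network calls (illustrative sketch).
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Hypothetical API routes for the data each screen will need.
ENDPOINTS = [
    "https://example.com/api/profile",
    "https://example.com/api/feed",
    "https://example.com/api/settings",
]

def fetch(url: str) -> bytes:
    with urlopen(url, timeout=5) as resp:
        return resp.read()

start = time.perf_counter()
# Issue all requests concurrently while the app is still starting up.
with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
    preloaded = dict(zip(ENDPOINTS, pool.map(fetch, ENDPOINTS)))
elapsed = time.perf_counter() - start

# Screen transitions can now read from `preloaded` instead of issuing
# just-in-time network calls.
print(f"preloaded {len(preloaded)} responses in {elapsed:.2f} s")
```

Because the requests overlap, the total wait approaches the slowest single call rather than the sum of all calls, which is what lets the preload fit inside the 2-to-3-second launch window.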


A Software Layer for Kinematic Tracking during Commercial VR Gameplay
Presenters
  • Pranati Dani, Sophomore, Computer Science
  • Claris Winston, Junior, Computer Science
Mentors
  • Courtnie Paschall, Bioengineering
  • Maurice Montag (rmontag@uw.edu)
  • Jeffrey Herron, Neurological Surgery
Session
  • 9:30 AM to 11:00 AM

Our lab has recently implemented a virtual reality (VR) experimental platform for research with human patients undergoing invasive, intracranial neural monitoring for seizure localization. The goal of our project is to design a software interface enabling high-dimensional kinematic tracking during commercial VR gameplay. This will allow us to collect a large dataset of synchronized neural signals and continuous kinematic variables, such as hand location and angular velocity, during the dynamic, naturalistic human movements evoked by VR gameplay. That dataset can be used to support neural decoding of 3D human movement for brain-computer interface (BCI) development in labs at UW, and it can also be published as a benchmark dataset for human movement decoding for the larger community of computational neuroscientists. Currently, only 2D datasets of human movement tracking during concurrent intracranial neural recording have been provided to the larger computational community. The distribution of a 3D movement dataset with intracranial neural recording is thus vital and would facilitate many avenues of clinical, computational, and artificial neural network (ANN) modeling research. Specifically, our project is to create a software interface, called a "layer", that can record VR controller/headset positions and other tracked variables while a subject plays a commercial VR game that utilizes the industry-standard OpenXR framework. Our layer works by intercepting functions in the OpenXR framework and using them to track movement data. It includes a manifest file that holds the layer's identification, and it intercepts commands such as xrNegotiateLoaderApiLayerInterface, xrNegotiateLoaderInfo, xrCreateActionSpace, and more to track the movement. The positions and data collected are output to a .json file, which is then used in the clinical research detailed above.
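As a hedged illustration of how the layer's .json output could feed that research, the Python sketch below converts logged samples into hand speed and angular speed. The record schema assumed here (time t in seconds, position p in meters, orientation quaternion q as [w, x, y, z]) is our assumption, not necessarily the project's actual format:

```python
# Post-processing the layer's JSON log into kinematic variables (sketch).
import json
import math

def angular_speed(q1, q2, dt):
    """Angular speed (rad/s) between two unit quaternions [w, x, y, z]."""
    # The rotation angle between two orientations is 2 * acos(|q1 . q2|).
    dot = abs(sum(a * b for a, b in zip(q1, q2)))
    angle = 2.0 * math.acos(min(1.0, dot))
    return angle / dt

with open("tracking.json") as f:   # file written by the layer
    samples = json.load(f)

for prev, cur in zip(samples, samples[1:]):
    dt = cur["t"] - prev["t"]
    speed = math.dist(prev["p"], cur["p"]) / dt       # hand speed, m/s
    omega = angular_speed(prev["q"], cur["q"], dt)    # angular speed, rad/s
    print(f"t={cur['t']:.3f} s  speed={speed:.2f} m/s  omega={omega:.2f} rad/s")
```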


Privacy Harm in the Built Environment
Presenter
  • Caitlin Quirk, Senior, Global and Regional Studies, Mary Gates Scholar, UW Honors Program
Mentor
  • Jessica Beyer, Center for Studies in Demography and Ecology
Session
  • 9:30 AM to 11:00 AM

As Internet of Things (IoT) devices (networked computing technologies embedded directly in objects that interact with or sense the environment in some way) are increasingly incorporated into the built environment, the amount of data collected on users increases with them. From this arises a quantitative concern of managing data points and securing devices, as well as a societal concern of ensuring privacy. It is therefore imperative to address not only the benefits of IoT devices but also their tangible and potential harms, including privacy harms. From this baseline, I have researched the following question: how does the data collected by connected devices, both aggregate and personally identifiable, affect the appropriate flow of information and reshape conceptions of privacy? In studying this question, I have centered my methodology on analyzing the privacy risks of devices in both public and private spaces by conducting a literature review of privacy theory and policy. Preliminary results indicate that, while privacy policies may be targeted at one generic user, privacy harm does not affect populations uniformly. My research illustrates how subjective and objective privacy harms in the built environment have varying effects on individuals, as people's conceptions of privacy are not uniform to start with and are reshaped differentially by connected devices. To curb these vulnerabilities and differing effects, scholars have proposed a social contract theory of privacy: instead of placing the burden of privacy on a single actor, privacy may be seen as a social contract in which individuals and groups obtain the agency to determine how their information is communicated. In this way, the differential vulnerabilities of populations can be accounted for preemptively, reducing the risk of privacy harm in a proactive manner.


Mathematical Modeling for Computer Vision
Presenters
  • Christian Tarta, Freshman, Computer Science, Lake Wash Tech Coll
  • Nicholas Develle
  • Han Ji, Senior, Computing and Software Development, Math Education, Lake Wash Tech Coll
  • Kwan-Jie Lee
  • Alex Gale, Senior, Electrical Engineering AS-T, Lake Wash Tech Coll
Mentor
  • Narayani Choudhury, Applied & Computational Math Sciences, Physics, Lake Washington Institute of Technology, Kirkland
Session
  • 9:30 AM to 11:00 AM

Computer vision is a branch of artificial intelligence that applies mathematical methods and computing to learn from digital images and videos. Here, we apply computer vision methods to optical character recognition (OCR) and image compression. OCR has important applications in process automation, such as check clearing, digitizing text and image records for online databases, automated analysis of surveillance video for security, and automated reading of license plates in parking lots. But how can we feed visual information to a computer in a form that it can understand and operate on? To this end, we digitized images into vectorized arrays and analyzed the data using vector and scalar projections. Further, we applied algorithms with foundations in linear algebra and wrote programs using Python scientific libraries for optical character recognition and image compression. Using IPython, we characterized color and grayscale images as arrays and implemented singular value decomposition (SVD) and principal component analysis (PCA) for grayscale and color image compression studies and OCR. These studies illustrate how mathematical transformations and data reduction methods can be used for optical character recognition, image compression, identification, and encryption. This project elucidates the key role of mathematical modeling in computer vision applications.
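A minimal sketch of the SVD-based compression idea, using NumPy in place of the authors' exact IPython workflow (the random array stands in for a digitized grayscale image):

```python
# Grayscale image compression via truncated SVD (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))   # stand-in for a digitized grayscale image

U, s, Vt = np.linalg.svd(image, full_matrices=False)
k = 20                           # keep only the 20 largest singular values
compressed = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Storage drops from 256*256 values to k*(2*256 + 1); the reconstruction
# error is governed by the discarded singular values.
err = np.linalg.norm(image - compressed) / np.linalg.norm(image)
print(f"rank-{k} relative error: {err:.3f}")
```

Unlike random noise, a real image has rapidly decaying singular values, which is what makes a low-rank reconstruction visually faithful at high compression ratios.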


Modeling and Simulation of Resistive Random Access Memories Using High-Performance Computing Tools
Presenter
  • Simon Cao, Senior, Electrical Engineering, Mary Gates Scholar
Mentor
  • Anant M.P. Anantram, Electrical & Computer Engineering
Session
  • 9:30 AM to 11:00 AM

Resistive random-access memory (RRAM) is a promising candidate for next-generation nonvolatile memory (NVM). It is considered one of the standout emerging memory technologies due to its potential for high storage density, fast access speed, low power consumption, and low cost. If the technology were well understood, it could significantly change the memory industry and trigger further advances in computing and, thus, across the sciences. One of the most widely discussed applications of resistive memories is neuromorphic computing, which uses very-large-scale integrated circuits to mimic neurological architectures. The human brain computes with extremely low energy consumption because neurons and synapses combine storage and computation. However, the scalability of current semiconductor technology is a bottleneck, as the synapse density found in neurological systems is hard to achieve by conventional means. Resistive random-access memories could ease this problem by providing the scalability and computing capacity needed to mimic neuronal and synaptic functions, pushing important technological advances in autonomous systems that learn and interact with the environment in real time. The project's main focus is the theoretical modeling and simulation of the properties and behaviors of the conductive filament in resistive random-access memories using high-performance computing (HPC) resources at the University of Washington. Further simulations will be performed to better understand the physical principles behind two configurations: unipolar RRAM, in which the voltages to set and reset the memory cell have the same polarity, and bipolar RRAM, in which they have opposite polarities. While some of the theoretical models are still incomplete, these simulations may help explain the phenomenon and shed light on future modeling.
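As a toy illustration of that polarity distinction (not the project's physical filament model), a memory cell can be treated as switching between a high-resistance state (HRS) and a low-resistance state (LRS); the threshold voltages below are arbitrary:

```python
# Toy unipolar vs. bipolar switching rule (illustrative only).
def step(state: str, v: float, mode: str,
         v_set: float = 1.5, v_reset: float = 0.8) -> str:
    if mode == "unipolar":
        # Only the magnitude of the applied voltage matters.
        if state == "HRS" and abs(v) >= v_set:
            return "LRS"   # set: conductive filament forms
        if state == "LRS" and abs(v) >= v_reset:
            return "HRS"   # reset: filament ruptures at the same polarity
    else:  # bipolar
        # Set and reset require opposite polarities.
        if state == "HRS" and v >= v_set:
            return "LRS"
        if state == "LRS" and v <= -v_reset:
            return "HRS"
    return state

print(step("HRS", 1.6, "bipolar"))    # LRS: positive bias sets the cell
print(step("LRS", 1.6, "bipolar"))    # LRS: positive bias cannot reset it
print(step("LRS", -0.9, "bipolar"))   # HRS: reversed polarity resets
print(step("LRS", 0.9, "unipolar"))   # HRS: same polarity resets
```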


Improved Pure Pursuit Algorithm for Surface Robot Movement
Presenter
  • Alex Gale, Senior, Electrical Engineering AS-T, Lake Wash Tech Coll
Mentors
  • Michelle Judy, Mathematics, Lake Washington Institute of Technology
  • Narayani Choudhury, Mathematics, Science Technology Engineering and Mathematics, Lake Washington Institute of Technology, Kirkland
Session
  • 9:30 AM to 11:00 AM

Motion algorithms are foundational for effective autonomous robot movement. For surface robotics, one particularly useful algorithm is pure pursuit, in which a robot follows a point along a path that stays a constant distance ahead of the robot. This work aims to improve the pure pursuit motion algorithm to account for differences in a robot's features by implementing closed-loop full state feedback (FSF) control. In addition, this project aims to extend the pure pursuit algorithm's capabilities, such as specifying the angle at each point, allowing for moving points, and ensuring fast and efficient movements. The additions are made by modifying the calculations or control loop, with simulations used to verify their effectiveness. So far, this work has shown promise by enabling intricate yet effective movements. As a whole, the role of this research is to make pure pursuit more useful and effective for any robot operating on a surface.
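For reference, a minimal Python sketch of the basic pure pursuit step that this work builds on (the path, pose, and lookahead values are illustrative, and the FSF extensions described above are not included):

```python
# Basic pure pursuit: steer toward a point a fixed distance ahead on the path.
import math

def pure_pursuit_step(x, y, heading, path, lookahead):
    # Pick the first path point at least `lookahead` away from the robot.
    target = next((p for p in path
                   if math.hypot(p[0] - x, p[1] - y) >= lookahead),
                  path[-1])
    # Express the target in the robot's frame.
    dx, dy = target[0] - x, target[1] - y
    local_y = -math.sin(heading) * dx + math.cos(heading) * dy
    # Curvature of the arc joining the robot to the target:
    # kappa = 2 * y_local / L^2.
    return 2.0 * local_y / lookahead ** 2

path = [(0.1 * i, math.sin(0.1 * i)) for i in range(100)]
kappa = pure_pursuit_step(0.0, 0.0, 0.0, path, lookahead=0.5)
print(f"commanded curvature: {kappa:.3f} 1/m")
```

The commanded curvature maps directly to steering angle (Ackermann) or wheel-speed difference (differential drive), which is why the same step works for many surface robots.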


A High-Resolution Small-Scale Polarizing Rotary Encoder Design With Applications to Robotics and Nanotechnology
Presenter
  • Isaias Ramos-Gunn, Non-Matriculated, Electrical Engineering, Edmonds Community College
Mentor
  • Tom Fleming, Physics, Edmonds College
Session
  • 9:30 AM to 11:00 AM

Rotary encoders are present in many electronic devices and are used to measure changes in rotation. The common photo-type encoder has remained largely unchanged in its fundamental operation and faces resolution limitations at small sizes. This research explores an encoder design utilizing polarizers that can achieve higher resolutions than similarly sized photo encoders. Current photo encoders take their rotational measurements through a rotating disk with perforations located along its circumference. As the disk rotates, a photodiode receives pulses of light through the perforations from a light source. The resulting signals consist of a multitude of digital pulses that require compiling and processing before position can be determined. To achieve higher resolutions, more perforations and more photodiodes are required. High-resolution encoders can therefore become very expensive as the perforation count increases, and the number of perforations, and therefore the resolution, is limited at smaller sizes. However, there is a way to inexpensively achieve indefinite resolution and absolute position at sizes that are unfeasible with current encoder technologies. Such an encoder utilizes an initially polarized light source, followed by an analyzer polarizer and then a photodiode. As the analyzer is rotated, the photodiode receives a gradually increasing, then decreasing, brightness (translating into current). Rotating the analyzer continuously produces a sinusoidal current signal, with each point corresponding to a position of the analyzer. In this research, this configuration has been adapted to create a small-scale, high-resolution rotary encoder. This polarizer encoder measures only ten millimeters in diameter and achieves greater resolution than current similarly sized rotary encoders.
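The principle at work is Malus's law, I = I0 cos²(θ): the analyzer angle can be recovered directly from a single photodiode reading, up to the 180-degree symmetry of cos². A short Python sketch with illustrative numbers:

```python
# Recovering analyzer angle from photodiode current via Malus's law (sketch).
import math

I0 = 1.0e-3   # peak photodiode current in amperes (illustrative)

def current(theta_deg: float) -> float:
    """Photodiode current at analyzer angle theta: I = I0 * cos^2(theta)."""
    return I0 * math.cos(math.radians(theta_deg)) ** 2

def angle_from_current(i: float) -> float:
    """Invert Malus's law; unique only within the cos^2 symmetry."""
    return math.degrees(math.acos(math.sqrt(i / I0)))

for theta in (0, 30, 60, 89):
    i = current(theta)
    print(f"theta={theta:3d} deg -> I={i:.2e} A -> recovered "
          f"{angle_from_current(i):.1f} deg")
```

Because the current varies continuously with angle, the resolution is set by how finely the current can be measured rather than by a perforation count, which is what decouples resolution from disk size.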


Insect Robotics in Space: Trajectory and Landing | World’s First Insect-Sized Robot without a Power/Control Wire
Presenter
  • Merrill Keating, Sophomore, Pre-Major, NASA Space Grant Scholar
Mentor
  • Sawyer Fuller, Mechanical Engineering, U Washington
Session
  • 9:30 AM to 11:00 AM

Insects have capabilities superior to those of contemporary robots: increased mobility, redundancy, and coverage area, and the ability to utilize different sensors. Perhaps most importantly, the reduced mass of insect-sized robots can make launch costs dramatically lower. The goal of my research project was to investigate how to land insect-sized flying robots on Mars: to create a simulation computing the pathway and logistics of an insect robot landing, to linearize the data and arrays involved, and to learn more about insect robots and spaceflight. I first reviewed existing information on spacecraft transiting from Earth to Mars, including the need to protect insect-sized robots from the space environment and radiation while in transit. Of great interest are the potentially different de-orbiting and landing scenarios afforded by a lower-mass spacecraft, which still travels several times faster than a rifle bullet during initial deorbiting. My research speculated that simpler landing strategies, like those of Spirit and Opportunity rather than Perseverance, might be employed for the carrier spacecraft; once on the surface, the carrier could deploy a small rover to act as a home base supplying power and communications to the flying insect-sized robots, greatly extending the range of science data collection. My research captured the general characteristics of insect robotics and, using a Python program I created, simulated reentry paths and maximum heating rates, which were still high, as expected. My next step is to test different ballistic coefficients to see whether a direct-from-deorbit landing of a small payload is possible. The broader implication is the potential to deliver many tiny distributed sensors on Mars, dramatically improving our understanding of the planet at lower cost.
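As a hedged sketch of what such a reentry simulation can look like (the abstract does not name its model; this one assumes an exponential Mars atmosphere and the Sutton-Graves convective-heating correlation, with illustrative vehicle numbers, and neglects gravity and lift):

```python
# Crude Mars ballistic-entry march with a stagnation-point heating estimate.
import math

RHO0, H_SCALE = 0.020, 11100.0   # Mars surface density (kg/m^3), scale height (m)
K_MARS = 1.9027e-4               # Sutton-Graves constant for Mars (SI inputs)

def density(h: float) -> float:
    return RHO0 * math.exp(-h / H_SCALE)

def heating(v: float, h: float, nose_radius: float) -> float:
    """Stagnation-point convective heating estimate, W/m^2."""
    return K_MARS * math.sqrt(density(h) / nose_radius) * v ** 3

# Constant flight-path-angle march down from the entry interface.
v, h, gamma, dt = 5800.0, 125000.0, math.radians(-15.0), 0.1
beta = 20.0                      # ballistic coefficient m/(Cd*A), kg/m^2 (small payload)
q_max = 0.0
while h > 0.0 and v > 100.0:
    v -= density(h) * v ** 2 / (2.0 * beta) * dt   # deceleration from drag
    h += v * math.sin(gamma) * dt
    q_max = max(q_max, heating(v, h, nose_radius=0.05))

print(f"peak heating ~ {q_max / 1e4:.0f} W/cm^2")
```

Lowering the ballistic coefficient (less mass per unit of drag area) shifts deceleration to higher, thinner air, which is exactly why a small payload might tolerate a simpler direct-from-deorbit landing.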


Thermal Optimization of Computer Hardware Through Airflow & Computational Fluid Dynamics Simulation
Presenters
  • Toufic Majdalani, Sophomore, Computer Science, Mathematics, Edmonds Community College
  • Caleb Jansen, Sophomore, Computer Engineering, Edmonds Community College
Mentor
  • Tom Fleming, Physics, Edmonds College
Session
  • 9:30 AM to 11:00 AM

The thermal optimization of computer systems has been studied since their creation. Because powerful computers generate more heat, the more heat one can remove from a computer, the more powerful it can be made. Over the years, it has become popular for many companies to build computers using standardized, modular hardware. Our research questions the cooling efficiency of this standardized hardware. Initially, we are creating a 3D simulation of a desktop computer that allows us to quickly test various internal component layouts for thermal efficiency. Using a custom-programmed microcontroller for data collection, our real-world testing includes building and monitoring both a computer built in this standardized fashion and one reconfigured into other custom layouts, which, based on our simulations, we anticipate will improve the computer's cooling capabilities. By using this data in conjunction with observations from our digital 3D simulations, we hope to test potential improvements to the layout of computer components that enhance the performance of both high-end computers and everyday desktops.


The University of Washington is committed to providing access and accommodation in its services, programs, and activities. To make a request connected to a disability or health condition contact the Office of Undergraduate Research at undergradresearch@uw.edu or the Disability Services Office at least ten days in advance.