Session O-3K
From Moral Reasoning to the Cosmos: Exploring the Intersection of AI, Digital Communities, and Space Analysis
3:30 PM to 5:00 PM | MGH 238 | Moderated by Afra Mashhadi
- Presenter
-
- Jazminh (JazMinh) Diep, Senior, Computer Science & Software Engineering
- Mentor
-
- Afra Mashhadi, Computer Science & Engineering, UWB
- Session
-
- MGH 238
- 3:30 PM to 5:00 PM
Nostalgic content consists of social media posts that refer to past collective memories or events. When crisis events occur, they disrupt everyday life, and in these unprecedented times people turn to social media to express their concerns and feelings. By studying how users engage and interact on social media, we can develop new ways of understanding nostalgic longing. This research explores the nostalgic activity of tweets during crisis events. Our NLP (Natural Language Processing) classifier is a pre-trained algorithm that detects whether a tweet is nostalgic by analyzing its language and classifying it into categories. The classifier achieves 98% accuracy, which gives us confidence that the nostalgic tweets it identifies are correct when formulating an analysis. Once the nostalgic tweets are obtained from the classifier, I perform a deeper analysis of them using machine learning tools. A descriptive temporal analysis provides insight into how people react to events and how the feeling of nostalgia progresses; the pre-crisis, during-crisis, and post-crisis periods are especially significant because they reveal the progression of human behavior. Sentiment analysis is also performed on the data to understand how people feel about certain events; it is a useful method for determining whether the reminiscence at the time a tweet is posted is positive or negative. The analysis has shown that less than 1% of tweets are nostalgic and that their content tends to be more negative than positive. The content of the tweets ranges from informational to political, reminiscing about the time during the crisis. The results will help us understand human behavior and how it can be leveraged for public assistance during a crisis.
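The pipeline described above, classifying tweets as nostalgic or not before further analysis, can be illustrated with a minimal sketch. The actual project uses a pre-trained NLP classifier; the tiny TF-IDF plus logistic-regression model and the example tweets below are purely hypothetical stand-ins.

```python
# Minimal sketch of a binary nostalgia classifier.
# Hypothetical toy data; the project's real model is a pre-trained NLP classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = [
    "remember when we could all gather downtown, those were the days",
    "miss the summers before everything changed",
    "road closed on 5th ave due to flooding",
    "new testing site opens tomorrow at the stadium",
]
labels = [1, 1, 0, 0]  # 1 = nostalgic, 0 = not nostalgic

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_tweets, labels)

def is_nostalgic(tweet: str) -> bool:
    """Return True if the classifier labels the tweet as nostalgic."""
    return bool(clf.predict([tweet])[0])
```

Tweets flagged by such a classifier would then feed into the temporal and sentiment analyses described in the abstract.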
- Presenter
-
- Inkar Kapen, Senior, Computer Science & Software Engineering Mary Gates Scholar
- Mentor
-
- Afra Mashhadi, Computing & Software Systems (Bothell Campus), UWB
- Session
-
- MGH 238
- 3:30 PM to 5:00 PM
The "Missing Maps" research project targets remote, secluded communities across large regions of the globe. In this project, we train a model that leverages satellite images to find settlements, houses, and villages so that humanitarian organizations and community health systems can know about every community in an area. This work has high impact: it helps non-governmental organizations and local policymakers meet the needs of people in rural areas and plan relief efforts in cases of crisis or natural disaster. There is extensive research on training neural networks to recognize buildings in satellite images of big cities like New York or Las Vegas, but not on rural satellite images to identify remote communities. In the "Missing Maps" research project, we use ensemble methods that combine multiple machine learning models to solve the problem holistically and improve accuracy, while adapting the approach to a diverse variety of continents and areas. Some of the methods explored in this research are based on community detection using neural networks and advanced image inpainting. The models are trained using the latest datasets, such as OpenEarthMaps and OpenBuildings. This project diversifies satellite image analysis and addresses the biases in algorithms that target only urban areas.
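The ensemble idea above, combining multiple models' outputs to improve accuracy, can be sketched as averaging per-pixel building-probability maps and thresholding the result. The function name, tile size, and probability values below are hypothetical illustrations, not the project's actual implementation.

```python
# Sketch of ensembling building-detection outputs: average per-pixel
# probability maps from several models, then threshold into a binary mask.
import numpy as np

def ensemble_masks(prob_maps, threshold=0.5):
    """prob_maps: list of HxW arrays of per-pixel building probabilities."""
    mean_prob = np.mean(np.stack(prob_maps), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

# Toy example: three hypothetical model outputs for a 2x2 tile.
m1 = np.array([[0.9, 0.2], [0.1, 0.8]])
m2 = np.array([[0.8, 0.4], [0.2, 0.7]])
m3 = np.array([[0.7, 0.1], [0.3, 0.9]])
mask = ensemble_masks([m1, m2, m3])  # 1 where the ensemble agrees a building is present
```

Averaging smooths out errors that any single model makes on unfamiliar rural terrain, which is one reason ensembles generalize better across diverse regions.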
- Presenter
-
- Andrew Macpherson, Senior, Honors Liberal Arts, Computer Science, Physics, Seattle Pacific University
- Mentors
-
- Christine Chaney, English, Liberal Arts and Sciences, Seattle Pacific University
- John Lindberg (lindberg@spu.edu)
- Lisa Goodhew, Physics, Seattle Pacific University
- Dennis Vickers, Computer Science & Engineering, Seattle Pacific University
- Session
-
- MGH 238
- 3:30 PM to 5:00 PM
As the field of astrophysics continues to grow, the quantity of data to analyze is constantly expanding. With projects like the James Webb Space Telescope sending back hundreds of gigabytes of data every day, Artificial Intelligence (AI) technologies are needed to assist manual analytical techniques in processing these volumes of information. One of the most apparent tasks for AI in astrophysics is image categorization: identifying what sort of astronomical object a certain body is. If a machine could categorize these bodies in significantly less time than a person, it would free tens of thousands of human hours every year. I created a machine learning program using a Deep Neural Network (DNN), implemented in Keras and TensorFlow, capable of classifying astronomical images based on photometric data. Built from scratch, it utilizes existing labeled images to “learn” how astronomical bodies differ in appearance and assigns each a category. The value of automated classification of astronomical phenomena cannot be overstated. The DNN allows the model to find unique identifiers in images that humans often cannot spot, leading to more reliable predictions, recognizing possible discoveries in far less time, and freeing astronomers to undertake higher-cognition tasks only humans can accomplish. As the model is continuously improved, it will make increasingly accurate classifications and be of ever-growing value.
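A DNN image classifier of the kind described can be sketched in Keras as a stack of dense layers ending in a softmax over object classes. The input size, layer widths, and five-class setup below are assumptions for illustration only; the abstract does not specify the actual architecture.

```python
# Minimal sketch of a DNN image classifier in Keras/TensorFlow.
# Image size (64x64 RGB) and class count (5) are hypothetical assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 5  # e.g. star, galaxy, nebula, cluster, other (hypothetical)

model = keras.Sequential([
    layers.Input(shape=(64, 64, 3)),   # small image cutouts (assumed size)
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A forward pass on one dummy image yields a probability distribution.
probs = model.predict(np.zeros((1, 64, 64, 3)), verbose=0)
```

Training on labeled images with `model.fit` would then let the network learn the visual differences between object categories.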
- Presenter
-
- Jacob Seaman, Sophomore, Computer Science, Neuroscience, Shoreline Community College
- Mentor
-
- Lauren Bryant, UW Libraries, Shoreline Community College
- Session
-
- MGH 238
- 3:30 PM to 5:00 PM
The existential risk of being unable to control a super-intelligent agent is called the Control Problem. Philosophers argue that an intelligence explosion and the creation of a singularity are inevitable, likening them to a ticking bomb. This fear is also present in the media, with rogue robots and singularities being frequent tropes in science fiction. However, the catastrophizing of sentient computers is not new. When the computer was first invented, academics and ordinary citizens speculated that it was a precursor to supernatural thinking machines. Even in the mid-20th century, scientists believed sentient computers were right around the corner. This belief led to widespread computer phobia: the general public was afraid of what they thought were sentient gadgets and of their implications. As familiarity with computers grew, along with a redefinition of what qualifies as human intelligence, this fear dwindled, and the public came to view computers as mere tools. Once again, due to the innovation of neural networks, we are experiencing a resurgence of this phobia, reviving the belief that computers are supernatural thinking machines. This literature review will compare recent and historical philosophical arguments with current psychology and computer science. I expect to find similarities between the 1950s phobia and the present one, and logical dissonance between the application of computer science and the philosophical arguments. By confronting a potentially baseless fear, we can correct and alleviate the issues caused by irrationality and identify policies, separate from questions of sentience, that are still necessary to safeguard against non-sentient AI.
- Presenter
-
- Sravani Nanduri, Senior, Economics, Computer Science
- Mentors
-
- Yejin Choi, Computer Science & Engineering, University of Washington
- Alisa Liu, Computer Science & Engineering
- Liwei Jiang, Computer Science & Engineering
- Session
-
- MGH 238
- 3:30 PM to 5:00 PM
As AI applications become more pertinent to human society, research on computational ethics and morality becomes critical to aligning autonomous AI models with human values and ethics. The study of computational moral reasoning requires an acute understanding of nuanced moral decisions in the context of human societies. In this work, we tackle one perspective of moral understanding: morally analogous situations that share abstract norms (e.g., “Parking in a handicap spot if you are not disabled” and “Driving in the HOV lane if you are by yourself” share the norm “Using public resources you are not entitled to use”). We collect a set of subjective moral situations that are machine-generated or human-written, and ask the model to generate morally analogous situations for a given situation. We use OpenAI’s GPT-3 model as the source of data curation. To improve the quality of the model-generated analogous situations, we examine examples and conduct user studies to identify issues, and with these insights create improved data filters and design additional data-modification pipelines. Using these situations, we then compose multiple-choice questions that ask models to pick the least or most morally analogous situation from the options provided. We expect this multiple-choice task to be difficult and to hold up to the advancement of natural language models. It will be used to evaluate language models’ moral and social reasoning, scored by the number of multiple-choice questions answered correctly. The dataset and its annotations can also be used to create diagnostic sets of analogous situations to which moral models should be invariant, or even as a reasoning step within moral models (such as DELPHI) to make better, less biased moral judgments. We have fine-tuned the template for GPT-3 and are working on refining the dataset and designing user studies.
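The scoring scheme described, counting how many multiple-choice questions a model answers correctly, amounts to simple accuracy over gold-labeled options. The function and the example answers below are hypothetical illustrations of that metric, not the project's evaluation code.

```python
# Sketch of scoring a model on the multiple-choice moral-analogy task:
# accuracy = fraction of questions where the model picks the gold option.
def score(predictions, gold):
    """predictions, gold: lists of chosen option indices, one per question."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical example: the model answers 3 of 4 questions correctly.
acc = score([0, 2, 1, 3], [0, 2, 1, 1])  # → 0.75
```

Reporting a single accuracy number makes it easy to compare how different language models handle the same set of moral-analogy questions.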
The University of Washington is committed to providing access and accommodation in its services, programs, and activities. To make a request connected to a disability or health condition contact the Office of Undergraduate Research at undergradresearch@uw.edu or the Disability Services Office at least ten days in advance.