Office of Undergraduate Research » 2025 Undergraduate Research Symposium Schedules

Found 7 projects

Poster Presentation 1

11:20 AM to 12:20 PM
Child Vocalizations and Emergent Language in the Panãra Community: An Exploratory Study
Presenter
  • Emily Kim, Senior, Psychology, Early Childhood & Family Studies UW Honors Program
Mentors
  • Naja Ferjan Ramirez, Linguistics
  • Jessamine Jeter, Linguistics
  • Myriam Lapierre, Linguistics
Session
    Poster Presentation Session 1
  • MGH Commons East
  • Easel #23
  • 11:20 AM to 12:20 PM

  • Other Linguistics mentored projects (7)
  • Other students mentored by Naja Ferjan Ramirez (2)
  • Other students mentored by Myriam Lapierre (2)
Child Vocalizations and Emergent Language in the Panãra Community: An Exploratory Study

Before a child says their first word, they begin to produce and practice the sounds they hear. Early vocalizations play a crucial role in speech development and language acquisition. However, most research on infant vocalizations focuses on children in Western, industrialized societies. This study contributes to the growing body of literature on diverse linguistic environments, specifically examining emergent sounds in the Panãra community, an Indigenous group in the Brazilian Amazon with approximately 700 speakers. Ten infants aged 2-21 months wore recording devices that captured daylong recordings of their language environment. Alongside shared ethnographic observations, I manually annotated selected 30-second audio segments for a fine-grained analysis of child vocalizations. I am currently analyzing the frequency and types of child vocalizations (e.g., vocal play, canonical babbling, variegated babbling) in infants' speech, and I plan to explore how these vocalizations may differ across the age range studied. I predict that child vocalizations will become more complex with increasing age, following pre-speech vocal development stages found broadly across cultures. My findings will contribute to a broader understanding of how language learning varies across cultural settings, how vocalization stages unfold, and what role the environment plays in language development.


Oral Presentation 1

11:30 AM to 1:10 PM
Sanmen Wu: A Study of Contrastive Voicing
Presenter
  • Em Tyutyunnyk, Senior, Asian Languages and Cultures, Chinese, Linguistics UW Honors Program
Mentors
  • Myriam Lapierre, Linguistics
  • Zev Handel, Asian Languages & Literature
  • Jessica Luo, Linguistics
Session
    Session O-1J: Archiving Narratives of Race and Change
  • MGH 284
  • 11:30 AM to 1:10 PM

  • Other Linguistics mentored projects (7)
  • Other students mentored by Myriam Lapierre (2)
Sanmen Wu: A Study of Contrastive Voicing

I am currently assisting PhD student Jessica Luo in her research on the sound system of Sanmen Wu, a language of the Wu family spoken in Southeastern China. As Jessica writes an article summarizing the sound structure of Sanmen Wu, I analyze utterances produced by speakers of the language. In my self-guided research, I focus on the sound quality of the consonants and their variations to determine their underlying pronunciation. I also connect these variations to historical sound changes from Middle Chinese, the language’s ancestor, into Sanmen Wu. I observe that Sanmen Wu speakers tend to freely alternate pronunciations of certain consonants. For example, a speaker may say 部 ‘part’ as [pu] or [bu], the latter appearing only after another spoken word. These two syllables contrast only in voicing: [p] is voiceless and [b] is voiced. I use Praat, an industry-standard speech-analysis program, to inspect spectrograms and waveforms of these consonants and verify my findings. I am also creating a set of rules that predicts this alternation. One of the conditions is as follows: words with alternating voicing in their consonants change when pronounced within a sentence (‘medially’). Eventually, I will explain these rules, and I predict the explanation is tied to the evolution of Sanmen Wu into its current stage. Because the Wu language family descends from Middle Chinese, and both rely on contrastive voicing to distinguish words, I reason that Sanmen Wu retains the underlying voicing variation that existed in Middle Chinese. As such, I attribute this variation to an inherent part of the language rather than to random circumstance. Ultimately, I intend to foster a thorough understanding of Sanmen Wu phonology and provide a foundation for further exploration of this topic.
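The medial voicing alternation described above can be sketched as a simple rewrite rule. The word list and the voiceless/voiced pairings below are illustrative assumptions, not the project's actual rule set:

```python
# A sketch of the medial voicing alternation described above: a word like
# 部 'part' surfaces as [pu] utterance-initially but as [bu] after another
# spoken word. The word list and pairings are illustrative assumptions.

VOICING_PAIRS = {"p": "b", "t": "d", "k": "g"}  # voiceless -> voiced

# Words whose initial consonant alternates (an assumed lexical property).
ALTERNATING = {"pu"}

def realize(words):
    """Return surface forms: an alternating word voices its initial
    consonant whenever it is not utterance-initial ('medially')."""
    surface = []
    for i, word in enumerate(words):
        if i > 0 and word in ALTERNATING and word[0] in VOICING_PAIRS:
            word = VOICING_PAIRS[word[0]] + word[1:]
        surface.append(word)
    return surface
```

Under this sketch, an utterance-initial `"pu"` is left alone, while the same word after another word surfaces voiced.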


Poster Presentation 2

12:30 PM to 1:30 PM
Music and Mothers: Auditory Input to Infants in Families Headed by Same-Gender Couples
Presenter
  • Daisy Niloufar Abiad, Senior, Psychology UW Honors Program
Mentor
  • Naja Ferjan Ramirez, Linguistics
Session
    Poster Presentation Session 2
  • MGH Commons West
  • Easel #3
  • 12:30 PM to 1:30 PM

  • Other Linguistics mentored projects (7)
  • Other students mentored by Naja Ferjan Ramirez (2)
Music and Mothers: Auditory Input to Infants in Families Headed by Same-Gender Couples

Language input is necessary for language development. Importantly, mothers have been shown to speak to infants more than fathers do. My study asks whether this pattern extends to the amount of music that mothers produce or play for infants. Music affects people neurologically, emotionally, and even physically, and could potentially be used alongside speech to enhance infants’ linguistic development. I am comparing the amount and type of speech and music heard by infants in mother-father families versus mother-mother families to isolate the variable of gender and gauge its association with infants’ auditory input. Daylong Language ENvironment Analysis (LENA) recorders capture everything in an infant’s naturalistic environment (at home), thereby recording how many instances of in-person and/or electronic speech or music occur and whether parents’ speaking and/or singing is directed to the infants. Undergraduate students are currently annotating LENA recordings of twenty-one mother-mother families (infants aged 3-24 months) and twenty-three mother-father families (infants aged 6-24 months) for the amount and type of speech and music present in infants’ audio environments. Annotators indicate what is heard in 100 randomly sampled 10-second segments from each daylong recording. Using independent-samples t-tests, I am analyzing differences in the average amount of music, the average amount of speech, and the type of music presented to infants of mother-mother families versus infants of mother-father families. I hypothesize that infants in mother-mother families hear significantly more speech and music than infants in mother-father families. I also hypothesize that infants of mother-mother dyads hear significantly more singing, but a comparable amount of electronic music.
If found, these results will point to gender being associated with auditory input variability, expanding the knowledge on environmental factors that influence infant language development.
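The planned group comparison can be sketched with a plain equal-variance independent-samples t statistic; in practice one would use a statistics package (e.g. `scipy.stats.ttest_ind`). The per-family counts below are invented for illustration:

```python
import math
from statistics import mean, stdev

def students_t(a, b):
    """Equal-variance independent-samples t statistic, as in the
    group comparison described above."""
    na, nb = len(a), len(b)
    # Pooled sample variance across the two groups.
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled_var * (1 / na + 1 / nb))

# Invented per-family counts of music segments (out of 100 sampled 10-second
# segments per daylong recording); not data from the study.
mother_mother = [12, 15, 9, 14, 11]
mother_father = [8, 10, 7, 9, 6]
t = students_t(mother_mother, mother_father)  # positive if first group hears more
```

A positive t here would be consistent with the hypothesis that mother-mother families provide more musical input; significance would then be judged against the t distribution with the appropriate degrees of freedom.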


Oral Presentation 2

1:30 PM to 3:10 PM
Transcribing in Context: Evaluating Biases in English Phoneme Transcription
Presenters
  • Aruna Srivastava, Senior, Computer Science
  • Alexander Le (Alex) Metzger, Senior, Mathematics, Computer Science
  • Ruslan Mukhamedvaleev, Junior, Computer Science, University of Washington
Mentors
  • Jian Zhu, Linguistics, University of British Columbia
  • S. M. Farhan Samir, Computer Science & Engineering
Session
    Session O-2P: Innovative and Interdisciplinary Uses of Data and Machine Learning
  • CSE 305
  • 1:30 PM to 3:10 PM

  • Other Linguistics mentored projects (7)
Transcribing in Context: Evaluating Biases in English Phoneme Transcription

Speech technology is often evaluated under idealized conditions that privilege certain speaker profiles: native English speakers in optimal acoustic environments. This approach overlooks the reality that English, as a global lingua franca, is spoken by billions of non-native speakers. Similarly, speakers with speech disorders face potential exclusion. Accurate phonemic transcription is crucial both for analyzing speech patterns in post-stroke aphasia and for Computer-Assisted Pronunciation Training (CAPT). We evaluate automatic phonemic transcription under realistic conditions, including varied noise levels, L2 accents, and speech variations. We find that standard models perform suboptimally under realistic conditions, and that applying vocabulary refinement and data augmentation improves error rates by 12-28 percentage points. To demonstrate the viability of our phonemic transcription models, we develop Machine Aided Pronunciation Learning via Entertainment (MAPLE). MAPLE maintains real-time performance on consumer devices, demonstrating the practical applicability of robust, socioculturally aware phonemic transcription in educational environments.
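Error rates for phonemic transcription are typically computed as a phoneme error rate: the edit distance between the reference and hypothesized phoneme sequences, normalized by reference length. A minimal sketch with toy sequences (the abstract does not specify the exact metric, so this is an assumption):

```python
def phoneme_error_rate(ref, hyp):
    """Levenshtein edit distance between reference and hypothesized
    phoneme sequences, normalized by reference length."""
    m, n = len(ref), len(hyp)
    # d[i][j] = edit distance between ref[:i] and hyp[:j].
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n] / m

# One substitution out of three reference phonemes -> PER of 1/3.
per = phoneme_error_rate(["k", "ae", "t"], ["k", "a", "t"])
```

An improvement of "12-28 percentage points" then means the PER computed this way drops by that amount after vocabulary refinement and data augmentation.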


Poster Presentation 3

1:40 PM to 2:40 PM
Implosive Variability in Central African Languages
Presenters
  • Bella Linn Rae, Fifth Year, Linguistics
  • Amaya Haylie (Amaya) Saunders, Senior, Linguistics
  • Chloe Osborn, Junior, Linguistics
Mentor
  • Richard Wright, Linguistics
Session
    Poster Presentation Session 3
  • MGH Commons West
  • Easel #15
  • 1:40 PM to 2:40 PM

  • Other Linguistics mentored projects (7)
Implosive Variability in Central African Languages

In the study of the consonants of the world’s languages, those made with the glottis are less studied than consonants made primarily with the lungs, despite being geographically widespread. In particular, there is very little large-scale research on the acoustic (sound) variability in their production in connected speech. In the present study, we investigate the acoustic variability in the realization of implosives (consonants made by lowering the glottis while blocking air in the mouth) in online corpora of Hausa and kiSwahili. The corpora come from Common Voice, which contains recordings of speakers reading sentences. These data were downloaded for each language, then hand-corrected and annotated for implosives and their equivalents. We use these data to investigate the variability between the consonants in Hausa and kiSwahili, and we discuss this variability in the realization of the consonants. We anticipate finding important differences among implosives in these two languages and hope to apply this knowledge to other languages with implosives. This research is part of a larger effort to document the variability among consonants made with the glottis in languages all over the world.


Processing and Analysis of Panãra Field Materials
Presenter
  • Adrian Brunke, Junior, Linguistics
Mentors
  • Myriam Lapierre, Linguistics
  • Sunkulp Ananthanarayan
Session
    Poster Presentation Session 3
  • MGH Commons West
  • Easel #19
  • 1:40 PM to 2:40 PM

  • Other Linguistics mentored projects (7)
  • Other students mentored by Myriam Lapierre (2)
Processing and Analysis of Panãra Field Materials

Panãra is a Jê language spoken in the Panará Indigenous Land in the Brazilian Amazon by around 730 people. I am an undergraduate research assistant working as part of the larger Panãra Documentation Team at the University of Washington. I am in the process of transcribing, coding, and archiving field notes taken by team members during the summer of 2024. I have employed my experience with Panãra and Portuguese to resolve ambiguities in the notes and to code materials in a standardized, accessible manner. Many letters, such as ⟨b, d, g, z, l⟩, and sequences, such as ⟨-ät-⟩ or ⟨-me-⟩, are impossible under Panãra’s phonology and orthography. However, these letters may occur in the notes due to transcriber error or Portuguese loans. When I identified suspect items, I used my knowledge of Panãra to determine their status. I typed the notes into text format before transferring items into a spreadsheet. In the spreadsheet, I coded part of speech and added lexical items to the ongoing dictionary. My work is a case study in longer-term, multi-researcher documentary efforts in linguistics. Not only will the body of data I code be valuable in further analysis of the language, but the processes developed will be useful in rethinking how documentary linguistics is carried out. In particular, I emphasize the need for a coherent vision of data usage, from collection to coding. As the dictionary work moves forward, my next steps will be to give words that have not yet been checked in the field to the research team for the summer and to code the phonological, orthographic, and lexical information for each word into the FLEx database.


Oral Presentation 3

3:30 PM to 5:10 PM
Automatically Estimating Child-Directed Speech: A Reanalysis
Presenter
  • Aeddan Grace (Aeddan) Claflin, Senior, Speech & Hearing Sciences, Linguistics UW Honors Program
Mentor
  • Naja Ferjan Ramirez, Linguistics
Session
    Session O-3A: Early Childhood Development: Exploring Social, Educational and Parental Practices
  • MGH 288
  • 3:30 PM to 5:10 PM

  • Other Linguistics mentored projects (7)
  • Other students mentored by Naja Ferjan Ramirez (2)
Automatically Estimating Child-Directed Speech: A Reanalysis

In researching language development, it is important to observe a child in their natural environment instead of a lab, because this gives better insight into their daily life and development. Language ENvironment Analysis (LENA) is a recorder often used for such projects; it is worn by the child and collects up to 16 hours of audio. Although LENA creates automatic estimates of various statistics, such as the number of adult/child words and changes in speaker, other variables, such as how much speech is directed to the child (as opposed to overheard), must be manually annotated by humans, which is time-consuming and expensive. Recently, researchers developed an open-source classifier that uses LENA’s estimates to identify segments of recordings as sleep, child-directed speech (CDS), or other-directed speech (ODS) (Bang et al., 2023). If accurate, this technology could significantly speed up the annotation process, potentially expanding the scope of language interventions. My research focuses on verifying the reliability of the classifier and its validity for use in future research. I am reanalyzing a previously published dataset of daylong LENA recordings collected with infants 6-24 months of age. I processed the original LENA data through the new classifier, and I currently oversee undergraduates who manually annotate a random selection of the segments, which I then compare with the classifier’s output. My preliminary findings show that the classifier’s reliability is limited for recordings collected with the youngest infants; however, I hypothesize that reliability will be higher at older ages, since LENA’s automatic statistics are more accurate for recordings of older children. I am also investigating which other aspects of the segments affect the reliability of the classifier (such as the presence of additional children, background noise, etc.). My results will give insight into whether, and in what contexts, the classifier can be used in future research.
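Comparing manual annotations against the classifier's labels for the same segments is a chance-corrected agreement problem; one common measure is Cohen's kappa. A minimal sketch (the abstract does not name the reliability measure, and the segment labels below are invented):

```python
from collections import Counter

def cohens_kappa(manual, auto):
    """Chance-corrected agreement between human labels and classifier
    labels for the same segments: (observed - expected) / (1 - expected)."""
    assert len(manual) == len(auto)
    n = len(manual)
    observed = sum(m == a for m, a in zip(manual, auto)) / n
    # Expected agreement if both annotators labeled at random with
    # their own marginal label frequencies.
    pm, pa = Counter(manual), Counter(auto)
    expected = sum(pm[label] * pa[label] for label in pm) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented labels for six 10-second segments (not data from the study).
manual = ["CDS", "ODS", "sleep", "CDS", "ODS", "CDS"]
auto   = ["CDS", "ODS", "sleep", "ODS", "ODS", "CDS"]
kappa = cohens_kappa(manual, auto)  # 1.0 = perfect agreement, 0 = chance
```

Kappa could then be computed separately per age band (or per condition, such as noise or additional children) to localize where the classifier's reliability breaks down.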



Copyright © 2007–2025 University of Washington. Managed by the Center for Experiential Learning & Diversity, a unit of Undergraduate Academic Affairs.