Poster Presentation 1
11:20 AM to 12:20 PM
- Presenters
- Mana Yamaguchi, Senior, Speech & Hearing Sciences
- Amanda Silber, Senior, Speech & Hearing Sciences
- Mentor
- Amy Pace, Speech & Hearing Sciences
- Session
- Poster Presentation Session 1
- MGH Balcony
- Easel #46
- 11:20 AM to 12:20 PM
Previous evidence points to the benefits of early literacy intervention and support for bilingual children. Culturally responsive practices in speech-language pathology are therefore essential for the growing bilingual population across all settings. Although there is substantial literature on the home literacy environment (i.e., the resources and practices that families use during book reading at home), most of the existing evidence comes from monolingual children. The purpose of our research is to examine the interaction between child and caregiver during shared book reading to understand parents' language use and its impact on child vocabulary and grammar. Our research questions are: 1) How much book reading do families report? 2) How much bilingual input does the parent produce? 3) How are these related to children's vocabulary and grammatical development? We collected videos of parent-child book reading in the home and measured vocabulary development with a parent-report vocabulary checklist in English and Spanish. We also collected environmental surveys containing questions about the literacy environment (e.g., the duration of book reading, how often families read, and in which language they read). We hypothesize that children who receive more bilingual input during book reading will demonstrate larger vocabularies and more complex syntax. As research assistants in the Child Language Lab, we score standardized language assessments and transcribe the book reading interactions. We have completed scoring and are in the process of analyzing the data from the parent-report instruments and the transcribed videos. This research may identify key factors in creating an enriching, supportive literacy and language environment for developing multilingual speakers. The findings can inform practice in Communication Sciences and Disorders and may improve interventions for bilingual children, especially in daycare and preschool settings.
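The third research question amounts to relating a measure of bilingual input to vocabulary scores. Below is a minimal sketch of such a correlation; the column names and values are invented for illustration and are not the lab's actual data or variable definitions.

```python
# Hypothetical sketch: correlating bilingual input during book reading
# with checklist vocabulary size. All names and numbers are illustrative.
import pandas as pd
from scipy.stats import pearsonr

data = pd.DataFrame({
    "prop_bilingual_input": [0.10, 0.35, 0.50, 0.65, 0.80],  # share of bilingual utterances
    "vocab_checklist_total": [180, 240, 310, 330, 420],      # combined English+Spanish words
})

r, p = pearsonr(data["prop_bilingual_input"], data["vocab_checklist_total"])
print(f"r = {r:.2f}, p = {p:.3f}")
```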
- Presenter
- Krista Lauren Pechacek, Senior, Speech & Hearing Sciences; Mary Gates Scholar, UW Honors Program
- Mentors
- Gabriel Cler, Speech & Hearing Sciences
- Cara Sauder, Speech & Hearing Sciences
- Session
- Poster Presentation Session 1
- HUB Lyceum
- Easel #126
- 11:20 AM to 12:20 PM
Presbyphonia, or "aging voice", is one of the most common voice concerns, with prevalence estimates ranging from 19% to 40% in the older adult (60+) population. Common symptoms of presbyphonia are a strained, weak voice and reduced loudness, which cause communication difficulties that negatively affect mental and social wellbeing. A key factor contributing to presbyphonia is vocal fold atrophy, the deterioration of the muscle and tissue in the vocal folds, which causes weakness and incomplete vocal fold closure during voice production. Treatment usually consists of voice therapy with a speech-language pathologist (SLP) or vocal fold injections from an otolaryngologist to bulk up vocal fold volume. Currently, the standard method for viewing the vocal folds to assess post-treatment change in vocal function is endoscopy. However, endoscopic examination of the vocal folds relies on perceptual assessment in two dimensions, restricting analysis of vocal fold volume. In this research project, I am using magnetic resonance imaging (MRI) to view the vocal folds in 3D, allowing for a complete analysis of volume in mm³. My goal is to use this novel method to provide new information on which treatments create the best outcomes for patients in terms of vocal fold volume and voice quality. Participants undergo a comprehensive voice assessment conducted by me and a mentor SLP, an initial MRI scan, and then complete one of two treatment pathways: either 4-6 weeks of voice therapy or vocal fold injections. After treatment, they return for another voice assessment and MRI to evaluate the effects. This research is ongoing, but I anticipate that both treatment groups will experience improvement in voice outcomes, consistent with the literature. However, it is unknown whether vocal fold volume will increase in both treatment groups, as MRI has not been used to assess post-treatment outcomes in this patient population.
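Once the vocal folds are segmented in a 3D scan, volume in mm³ reduces to counting segmented voxels and multiplying by the volume of one voxel. A minimal sketch under assumed voxel spacing follows; the mask and resolution are invented, not the study's imaging parameters.

```python
# Illustrative sketch: vocal fold volume from a binary MRI segmentation.
# The mask and voxel spacing are made up for demonstration.
import numpy as np

voxel_size_mm = (0.5, 0.5, 0.5)            # (dx, dy, dz) scan resolution, assumed
mask = np.zeros((64, 64, 32), dtype=bool)  # stand-in for a real segmentation
mask[20:40, 25:35, 10:20] = True           # fake segmented "vocal fold" region

voxel_volume_mm3 = np.prod(voxel_size_mm)  # 0.125 mm^3 per voxel
volume_mm3 = mask.sum() * voxel_volume_mm3
print(f"Segmented volume: {volume_mm3:.1f} mm^3")
```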
- Presenter
- Caitrin Kerr, Senior, Speech & Hearing Sciences; Mary Gates Scholar, UW Honors Program
- Mentor
- Gabriel Cler, Psychology, Speech & Hearing Sciences
- Session
- Poster Presentation Session 1
- MGH 258
- Easel #80
- 11:20 AM to 12:20 PM
Developmental language disorder (DLD) is a prevalent lifelong communication disorder that encompasses challenges in learning, understanding, and using language not attributable to other medical or environmental conditions. It is heritable, but its exact cause is unknown. Understanding why a specific population has language difficulties is essential to clinical communication support. This research aimed to establish a protocol for using a novel neuroimaging method, magnetic resonance spectroscopy (MRspec), to investigate the brains of adults with DLD. We used MRspec to measure neurotransmitter levels in regions associated with language, guided by existing functional and structural findings about the DLD brain. Adult participants were recruited via survey and identified using the Fidler test. We scanned the head of the caudate nucleus and the inferior frontal gyrus in both hemispheres. I identified metabolites in those regions and am testing their possible correlations with language skills. We expect, even with minimal data, to find lower concentrations of choline and glutamate and elevated concentrations of GABA in individuals with DLD compared to typically developing (TD) participants. Additionally, because choline is linked to memory and poorer verbal and nonverbal working memory is associated with DLD, we anticipate a lower choline level in the caudate head. GABA may be at a higher level because it is inhibitory, meaning it slows signaling in the nervous system, which may lead to difficulty processing and producing language. Conversely, because glutamate is excitatory, we expect it at lower levels. I selected our software, TARQUIN, for processing and conducted analyses on MR data throughout the project. Ongoing analysis includes a visual reference and quantitative data to compare between participants. This study is the baseline for future research exploring neurotransmitters in adults with DLD. Our results will contribute to a better understanding of why specific language difficulties exist and how clinicians can help.
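The planned comparison of metabolite concentrations between DLD and TD groups could take the form of a simple two-sample test once TARQUIN has quantified each region. A hedged sketch follows; the concentration values are invented, and Welch's t-test is one reasonable choice rather than necessarily the study's analysis.

```python
# Hypothetical sketch: comparing a metabolite concentration (e.g., choline
# in the caudate head, as quantified by TARQUIN) between DLD and TD groups.
# Values are invented; Welch's t-test tolerates unequal group variances.
from scipy.stats import ttest_ind

choline_dld = [1.1, 1.3, 1.0, 1.2]  # institutional units, illustrative
choline_td = [1.5, 1.6, 1.4, 1.7]

t, p = ttest_ind(choline_dld, choline_td, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.3f}")
```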
- Presenter
- Britney Vy Pham, Senior, Speech & Hearing Sciences; Mary Gates Scholar, UW Honors Program
- Mentor
- Christina Zhao, Speech & Hearing Sciences, Institute for Learning & Brain Sciences
- Session
- Poster Presentation Session 1
- MGH 241
- Easel #71
- 11:20 AM to 12:20 PM
Accurately describing a child's language skills is difficult, and identifying children with atypical language development adds even more complexity. In an ordinary language assessment session, a Speech-Language Pathologist (SLP) will use both standardized, norm-referenced assessments and non-standardized assessments, like Language Sample Analysis (LSA). However, there is little research on how these different assessments relate to one another. To better understand this relationship, SLPs assessed the language abilities of children (n=38) after they had turned 6 years old and attended kindergarten, using the following norm-referenced tests: the Sounds-in-Words subtest from the Goldman-Fristoe Test of Articulation, 3rd Ed. (GFTA-3); core language subtests from the Clinical Evaluation of Language Fundamentals, 5th Ed. (CELF-5); and a nonverbal IQ subtest from the Kaufman Brief Intelligence Test, 2nd Ed. (KBIT-2). Then, a 10- to 20-minute language sample of the child's spontaneous speech was collected for analysis. This project extends previous research by including participants beyond clinical populations and using multiple sampling contexts to holistically capture the child's naturalistic speech. I transcribed each language sample with Codes for Human Analysis of Transcripts (CHAT) and used Computerized Language Analysis (CLAN) software to automatically compute measures reflecting language skills from the language samples. I will conduct correlational analyses to inspect the associations between measures from norm-referenced tests and measures extracted from language samples. I expect to see significant positive correlations between several CELF-5 measures and LSA measures of morphosyntactic development (i.e., grammar), demonstrating a convergence between these two methods of assessment. Correlations between LSA measures and GFTA-3 measures are also expected, but with weaker associations, because the two do not index identical elements of language. Overall, the relationships discovered during this process will further our understanding of the information we gain from these common tools of language assessment.
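A hedged sketch of the kind of correlational analysis described, assuming CLAN-derived LSA measures such as mean length of utterance (MLU) and number of different words (NDW) alongside norm-referenced scores; all column names and values are placeholders, not study data.

```python
# Illustrative sketch of the planned correlational analyses; columns are
# placeholders for CELF-5/GFTA-3 scores and CLAN-derived LSA measures.
import pandas as pd

df = pd.DataFrame({
    "celf5_core": [95, 102, 88, 110, 99, 105],     # norm-referenced core language score
    "gfta3_siw":  [92, 100, 85, 108, 97, 101],     # Sounds-in-Words score
    "lsa_mlu":    [4.2, 5.0, 3.8, 5.6, 4.7, 5.1],  # mean length of utterance
    "lsa_ndw":    [110, 140, 95, 160, 130, 150],   # number of different words
})

# Pairwise Pearson correlations between all measures
print(df.corr(method="pearson").round(2))
```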
- Presenter
- Hailey Robinson, Senior, Speech & Hearing Sciences; UW Honors Program
- Mentor
- Yi Shen, Speech & Hearing Sciences
- Session
- Poster Presentation Session 1
- MGH 241
- Easel #60
- 11:20 AM to 12:20 PM
The hearing aid experience is not one-size-fits-all; each hearing aid user and their environment is unique. This project aimed to customize the listening experience of hearing aid users by reconstructing daily sound scenes from real-world data during hearing aid fine-tuning. Older adults with mild to severe hearing loss were recruited and randomly assigned to the experimental or control group. All participants were fitted and sent home with hearing aids for an initial two-week field trial. During this period, they were instructed to collect information on the sound scenes they identified as most important to their communication needs, including communication intent, listening effort, and acoustic recordings of the scenes. Following the initial field trial, participants were invited back for hearing aid fine-tuning. Participants in the experimental group conducted self-directed adjustment of their hearing aid gain in individualized sound scenes reconstructed in the lab from the audio recordings collected during the initial field trial. In contrast, participants in the control group made adjustments in two non-individualized, generic acoustic environments. Following fine-tuning, participants were sent out again for three weeks before returning for final outcome assessments, which included speech recognition performance in background noise and questionnaires on the subjective benefits of the hearing aids. In my role, I conducted detailed analyses of the survey data collected during the two field trials and of the final outcome questionnaire results. Additionally, I managed the equipment inventory to support the field trials. It is anticipated that the experimental group will demonstrate better clinical outcomes than the control group on both speech recognition testing and subjective questionnaires. If this is confirmed, it would provide the first evidence for leveraging real-world data in individualized hearing aid fine-tuning.
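One way to express the anticipated experimental-versus-control difference is an effect size on the change in a speech-in-noise score. The sketch below assumes a speech reception threshold (SRT) change measure and uses invented numbers; it is illustrative, not the study's analysis plan.

```python
# Hypothetical sketch: effect size for group difference in SRT change
# (dB SNR; more negative = greater improvement). Data are invented.
import numpy as np

srt_change_experimental = np.array([-2.5, -3.0, -1.8, -2.2, -2.9])
srt_change_control = np.array([-1.0, -0.5, -1.5, -0.8, -1.2])

def cohens_d(a, b):
    """Effect size using the pooled standard deviation."""
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled_sd

print(f"Cohen's d = {cohens_d(srt_change_experimental, srt_change_control):.2f}")
```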
Poster Presentation 3
1:40 PM to 2:40 PM
- Presenter
- Priyanka Suman Talur, Senior, Bioengineering; UW Honors Program
- Mentor
- Ludo Max, Speech & Hearing Sciences
- Session
- Poster Presentation Session 3
- MGH Commons East
- Easel #36
- 1:40 PM to 2:40 PM
Sensorimotor adaptation is the brain's ability to adjust an individual's future movements in response to feedback signaling movement error. I am conducting an experiment that uses a virtual display system to manipulate the visual feedback associated with arm movements when a subject reaches toward a target. My experiment consists of a baseline phase, followed by an adaptation phase, and finally a de-adaptation phase. In the baseline phase, the cursor is aligned with the true movement of the sensor; in the adaptation phase, the cursor position is displaced 30° counterclockwise relative to the true sensor position; and in the de-adaptation phase, the cursor is aligned with the sensor again. Time can be a factor in how people learn motor skills, specifically the time intervals between practice trials, known as inter-trial intervals (ITIs). I am conducting this visuomotor experiment with 20 human subjects, varying the ITIs between practice trials in the adaptation and de-adaptation phases. Subjects sit at the virtual display system with their arm strapped to an arm sled and a finger taped to an electromagnetic sensor that controls a cursor, and they reach toward a target. The subjects are divided into four groups: a 7-second ITI only during adaptation, a 7-second ITI only during de-adaptation, a 7-second ITI during both adaptation and de-adaptation, and no 7-second ITI in either phase. I will collect data on each subject's response, the reach direction relative to the target, to see whether the amount of adaptation in reach direction is enhanced for the groups practicing with the 7-second ITI.
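The 30° counterclockwise perturbation and the reach-direction measure are both small pieces of 2-D geometry. A minimal sketch is below; the coordinates, target direction, and units are invented for illustration.

```python
# Illustrative sketch of the visuomotor rotation and reach-error measure.
# Positions, units, and the target direction are assumptions.
import numpy as np

def rotate(xy, degrees):
    """Rotate a 2-D point counterclockwise about the origin."""
    theta = np.radians(degrees)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ xy

sensor_pos = np.array([8.66, 5.0])   # true fingertip endpoint (cm), ~30° CCW of rightward
cursor_pos = rotate(sensor_pos, 30)  # what the subject sees during adaptation

target_angle_deg = 0.0               # target straight to the right, assumed
reach_angle_deg = np.degrees(np.arctan2(sensor_pos[1], sensor_pos[0]))
print(f"reach direction relative to target: {reach_angle_deg - target_angle_deg:.1f} deg")
```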
- Presenter
- Crystal Sanchez, Junior, Speech & Hearing Sciences
- Mentor
- Yi Shen, Speech & Hearing Sciences
- Session
- Poster Presentation Session 3
- MGH 258
- Easel #82
- 1:40 PM to 2:40 PM
This research focuses on improving speech understanding for individuals with hearing loss caused by a gene mutation affecting the stereocilin protein (STRC). The STRC mutation, which impacts the outer hair cells in the cochlea, is the second most common genetic cause of inherited hearing loss, affecting approximately 14.36% of individuals with mild to moderate hearing loss. Although this mutation does not cause profound deafness, it significantly impairs the ability to distinguish speech, especially in noisy environments. The project proposes the use of spectral contrast enhancement (SCE), a signal processing algorithm that sharpens the auditory spectrum to improve speech clarity. Previous studies have shown that the SCE algorithm benefits cochlear implant users with poor spectral resolution, and this research adapts the algorithm specifically for those with STRC-related hearing loss. By enhancing spectral contrast, the algorithm helps make speech more intelligible in complex listening situations, such as background noise. I am conducting a behavioral evaluation with normal-hearing participants, simulating STRC-related hearing loss, via the online platform Gorilla. The experiment measures hearing thresholds, word recognition with and without the SCE algorithm, and speech clarity ratings to assess the algorithm's efficacy. With the help of Dr. Yi Shen, I designed the experiment and created the user interface for the test. The current step involves adding more words to the SCE test for comparison with and without the algorithm, allowing for a comprehensive evaluation of the algorithm's impact on speech recognition and clarity. This work represents a significant step forward in audiology by applying precision medicine to hearing loss treatment. It aims to provide a tailored, evidence-based solution for individuals with STRC mutations, improving their ability to communicate in everyday settings and enhancing their overall quality of life.
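The general idea behind spectral contrast enhancement is to exaggerate how far each frequency bin sits above or below a smoothed spectral envelope, deepening valleys and sharpening peaks. The sketch below is a simplified, single-frame illustration of that idea; the gain, smoothing width, and signal are assumptions, and the study's actual SCE algorithm may differ.

```python
# Simplified, illustrative take on spectral contrast enhancement: stretch
# the log spectrum's deviation from a smoothed envelope, then resynthesize.
import numpy as np

def enhance_contrast(frame, gain=1.5, smooth_bins=9):
    """Sharpen spectral peaks in one windowed audio frame."""
    spectrum = np.fft.rfft(frame)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    kernel = np.ones(smooth_bins) / smooth_bins
    envelope = np.convolve(log_mag, kernel, mode="same")  # smoothed log spectrum
    enhanced = envelope + gain * (log_mag - envelope)     # stretch peak-to-valley contrast
    phase = np.angle(spectrum)
    return np.fft.irfft(np.exp(enhanced) * np.exp(1j * phase), n=len(frame))

fs = 16000                                  # sampling rate (Hz), assumed
t = np.arange(1024) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.3 * np.random.randn(len(t))  # tone in noise
processed = enhance_contrast(frame * np.hanning(len(frame)))
```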
Poster Presentation 4
2:50 PM to 3:50 PM
- Presenter
- Helen Liu, Senior, Computer Science, Linguistics; UW Honors Program
- Mentor
- Christina Zhao, Speech & Hearing Sciences, Institute for Learning & Brain Sciences
- Session
- Poster Presentation Session 4
- HUB Lyceum
- Easel #102
- 2:50 PM to 3:50 PM
Auditory input, such as infant-directed speech and music, is integral to childhood language development. However, existing research focuses primarily on monolingual English-speaking families, overlooking families of other cultures and languages. Hence, in this study I investigate the naturalistic auditory home environments of Latino and Hispanic infants in comparison with Pacific Northwest monolingual English-speaking infants to better understand differences in auditory exposure. This study uses audio data obtained from daylong recordings of Latino and Hispanic infants' home environments using the Language Environment Analysis (LENA) technology. Infants wear the LENA recorder in a vest for up to 16 hours per day. The selection requirement for Latino/Hispanic infants is that at least one parent identifies as being of Latino or Hispanic origin. I randomly sample short snippets of the recordings and upload them to Zooniverse, an online citizen science research platform, where volunteers annotate each snippet for the type of sound (music or speech), its source (in-person or electronic), and its target audience (infant-directed or not). I quantify the types of auditory input and compare them with an existing study of Pacific Northwest monolingual English-speaking infants to uncover differences and understand the impact that culture has on infants' language input and, ultimately, their development.
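Random snippet sampling from a daylong recording can be as simple as drawing non-repeating start times across the recorded span. The sketch below illustrates this; the snippet length, count, and recording duration are assumptions, not the study's settings.

```python
# Illustrative sketch: randomly sampling annotation snippets from a
# daylong LENA recording. Durations and counts are assumptions.
import random

recording_seconds = 16 * 60 * 60  # up to 16 hours of audio
snippet_seconds = 10              # length of each clip for Zooniverse, assumed
n_snippets = 100

starts = sorted(random.sample(range(recording_seconds - snippet_seconds), n_snippets))
clips = [(start, start + snippet_seconds) for start in starts]  # (start, end) in seconds
print(clips[:3])
```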
- Presenter
- Eloise Schell, Senior, Speech & Hearing Sciences; UW Honors Program
- Mentors
- Christina Zhao, Speech & Hearing Sciences, Institute for Learning & Brain Sciences
- Tzu-Han Cheng, Speech & Hearing Sciences
- Yi Shen, Speech & Hearing Sciences
- Session
- Poster Presentation Session 4
- MGH 241
- Easel #76
- 2:50 PM to 3:50 PM
A factor influencing the ability to tune into a single speaker in the presence of competing speech is speech rhythm. The Selective Entrainment Hypothesis suggests that attention fluctuates periodically and synchronizes with speech, a quasi-periodic stimulus. This synchronization allows the brain to predict when the most salient parts of speech will occur and to direct attention toward those moments. According to the hypothesis, more rhythmic speech should be easier to synchronize with, as it is more predictable. This hypothesis has been supported by previous behavioral research, which found that altering the rhythm of the target speech stream decreased comprehension of the target speech, while rhythm distortion in the background improved comprehension, likely because the background became a weaker competitor. The present study replicated and extended these findings by recording electroencephalographic (EEG) data from listeners (N = 20) to measure phase locking, or synchronization, between the target speech envelope and neural activity. I ran the EEG sessions, which began by exposing participants to the target speaker's voice on its own. Participants then listened to 300 sentence pairs, which I created by playing a sentence spoken by the background speaker and a sentence from the target speaker simultaneously. The sentence pairs were divided into three rhythm-alteration conditions: target-altered, background-altered, and neither-altered. After each trial, the participants answered a multiple-choice comprehension question to provide behavioral data. Using EEG allows for a more direct measurement of synchronization than behavioral results alone. We test the hypothesis that phase locking will be strongest in the background-altered condition, intermediate in the neither-altered condition, and weakest in the target-altered condition, a pattern that would mirror the behavioral results. This will provide more insight into the role of rhythm in speech processing and has potential future implications for hearing aid development.
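Phase locking between a speech envelope and EEG is commonly quantified with a phase-locking value (PLV) computed from instantaneous phases. A minimal sketch with synthetic signals follows; in practice the EEG would first be band-pass filtered to the envelope's frequency range, and the sampling rate and signals here are assumptions, not the study's pipeline.

```python
# Illustrative phase-locking value (PLV) between a speech envelope and one
# EEG channel via the Hilbert transform. Signals are synthetic stand-ins.
import numpy as np
from scipy.signal import hilbert

fs = 250                                  # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
envelope = np.sin(2 * np.pi * 4 * t)                              # ~4 Hz speech envelope
eeg = np.sin(2 * np.pi * 4 * t + 0.5) + np.random.randn(len(t))   # noisy, phase-shifted

phase_env = np.angle(hilbert(envelope))
phase_eeg = np.angle(hilbert(eeg))        # real data would be band-pass filtered first

# PLV: 1 = perfectly locked phases, 0 = no consistent phase relationship
plv = np.abs(np.mean(np.exp(1j * (phase_env - phase_eeg))))
print(f"PLV = {plv:.2f}")
```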
Poster Presentation 5
4:00 PM to 5:00 PM
- Presenters
- Misha Nivota, Sophomore, Computer Science
- Shrihun Reddy Sankepally, Sophomore, Pre-Sciences
- Mentors
- Yi Shen, Speech & Hearing Sciences
- Erik Petersen
- Session
- Poster Presentation Session 5
- CSE
- Easel #155
- 4:00 PM to 5:00 PM
Auditory brainstem response (ABR) tests are used to objectively evaluate the clinical hearing threshold of infants and young patients. However, the ABR testing process can be time- and resource-consuming, as audiologists have to test multiple frequencies, and for each frequency an ABR threshold (the lowest level at which a discernible ABR response is detected) must be determined by repeating the test at a multitude of levels. The efficiency of these tests depends on clinical expertise: experienced audiologists can expedite the process by quickly analyzing the ABR waveform and jumping to the next test, skipping redundant intermediary steps. Clinicians with this expertise might not be widely available. To address this issue, the long-term goal of this study is to create an automated system that mimics the efficient testing procedure of experienced audiologists using machine learning. A set of clinical ABR data was leveraged for model development. Our baseline models operate on one waveform at a time, predicting the next stimulus a clinician would choose from that individual waveform. We hypothesize that a neural network that treats the ABR waveforms collected in a single session as a time series would outperform these baseline models, so we are comparing the baselines with neural networks that hold memory, meaning they treat the session's waveforms as a sequence. Multiple models were built and evaluated, including several time series neural networks (e.g., a long short-term memory (LSTM) model). Initial testing indicates that including sequential data ordered as a time series yields better performance. The outcome of this research is likely to improve the efficiency of ABR testing without requiring real-time supervision by expert clinicians.
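A hedged sketch of the sequence-model idea, in PyTorch: an LSTM reads the waveforms collected so far in a session and predicts the next stimulus level a clinician would choose. The architecture, shapes, and the regression output are assumptions for illustration, not the team's actual models.

```python
# Hypothetical sketch: an LSTM that treats a session's ABR waveforms as a
# sequence and predicts the next stimulus level. Sizes are illustrative.
import torch
import torch.nn as nn

class NextStimulusLSTM(nn.Module):
    def __init__(self, waveform_len=256, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=waveform_len, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)   # regresses the next level (e.g., dB)

    def forward(self, waveforms):          # (batch, n_tests_so_far, waveform_len)
        out, _ = self.lstm(waveforms)
        return self.head(out[:, -1])       # predict from the most recent test

model = NextStimulusLSTM()
session = torch.randn(1, 5, 256)           # 5 ABR waveforms from one session
print(model(session).shape)                # torch.Size([1, 1])
```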