My name is Liang-Yuan “Leo” Wu 吳兩原. I currently work with Prof. Dhruv “DJ” Jain and the Soundability Lab in the AI Laboratory at the University of Michigan CSE. I hold a Master’s degree in Computer Science & Engineering from the University of Michigan and a Bachelor’s degree in Electrical Engineering from National Taiwan University.

I am a researcher and engineer working at the intersection of Human-Computer Interaction (HCI) and Artificial Intelligence (AI), with a focus on building human-centered technologies that make sound and speech more accessible. My work often involves close collaboration with the Deaf and Hard of Hearing (DHH) community to understand how people perceive, trust, and engage with audio AI.

My recent focus includes: (1) verbal sounds: ASR and captioning, particularly in challenging contexts such as medical settings and atypical speech; (2) non-verbal sounds: understanding and interpreting broad auditory scenes; and (3) emotions in verbal and non-verbal cues: investigating how they are perceived and whether they are intelligible.

I have hands-on experience in software development, with a background in deep learning, audio and language processing, and full-stack development. I also bring experience in UX research, including mixed-methods evaluation with users. In addition to my research, I am developing an open-source adaptive captioning project for people with atypical speech.

Currently, I am applying to 2026 CS PhD programs focused on sound and HCI. I am always excited to discuss my research and potential collaborations, so please feel free to reach out!

Resume / CV


University of Michigan

2022-Present

University of Edinburgh

2021

National Taiwan University

2017-2021

NEWS

Aug. 26, 2025 🏆 CARTGPT receives the Best Paper Honorable Mention at ASSETS 2025.
Jul. 30, 2025 🎉 Two posters (one first-authored) are accepted by ASSETS 2025.
Jul. 03, 2025 🎉 Three papers (two first-authored) are accepted by ASSETS 2025.
Jun. 14, 2025 🤞 My co-first-authored work MedCaption is available as a preprint at JMIR.
Apr. 03, 2025 🎊 My work SoundNarratives is accepted by the GenAI and A11y Workshop at CHI 2025.
Jan. 16, 2025 🎊 My co-authored work, SoundWeaver, is accepted by CHI 2025.
Oct. 30, 2024 🏆 CARTGPT receives the Best Poster Award at ASSETS 2024.
Oct. 01, 2024 🎉 Our research proposal, Audio Scene Understanding, receives a Google Academic Research Award.