“It was Mentally Painful to Try and Stop”: Design Opportunities for Just-in-Time Interventions for People with Obsessive-Compulsive Disorder in the Real World (ASSETS 2025)

Obsessive-compulsive disorder (OCD) is a mental health condition that significantly impacts people’s quality of life. To better understand the challenges and needs in OCD self-management, we conducted interviews with ten participants with diverse OCD conditions and seven therapists specializing in OCD treatment. Through these interviews, we explored the characteristics of participants’ triggers and how they shaped their compulsions, and uncovered key coping strategies across different stages of OCD episodes.

Ru Wang, Kexin Zhang, Yuqing Wang, Keri Brown, Yuhang Zhao

ACM DL | Direct Download PDF

 

Publication accepted to ASSETS 2025 and presented in Denver, Colorado, USA.

“It’s Just Part of Me:” Understanding Avatar Diversity and Self-presentation of People with Disabilities in Social Virtual Reality (ASSETS 2022)

We explored people with disabilities’ avatar perception and disability disclosure preferences in social VR by (1) conducting a systematic review of fifteen popular social VR applications to evaluate their avatar diversity and accessibility support and (2) interviewing 19 participants with different disabilities to understand their avatar experiences (Zhang et al. 2022).

Kexin Zhang, Elmira Deldari, Zhicong Lu, Yaxing Yao, Yuhang Zhao

ACM DL | Direct Download PDF

 

In social virtual reality (VR), users are embodied in avatars and interact with other users face-to-face, using avatars as the medium. With the advent of social VR, people with disabilities (PWD) have shown an increasing presence on this new social medium. Given their unique disability identities, it is not clear how PWD perceive their avatars and whether and how they prefer to disclose their disabilities when presenting themselves in social VR. We fill this gap by exploring PWD’s avatar perception and disability disclosure preferences in social VR. Our study involved two steps. We first conducted a systematic review of fifteen popular social VR applications to evaluate their avatar diversity and accessibility support. We then conducted an in-depth interview study with 19 participants with different disabilities to understand their avatar experiences. Our research revealed a number of disability disclosure preferences and strategies adopted by PWD (e.g., reflecting selective disabilities, presenting a capable self). We also identified several challenges PWD faced during the avatar customization process. We discuss design implications to promote avatar accessibility and diversity on future social VR platforms.

Publication accepted to ASSETS 2022 and presented in Athens, Greece.

A Diary Study in Social Virtual Reality: Impact of Avatars with Disability Signifiers on the Social Experiences of People with Disabilities (ASSETS 2023)

We conducted a diary study with 10 people with disabilities who freely explored VRChat for two weeks, comparing their experiences between using regular avatars and avatars with disability signifiers (i.e., avatar features that indicate the user’s disability in real life) (Zhang et al. 2023).

Kexin Zhang, Elmira Deldari, Yaxing Yao, Yuhang Zhao

ACM DL | Direct Download PDF

 

People with disabilities (PWD) have shown a growing presence in the emerging social virtual reality (VR). To support disability representation, some social VR platforms have started to incorporate disability features into avatar design. However, it is unclear how disability disclosure via avatars (and the way it is presented) would affect PWD’s social experiences and interaction dynamics with others. To fill this gap, we conducted a diary study with 10 PWD who freely explored VRChat—a popular commercial social VR platform—for two weeks, comparing their experiences between using regular avatars and avatars with disability signifiers (i.e., avatar features that indicate the user’s disability in real life). We found that PWD preferred using avatars with disability signifiers and wanted to further enhance their aesthetics and interactivity. However, such avatars also attracted embodied, explicit harassment targeting PWD. We revealed the unique factors that led to such harassment and derived design implications and protection mechanisms to inspire safer and more inclusive social VR.

Publication accepted to ASSETS 2023 and presented in New York City, New York.

A Preliminary Interview: Understanding XR Developers’ Needs towards Open-Source Accessibility Support (IEEE VRW 2023)

We investigated XR developers' practices, challenges, and needs when integrating accessibility in their projects (Ji et al. 2023).

Tiger F Ji, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao

IEEE Xplore | Direct Download PDF

 

While extended reality (XR) technology is seeing increasing mainstream utilization, it remains largely inaccessible to users with disabilities and lacks support for XR developers to create accessibility features. In this study, we investigated XR developers’ practices, challenges, and needs when integrating accessibility into their projects. Our findings revealed developers’ needs for open-source accessibility support, such as code examples of particular accessibility features alongside accessibility guidelines.

Publication accepted to IEEE VRW 2023 and presented at a workshop in Shanghai, China.

Beyond the “Industry Standard”: Focusing Gender-Affirming Voice Training Technologies on Individualized Goal Exploration (CHI 2025)

We interviewed six voice experts and ten transgender individuals with voice training experience (voice trainees), focusing on how they defined, triangulated, and used voice goals. We found that goal voice exploration involves navigation between approximate and clear goals, and continuous reevaluation throughout the voice training journey. Our study reveals how voice examples, character descriptions, and voice modification and training technologies inform goal exploration, and identifies risks of overemphasizing goals.

Kassie Povinelli, Hanxiu “Hazel” Zhu, and Yuhang Zhao

*Best Paper Honorable Mention*

ACM DL | Direct Download PDF

Abstract: Gender-affirming voice training is critical to the transition process for many transgender individuals, enabling their voice to align with their gender identity. Individualized voice goals guide and motivate the voice training journey, but existing voice training technologies fail to define clear goals. We interviewed six voice experts and ten transgender individuals with voice training experience (voice trainees), focusing on how they defined, triangulated, and used voice goals. We found that goal voice exploration involves navigation between approximate and clear goals, and continuous reevaluation throughout the voice training journey. Our study reveals how voice examples, character descriptions, and voice modification and training technologies inform goal exploration, and identifies risks of overemphasizing goals. We identified technological implications informed by the separation of voice goals and targets, and provide guidelines for supporting individualized goals throughout the voice training journey, based on brainstorming with trainees and experts.

Characterizing Collective Efforts in Content Sharing and Quality Control for ADHD-relevant Content on Video-sharing Platforms (ASSETS 2025)

We systematically collected 373 ADHD-relevant videos and their comments from YouTube and TikTok and analyzed the data using a mixed-methods approach. Our study identified the characteristics of ADHD-relevant videos on VSPs (e.g., creator types, video presentation forms, quality issues) and revealed the collective efforts of creators and viewers in video quality control, such as authority building, collective quality checking, and accessibility improvement.

Hanxiu ‘Hazel’ Zhu, Avanthika Senthil Kumar, Sihang Zhao, Ru Wang, Xin Tong, Yuhang Zhao

ACM DL | Direct Download PDF

Publication accepted to ASSETS 2025 and presented in Denver, Colorado, USA.

Characterizing Visual Intents for People with Low Vision through Eye Tracking (ASSETS 2025)

We conducted a retrospective think-aloud study using eye tracking with 20 low vision participants and 20 sighted controls. Participants completed various image-viewing tasks and watched the playback of their gaze trajectories to reflect on their visual experiences. Based on the study, we derived a visual intent taxonomy with five visual intents characterized by participants’ gaze behaviors. We demonstrated the difference between low vision and sighted participants’ gaze behaviors and how visual ability affected low vision participants’ gaze patterns across visual intents.

Ru Wang, Ruijia Chen, Anqiao Erica Cai, Zhiyuan Li, Sanbrita Mondal, Yuhang Zhao

ACM DL | Direct Download PDF

 

Publication accepted to ASSETS 2025 and presented in Denver, Colorado, USA.

CookAR: Affordance Augmentations in Wearable AR to Support Kitchen Tool Interactions for People with Low Vision (UIST 2024)

We present CookAR, a head-mounted AR system with real-time object affordance augmentations to support safe and efficient interactions with kitchen tools. To design and implement CookAR, we collected and annotated the first egocentric dataset of kitchen tool affordances, fine-tuned an affordance segmentation model, and developed an AR system with a stereo camera to generate visual augmentations.

Jaewook Lee, Andrew D Tjahjadi, Jiho Kim, Junpu Yu, Minji Park, Jiawen Zhang, Jon E. Froehlich, Yapeng Tian, and Yuhang Zhao

*Belonging and Inclusion Best Paper Award*

ACM DL | Direct Download PDF | Open Source

Abstract: Cooking is a central activity of daily living, supporting independence as well as mental and physical health. However, prior work has highlighted key barriers for people with low vision (LV) to cook, particularly around safely interacting with tools, such as sharp knives or hot pans. Drawing on recent advancements in computer vision (CV), we present CookAR, a head-mounted AR system with real-time object affordance augmentations to support safe and efficient interactions with kitchen tools. To design and implement CookAR, we collected and annotated the first egocentric dataset of kitchen tool affordances, fine-tuned an affordance segmentation model, and developed an AR system with a stereo camera to generate visual augmentations. To validate CookAR, we conducted a technical evaluation of our fine-tuned model as well as a qualitative lab study with 10 LV participants on suitable augmentation designs. Our technical evaluation demonstrates that our model outperforms the baseline on our tool affordance dataset, while our user study indicates a preference for affordance augmentations over traditional whole-object augmentations.

Exploring the Design Space of Optical See-through AR Head-Mounted Displays to Support First Responders in the Field (CHI 2024)

We interviewed 26 first responders in the field who experienced a state-of-the-art optical see-through AR HMD, soliciting their first-hand experiences, design ideas, and concerns about its interaction techniques and four types of AR cues (Zhang et al. 2024).

Kexin Zhang, Brianna R Cochran, Ruijia Chen, Lance Hartung, Bryce Sprecher, Ross Tredinnick, Kevin Ponto, Suman Banerjee, Yuhang Zhao

ACM DL | Direct Download PDF

 

First responders (FRs) navigate hazardous, unfamiliar environments in the field (e.g., mass-casualty incidents), making life-changing decisions in a split second. AR head-mounted displays (HMDs) have shown promise in supporting them due to their capability of recognizing and augmenting challenging environments in a hands-free manner. However, the design space has not been thoroughly explored with the various FRs who serve different roles (e.g., firefighters, law enforcement) but collaborate closely in the field. We interviewed 26 first responders in the field who experienced a state-of-the-art optical see-through AR HMD, as well as its interaction techniques and four types of AR cues (i.e., overview cues, directional cues, highlighting cues, and labeling cues), soliciting their first-hand experiences, design ideas, and concerns. Our study revealed both generic and role-specific preferences and needs for AR hardware, interactions, and feedback, and identified desired AR designs tailored to urgent, risky scenarios (e.g., affordance augmentation to facilitate fast and safe action). While participants acknowledged the value of AR HMDs, they also raised concerns around trust, privacy, and proper integration with other equipment. Finally, we derived comprehensive and actionable design guidelines to inform future AR systems for in-field FRs.

Publication accepted to CHI 2024 and presented in Honolulu, Hawaii.

FocusView: Understanding and Customizing Informational Video Watching Experiences for Viewers with ADHD (ASSETS 2025)

We designed FocusView, a video customization interface that allows viewers with ADHD to customize various aspects of informational videos.

Hanxiu ‘Hazel’ Zhu, Ruijia Chen, Yuhang Zhao

ACM DL | Direct Download PDF

While videos have become increasingly prevalent in delivering information across educational and professional contexts, individuals with ADHD often face attention challenges when watching informational videos due to their dynamic, multimodal, yet potentially distracting elements. To understand and address this critical challenge, we designed FocusView, a video customization interface that allows viewers with ADHD to customize various aspects of informational videos. We evaluated FocusView with 12 participants with ADHD and found that it significantly improved the viewability of videos by reducing distractions. Through the study, we uncovered participants’ diverse perceptions of video distractions (e.g., background music as a distraction vs. a stimulation boost) and their customization preferences, highlighting unique ADHD-relevant needs in designing video customization interfaces (e.g., reducing the number of options to avoid distraction caused by the customization itself). We further derived design considerations for future video customization systems for the ADHD community.

Publication accepted to ASSETS 2025 and presented in Denver, Colorado, USA.