- GazePrompt: Enhancing Low Vision People’s Reading Experience with Gaze-Aware Augmentations (CHI 2024)
GazePrompt is a gaze-aware reading aid that provides timely, targeted visual and audio augmentations for people with low vision based on their gaze behaviors (Wang et al. 2024).
—
- A Diary Study in Social Virtual Reality: Impact of Avatars with Disability Signifiers on the Social Experiences of People with Disabilities (ASSETS 2023)
We conducted a diary study with 10 people with disabilities who freely explored VRChat for two weeks, comparing their experiences between using regular avatars and avatars with disability signifiers (i.e., avatar features that indicate the user’s disability in real life) (Zhang et al. 2023).
—
- A Preliminary Interview: Understanding XR Developers’ Needs towards Open-Source Accessibility Support (IEEE VRW 2023)
We investigated XR developers' practices, challenges, and needs when integrating accessibility in their projects (Ji et al. 2023).
—
- Exploring the Design Space of Optical See-through AR Head-Mounted Displays to Support First Responders in the Field (CHI 2024)
We interviewed 26 first responders in the field who experienced a state-of-the-art optical see-through AR HMD, soliciting their first-hand experiences, design ideas, and concerns regarding its interaction techniques and four types of AR cues (Zhang et al. 2024).
—
- Practices and Barriers of Cooking Training for Blind and Low Vision People (ASSETS 2023)
We interviewed six professionals to explore their training strategies and technology recommendations for blind and low vision clients in cooking activities (Wang et al. 2023).
—
- “It’s Just Part of Me:” Understanding Avatar Diversity and Self-presentation of People with Disabilities in Social Virtual Reality (ASSETS 2022)
We explored people with disabilities’ avatar perception and disability disclosure preferences in social VR by (1) conducting a systematic review of fifteen popular social VR applications to evaluate their avatar diversity and accessibility support and (2) interviewing 19 participants with different disabilities to understand their avatar experiences (Zhang et al. 2022).
—
- Springboard, Roadblock or “Crutch”?: How Transgender Users Leverage Voice Changers for Gender Presentation in Social Virtual Reality (IEEE VR 2024)
We interviewed 13 transgender and gender-nonconforming users of social VR platforms, focusing on their experiences with and without voice changers to explore the connection between avatar embodiment and voice representation (Povinelli and Zhao 2024).
—
- Understanding How Low Vision People Read Using Eye Tracking (CHI 2023)
We collected gaze data from 20 low vision participants and 20 sighted controls who performed reading tasks on a computer screen, thoroughly exploring their reading challenges based on their gaze behaviors and comparing gaze data quality between low vision and sighted people (Wang et al. 2023).
—
- VRBubble: Enhancing Peripheral Awareness of Avatars for People with Visual Impairments in Social Virtual Reality (ASSETS 2022)
We designed VRBubble, an audio-based VR technique that provides surrounding avatar information based on social distances. Following Hall’s proxemic theory, VRBubble divides the social space into three bubbles (Intimate, Conversation, and Social), generating spatial audio feedback that distinguishes avatars in different bubbles and conveys suitable avatar information (Ji, Cochran, and Zhao 2022).
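The proxemic-bubble idea can be sketched in a few lines: classify each avatar by its distance from the user, then pick the matching audio cue. This is a minimal illustration only; the radius values below follow Hall's commonly cited proxemic distances and are assumptions, not the parameters used in the VRBubble paper.

```python
# Illustrative sketch of a VRBubble-style proxemic classifier.
# Radii (in meters) are assumptions loosely based on Hall's
# proxemic distances, not the paper's actual thresholds.
INTIMATE_RADIUS = 0.45
CONVERSATION_RADIUS = 1.2
SOCIAL_RADIUS = 3.6

def classify_bubble(distance_m: float) -> str:
    """Map a user-to-avatar distance to a proxemic bubble name."""
    if distance_m <= INTIMATE_RADIUS:
        return "Intimate"
    if distance_m <= CONVERSATION_RADIUS:
        return "Conversation"
    if distance_m <= SOCIAL_RADIUS:
        return "Social"
    return "Outside"  # beyond the social bubble: no cue played

def bubble_cues(avatars: dict[str, float]) -> dict[str, str]:
    """Choose which bubble's spatial-audio cue applies per avatar."""
    return {name: classify_bubble(d) for name, d in avatars.items()}
```

A real implementation would attach distinct spatial audio sources per bubble and update classifications as avatars move; this sketch only shows the distance-to-bubble mapping.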