Rahaf Alharbi

I am a Ph.D. candidate in the School of Information at the University of Michigan, where I am advised by Dr. Robin Brewer and Dr. Sarita Schoenebeck. I work towards a future where disabled people have agency and control over their data. Specifically, my research outlines a disability-centric responsible AI practice by reimagining privacy and transparency techniques in visual assistance technologies.

My work draws from disability studies and disability justice to examine the possibilities and limitations of emerging AI technologies for managing privacy and enhancing transparency. Through in-depth qualitative research, I explored blind people's perspectives on AI-enabled techniques to detect and redact private content. I found that while blind people recognize potential benefits, they desire greater control over what counts as private and are concerned about errors that may be challenging to detect. To design for non-visual transparency, I conducted an interview study to understand how blind people detect and resolve AI errors in their everyday lives. My ongoing work seeks to co-design transparency techniques in emerging AI-enabled privacy tools.

I interned at Microsoft Research with the Ability team and at Meta with the Responsible AI team. Prior to graduate school, I obtained my Bachelor of Science degree in Mechanical Engineering (minor in Ethnic Studies) at the University of California, San Diego.

Email  /  CV (last updated Oct 2024)  /  Google Scholar

[Photo: Rahaf looking straight at the camera and smiling. She is wearing a black t-shirt and standing behind green plants. Rahaf used to have long black hair with dyed maroon ends.]
Updates
  • Oct. 2024: I was awarded the Gary M. Olson Outstanding PhD Student Award by UMSI!
  • June 2024: My paper was conditionally accepted to ASSETS 2024! I am really excited to share this work on accessible AI verification and visual access soon.
  • May 2024: Selected to attend HCIC as a University of Michigan representative. Super excited to chat with old and new friends!
  • Mar. 2024: I won the Rackham Predoctoral Fellowship which aims to support dissertations that are "unusually creative, ambitious, and impactful."
  • Aug. 2023: Our privacy and accessibility workshop was accepted to ASSETS 2023!
  • May 2023: Started my internship at Meta on the Responsible AI team! I am excited to be back in California!
  • Mar. 2023: Excited to present our paper “Accessibility Barriers, Conflicts, and Repairs: Understanding the Experience of Professionals with Disabilities in Hybrid Meetings” at CHI 2023 in Hamburg, Germany.
  • Dec. 2022: I passed my prelim defense! I’m now a Ph.D. candidate!
  • Nov. 2022: I passed my pre-candidacy defense!
  • May 2022: I started my internship at Microsoft Research on the Ability team!
  • Apr. 2022: My first first-author paper was accepted to CSCW 2022! Excited to present my study on the benefits and harms that Blind people perceive in future privacy technology (obfuscation).

Selected Journal and Conference Publications

[Figure: decision tree of a blind person's verification process: using sensory skills, comparing with a visual assistance technology, cross-referencing with other visual assistance technologies, and, if the task requires accuracy or carries security risks, verifying with sighted people.]
Misfitting With AI: How Blind People Verify and Contest AI Errors

Rahaf Alharbi, Pa Lor, Jaylin Herskovitz, Sarita Schoenebeck, Robin Brewer

ASSETS 2024

PDF

We interviewed 26 blind people to understand how they make sense of errors in AI-enabled visual assistance technologies. We described common errors such as processing issues and cross-cultural bias. Blind people developed tactics to identify and redress AI errors, such as everyday experimentation in low-risk settings and strategically involving sighted people. We drew on the disability studies framework of misfitting and fitting to extend our findings and inform responsible AI scholarship.

[Figure: a Deaf ASL user with a laptop and mobile device setup. The laptop shows a video conferencing interface with a video grid of in-person attendees and three other remote attendees; the mobile device, standing upright, shows a video of an ASL interpreter.]
Accessibility Barriers, Conflicts, and Repairs: Understanding the Experience of Professionals with Disabilities in Hybrid Meetings

Rahaf Alharbi, John Tang, Karl Henderson

CHI 2023

PDF / ACM DL / Talk

We interviewed 21 professionals with disabilities to unpack the accessibility dimensions of hybrid meetings. Our analysis demonstrates how invisible and visible access labor may support or undermine accessibility in hybrid meetings. We offer practical suggestions and design directions to make hybrid meetings accessible.

[Figure: a participant trying to use Seeing AI to read mail, frustrated because Seeing AI keeps repeating the same information as they slightly shift their camera.]
Hacking, Switching, Combining: Understanding and Supporting DIY Assistive Technology Design by Blind People

Jaylin Herskovitz, Andi Xu, Rahaf Alharbi, Anhong Guo

CHI 2023

PDF / ACM DL / Talk / Dataset

Current assistive technologies (AT) often fail to support the unique needs of Blind people, so they often 'hack' and create Do-It-Yourself (DIY) AT. To further understand and support DIY AT, we conducted a two-stage interview and diary study with 12 Blind participants, and we present design considerations for future DIY AT systems that support Blind people's existing customization and creation processes.

[Image: First Monday logo]
Definition Drives Design: Disability Models and Mechanisms of Bias in AI Technologies

Denis Newman-Griffis, Jessica Sage Rauchberg, Rahaf Alharbi, Louise Hickman, Harry Hochheiser

First Monday

PDF / First Monday DL

We reveal how AI bias stems from various design choices, including problem definition, data selection, technology use, and operational elements. We show that differing disability definitions drive distinct design decisions and AI biases. Our analysis offers a framework for scrutinizing AI in decision-making and promotes disability-led design for equitable AI.

[GIF: a medicine bottle with the patient's name obfuscated by blurring.]
Understanding Emerging Obfuscation Technologies in Visual Description Services for Blind and Low Vision People

Rahaf Alharbi, Robin N. Brewer, Sarita Schoenebeck

CSCW 2022

PDF / ACM DL / Talk

Machine learning approaches such as obfuscation are often framed as the state-of-the-art solution to visual privacy concerns. We interviewed 20 Blind and low vision people to understand their perspectives on obfuscation. We found that while obfuscation may be beneficial, it imposes significant trust and accessibility issues. Participants worried that cultural or gendered privacy needs might be overlooked in obfuscation systems. We applied the framework of interdependence to rethink current obfuscation approaches and provided more inclusive design directions.
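
For readers unfamiliar with obfuscation, here is a minimal sketch of the underlying idea: blurring a detected region of an image so private content (like the patient name in the GIF above) becomes unreadable. The OpenCV-based code, bounding box coordinates, and file names are illustrative assumptions, not the systems studied in the paper.

    # Minimal sketch of region-based obfuscation: blur one region of an image.
    # A real system would obtain the bounding box from a private-content
    # detector; the box and file names here are hypothetical.
    import cv2

    def obfuscate_region(image_path, box, output_path):
        img = cv2.imread(image_path)
        x, y, w, h = box
        roi = img[y:y+h, x:x+w]
        # A strong Gaussian blur (odd kernel size required) makes the text
        # unreadable while preserving the surrounding visual context.
        img[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (51, 51), 0)
        cv2.imwrite(output_path, img)

    # Hypothetical usage: blur the patient-name region of a label photo.
    obfuscate_region("medicine_bottle.jpg", (120, 80, 200, 40), "obfuscated.jpg")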