Fairness in AI for people with disabilities
Artificial intelligence (AI) has the potential to significantly improve the lives of people with disabilities (PWD). Indeed, many cutting-edge AI systems, such as automatic speech recognition tools that can caption films for the deaf and hard of hearing, or language prediction algorithms that can augment communication for people with speech or cognitive disabilities, are motivated by improving the lives of PWD. However, widely used AI systems may not perform correctly for people with disabilities or, worse, may actively discriminate against them. These concerns about AI fairness for people with disabilities have received little attention thus far.
The topics of the STS may include, but are not limited to:
- Risk assessment of computer-vision-based AI for PWD (face recognition, object recognition, etc.)
- Risk assessment of speech-based AI for PWD (speech recognition, speech analysis)
- Risk assessment of text recognition AI for PWD
- Risk assessment of integrative AI for PWD (information retrieval, conversational agents)
- Bias in AI for PWD
- PWD and outlier detection
- PWD and practices of evaluating systems through aggregate metrics
- PWD and definitions of objective functions
- PWD and evaluation data
Chair
Alireza Darvishy, ICT-Accessibility Lab, Zurich University of Applied Sciences, Switzerland
Contributions to an STS must be submitted using the standard submission procedures of ICCHP-AAATE.
When submitting your contribution, please make sure to select the appropriate STS from the drop-down list "Special Thematic Session". Contributions to an STS are evaluated by the Programme Committee of ICCHP-AAATE and by the chair(s) of the STS. Please contact the STS chair(s) to discuss your contribution and potential involvement in the session.
Submission deadlines for contributions to STSs: February 6, 2022 for publication in Springer Lecture Notes in Computer Science; March 3, 2022 for publication in the Open Access Compendium.