Incorporating and Balancing Stakeholder Values in Algorithm Design
This project will create a general method for value-sensitive algorithm design and will develop tools and techniques that help incorporate stakeholders' tacit values, balance the values of multiple stakeholders, and achieve collective goals in algorithm development. The research community has paid increasing attention to the role of human values in algorithm design. For example, fairness-aware machine learning research translates fairness notions into formal algorithmic constraints and develops algorithms that satisfy those constraints. Despite the mathematical rigor of these approaches, prior research suggests a disconnect between current discrimination-aware machine learning research and stakeholders' realities, contexts, and constraints, a gap that is likely to undermine practical initiatives. Furthermore, studies suggest that there are often tensions among the diverse values relevant to the design of any given algorithm.

The new general method will be developed in the context of redesigning Wikipedia's Objective Revision Evaluation Service (ORES), a machine-learning-based service that generates real-time predictions of edit quality and article quality. The work stands to benefit the vast numbers of people who consume Wikipedia content either directly or indirectly through other applications.
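To make the "fairness notions as formal constraints" idea concrete, here is a minimal, hypothetical sketch (not this project's method): a logistic regression trained on synthetic data with a soft demographic-parity penalty, namely the squared gap between the two groups' mean predicted scores. The data, variable names, and penalty weight `lam` are all assumptions introduced for illustration.

```python
# Minimal sketch (illustrative only): logistic regression with a soft
# demographic-parity penalty. All data and names here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x = features, a = binary group membership, y = label.
n = 2000
a = rng.integers(0, 2, size=n)                        # protected attribute
x = rng.normal(loc=a[:, None] * 0.8, scale=1.0, size=(n, 3))
y = (x.sum(axis=1) + rng.normal(scale=1.0, size=n) > 1.0).astype(float)

def sigmoid(z):
    z = np.clip(z, -30, 30)                            # avoid overflow in exp
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, lam):
    """Logistic loss plus lam * (demographic-parity gap)^2 and its gradient."""
    p = sigmoid(x @ w)
    logloss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = p[a == 1].mean() - p[a == 0].mean()          # fairness "constraint"

    grad_logloss = x.T @ (p - y) / n
    dp = p * (1 - p)                                   # sigmoid derivative
    grad_gap = (x[a == 1] * dp[a == 1, None]).mean(axis=0) \
             - (x[a == 0] * dp[a == 0, None]).mean(axis=0)
    grad = grad_logloss + lam * 2 * gap * grad_gap
    return logloss + lam * gap ** 2, grad

w = np.zeros(3)
lam = 5.0                                              # fairness/accuracy trade-off
for _ in range(2000):
    _, g = loss_and_grad(w, lam)
    w -= 0.5 * g

p = sigmoid(x @ w)
print("group score gap:", abs(p[a == 1].mean() - p[a == 0].mean()))
print("accuracy:", ((p > 0.5) == (y == 1)).mean())
```

Raising `lam` shrinks the between-group score gap at some cost in accuracy, which is exactly the kind of tension among values (fairness versus predictive performance) that this project aims to surface and balance with stakeholders.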
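ORES is exposed as a public REST service. As a hedged illustration of the "real-time predictions on edit quality and article quality" mentioned above, the sketch below queries ORES's v3 scores endpoint for the "damaging" and "articlequality" models. The endpoint shape and model names follow ORES's public documentation, but the revision ID is a placeholder and the exact parameters should be verified against current Wikimedia documentation before use.

```python
# Hedged sketch: fetch ORES predictions for one revision on English Wikipedia.
import requests

def ores_scores(wiki="enwiki", rev_id=123456,
                models=("damaging", "articlequality")):
    """Return ORES model scores for a single revision of a page on `wiki`."""
    url = f"https://ores.wikimedia.org/v3/scores/{wiki}/"
    resp = requests.get(
        url,
        params={"revids": rev_id, "models": "|".join(models)},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # 123456 is a placeholder; substitute a real enwiki revision ID.
    print(ores_scores(rev_id=123456))
```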
Researchers
Haiyi Zhu
Research Areas
Artificial Intelligence (AI), Fairness, Accountability, Transparency, and Ethics (FATE), Human-Centered AI, Social Computing