
AI Now report on bias in algorithms & recommendations
For those interested in AI and algorithms, AI Now has just published an excellent report as an outcome of their symposium. See attached; recommendations 1, 7, and 9 are perhaps areas we could work on.
Recommendations from the Report
These recommendations reflect the views and research of the AI Now Institute at New York
University. We thank the experts who contributed to the AI Now 2017 Symposium and
Workshop for informing these perspectives, and our research team for helping shape the AI
Now 2017 Report.
1. Core public agencies, such as those responsible for criminal justice, healthcare,
welfare, and education (e.g., “high stakes” domains), should no longer use “black box”
AI and algorithmic systems. This includes the unreviewed or unvalidated use of
pre-trained models, AI systems licensed from third-party vendors, and algorithmic
processes created in-house. The use of such systems by public agencies raises serious
due process concerns, and at a minimum they should be available for public auditing,
testing, and review, and subject to accountability standards.
2. Before releasing an AI system, companies should run rigorous pre-release trials to
ensure that the system will not amplify biases and errors due to any issues with the
training data, algorithms, or other elements of system design. As this is a rapidly
changing field, the methods and assumptions by which such testing is conducted, along
with the results, should be openly documented and publicly available, with clear
versioning to accommodate updates and new findings. (A minimal sketch of one such
pre-release check appears after these recommendations.)
3. After releasing an AI system, companies should continue to monitor its use across
different contexts and communities. The methods and outcomes of monitoring should
be defined through open, academically rigorous processes, and should be accountable
to the public. Particularly in high stakes decision-making contexts, the views and
experiences of traditionally marginalized communities should be prioritized.
4. More research and policymaking are needed on the use of AI systems in workplace
management and monitoring, including hiring and HR. This research will complement
the existing focus on worker replacement via automation. Specific attention should be
given to the potential impact on labor rights and practices, especially the potential for
behavioral manipulation and the unintended reinforcement of bias in hiring and
promotion.
5. Develop standards to track the provenance, development, and use of training datasets
throughout their life cycle. This is necessary to better understand and monitor issues of
bias and representational skews. In addition to developing better records for how a
training dataset was created and maintained, social scientists and measurement
researchers within the AI bias research field should continue to examine existing training
datasets, and work to understand potential blind spots and biases that may already be
at work. (A sketch of one possible provenance record likewise appears after this list.)
6. Expand AI bias research and mitigation strategies beyond a narrowly technical
approach. Bias issues are long term and structural, and contending with them
necessitates deep interdisciplinary research. Technical approaches that look for a
one-time “fix” for fairness risk oversimplifying the complexity of social systems. Within
each domain – such as education, healthcare, or criminal justice – legacies of bias and
movements toward equality have their own histories and practices. Legacies of bias
cannot be “solved” without drawing on domain expertise. Addressing fairness
meaningfully will require interdisciplinary collaboration and methods of listening across
different disciplines.
7. Strong standards for auditing and understanding the use of AI systems “in the wild”
are urgently needed. Creating such standards will require the perspectives of diverse
disciplines and coalitions. The process by which such standards are developed should be
publicly accountable, academically rigorous, and subject to periodic review and revision.
8. Companies, universities, conferences, and other stakeholders in the AI field should
release data on the participation of women, minorities, and other marginalized groups
within AI research and development. Many now recognize that the current lack of
diversity in AI is a serious issue, yet the granular data on the scope of the problem that
is needed to measure progress remains insufficient. Beyond this, we need a deeper
assessment of workplace cultures in the technology industry, which requires going
beyond simply hiring more women and minorities, toward building more genuinely
inclusive workplaces.
9. The AI industry should hire experts from disciplines beyond computer science and
engineering and ensure they have decision making power. As AI moves into diverse
social and institutional domains, influencing increasingly high stakes decisions, efforts
must be made to integrate social scientists, legal scholars, and others with domain
expertise who can guide the creation and integration of AI into long-standing systems
with established practices and norms.
10. Ethical codes meant to steer the AI field should be accompanied by strong oversight
and accountability mechanisms. More work is needed on how to substantively connect
high level ethical principles and guidelines for best practices to everyday development
processes, promotion, and product release cycles.
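
As a concrete illustration of the kind of pre-release trial recommendation 2 calls for, here is a minimal sketch of a per-group bias check in Python. It assumes a binary classifier already evaluated on held-out labeled data where each record carries a group attribute; the function names, the data layout, and the choice of demographic parity as the reported metric are all illustrative assumptions, not prescriptions from the report.

```python
"""Minimal sketch of a pre-release bias check (recommendation 2).

Assumes a binary classifier already evaluated on held-out data, where
each record carries a demographic group label. Every name here
(per_group_rates, eval_records, the demographic parity metric) is a
hypothetical illustration, not something specified in the report.
"""
from collections import defaultdict

def per_group_rates(records):
    """Compute positive-prediction and error rates for each group.

    Each record is a (group, y_true, y_pred) triple with 0/1 labels.
    """
    counts = defaultdict(lambda: {"n": 0, "pos": 0, "err": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        c["n"] += 1
        c["pos"] += y_pred
        c["err"] += int(y_true != y_pred)
    return {
        g: {"positive_rate": c["pos"] / c["n"],
            "error_rate": c["err"] / c["n"]}
        for g, c in counts.items()
    }

def demographic_parity_gap(rates):
    """Largest difference in positive-prediction rate between any two groups."""
    prs = [r["positive_rate"] for r in rates.values()]
    return max(prs) - min(prs)

if __name__ == "__main__":
    # Toy held-out evaluation data: (group, true label, predicted label).
    eval_records = [
        ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
        ("B", 1, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 0),
    ]
    rates = per_group_rates(eval_records)
    for group, r in sorted(rates.items()):
        print(group, r)
    print("demographic parity gap:", round(demographic_parity_gap(rates), 3))
```

Which metrics to compute, and how large a gap should block a release, are exactly the choices the report argues should be openly documented and versioned rather than decided ad hoc.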
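
Similarly, recommendation 5's call to track dataset provenance could start from something as simple as a structured, versioned record attached to each training dataset. The sketch below is a hypothetical, datasheet-style example; the class names, field names, and event vocabulary are assumptions made for illustration, not a standard proposed in the report.

```python
"""Minimal sketch of a dataset provenance record (recommendation 5).

A hypothetical, datasheet-style log of how a training dataset was
created and changed over its life cycle. The class and field names
are illustrative assumptions, not a standard proposed in the report.
"""
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProvenanceEvent:
    when: date
    actor: str    # who made the change
    action: str   # e.g. "collected", "relabeled", "filtered"
    details: str  # what changed and why

@dataclass
class DatasetRecord:
    name: str
    version: str
    source: str                # where the raw data came from
    collection_method: str     # how and when it was gathered
    known_skews: list[str] = field(default_factory=list)
    history: list[ProvenanceEvent] = field(default_factory=list)

    def log(self, event: ProvenanceEvent) -> None:
        """Append a life-cycle event to the dataset's audit trail."""
        self.history.append(event)

if __name__ == "__main__":
    record = DatasetRecord(
        name="loan-applications",
        version="2.1",
        source="internal CRM export",
        collection_method="all applications received 2015-2017",
        known_skews=["urban applicants over-represented"],
    )
    record.log(ProvenanceEvent(date(2017, 9, 1), "data-team", "filtered",
                               "dropped records with missing income fields"))
    for e in record.history:
        print(e.when, e.actor, e.action, "-", e.details)
```

Keeping such a record alongside the data itself would give the social scientists and measurement researchers the report mentions a concrete audit trail to examine for blind spots and skews.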