“Computational design” and “inclusive design” have been, for at least a decade now, among the design world’s buzziest terms.
The former has been used to denote a new category of designers and engineers who exploit the power of computers, data, algorithms, and the Internet to roll out complex services to millions at lightning speed. The latter began as a mostly academic effort to make products accessible to users with disabilities, then evolved into a broader discussion about the importance of ethnic, racial, and gender diversity in design and how to make large organizations more representative of the communities they serve.
For many years, it was widely assumed that there was a natural affinity between the two approaches. After all, it was often asserted, computers, data, and algorithms are “objective”—tools of logic that could be used to make decisions and serve users without the messy prejudices of humans.
It was hoped that the new methods of design would help promote diversity in a variety of fields including education, job recruitment, credit scoring, criminal justice, and the distribution of health care services.
Alas, just as the Internet, once hailed as a technology that would bring the world together, is now widely reviled for driving us apart, so algorithms increasingly are vilified for turbocharging racism and sexism.
Safiya Umoja Noble, author of Algorithms of Oppression: How Search Engines Reinforce Racism, this week offered a curated reading list on Pocket, exploring how technology can replicate and reinforce racist and sexist attitudes and skew outcomes in nearly all the fields it was meant to improve. I recommend every item. A few highlights:
- Google’s image-ranking algorithms reinforce negative stereotypes of “black girls” as sex objects and “black teenagers” as criminals. (Time)
- Algorithmic risk scoring in the U.S. criminal justice system has led to racially biased outcomes. One such algorithm, documented by ProPublica, classified black defendants as having a “high recidivism risk” at disproportionately higher rates than white defendants even though the algorithm did not explicitly use race as a variable. (The Boston Review, ProPublica)
- Algorithms used by hospitals and physicians to guide decisions such as who receives heart surgery, who needs kidney care, and who should try to give birth vaginally are racially biased. (STAT)
- Speech recognition systems from Amazon, Apple, Google, IBM and Microsoft misidentify words spoken by black people 35% of the time compared to 19% for white people. (The New York Times)
Tech optimists assert that the long-run benefits to society of embracing artificial intelligence far outweigh its current shortcomings. A host of practitioners argue that the use of algorithms remains in its early days, and that biased systems can be tweaked and optimized to produce more neutral decision outcomes in the future.
But Noble and other critics insist algorithms are inherently biased and bound to perpetuate prejudice—not least because the data upon which they feed is the product of past discrimination.
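The mechanism the critics describe can be sketched in a few lines: even when a model never sees race, a correlated proxy feature combined with biased historical labels reproduces the disparity. Here is a minimal, self-contained toy illustration; the feature names, groups, and rates are all hypothetical, chosen only to mirror the ProPublica finding that race need not be an explicit variable for outcomes to skew:

```python
import random

random.seed(0)

# Toy population: each person has a hidden group (never shown to the
# model) and an observable proxy (e.g. neighborhood) correlated with it.
def make_person(group):
    # Hypothetical correlation: group A lives in neighborhood 1 80% of the time.
    neighborhood = 1 if random.random() < (0.8 if group == "A" else 0.2) else 0
    # Historical "risk" labels reflect past over-policing of neighborhood 1,
    # not underlying behavior: residents there were labeled risky more often.
    labeled_risky = random.random() < (0.6 if neighborhood == 1 else 0.2)
    return group, neighborhood, labeled_risky

people = [make_person("A") for _ in range(5000)] + \
         [make_person("B") for _ in range(5000)]

# "Train" the simplest possible model: flag anyone whose neighborhood's
# historical risk rate exceeds the overall average. Group is never used.
rate = {}
for n in (0, 1):
    subset = [p for p in people if p[1] == n]
    rate[n] = sum(p[2] for p in subset) / len(subset)
overall = sum(p[2] for p in people) / len(people)

def flag(person):
    return rate[person[1]] > overall

# Measure flag rates by group: the disparity reappears via the proxy.
for g in ("A", "B"):
    members = [p for p in people if p[0] == g]
    flagged = sum(flag(p) for p in members) / len(members)
    print(f"group {g}: flagged {flagged:.0%}")
```

Roughly 80% of group A is flagged versus roughly 20% of group B, even though group membership never enters the model: the biased labels flow through the proxy feature. This is the sense in which the data, not an explicit variable, carries the discrimination.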
Is it possible to teach machines to be truly inclusive? Are we capable of devising algorithms that overcome, not amplify, injustice? The early evidence isn’t encouraging.
More design news below.