At least five large companies will introduce “bias bounties” or hacker competitions to identify bias in artificial intelligence (AI) algorithms, predicts the just-released “North American Predictions 2022” from Forrester.

Bias bounties are modeled on bug bounties, which reward hackers or coders (often outside the organization) who find flaws in software security. In late July, Twitter launched the first major bias bounty, awarding $3,500 to a student who showed that its image-cropping algorithm favors lighter, slimmer and younger faces.

“Finding bias in machine learning (ML) models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public,” wrote Rumman Chowdhury, director of Twitter META, in a blog post. “We want to change that.”

Coders have been unearthing biases in AI-driven algorithms on social media since 2015, when a programmer called out a search feature of the Google Photos app that mistakenly tagged photos of Black people as gorillas.

In May, Twitter admitted that its automatic cropping algorithm repeatedly cropped out Black faces in favor of white ones and favored men over women.

AI biases can affect which advertisements or products a person sees online or the recommendations they receive on Netflix, but they can also lead to prejudiced outcomes in job hiring, loan applications, health care decisions and criminal justice.

Machine learning models can absorb the covert or overt biases of their human developers. Bias is also often traced to training data that already reflects historical inequities.

Companies deploying AI say they are taking steps to use more representative training data and to regularly audit their systems for unintended bias and disparate impact against particular groups.
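One common check such audits perform is a disparate-impact test: comparing a model's rate of favorable decisions across demographic groups. The sketch below illustrates the widely used "four-fifths rule" heuristic; the group names, data and function names are hypothetical, not drawn from any company's actual audit tooling.

```python
# Hypothetical sketch of a disparate-impact audit using the "four-fifths rule":
# a model's selection rate for any group should be at least 80% of the rate
# for the most-favored group. All data and names here are illustrative.

def selection_rate(outcomes):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_outcomes):
    """Return (ratio, per-group rates), where ratio is the lowest group
    selection rate divided by the highest.
    group_outcomes maps group name -> list of 0/1 model decisions."""
    rates = {g: selection_rate(o) for g, o in group_outcomes.items()}
    return min(rates.values()) / max(rates.values()), rates

# Illustrative audit data: 1 = favorable decision (e.g., loan approved)
audit = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1],  # 80% selection rate
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 40% selection rate
}

ratio, rates = disparate_impact_ratio(audit)
if ratio < 0.8:  # four-fifths threshold
    print(f"Potential disparate impact: ratio {ratio:.2f}, rates {rates}")
```

A bias bounty effectively crowdsources this kind of probing: outside researchers construct inputs, measure outcome rates across groups, and report ratios the internal audits missed.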

Forrester predicted that in 2022 other major tech companies, such as Google and Microsoft, will implement bias bounties, as will non-technology firms such as banks and healthcare organizations.

Forrester wrote in its predictions report: “AI professionals should consider using bias bounties as a canary in the coal mine for when incomplete data or existing inequity may lead to discriminatory outcomes from AI systems. With trust high on the agenda of stakeholders, organizations will have to drive decision-making based on levers of trust such as accountability and integrity, making bias elimination ever more critical.”