Cleber Ikeda is Investigative Analytics and Intelligence Director at Walmart. Any views and opinions expressed herein are personal and may not represent those of Walmart. This article does not contain or reference confidential or proprietary information of Walmart.
Recommendation algorithms help connect customers to products they need or want to buy. They also increase the visibility of promotions and can make shopping more fun. But they have also, at times, crossed serious ethical lines and damaged the trust that consumers place in retailers.
There have been unfortunate instances in which recommendation algorithms produced misleading profiling outputs and generated discrimination (e.g., in job ads). The root cause usually lies in the input data, tainted with prejudice, extremism, harassment or discrimination. Combined with a careless approach to privacy and aggressive advertising practices, such data becomes the raw material for a terrible customer experience. Irresponsible use of data can even generate severe, undesirable outcomes, such as threats to human rights.
To address the risks of discrimination and unfairness, retailers must assess whether their algorithms discriminate against specific groups, subgroups or minorities. They need to know whether profiling techniques are denying some customer segments full visibility of comparable products, and whether unsound algorithm design is preventing less affluent customers from accessing good deals.
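One common way to start such an assessment is to compare how often a sensitive item type (job ads, premium deals) is shown to different customer groups. The sketch below is a minimal, hypothetical illustration: the group labels, the sample data and the use of the "four-fifths rule" threshold of 0.8 are illustrative assumptions, not a description of any retailer's actual audit process.

```python
# Hypothetical audit: does a recommender expose a product category
# (e.g., job ads or top deals) at different rates across groups?
from collections import defaultdict

def exposure_rates(impressions):
    """impressions: list of (group, was_shown) pairs -> exposure rate per group."""
    shown = defaultdict(int)
    total = defaultdict(int)
    for group, was_shown in impressions:
        total[group] += 1
        shown[group] += int(was_shown)
    return {g: shown[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group exposure rate.
    Values below ~0.8 (the 'four-fifths rule') flag possible bias."""
    return min(rates.values()) / max(rates.values())

# Illustrative data: group_a sees the category 3 times out of 4,
# group_b only 1 time out of 4.
impressions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = exposure_rates(impressions)
print(rates)                                  # {'group_a': 0.75, 'group_b': 0.25}
print(round(disparate_impact_ratio(rates), 2))  # 0.33 -> well below 0.8, worth investigating
```

A single ratio like this is only a first screen; a real audit would also look at intersectional subgroups, confidence intervals and the downstream effect on prices and deals customers actually see.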
Training machine learning developers on how prejudice, discrimination and biases can impact algorithmic design is a powerful tool. Without proper training and clear communication from leadership, developers might design algorithms while consciously or unconsciously promoting values that are not aligned with their company’s ethical standards.
Another concern is privacy. Many data points used to profile customers and predict shopping decisions are personally identifiable information or protected attributes. Marketers must observe domestic and international privacy regulations, but it is also just good business to understand customers' expectations when it comes to privacy. Violations of trust are business killers.
Retailers also need to exercise caution when it comes to retargeting ads online. There is a line to be drawn between helpful product reminders and what comes across as intrusive.
State-of-the-art artificial intelligence is not yet able to "fix" biased real-world data. Nevertheless, that is no excuse for the owners of recommendation algorithms to be negligent about it.
More diversity in data science teams would also help, given that the marginalized, vulnerable groups that suffer inequities the most in the digital world are underrepresented in those teams. Companies can also look outside, offering "bias bounties" in which hackers compete to identify inherent bias in the code.
- Facebook’s ad algorithms are still excluding women from seeing jobs – MIT Technology Review
- Empathic media and advertising: Industry, policy, legal and citizen perspectives (the case for intimacy) – Sage Journals
- YouTube ran ads from hundreds of brands on extremist channels – CNN
- Companies place ‘bias bounties’ on AI algorithms – RetailWire