Training Module: Why Bias in AI Models Matters in Plain Language

Problem 

Employees and customers needed a primer on why AI models must produce fair, unbiased, and representative outcomes, and on how to build models that do. If the people building and buying these systems are unaware of the technical and social pitfalls that can accompany development, the recommendation systems that undergird many of Salesforce's products (and that the world's biggest companies rely on) could generate inaccurate, unusable, and potentially harmful results.


Background

I have a longstanding interest in the relationship between technology and people, and in the repercussions of AI in particular. I reached out to the head of our Ethical AI department to ask how I could be of use to her as a writer. A training module was among her top priorities, but she was strapped for time. Given my writing experience and my familiarity with both the subject matter and our internal publishing processes, I offered to manage the details of this collaborative effort.

Scope

  • Adapted a series of blog posts written by the head of Ethical AI into an easily digestible format of bullet-pointed objectives, headings, and tightly edited body text.

  • Worked with engineering (highly technical), Trailhead editors (non-technical), the Ethics team, PR, and legal to align on messaging for describing bias to a diverse audience that spans cultural norms and technical abilities.


Stakeholders

Product, the Office of Ethical and Humane Use, AI engineers, UX research


Solution

Make sure our examples apply to everyone.

Before

“But the AI in question judged [the beauty pageant contestants] by European beauty standards—in the end, the vast majority of the winners were white.”

Customer feedback highlighted this conflation of European beauty standards and whiteness.

After

“But the AI in question judged [the beauty pageant contestants] by Western beauty standards that emphasized whiteness. In the end, the vast majority of the winners were white.”

To show that automation bias isn't new or limited to AI, I also added information about the history of color photography, which was initially calibrated for skin tones, shadows, and light based on an image of a white employee.


Before

A limited discussion of gender in an example about marketing a prostate cancer drug.

Some may argue that not all biases are bad. For example, a pharmaceutical company manufacturing a prostate cancer drug only to men may seem like a “good” bias because they are targeting a specific group. This is actually not “biased.” Since women do not suffer from prostate cancer, the company is not denying them a benefit by not showing them ads for their drug. In this case, showing only men the ads is accurate and not biased. Therefore, it doesn’t make sense to refer to this as a “good” bias because it is not a bias at all.

After

Some may argue that not all biases are bad. For example, let’s say a pharmaceutical company manufactures a prostate cancer drug, and they target only men in their marketing campaigns. The company believes that their marketing campaign targeting is an example of a good bias because it’s providing a service by not bothering female audiences with irrelevant ads. But if the company’s dataset included only cisgender people, or failed to acknowledge additional identities (e.g., non-binary people, transgender women, transgender men, and agender individuals), then they were likely excluding other people who would benefit from viewing the ad. By incorporating a more complex and accurate understanding of gender and gender identity, the company would be better equipped to reach everyone who could benefit from the drug.

Outcomes

  • Over 37,000 employees, administrators, consultants, sales representatives, and developers have completed the training.

  • Winner of a Distinguished Award from the Society for Technical Communication

  • Nominated for a Harvard Kennedy School Tech Spotlight

    • A recognition program for projects and initiatives that demonstrate a commitment to tech and public purpose. Nominations are evaluated based on their proven ability to reduce societal harms and protect public purpose values, including privacy, safety and security, transparency and accountability, and inclusion.

  • Listed as a resource and referenced in the Ethical and Humane Use of Technology training required by all Salesforce employees.
