Research and Content Audit to Deflect Cases
Problem
Customers struggle to include database queries in interactive dashboard widgets because of hard-to-find documentation, outdated content, and overall complicated functionality. As the database query language writer, I identified these issues when I learned that writing queries for interactive widgets was the number-one support case driver. This feature is key to designing dashboards in Salesforce’s Analytics product. If customers can’t create the bar graph, table, or chart they need to display their data, they could leave the Salesforce ecosystem—a source of difficulty for the customer, and a user retention issue for the company.
Scope
Audit trainings, videos, documentation, and in-app help to identify limitations and terminology inconsistencies—sources of customer confusion.
Move widgets documentation to a more visible place in our documentation structure and reference it in the language reference.
Create examples based on customer usage—both superusers and beginners. A sketch of one such example appears below.
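To give a flavor of the kind of example this work calls for, here is a minimal sketch of a widget-style query run against the Analytics REST query endpoint. The instance URL, token, dataset alias, and field names are placeholder assumptions, and when calling the API directly the dataset may need to be referenced by ID and version rather than by name.

```python
# Minimal sketch (not from the actual docs): run a SAQL query of the kind an
# interactive dashboard widget uses. All names and values below are assumptions.
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # assumption: your org's URL
ACCESS_TOKEN = "00D..."                                   # assumption: a valid OAuth token

# Group opportunities by stage and count them, so a widget can render one bar per stage.
saql = """
q = load "opportunities";
q = group q by 'StageName';
q = foreach q generate 'StageName' as 'Stage', count() as 'Count';
q = order q by 'Count' desc;
q = limit q 10;
"""

response = requests.post(
    f"{INSTANCE_URL}/services/data/v58.0/wave/query",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"query": saql},
)
response.raise_for_status()

# Each returned record maps to one bar, row, or cell in the widget being built.
for record in response.json()["results"]["records"]:
    print(record["Stage"], record["Count"])
```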
Stakeholders
Support, engineering, product management
Solution
Increased cross-team visibility by working with support, pre-sales engineering, solutions engineering, and an expert Salesforce consultant to avoid duplicating effort and to keep the user experience consistent. Together, we discussed common questions and agreed on the gaps in customer understanding.
Repurposed must-have examples from the Salesforce consultant’s blog posts to use in documentation (in progress).
Successfully handed off the content audit to another writer after I was reorged to another team. That writer has since migrated the docs to our developer site, where customers can find them more easily.
UX Writing: Empower Non-Data Scientists to Clean Data
Problem
Data Prep is the part of Salesforce’s Analytics product that simplifies the complex task of cleaning data. We added a feature that reduces the cognitive overload of joining one dataset to another by letting customers select their main dataset and the dataset(s) to be joined on the same screen. Previously, customers needed to know which datasets had relationships to the one they initially selected; in practice, they spent a lot of time searching through datasets to figure out which ones they could join. The recommended datasets feature removes the guesswork by showing only the relevant ones to choose from.
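The underlying idea can be sketched independently of Salesforce’s actual implementation (which I was not involved in building): given the dataset a customer selects, surface the other datasets that share a likely join key instead of making the customer search. The dataset and column names below are made up for illustration.

```python
# Illustrative sketch only -- not Salesforce's implementation. Given a selected
# dataset, recommend other datasets that share a candidate join key with it.
import pandas as pd

# Hypothetical datasets with made-up column names.
datasets = {
    "Opportunities": pd.DataFrame(columns=["OpportunityId", "AccountId", "Amount"]),
    "Accounts": pd.DataFrame(columns=["AccountId", "Industry", "Region"]),
    "WebLogs": pd.DataFrame(columns=["SessionId", "PageUrl", "Timestamp"]),
}

def recommend_joinable(selected, catalog):
    """Return datasets that share at least one column name with the selected
    dataset, along with the shared columns (candidate join keys)."""
    selected_cols = set(catalog[selected].columns)
    recommendations = {}
    for name, df in catalog.items():
        if name == selected:
            continue
        shared = sorted(selected_cols & set(df.columns))
        if shared:
            recommendations[name] = shared
    return recommendations

# Selecting "Opportunities" recommends "Accounts" (shared key: AccountId),
# but not "WebLogs", which has nothing to join on.
print(recommend_joinable("Opportunities", datasets))
```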
Scope
Ensure our user base of non-data scientists can clean data effectively by providing context and brief explanations, while maintaining consistent technical vocabulary.
Stakeholders
Product management, design, engineering
Solution
Though I don’t have feedback on the tooltips specifically, discussions with the product manager and solutions engineers indicate that customers appreciate the feature and weren’t confused by the initial demo from product management.
Before and after screenshots of the product text.
Process Details
I collaborate on product text with designers, engineers, product managers, and our UX writing review team in a Google Doc.
Engineers or designers sometimes write placeholder product text, which goes in the Current Text column. I write the Recommendation column.
The UX writing review team and I edit and discuss the recommended text via Google Doc comments. Together, we establish the final text, which goes in the Final Text column.
Outcomes
From a solutions engineer: “Previously, I don’t think [Data Prep] did a terribly great job. Before, I really had to search. What is super nice now is how when you select a dataset, you can check for related Objects—that is gold!!! So, for a related join node, it finds the join keys automatically. That is SWEET. It’s so much better!!”
Training Module: Why Bias in AI Models Matters in Plain Language
Problem
Employees and customers needed a primer on why it matters to build AI models that produce fair, unbiased, and representative outcomes, and on how to do so. If they’re unaware of the technical and social pitfalls that can accompany development, the recommendation systems that undergird many of Salesforce’s products—and the world’s biggest companies that leverage them—could generate inaccurate, unusable, and potentially harmful results.
Background
I have a longstanding interest in the relationship between technology and people, and in the repercussions of AI in particular. I reached out to the head of our Ethical AI department to ask how I could be of use to her as a writer. A training module was among her top priorities, but she was strapped for time. Given my writing ability and my familiarity with the subject matter and our internal publishing processes, I could manage the details of this collaborative effort.
Scope
Adapted a series of blog posts written by the head of Ethical AI into an easily digestible format of bullet-pointed objectives, headings, and tightly edited body text.
Worked with engineering (highly technical), Trailhead editors (non-technical), the Ethics team, PR, and legal to align on messaging for describing bias to a diverse audience that spans cultural norms and technical abilities.
Stakeholders
Product, the Office of Ethical and Humane Use, AI engineers, UX research
Solution
Made sure our examples apply to everyone.
Before
“But the AI in question judged [the beauty pageant contestants] by European beauty standards—in the end, the vast majority of the winners were white.”
Customer feedback highlighted this conflation of European beauty standards and whiteness.
After
But the AI in question judged [the beauty pageant contestants] by Western beauty standards that emphasized whiteness. In the end, the vast majority of the winners were white.
To show that automation bias isn’t new or limited to AI, I also added information about the history of color photography, which initially calibrated skin tones, shadows, and light based on an image of a white employee.
Before
A limited discussion of gender regarding a prostate cancer drug.
Some may argue that not all biases are bad. For example, a pharmaceutical company manufacturing a prostate cancer drug only to men may seem like a “good” bias because they are targeting a specific group. This is actually not “biased.” Since women do not suffer from prostate cancer, the company is not denying them a benefit by not showing them ads for their drug. In this case, showing only men the ads is accurate and not biased. Therefore, it doesn’t make sense to refer to this as a “good” bias because it is not a bias at all.
After
Some may argue that not all biases are bad. For example, let’s say a pharmaceutical company manufactures a prostate cancer drug, and they target only men in their marketing campaigns. The company believes that their marketing campaign targeting is an example of a good bias because it’s providing a service by not bothering female audiences with irrelevant ads. But if the company’s dataset included only cisgender people, or failed to acknowledge additional identities (i.e., non-binary, transgender woman, transgender man, and agender individuals), then they were likely excluding other people who would benefit from viewing the ad. By incorporating a more complex and accurate understanding of gender and gender identity, the company would be better equipped to reach everyone who could benefit from the drug.
Outcomes
Over 37k employees, administrators, consultants, sales representatives, and developers have completed the training.
Winner of a Distinguished Award from the Society for Technical Communication
Nominated for a Harvard Kennedy School Tech Spotlight
A recognition program for projects and initiatives that demonstrate a commitment to tech and public purpose. Nominations are evaluated based on their proven ability to reduce societal harms and protect public purpose values including privacy, safety and security, transparency and accountability, and inclusion.
Listed as a resource and referenced in the Ethical and Humane Use of Technology training required of all Salesforce employees.