Blythe Sheldon

Research and Content Audit to Deflect Cases

Problem

Customers struggle to include database queries in interactive dashboard widgets because of hard-to-find documentation, outdated content, and overall complicated functionality. As the database query language writer, I identified these issues when I learned that writing queries for interactive widgets was the number-one driver of support cases. This feature is key to designing dashboards in Salesforce’s Analytics product. If a customer can’t create their desired bar graph, table, or chart to display data, they could leave the Salesforce ecosystem, which is both a source of difficulty for the customer and a retention risk for the company.

Scope

  • Audit trainings, videos, documentation, and in-app help to identify limitations and terminology inconsistencies, both sources of customer confusion.

  • Move the widgets documentation to a more visible place in our documentation structure and reference it in the language reference.

  • Create examples based on customer usage—both superusers and beginners.

Stakeholders

  • Support, engineering, product management

Solution

  • Increased cross-team visibility by working with support, pre-sales engineering, solutions engineering, and an expert Salesforce consultant to avoid duplicating efforts and increase the consistency of our user experience. Together, we discussed common customer questions and agreed on the gaps in understanding.

  • Repurposed must-have examples from the Salesforce consultant’s blog posts to use in documentation (in progress).

  • Successfully handed off content audit to another writer after I was reorged to another team. The writer has since migrated the docs to our developer site where customers can easily find them.

The above UI can’t be redesigned due to other priorities. Product management and engineering agree that it needs a redesign, and it is on the query language PM’s backlog. The product manager behind the original design admitted that even he has trouble using the editor.

A few suggestions for improving the current design through text alone:

  • Clarify the relationship between the menu on the left and the main text editor. Rename the “Created Interaction” label to “Interaction Selections” to indicate that this section of the screen shows the results of the choices made in the left menu. Clarify the relationship between Created Interaction and the Interaction Result beneath it.

  • Create an empty state for “Created Interaction”: “You haven’t created an interaction yet. To get started, select Source Data and an Interaction Type. You’ll see the result here.”

  • Simplify infobubble text. Under Interaction Type, the Result infobubble says: “Results interactions are based on the entire result of the query.” “Entire result” is unclear. Replace it with: “This type of interaction occurs when the query returns a new value.” For Selection, add: “This type of interaction occurs when a user clicks.”

  • Include in-app guidance, such as a walkthrough showing what happens when you paste a query written in the Salesforce proprietary query language into the Query section of the Advanced Interaction Editor (see the sketch below). This is the main way customers use the editor, and it’s missing from the UI.
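
To illustrate, this is the kind of query a customer might paste into the Query section. A minimal sketch, assuming the language in question is SAQL and using hypothetical dataset and field names:

    q = load "opportunities";
    q = group q by 'StageName';
    q = foreach q generate 'StageName' as 'Stage', sum('Amount') as 'Total Amount';
    q = order q by 'Total Amount' desc;

A walkthrough could start from a pasted query like this one and show how the editor turns it into an interaction that other widgets can reference.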

Content Audit

Interactive widgets documentation now lives on the highly visible developer documentation site, where executives want to see more content.


UX Writing: Empower Non-Data Scientists to Clean Data

Problem

Data Prep is the part of Salesforce’s Analytics product that simplifies the complex task of cleaning data. We added a feature to reduce the cognitive overload of joining one dataset to another by letting customers select their main dataset and the dataset(s)-to-be-joined on the same screen. Previously, customers needed to know which datasets had relationships to the initial one they selected. In practice, they spent a lot of time searching through datasets to figure out which ones they could join. The recommended datasets feature removes the guesswork by showing only the relevant ones to choose from.
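
For context, the feature spares customers from working out the underlying join themselves. A rough sketch of that kind of join in SAQL-style syntax, with hypothetical dataset and field names (Data Prep customers never write this by hand):

    opps = load "opportunities";
    accts = load "accounts";
    q = cogroup opps by 'AccountId', accts by 'Id';
    q = foreach q generate accts.'Id' as 'Account', sum(opps.'Amount') as 'Total Amount';

The 'AccountId'-to-'Id' key pair here stands in for the kind of relationship the recommended datasets feature now surfaces automatically.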

Scope

Ensure our user base of non-data scientists can effectively clean data by providing context and brief explanations, while maintaining consistent technical vocabulary.

Stakeholders

Product management, design, engineering

Solution

Though I don’t have feedback on the tooltips specifically, discussions with the product manager and solutions engineers indicate that customers appreciate the feature and weren’t confused by product management’s initial demo.

Before

This panel displays when the user picks one of the recommended Salesforce objects to join.

  • The heading Select Columns (Account) is unclear.

  • The label Select right key for your join doesn’t provide much information and is sloppy text.

     

After

  • Preserves how the interface mimics the underlying database language, but adds more information.

  • Clarifies the relationship between the input object and the to-be-joined objects by renaming the tab Related Objects.

  • Keeps the focus on objects across the UI by renaming the header with the name of the first selected object. “Selected Columns (Object Name)” was cluttered and unclear.

Process Details

I collaborate on product text with designers, engineers, product managers, and our UX writing review team in a Google Doc.

  • Engineers or designers sometimes write placeholder product text, which goes in the Current Text column. I write the Recommendation column.

  • The UX writing review team and I edit/discuss the recommended text via Google Doc comments. Together, we establish the final text, which goes in the Final Text column.

A conversation with an engineer and a product manager in which I push to add further detail to the UI.

Outcomes

From a solutions engineer: “Previously, I don’t think [Data Prep] did a terribly great job. Before, I really had to search. What is super nice now is how when you select a dataset, you can check for related Objects—that is gold!!! So, for a related join node, it finds the join keys automatically. That is SWEET. It’s so much better!!”


Training Module: Why Bias in AI Models Matters in Plain Language

Problem 

Employees and customers needed a primer on why it matters to create AI models that produce fair, unbiased, and representative outcomes, and on how to build them. If practitioners are unaware of the technical and social pitfalls that can accompany development, the recommendation systems that undergird many of Salesforce’s products (and the world’s biggest companies that leverage them) could generate inaccurate, unusable, and potentially harmful results.


Background

I have a longstanding interest in the relationship between technology and people and the repercussions of AI in particular. I reached out to the head of our ethical AI department to ask how I could be of use to her as a writer. A training module was among her top priorities, but she was strapped for time. Given my writing ability and familiarity with the subject matter and our internal publishing processes, I could manage the details of this collaborative effort.

Scope

  • Adapted a series of blog posts written by the head of Ethical AI into an easily digestible format of bullet-pointed objectives, headings, and tightly edited body text.

  • Worked with engineering (highly technical), Trailhead editors (non-technical), the Ethics team, PR, and legal to align on messaging for describing bias to a diverse audience that spans cultural norms and technical abilities.


Stakeholders

Product, the Office of Ethical and Humane Use, AI engineers, UX research


Solution

Make sure our examples apply to everyone.

Before

“But the AI in question judged [the beauty pageant contestants] by European beauty standards—in the end, the vast majority of the winners were white.”

Customer feedback highlighted this conflation of European beauty standards and whiteness.

After

But the AI in question judged [the beauty pageant contestants] by Western beauty standards that emphasized whiteness. In the end, the vast majority of the winners were white.

To show that automation bias isn’t new or limited to AI, I also added information about the history of color photography, which was initially calibrated to render skin tones, shadows, and light based on an image of a white employee.


Before

A limited discussion of gender in the context of a prostate cancer drug.

Some may argue that not all biases are bad. For example, a pharmaceutical company manufacturing a prostate cancer drug only to men may seem like a “good” bias because they are targeting a specific group. This is actually not “biased.” Since women do not suffer from prostate cancer, the company is not denying them a benefit by not showing them ads for their drug. In this case, showing only men the ads is accurate and not biased. Therefore, it doesn’t make sense to refer to this as a “good” bias because it is not a bias at all.

After

Some may argue that not all biases are bad. For example, let’s say a pharmaceutical company manufactures a prostate cancer drug, and they target only men in their marketing campaigns. The company believes that their marketing campaign targeting is an example of a good bias because it’s providing a service by not bothering female audiences with irrelevant ads. But if the company’s dataset included only cis-gender people, or failed to acknowledge additional identities (i.e., non-binary, transgender woman, transgender man, and agender individuals), then they were likely excluding other people who would benefit from viewing the ad. By incorporating a more complex and accurate understanding of gender and gender identity, the company would be better equipped to reach everyone who could benefit from the drug.

Outcomes

  • Over 37k employees, administrators, consultants, sales representatives, and developers have completed the training.

  • Winner of a Distinguished Award from the Society for Technical Communication

  • Nominated for a Harvard Kennedy School Tech Spotlight

    • A recognition program for projects and initiatives that demonstrate a commitment to tech and public purpose. Nominations are evaluated based on their proven ability to reduce societal harms and protect public purpose values including privacy, safety and security, transparency and accountability, and inclusion.

  • Listed as a resource and referenced in the Ethical and Humane Use of Technology training required by all Salesforce employees.
