Data Privacy & AI @Salesforce

As a team of 2 designers from Salesforce and 8 designers from IU, we proposed design solutions for Slack's AI tools that prioritize user control and data protection: allowing users to manage how their data is used, ensuring transparency about the AI’s limitations, and providing insight into its decision-making. We aimed to build trust through clear communication and accountability.

[My Role]

Product Designer

[Team]

Biheng | Salesforce Team | IU Team

[Timeline]

January 2024 – May 2024

[Impact]

Increased potential user confidence by 35%

[Background]

Slack AI — Built to support focus, transparency, and smarter collaboration

Slack AI brings generative intelligence to everyday workflows, helping teams summarize conversations, find answers, and stay aligned, all within the tools they already use.

3× productivity gain
Instant conversation recaps
Built-in privacy

[Problem]

👩‍💼 Students' privacy needs often get overlooked when Salesforce focuses mainly on working professionals.

To address this gap, we set out to provide insights on how to better support students' privacy needs in and out of the classroom.

💡 How might we help college students maintain control over Slack’s AI and data usage to foster trust and ethical communication?

[Business Value]

Trust is Salesforce's #1 value.

Salesforce's top priority is the security and privacy of the data that we are entrusted to protect.

[Solution Preview]

In-line privacy alerts

Allowing users to set custom reminders and keyword-based triggers to handle sensitive data carefully.

Transparent AI moderation

Helping users see why content is flagged and contest errors, building trust and ensuring fair discussions.

Engagement style

Helping AI interpret tone more accurately, reducing misinterpretations in summaries.

[Design Process]

  1. EXPLORE
Identify Signals

We started by analyzing AI and data privacy trends in educational tools to spot key concerns.

  2. SPECULATE
Ideate Future Scenarios

Using "Black Mirror" brainstorming, we imagined extreme scenarios Slack's Al might create for students, helping us anticipate ethical and privacy challenges in education.

Ideate Concepts

We developed speculative concepts and paper prototypes to bring our imagined scenarios to life, illustrating possible issues and solutions in a tangible way.

  3. VALIDATE
Storify & Test Concepts

Through Wizard of Oz testing, we created relatable narratives for our scenarios and validated them with student feedback. This helped us see if our concepts resonated with real users and provided new insights.

  4. PROTOTYPE
Design Actionable Concepts

Using feedback from validation, we refined our ideas into actionable design concepts that directly addressed student concerns.

[Exploring / Identify Signals]

How is AI currently integrated into communication tools across education systems?
  1. The most common implementation of AI is in meetings: summarizing important conversations and generating transcriptions.

  2. Many educational institutions are implementing AI-powered chatbots to assist students with routine queries.

  3. Some AI tools are being used for automated grading and providing faster feedback to students.

  4. Some platforms are exploring AI for monitoring student engagement.

  5. All of the platforms we reviewed follow GDPR and CCPA privacy regulations.

[Speculating / Ideate Future Scenarios]

To imagine potential issues and worst-case scenarios, we used 🪞 Black Mirror brainstorming to speculate.

It allowed us to explore how AI might impact students in an educational setting. This approach uncovered possible risks and ethical concerns early on, letting us design with a proactive mindset.

🪞 Black Mirror Brainstorming

Black Mirror Brainstorms is an exercise modeled after the Netflix series "Black Mirror," designed for industry teams to critically brainstorm and evaluate negative-impact scenarios of their work in order to properly consider their potential consequences.

Mauldin, J. (2018). Black Mirror Brainstorms: A Product Design Exercise. UX Collective.

To map out possible, plausible, and probable outcomes, we used the 🔮 Futures Cone diagram to organize these potential scenarios after brainstorming.

🔮 Futures Cone

A visual tool that helps people visualize the range of possible futures that could come from the present. It's a set of nested cones, each representing a different likelihood of future events.

Goal Atlas. "The Futures Cone: Mapping Possibilities for Strategic Planning."

[Validating / Storify & Test Concepts]

Once we’d imagined potential issues, we wanted to see if our ideas actually mattered to users. Without access to Slack’s AI tools, we used 🧙 Wizard of Oz testing, creating paper prototypes to represent our anticipated risks.

🧙 Wizard of Oz

The Wizard of Oz is a user research method where a user interacts with a mock interface controlled, to some degree, by a person.

Nielsen Norman Group. Wizard of Oz Testing.

We started by analyzing all the feedback and grouping it into key themes with affinity mapping. From there, we focused on the fears that came up the most and felt the most significant, so we could design solutions that tackled what mattered most to our users.

Fear of Trusting AI

Users worry AI might misinterpret conversations, especially for international students.

Need for Validation

Users want a way to verify AI’s information for reliability.

Inaccurate Summaries

Concerns that AI misreads tone, leading to incorrect summaries and unfair flags.

Lack of Context Awareness

Users doubt AI’s ability to grasp full conversation context.

Need for Transparency

Users want clarity on AI’s decisions, especially in flagging content.

Fear of Misinterpretation

AI’s misjudgments could wrongly flag conversations, affecting engagement.

Invasive Profiling of Students

Users fear AI might misinterpret private chats, causing unfair profiling.

[Final Concept #1]

In-line privacy alerts

When discussing personal challenges or sharing survey data, students can "🔔 set reminders" to handle information carefully, ensuring confidentiality.

These alerts help students stay mindful of sensitive topics like personal data, supporting Salesforce’s Responsible AI principle by helping students protect their privacy.

Custom alerts

Students can create custom alerts specific to their projects and discussions, setting up 📮 user-initiated triggers for keywords that relate to sensitive topics.

Since they’re setting these alerts themselves, they’re more likely to interact with them and stay engaged.

📮 User-Initiated Triggers

People are more likely to interact with a trigger (e.g., a push notification or an alert) if they've set it up themselves.

Built for Mars. UX Glossary.
https://builtformars.com/ux-glossary
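
To make the mechanism concrete, here is a minimal sketch of how user-initiated keyword triggers could surface an in-line alert. Everything here (the PrivacyTrigger shape, the matchTriggers helper) is a hypothetical illustration, not Slack's actual implementation.

```ts
// Hypothetical sketch of user-initiated keyword triggers for in-line privacy
// alerts. Names and types are illustrative; Slack's real pipeline is not public.

interface PrivacyTrigger {
  keywords: string[]; // terms the student chose, e.g. "survey data"
  reminder: string;   // the alert text the student wrote for themselves
}

// Returns the reminders that should appear in-line for a draft message.
function matchTriggers(draft: string, triggers: PrivacyTrigger[]): string[] {
  const text = draft.toLowerCase();
  return triggers
    .filter(t => t.keywords.some(k => text.includes(k.toLowerCase())))
    .map(t => t.reminder);
}

// Example: a student protects their research project's raw data.
const triggers: PrivacyTrigger[] = [
  {
    keywords: ["survey data", "participant"],
    reminder: "🔔 Check consent before sharing raw responses.",
  },
];
console.log(matchTriggers("Attaching the survey data from week 2", triggers));
// -> ["🔔 Check consent before sharing raw responses."]
```

Because the student authored both the keywords and the reminder text, the alert arrives in their own voice rather than as a system warning, which is what the user-initiated-triggers pattern predicts will keep engagement high.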

[Final Concept #2]

Transparent AI moderation

This concept helps users understand that 🚩 AI can sometimes misinterpret context, cultural nuances, or language, leading to flagged content that wasn’t actually sensitive.

This concept aligns with Salesforce’s Transparent AI principle by giving users visibility into AI’s decision-making process and building trust through clarity and accountability.

For example, if two students discuss America’s colonial history, the AI might flag it as “sensitive” without grasping the full context.

Users can easily see why their comment was flagged, helping them understand the AI’s decisions more transparently.

If they believe it’s an error, users can review flagged parts and provide context, addressing AI biases while fostering open discussions. The ability to contest flags, even if rarely used, builds trust—like 🛟 visible lifeboats ensuring user control.

🛟 Visible Lifeboats

A feature with low usage that reassures users by its presence, reducing anxiety. Knowing they can undo an action increases confidence in using risky features.

Built for Mars. UX Glossary.
https://builtformars.com/ux-glossary
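
As a sketch of what "transparent" could mean in data terms, the record below carries the flag's reason and a contest path as first-class fields. The ModerationFlag shape and contestFlag helper are assumptions for illustration only.

```ts
// Hypothetical shape of a transparent moderation flag: the user-facing reason
// and the contest path are first-class fields, not hidden behind the model.

interface ModerationFlag {
  messageId: string;
  label: "sensitive";
  reason: string;     // plain-language explanation shown to the user
  confidence: number; // 0..1, surfaced so borderline calls are visible
  contested?: { userContext: string; status: "pending" | "upheld" | "reversed" };
}

// A user who believes the flag is an error attaches context for review.
function contestFlag(flag: ModerationFlag, userContext: string): ModerationFlag {
  return { ...flag, contested: { userContext, status: "pending" } };
}

// The colonial-history example from above, contested with classroom context.
const flag: ModerationFlag = {
  messageId: "C123/1714",
  label: "sensitive",
  reason: "Mentions of colonial-era conflict matched the sensitive-history filter.",
  confidence: 0.62,
};
console.log(contestFlag(flag, "This is a class discussion of assigned reading."));
```

Surfacing the reason and confidence is what lets the flag explain itself; the rarely used contest field is the visible lifeboat.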

[Final Concept #3]

Engagement Style

By selecting a style (such as formal, casual, or topic-focused), users enable the AI to interpret tone and intent more effectively, minimizing misunderstandings and improving the accuracy of summaries.

Now, let’s see how our concept of setting a tone for the Slack channel helps. Here’s a comparison of two summaries: one without setting a channel tone and one where an "engagement style" is defined as relaxed. You can see how adding this context changes the AI’s interpretation, aligning it better with the casual nature of the conversation.

❌ Summary without context

✅ Summary with context
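
Here is a minimal sketch of how a channel's engagement style could be injected as context ahead of summarization; the prompt wording, style names, and buildSummaryPrompt function are assumptions for illustration.

```ts
// Hypothetical sketch: prepend the channel's engagement style to the
// summarization prompt so the model reads tone in context.

type EngagementStyle = "formal" | "casual" | "topic-focused";

function buildSummaryPrompt(messages: string[], style?: EngagementStyle): string {
  const toneHint = style
    ? `The channel's engagement style is "${style}"; read tone accordingly ` +
      "(e.g. treat jokes and slang as informal banter, not conflict or distress)."
    : "";
  return `${toneHint}\nSummarize the following conversation:\n${messages.join("\n")}`;
}

// Without the hint, "this project is killing me 💀" might be summarized as
// distress; with style "casual", it reads as routine venting between friends.
console.log(
  buildSummaryPrompt(["this project is killing me 💀", "lol same"], "casual"),
);
```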

[Future Work]

We need to do extensive user testing for these design concepts. Currently, the channel tone settings may feel too rigid. For instance, if users realize that setting a “professional” tone could limit message flexibility, they might avoid it altogether. Plus, there are subtle nuances in group communication that the preset engagement styles may not fully capture.

Since this project centers on education, other roles, like instructors and researchers, need attention. For example, instructors monitoring student progress with AI could impact grades, so transparency with students is essential here.

[Reflection]

This project was an incredible experience that transformed my perspective: from simply using AI as a user to actively designing ways to enhance its privacy and build user trust. It challenged me to think critically about ethical AI and user autonomy, pushing me beyond surface-level interactions.

I’m grateful to my team for their collaboration and insights, and to our mentors for their valuable feedback. Their support was essential in shaping our solutions and making this experience truly rewarding.

Salesforce office visit with the awesome design team

Meet and talk with Salesforce designers
