Slack AI's Privacy Principles & Consent for AI Training

Michelle Ma
May 20, 2024

AI Talk

Slack, the online communications platform that companies worldwide use to engage and communicate with employees, released Slack AI earlier this year. Recently, however, the company has come under fire for its purported use of Customer Data to train machine-learning models, as permitted by its privacy terms.

Slack AI and ML Training Using Customer Data

Slack AI is a relatively new add-on that summarizes conversations, searches across projects, teams, and topics, and provides a daily recap of messages so users don’t miss anything. Notably, its webpage states that “Your data is your data. We don’t use it to train Slack AI.”

The controversy arises from the apparent contradiction between Slack AI’s page and the company’s privacy principles, which state that “our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information)” for several purposes: channel recommendations, search results, autocomplete, and even emoji suggestions. At the same time, the principles state that Slack does not train LLMs or other generative models on Customer Data or share Customer Data with LLM providers.

Opting Out

While Slack now agrees that clarifying the privacy principles is necessary, the next issue is users’ ability to opt out. Currently, users are automatically opted in to Slack’s analysis of their Customer Data. To opt out, users must follow a process outlined in the privacy principles: “Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line ‘Slack global model opt-out request’. We will process your request and respond once the opt-out has been completed.”
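
For illustration only, here is a minimal Python sketch that drafts the opt-out request as the quoted process describes; the workspace URL and owner address are hypothetical placeholders, and the message would still need to be sent by an org or workspace owner through their own mail system.

    from email.message import EmailMessage

    # Placeholders -- substitute your actual workspace/org URL and an
    # org owner, workspace owner, or primary owner address.
    WORKSPACE_URL = "https://your-workspace.slack.com"
    OWNER_ADDRESS = "owner@example.com"

    def draft_opt_out_email() -> EmailMessage:
        """Draft the opt-out request per Slack's stated process: sent to
        feedback@slack.com with the required subject line and the
        workspace/org URL included."""
        msg = EmailMessage()
        msg["From"] = OWNER_ADDRESS
        msg["To"] = "feedback@slack.com"
        msg["Subject"] = "Slack global model opt-out request"
        msg.set_content(
            "Please exclude our Customer Data from Slack global models.\n"
            f"Workspace/org URL: {WORKSPACE_URL}\n"
        )
        return msg

    if __name__ == "__main__":
        # Print the drafted message; actually sending it requires your
        # own SMTP server and credentials.
        print(draft_opt_out_email())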

AI Legislation & Consent

Slack’s predicament raises the question of best practices and legal requirements for consent when customer data is used in ML training. Currently, many platforms automatically opt users in when they use the service; for some commonly used platforms, directions and information for opting out are here. While some states have passed or are considering legislation regulating the use of personal data in AI and automated decision-making technology, there is currently no explicit legislation on workplace software and opt-out, aside from the most recent Senate bill, the AI CONSENT Act. That Act requires explicit consumer consent before personally identifiable information (PII) is used for AI training, and it comes closest to addressing the issue here. Until state or federal legislation requires companies to obtain an affirmative opt-in before using B2B customer user data to train algorithms, this controversy is unlikely to be resolved soon.