How AI Personalization Fuels Groupthink and Uniformity
No, this isn’t another article crapping all over AI and babbling about our AI overlords replacing us at the top of the food chain. It’s a critical take on the ever-rising trend of pushing AI-powered suggestion and recommendation mechanisms in every service and tool we use.
I read Slack's privacy principles on search, learning, and artificial intelligence a few days ago. Like many users, I wasn’t delighted to see how Salesforce’s Slack will use its customers’ private conversations to train its AI. There’s been plenty of backlash online, and everyone and their mother has already shared an opinion about it somewhere. I see no point in adding to that pile, but I find another part of that text just as concerning.
Right now, I want to focus on how the data will be used. Please note that Slack is not the only one (or the first, for that matter) doing this. It was simply the trigger that made me think again about how the bubbles we live in will become even smaller. Slack is the primary communication medium for many businesses worldwide. Let me be idealistic for a moment and say that all those businesses want to create value. And to create value, you need creativity, new ideas, and original thoughts, the very things you want more of, right?
Here are some of the ways Slack will use its customers’ data to “make your life easier”:
- Autocomplete: Slack might make suggestions to complete search queries or other text; for example, autocompleting the phrase “Customer Support” after a user types the first several letters of this phrase.
- Emoji Suggestion: Slack might suggest emoji reactions to messages using the content and sentiment of the message, the historic usage of the emoji, and the frequency of use of the emoji on the team in various contexts.
- Search Results: Our search machine learning models help users find what they’re seeking by identifying the right results for a particular query. We do this based on historical search results and previous engagements (…)
At first glance, these features seem harmless, even helpful. They save time, reduce friction, and enhance user experience. However, beneath the surface lies a more troubling consequence: the potential for these features to stifle creativity and reinforce groupthink.
Consider the autocomplete function. By suggesting common completions based on past data, Slack’s AI could inadvertently discourage users from thinking outside the box. If the AI continually nudges users toward conventional phrases and ideas, it might limit the expression of novel thoughts. Over time, this can create a feedback loop where the most common ideas become even more dominant.
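To make that feedback loop concrete, here is a minimal sketch of a naive frequency-based autocompleter (the class, phrases, and counts are my own assumptions, not Slack’s actual system). Every accepted suggestion is fed back into the counts, so the already-dominant phrase keeps pulling further ahead.

```python
# A minimal sketch (not Slack's implementation) of frequency-based autocomplete.
# Accepted suggestions feed back into the counts, so the most common phrase
# becomes even more likely to be suggested next time.
from collections import Counter

class FrequencyAutocomplete:
    def __init__(self, history):
        # Count how often each phrase has been typed or accepted before.
        self.counts = Counter(history)

    def suggest(self, prefix):
        # Rank known phrases that match the prefix purely by popularity.
        matches = [p for p in self.counts if p.lower().startswith(prefix.lower())]
        return max(matches, key=self.counts.__getitem__, default=None)

    def accept(self, phrase):
        # Accepting a suggestion makes it even more likely in the future.
        self.counts[phrase] += 1

history = ["customer support", "customer support", "customer success", "customer surveys"]
ac = FrequencyAutocomplete(history)
for _ in range(3):
    suggestion = ac.suggest("custo")
    ac.accept(suggestion)
    print(suggestion)  # "customer support" wins every round and widens its lead
```

Nothing in this loop ever surfaces the less common phrasings, which is exactly the dynamic the paragraph above describes.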
Emoji suggestions present a similar issue. If certain emojis are suggested more frequently because they have been used more often in the past, it could lead to a homogenization of emotional expression. This might seem trivial, but it’s one more thing that makes the whole communication experience uniform, predictable, and boring.
Search results, too, are influenced by past behaviors. If AI algorithms prioritize results based on what has been frequently searched for and engaged with, they can create a self-reinforcing loop that amplifies popular ideas and suppresses less common ones. If everyone is guided to the same set of information, the range of ideas and solutions considered will narrow.
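The same rich-get-richer dynamic is easy to simulate. The toy sketch below assumes engagement-only ranking plus a strong position bias (the document titles, click counts, and probabilities are invented for illustration): whatever was clicked most in the past ranks first, gets clicked again, and keeps widening its lead.

```python
# A toy simulation (my assumption, not any real ranking code) of
# engagement-based search ranking with position bias.
import random

docs = {"popular take": 50, "contrarian take": 5, "new idea": 1}  # past click counts

def rank(results):
    # Order results purely by historical engagement.
    return sorted(results, key=lambda d: docs[d], reverse=True)

def simulate_searches(rounds=100):
    for _ in range(rounds):
        ordered = rank(list(docs))
        # Users overwhelmingly click the top result.
        clicked = ordered[0] if random.random() < 0.9 else random.choice(ordered[1:])
        docs[clicked] += 1  # the click feeds back into future rankings

simulate_searches()
print(docs)  # "popular take" dominates even further; the others barely move
```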
In essence, AI personalization in the tools we use every day can create an echo chamber, one that steadily pushes us toward uniformity and groupthink.
There’s a popular saying: “Always use the right tool for the job.” Maybe we should add a part that goes, “as long as it isn’t dangerous in the long run.”