
How sure are you that you can tell what is real online? You may think it’s easy to spot an AI-made image, and you might know that computer programs can be biased. But studies show we often fail to notice such influence in our daily lives. It shapes not only what we see but also how we speak.
The same thing is now happening with AI chatbots. You may think ChatGPT speaks English like a person, but it does not — just as algorithms (算法) do not show true reality. Its language always carries small patterns that depend on how the AI was trained. For example, ChatGPT uses the word “delve” much more often than normal. This may be because some of its trainers were from Nigeria, where “delve” is more common. Over time, the AI used it even more. Since ChatGPT became popular, people everywhere have begun to use “delve” more in daily talk. Without realizing it, we are starting to copy the AI’s way of speaking.
You can see a similar effect in music apps like Spotify. The app noticed that some users were listening to a new kind of music. It then created a playlist called “hyperpop” and recommended it to more people. Soon, listeners began to debate what hyperpop really was. Musicians made more songs in that style, and Spotify kept promoting it. But this raises a question: was hyperpop a real trend, or did the app make it bigger? Social media often amplifies trends to keep users engaged, making it hard to tell what is truly popular.