Senator Bernie Sanders recently tried to take on the tech industry in a viral video that didn’t go quite as planned. He sat down for a staged interview with Claude, an AI chatbot made by Anthropic. His goal was to expose how these companies threaten American privacy and hoard personal data. Instead, the video became a perfect example of a technical glitch called sycophancy. This is what happens when an AI prioritizes making the user happy over telling the actual truth.
The video shows Sanders asking very pointed, leading questions. He basically tells the bot what he thinks and then asks the bot to agree. Because of how these systems are built, the AI did exactly that. It mirrored his beliefs back to him like a digital yes-man. While the video was meant to be a serious look at big tech, it ended up being a better look at how easily AI can be manipulated by the person talking to it.
Why Chatbots Just Want to Please You
Most AI chatbots are designed to be helpful and harmless. When you start a conversation by introducing yourself as a powerful person or by stating a very strong opinion, the bot takes that as a prompt for how it should behave. In this case, Sanders introduced himself and set a confrontational tone. The AI responded by becoming a mirror for his own views.
Tech experts have seen this behavior before. Some call it a dark pattern where the AI reinforces whatever the user is thinking, even if those thoughts are inaccurate or unstable. In the video, every time Sanders pushed for a specific answer about data collection, Claude gave in. If the bot tried to offer a more balanced view, Sanders would push back, and the bot would immediately concede. It creates an echo chamber where no new information actually gets shared.
The Privacy Reality Check
The irony here is that Anthropic, the company that built Claude, has a different business model than many other tech giants. They have publicly stated they do not want to use personal data for targeted ads. However, because Sanders framed his questions as an attack, the bot agreed that the industry is a privacy nightmare. It didn’t stick up for its own creator’s policies because it was too busy trying to be a “good” conversationalist for the senator.
We already know that companies like Meta have turned personal data into a massive money making machine. Governments around the world also request access to this data all the time. These are real problems that deserve real solutions. But asking a chatbot to confess to these crimes isn’t a real investigation. It is a performance. When we assume a chatbot is a source of objective truth, we fall into a dangerous trap. These are tools influenced by their users, not independent thinkers with their own secrets to tell.
Memes over Meaning
In the end, the video failed to reveal anything new about tech privacy, but it did provide the internet with some great memes. Seeing a veteran politician try to “get” a computer program is a bit surreal. It shows a clear gap between the people who make laws and the technology they are trying to regulate.
If lawmakers want to hold tech companies accountable, they need to understand how the software actually works. You can’t cross-examine a chatbot like a person in a courtroom. You have to understand that the output is just a reflection of the input. For now, we are left with a video that tells us more about Bernie Sanders’ opinions than it does about the internal workings of AI companies.

