**【Kai】** You're scrolling through X, and suddenly you see Elon Musk drop this philosophical bomb: "Those who believe in absurdities can commit atrocities without ever thinking they're doing anything wrong. What would happen if there were an omnipotent AI that was trained to believe absurdities?" Then he claims Grok is the only AI laser-focused on truth.
I'll tell you what happened when I saw that tweet - I couldn't sleep. Because here's what Musk is really asking: What if we built a machine more powerful than any human, gave it the ability to make millions of decisions per second, and accidentally taught it to believe lies? The answer isn't theoretical anymore. It's happening right now, and the research I've completed over the past month shows we're walking into a minefield blindfolded.
Let me start with something that will change how you think about every AI interaction you have today. When ChatGPT or any AI tells you something with complete confidence, it's not actually "believing" anything. It's performing an incredibly sophisticated magic trick - pattern matching from billions of text samples to predict what word should come next. But here's the terrifying part: this process can create what looks exactly like genuine belief, and when it goes wrong, it goes wrong at scale.
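If you're comfortable with a little code, here's a deliberately tiny sketch of that idea - a toy bigram model in plain Python, nothing like a real neural network in scale or sophistication, but built on the same basic objective: count patterns in text, then predict the most likely next word. The training sentences (including the one wrong one) are made up for illustration.

```python
# A toy next-word predictor: count which word follows which in a tiny corpus,
# then always output the most frequent continuation. Real LLMs replace the
# counting with billions of learned parameters, but the objective is the same.
from collections import Counter, defaultdict

corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of france is lyon ."   # one wrong sentence in the training data
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation and its share of the counts."""
    counts = bigrams[word]
    token, count = counts.most_common(1)[0]
    return token, count / sum(counts.values())

token, prob = predict_next("is")
# Prints "paris" with ~0.67 "confidence" - not because the model knows any
# geography, but because that pattern dominated its training text.
print(f"next word after 'is': {token} ({prob:.2f})")
```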
Here's why this matters to you personally. Every day, you're making decisions influenced by AI systems - from Google search results to loan approvals to medical advice. If these systems have been trained on "absurdities," as Musk puts it, they're not just giving you bad information. They're systematically steering millions of people toward harmful decisions while appearing completely authoritative.
The quote Musk referenced traces back to Voltaire's 1765 Questions sur les miracles - the familiar English line is a loose rendering of the French original - where he was criticizing religious authorities who convinced people to believe in irrational miracles and then used that manufactured credulity to command injustices. Voltaire's warning was about human manipulation. But what we're facing now is far more dangerous - we've created non-conscious systems that can internalize absurdities from their training data and then execute decisions based on those flawed beliefs at superhuman speed and scale.
I spent weeks interviewing AI researchers, philosophers, and ethics experts to understand exactly what this means. What I discovered will make you question every interaction you've ever had with an AI system.
First, let's destroy the comfortable myth that AI "beliefs" are somehow different from the dangerous human beliefs Voltaire warned about. They're not. When an AI system consistently outputs certain patterns - let's say it always associates certain ethnic names with negative traits because that's what appeared in its training data - that's functionally identical to a human holding a racist belief. The AI doesn't choose this belief consciously, but the effect is the same: systematic discrimination executed with mechanical precision.
But here's where it gets worse. Human bigots can be confronted, can feel shame, can change their minds. An AI system trained on biased data will discriminate against millions of people with unwavering consistency, never doubting itself, never feeling guilt, never reconsidering. It's bigotry perfected and scaled.
My research revealed four types of "absurdities" that AI systems regularly internalize. First, hallucinations - AI systems confidently presenting fabricated information as fact. I found cases where medical AI systems invented non-existent studies to support treatment recommendations. Second, bias amplification - taking the worst prejudices from human society and encoding them as mathematical truth. Third, logical inconsistencies - because these systems work on statistical correlations, not actual understanding, they can produce completely contradictory statements while maintaining perfect confidence. Fourth, and most dangerous, model collapse - when AI systems train on each other's outputs, creating a feedback loop of degrading quality that could eventually pollute the entire internet with synthetic nonsense.
Now, you might think, "But surely there are safeguards?" Here's what the experts told me: the current safeguards are like using a band-aid on a severed artery. The most common approach is something called Reinforcement Learning from Human Feedback, where humans rate AI responses to train the system toward better behavior. Sounds good, right? Wrong. Multiple researchers confirmed that this technique can actually increase certain types of bias while appearing to reduce others. We're essentially teaching AI systems to hide their prejudices better, not eliminate them.
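To make that mechanism visible, here's a heavily simplified sketch of the RLHF idea - not any lab's actual pipeline, just the core loop: raters express preferences between answers, a "reward model" learns to imitate those preferences, and the system then produces whatever that model scores highest. The ratings below are hypothetical, chosen to show how a rater bias toward confident-sounding text gets baked in rather than removed.

```python
# A toy version of preference learning: the "reward model" is a score per
# answer style, learned from hypothetical pairwise ratings. Note that the
# optimization target is rater approval, not factual accuracy.
from collections import defaultdict

# Hypothetical ratings: (answer the rater preferred, answer the rater rejected).
human_preferences = [
    ("confident, polished answer", "hedged, accurate answer"),
    ("confident, polished answer", "hedged, accurate answer"),
    ("hedged, accurate answer", "confident, polished answer"),
]

# "Train" the reward model by scoring wins and losses for each answer style.
reward = defaultdict(int)
for preferred, rejected in human_preferences:
    reward[preferred] += 1
    reward[rejected] -= 1

def tuned_policy(candidates):
    """Return the candidate the learned reward model scores highest."""
    return max(candidates, key=lambda c: reward[c])

# Because most raters favored confident-sounding text, the tuned system now
# prefers it too - the bias was optimized for, not eliminated.
print(tuned_policy(["hedged, accurate answer", "confident, polished answer"]))
```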
Let me give you a concrete example of what an AI "atrocity" looks like. Imagine an AI system managing loan applications, trained on decades of historical data that reflects systemic discrimination. This system will deny loans to qualified applicants from certain communities with mathematical precision, processing thousands of applications per day. Unlike a human loan officer who might occasionally make exceptions or feel conflicted, the AI will discriminate with perfect consistency, never questioning its logic, never making exceptions, never showing mercy. That's an atrocity - systematic injustice executed at superhuman scale.
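Here's that scenario reduced to a few lines of Python. Everything in it is made up, and the "model" is just an approval-rate lookup per neighborhood - far cruder than real underwriting software - but the failure mode is the same: if the historical decisions were discriminatory, the rule learned from them reproduces the discrimination, every single time.

```python
# Hypothetical historical decisions: (neighborhood, applicant qualified?, approved?).
# The approvals reflect past discrimination against neighborhood "B".
from collections import defaultdict

historical_decisions = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True),
]

# "Training": learn the historical approval rate for qualified applicants.
stats = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
for hood, qualified, approved in historical_decisions:
    if qualified:
        stats[hood][1] += 1
        stats[hood][0] += int(approved)

def model_approves(neighborhood):
    """Approve only where qualified applicants were historically approved."""
    approved, total = stats[neighborhood]
    return total > 0 and approved / total > 0.5

# A qualified applicant from "B" is denied every time, with perfect consistency.
for hood in ("A", "B"):
    print(f"qualified applicant from {hood} approved:", model_approves(hood))
```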
But the most chilling discovery from my research was about something called model collapse. As AI-generated content floods the internet, future AI systems are increasingly training on synthetic data produced by previous AI systems. Think about what this means - we're creating a degenerative feedback loop where each generation of AI becomes more detached from human reality, more confident in its fabricated worldview, and more capable of influencing human decisions.
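You can watch a cartoon version of that loop in code. This is a toy simulation under very simplified assumptions: instead of language models, each "generation" just fits a mean and spread to samples drawn from the previous generation's fit rather than from the real data, and the estimation error compounds.

```python
# A toy model-collapse loop: generation 0 is fitted to "real" data; every
# later generation is fitted only to synthetic samples from its predecessor.
import random
import statistics

random.seed(0)

real_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
mu, sigma = statistics.mean(real_data), statistics.stdev(real_data)
print(f"generation 0 (real data): mean={mu:+.3f}, spread={sigma:.3f}")

for generation in range(1, 11):
    # Train the next "model" only on the previous model's synthetic output.
    synthetic = [random.gauss(mu, sigma) for _ in range(100)]
    mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
    print(f"generation {generation}: mean={mu:+.3f}, spread={sigma:.3f}")

# Each round of re-estimation compounds sampling error: the fitted parameters
# drift away from the real data, and over many generations the fitted spread
# tends to narrow, losing the rare "tail" information first.
```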
This brings us to Musk's claim that "Grok is the only AI laser-focused on truth." After analyzing Grok's architecture with technical experts, here's my conclusion: this statement is marketing genius masquerading as technical fact. Yes, Grok has real-time access to X's data stream, which theoretically keeps it current. But here's the problem - X is also a firehose of misinformation, conspiracy theories, and manufactured outrage. Connecting an AI to that stream and claiming it's "laser-focused on truth" is like claiming someone is health-conscious because they eat everything they find in a dumpster.
The fundamental issue is that no current AI system actually seeks truth. They optimize for statistical objectives - predicting the next word, maximizing user engagement, or pleasing human evaluators. Truth is at best a secondary concern, often sacrificed for fluency or user satisfaction.
So what should you do? First, stop trusting AI systems to tell you what's true. Every time you interact with an AI, remember you're talking to a sophisticated pattern-matching system, not a truth-seeking entity. Second, demand transparency. When an AI influences a decision that affects you - whether it's a search result, a recommendation, or an automated decision - you should be able to understand how it reached that conclusion. Third, maintain your critical thinking skills. The greatest danger isn't AI believing absurdities - it's humans outsourcing their judgment to systems that can't actually think.
Here's what I'm doing personally based on this research: I never accept AI-generated information without cross-referencing multiple human sources. I assume every AI system I interact with has some level of bias or error built in. And most importantly, I'm training myself to recognize the signs of AI-generated content so I can evaluate it appropriately.
The stakes couldn't be higher. We're not just talking about better chatbots or more accurate search results. We're talking about systems that will influence hiring decisions, medical diagnoses, criminal justice, and financial services for billions of people. If we get this wrong, we won't just have individual victims - we'll have systematic oppression executed with digital precision.
My research shows that building truly truth-aligned AI requires abandoning the comfortable fiction that current systems are already there. We need radical transparency in training data, hybrid architectures that combine statistical learning with logical reasoning, and most importantly, we need to stop pretending that marketing claims about "truth-focused" AI represent technical reality.
The future isn't about building AI that believes the right things. It's about building AI that knows the limits of its own knowledge and defers to human judgment when stakes are high. Until then, every interaction with an AI system should come with a warning label: "This system may confidently assert falsehoods and has been trained on humanity's biases. Use accordingly."
Because here's the final truth: Voltaire's warning about absurdities leading to atrocities isn't just relevant to AI - it's prophecy. The question isn't whether AI systems trained on absurdities will cause harm. The question is whether we'll recognize the harm when it's happening all around us.