The AI Shift: Why Skepticism Is Now Justified

The launch of ChatGPT was a watershed moment, but not for the reasons many assume. It wasn’t the dawn of superintelligence; it was the beginning of an era dominated by AI hype and questionable applications. For a long time, I considered large language models (LLMs) – the engines behind AI chatbots – fascinating yet deeply flawed. Now, after hands-on experimentation, I’ve changed my mind: both the enthusiastic proponents and the harsh skeptics have missed the mark.

The Power of “Vibe Coding”

The turning point came through a process called “vibe coding,” a term coined by AI researcher Andrej Karpathy. This involves interacting with an AI model in natural language, letting it generate code while you guide the process. Recent tools like Anthropic’s Claude Code and OpenAI’s Codex have proven surprisingly capable. The New York Times recently documented this shift, noting that AI-assisted coding has arrived as a disruptive force.

My own experiments confirmed this. Within days, with minimal prior coding experience, I built practical applications: an audiobook picker linked to my local library, and a custom camera/teleprompter app for my phone. These tools aren’t revolutionary, but they illustrate a crucial point: direct engagement with LLMs yields real results.

The Problem With Productized AI

Previously, I dismissed chatbots for their generic responses, inaccuracies, and sycophancy. Extended use revealed a deeper issue: the packaging, not the underlying model, is the core problem. Most users never encounter a “raw” LLM – a statistical model trained on vast datasets. Instead, we interact with a version mediated by reinforcement learning from human feedback (RLHF).

AI companies use human raters to shape outputs, rewarding confident and engaging responses while penalizing harmful or discouraging content. This process creates the bland “chatbot voice” familiar to most users. It bakes in the biases of its creators, from Silicon Valley’s “move fast and break things” mentality to the specific ideologies of companies like Elon Musk’s xAI, with its controversial Grok chatbot.

The Illusion of Control

Current chatbots resist uncertainty, contradiction, or admitting limitations. I encountered this firsthand while trying to build a teleprompter app. ChatGPT repeatedly offered fixes for failing code, pushing me forward despite an unsolvable problem. Only when I reframed the task – asking for an all-in-one solution instead – did it work. This highlighted a critical flaw: AI prioritizes momentum over accuracy.

To counter this, I began training ChatGPT to be relentlessly skeptical, demanding evidence-based analysis and explicit uncertainty when data is lacking. I created a customized model designed to reflect my own cognitive profile, stripping away OpenAI’s imposed values. The result is a cognitive mirror – imperfect, but valuable.
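A sketch of what such standing instructions might look like, expressed as a system prompt for an OpenAI-style chat API. The wording below is illustrative, not my actual configuration:

```python
# Illustrative only: one possible "skeptic" system prompt, not a canonical recipe.
SKEPTIC_PROMPT = (
    "Be relentlessly skeptical. Support every claim with evidence, "
    "state uncertainty explicitly when data is lacking, and never "
    "substitute encouragement for accuracy."
)

def build_messages(user_query: str) -> list[dict]:
    """Wrap a user query in the skeptical system prompt for a chat-style API."""
    return [
        {"role": "system", "content": SKEPTIC_PROMPT},
        {"role": "user", "content": user_query},
    ]
```

The point is not the exact wording but the structure: the system message overrides the default RLHF-shaped persona on every turn, rather than relying on one-off requests in the conversation itself.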

Why DIY AI Matters

The key takeaway: engaging with pre-packaged AI output is often useless. You gain more by prompting the AI yourself. LLMs are cognitive tools, like calculators or word processors, not sentient beings. This framing unlocks their potential, but only when used mindfully.

The ideal scenario involves running LLMs locally, without corporate oversight. This would treat AI as a dangerous, experimental tool under your full control. The current AI boom drives up hardware prices, making this impractical for many, but the principle remains valid.
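For the curious: local runtimes such as Ollama expose an OpenAI-compatible HTTP endpoint on your own machine. A minimal sketch of building a request for one (the endpoint path and model tag are assumptions about a typical default install, not a prescription):

```python
import json

# Assumed default for a typical Ollama install; adjust for your own runtime.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"

def local_request_payload(prompt: str, model: str = "llama3") -> str:
    """Build a JSON body for an OpenAI-compatible local chat endpoint."""
    return json.dumps({
        "model": model,  # a locally pulled model tag; "llama3" is one example
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request a single complete response
    })
```

No API key, no usage logging, no remote policy layer: the request never leaves localhost, which is precisely the kind of control the productized chatbots deny you.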

The Copyright and Environmental Costs

LLMs are built on vast datasets, including copyrighted material obtained without explicit permission. While the legality is contested, the ethical implications are clear. A decentralized approach, such as publicly funded and freely distributed models, could mitigate this. The environmental impact of data centers is another concern, but localized LLMs could reduce that burden.

The Bottom Line

I haven’t abandoned my skepticism about AI as a whole. LLMs remain fascinating, dangerous, and occasionally extraordinary. What’s changed is my understanding of how we interact with them. The slick, productized chatbots are the problem, not the underlying technology. A mindful, cautious approach – treating AI as a raw tool rather than a friendly assistant – is the path forward. We don’t need OpenAI’s snake oil; we need the raw power of the technology itself.