
You're Getting It Wrong

A look at the usual arguments against AI: which ones don't hold up, which ones do, and why it all comes down to thinking critically.




Introduction

Discussions about AI often get lost between hype and panic. In reality, LLMs are simply tools—not inherently good or bad, but powerful instruments that can be used well or misused. When used wisely, they help us work more productively, accelerate learning, and expand our creative reach. When used poorly, they can mislead, confuse, or waste time.

That’s why it’s important not to outsource our thinking. Blind reliance on an LLM is like driving into a lake because the GPS told you so. Studies show (SSRN, 2024) that people who lean too heavily on these systems often fail to learn the underlying material—just as many stopped memorizing facts once they knew they could always “Google” them. The danger is not the tool itself, but the temptation to let it do all the thinking for us.

Previous, related post: “What Is the Internet Doing to Our Brains”.

Still, many arguments I hear against AI sound more like ideology than critique. Most fall into two categories: tired talking points recycled from earlier tech panics, and genuine concerns misdiagnosed as reasons to reject the technology entirely. In what follows, I’ll go through some of the most common points I’ve recently heard, and explain why I believe they are misinformed, miss the point, or are simply wrong.

Common (Bad) Arguments Against AI

“It Consumes a Lot of Water”

When talking about resource consumption, it helps to be consistent. Estimates vary widely: global averages using the full “water footprint” method suggest around 15,000 liters of water per kilogram of beef, while more context-specific estimates put it between 300 and 1,300 liters depending on farming practices and location (meatthefacts.eu).

For AI, a recent lifecycle analysis by Mistral with ADEME and Carbone 4 reports about 45 ml of water per 400-token prompt for its Large 2 model (itpro.com), roughly 0.045 liters per query. At the low 300-liter estimate, 1 kg of beef equals the water cost of about 6,700 prompts. If you decide to opt out of AI for this reason, by all means, do it, but be consistent: if you eat a steak afterwards, your reasoning doesn't add up.
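For anyone who wants to sanity-check the arithmetic, here is a minimal back-of-the-envelope sketch using the rough figures cited above (the variable names are mine, and the inputs are estimates, not precise measurements):

```python
# Back-of-the-envelope comparison using the rough estimates quoted above.
WATER_PER_PROMPT_L = 0.045    # ~45 ml per 400-token prompt (Mistral lifecycle analysis)
WATER_PER_KG_BEEF_L = 300     # low, context-specific estimate; global averages run far higher

prompts_per_kg_beef = WATER_PER_KG_BEEF_L / WATER_PER_PROMPT_L
print(f"1 kg of beef = roughly {prompts_per_kg_beef:,.0f} prompts")  # ~6,667 prompts
```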

This is similar to the way nuclear power plants are often labeled "too expensive," without mentioning the cost per megawatt-hour compared to other energy sources. Framing matters: depending on what metric you highlight, you can make any technology look better or worse. If we want to discuss environmental impact, the comparisons should be consistent and balanced, rather than singling out AI while ignoring industries with far larger footprints.

“It Hallucinates and Often Shows Wrong Information”

Of course it does—just like every other information source in history. Newspapers misprint facts, academics misquote, TV pundits exaggerate. No information is inherently reliable—not from Wikipedia, not from news media, not even from a primary source. All information requires judgment. Your responsibility has always been to gather multiple perspectives, compare them, and decide what seems plausible.

LLMs can help you find and organize information faster than traditional research methods, but it's your responsibility to check what you get against other sources. If you want a model to fail, you can feed it a convoluted prompt to trap it; that's a cheap trick, not a meaningful critique. Ask about well-documented historical events and you'll find the answers remarkably consistent. The key is knowing what to ask and how to verify. Learn how to use the tool instead of relying on it blindly: mastery comes from understanding both its strengths and its limits.

“It Just Repeats Wikipedia”

So what? Wikipedia is a collaborative, highly moderated knowledge base with citations—already used by students, journalists, and researchers as a starting point. If an LLM synthesizes that with thousands of other sources and delivers it in seconds, that’s not a flaw, that’s a feature.

“AI Homogenizes Content—Everything Starts to Sound the Same”

Most human-made content already sounds the same. Watch five documentaries on the same topic and you’ll hear overlapping facts and clichés. The difference is that with AI you can steer the style, focus, and tone as much as you like. Uniformity isn’t inevitable; it’s a sign of lazy prompting or copy-paste dependence.

“AI Encourages Misinformation”

Misinformation predates AI by centuries. Printing presses spread propaganda, radio fueled dictatorships, TV misled millions. The point is not to suppress new media but to build better verification mechanisms. LLMs can be used as part of those mechanisms if integrated properly.

“AI-Generated Media Is Indistinguishable from Real, Making Deception Easier”

Yes, synthetic media can deceive—but so can Photoshop, deepfakes, or even well-crafted lies. The solution isn’t to reject AI but to push for cryptographic source verification—ensuring that information truly comes from an official or trusted source, rather than a random account on social media. Pretending AI is uniquely dangerous here ignores decades of media manipulation.
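To make the idea concrete, here is a minimal sketch of what signing and verifying content could look like, using Ed25519 signatures from the Python cryptography package (the keys and message are hypothetical, and real provenance standards such as C2PA are far more involved than this):

```python
# Minimal sketch: a publisher signs content; any reader can verify its origin.
# Assumes the publisher's public key was already obtained through a trusted channel.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher generates a key pair once and distributes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

article = b"Official statement from the publisher"  # hypothetical content
signature = private_key.sign(article)               # shipped alongside the content

# A reader checks the attached signature against the publisher's public key.
try:
    public_key.verify(signature, article)
    print("Valid: the content really comes from the claimed source.")
except InvalidSignature:
    print("Invalid: the content was altered or is not from the claimed source.")
```

If the article is changed by even one byte, or signed with a different key, verification fails, which is exactly the property that lets a reader distinguish an official statement from a random account's copy.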

“AI Is Being Integrated Everywhere Just for Hype”

True—and irrelevant. The same thing happened with websites in the 1990s, apps in the 2010s, and blockchain in the 2020s. The hype cycle is predictable: inflated expectations, disillusionment, then genuine innovation. Bad integrations will fade; useful ones will remain.

“It Will Create an Economic Dystopia Where Elites Get Richer”

That’s not a law of AI—it’s, if anything, a law of politics. Inequality isn’t dictated by technology but by how societies choose to govern it. If you fear that outcome, the right response is to push for systems that control power—both of governments and corporations. Computers, the internet, and smartphones also concentrated wealth in some hands—but they also created massive new opportunities for anyone who learned to use them. Opting out only guarantees you’ll miss out.


Conclusion

Some points are worth taking seriously. Many current AI integrations are clunky and poorly thought through, often driven more by hype than usefulness. And it’s true that models sometimes behave in sycophantic ways, echoing whatever the user says instead of challenging it. These are real issues that need attention.

Still, they don’t make the technology worthless. They remind us that AI is evolving, and that not every application will make sense. The important part is to separate noise from value—and to keep using our own brains. The goal isn’t to parrot what an LLM or any other source says at face value, but to think critically about the information we receive and share.

Have you heard other arguments against AI, good or bad? I'd like to hear them. Please add your comments on the platforms below, share your own points, and challenge mine so we can keep the discussion going.

