Using LLMs vs. 'The Environment'
A short rant about people who overuse LLM AIs
There’s a strange irony in watching people talk about global warming and other environmental problems while simultaneously hammering away at chatbots like they’re infinite-energy oracles. Every throwaway prompt like “write my tweet,” “summarize this article,” “draft an email to my cat,” etc. runs on megawatts and datacenter water cooling. It’s not that using LLMs is evil; it’s that overusing them as a replacement for thinking has become its own form of digital pollution. While there is a real environmental cost, perhaps the erosion of intellectual self-sufficiency is even more damaging.
Agentic programming tools take this wastefulness to another level. These “autonomous” agents chain LLM calls together, recursively prompting other models, often without meaningful human oversight. Each step burns compute cycles. The result isn’t intelligence—it’s a Rube Goldberg machine of API calls, an illusion of progress paid for in electricity, latency, and cognitive outsourcing. If overusing LLM Chat is leaving the lights on, then agentic systems are the AI equivalent of lighting up Las Vegas to toast a single slice of bread.
To be clear, dear reader, I use gemini-cli and codex a few times a week to good effect. What I am complaining about here is devs on social media bragging about how many parallel agentic AI coding sessions they keep running.
The energy and other environmental externalities of overusing or misusing AI are extreme, dumping costs onto our environment and our future economy.
Electricity costs go up for everyone when a data center moves into an area. If the companies building these data centers had to pay for the increase in everyone else's bills, they would think much harder about what these systems are being used for.