February 28, 2026

Why I Don't Plan to Use LLMs

ROI is low, cultural risks are significant, and the environmental cost is growing - LLMs don't make sense for me.

A volcano erupting at night, with city lights surrounding it and sunrise on the left horizon.

Over the last three posts I’ve discussed a number of articles covering a range of topics related to Large Language Models and their measurable impacts.

This week I’m going to bring these topics together and share what I’ve gleaned from them. This is my take, so it’s a more subjective approach than the previous posts.

In Review

ROI/Efficiency

The study and the surveys agree: the improvements that have been promised are not showing up on the bottom line. Where surveys hint at improvements, there are no details on how these are actually happening, or on what other organizations could do to achieve similar gains. This raises the question of whether the improvements come from LLM use, or whether other factors are being conflated with it, a correlation-versus-causation problem. For instance, macro-economic factors such as tariffs and energy costs could be driving complex, significant changes in the business landscape.

Maybe there’s a use-case that hasn’t been found yet which would have a radical positive impact, but every month that goes by without verifiable proof of this happening makes me more skeptical.

In my limited experience using LLMs for editorial tasks and troubleshooting programming issues, it’s hard to say that I really benefited. While some use cases worked pretty well, it has never been as quick and straightforward to get a good result as I hoped. If the core benefit is reduced time and effort, I haven’t experienced it.

This article just came out, reviewing several surveys, and it aligns with my conclusion: some report improvements, but they appear small and vague enough that it’s hard to say what’s really going on: 6,000 execs struggle to find the AI productivity boom.

Institutional and Cultural Impacts

This is possibly the biggest and most critical set of unknowns, and it may take years to become fully clear.

There are already signs of skill degradation, and it is easy to see how Generative AI can and will erode trust within and between groups (organizations, institutions, and wider society).

At a minimum, steps should be taken to ensure that people using LLMs maintain their current skill level, along with transparency and accountability processes to keep and build trust in organizations.

I’ve mentioned before that I’ve noticed my skills and knowledge being adversely affected when trying to use LLMs to troubleshoot programming issues. I’m not a professional programmer, so it would be too much to say I lost skill, but I certainly didn’t gain any. Instead I have a drag-and-drop function that I don’t understand as well as the code I wrote myself, which I will need to take apart and figure out so I can maintain it.

There might be a case that my writing has improved as a result of LLM use.

The difference with this use was that I refused to accept edits that I didn’t think I could or would have written myself. I took edits critically and rejected most of them as out of character, perfectionist, or examples of stereotypical LLM output (looking at you, em-dashes!).

Maybe this is a way forward: a semi-adversarial relationship where everything the LLM puts out is interrogated and reviewed with a critical eye. But it’s hard to see that catching on at this time, as it would demand skilled workers with the time and resources to engage deeply with their work.

I also want to mention two articles I hoped to cover but had to cut, dealing with the broader cultural impact of Generative AI:

  • Why AI writing is so generic, boring, and dangerous: Semantic ablation
  • AI-induced cultural stagnation is no longer speculation − it’s already happening

And one highlighting the continuing push for increases in productivity, regardless of the impact on staff sustainability: The first signs of burnout are coming from the people who embrace AI the most

Environmental Impacts

If the potential destruction of the fabric that allows institutions to function isn’t the biggest risk, environmental costs are.

As with everything so far, the details of what is really happening in terms of energy and water use from LLM deployments are not clear. As a percentage of total CO2 output, technology’s share is seen as rather low, at around 4%. But that number is increasing quickly, due in large part to the focus on ever-larger models.

There is an urgent need for better information (as with the previous topics) as quickly as possible.

Stronger regulations, and a broader coming together of individuals and organizations to address the most pressing issue of our time, are long overdue.

Overall Conclusion

In summary, it seems that the productivity gains from using an LLM are small at best. While de-skilling is an emerging issue, broader cultural erosion is a major risk, one that could destroy organizations from within, but it will take longer to surface. The environmental impact is also unclear, but all indications are that LLMs are harmful and getting worse as models continue to grow.

I didn’t touch on legal, ethical, or security concerns, which should also be under consideration for any professional use.

From an individual practitioner’s perspective, it might be worth running some tests to see if an LLM (local or cloud-based) has a significant positive impact for you. I’d just encourage skepticism and active engagement, because skill loss would be unacceptable.

For an organization there might be more wiggle room to experiment, but a clear understanding of the potential risks, and of what minimum benefits would need to be realized to count it a success, would be key.

As for myself, I might use local LLMs for grammar or minor editorial checks, but I think I will avoid LLM use as much as possible going forward. If something truly incredible happens I’ll revisit things, but as of now the negatives seem to heavily outweigh the positives.

Takeaway

Any implementation should include at least the following:

  • strong organization-wide guidelines, rules, and expectations on LLM use, chains of responsibility, etc.
  • training for all staff on LLMs, covering how the tech works, limitations, and current best practices
  • ensuring that any contracted LLM company, or internal deployment, comports with the organization’s larger environmental goals and commitments
  • feedback loops where all staff are empowered to share their experiences pro/con, with regular reviews and a process to implement changes as deemed necessary

For individuals, I’d suggest that you engage critically with all output from an LLM: ask questions of yourself or people you trust about its usefulness and accuracy. Don’t just ask the same (or even a different) LLM and call it a day. The only way you will hold your edge is if you work to hone it consistently.

If you haven’t started implementing LLMs, it would be good to have not only clear steps to address these concerns, but also a clear and measurable KPI or ROI baseline that signifies the project is worth continuing. Review the implementation at least every six months, and adjust, cancel, or broaden as the metrics indicate.

“Hope for the best, plan for the worst”.

As with any new technology or technique, you can test it with the hope it will be an improvement. Just be skeptical of the hype, and treat potential risks seriously, or you may find yourself facing a worst-case scenario with no plan.

Next Time

I’ll be leaving the topic of LLMs, but I’m not sure what I’ll cover next. I just started reading this Aeon article: The six-second hug, which ties into my thoughts on Kantan Kanban as a productivity tool. It’s likely I’ll start there.

Photo credit: Photo by Timothy Cohen on Unsplash