February 13, 2026

Are LLMs destroying institutions?

How LLM integration may erode expertise, decision quality, and culture.

A man sitting at the edge of a cliff overlooking the Grand Canyon.

The paper under discussion identifies three critical risks that come with LLM adoption and argues that they are fatal to institutions¹.
While the authors focus on civic institutions, we can learn how the corporate and NGO worlds may be affected and begin to formulate solutions.

What are these risks, and are others raising similar concerns?

Risk 1: erosion of expertise

The impression that an LLM-based function is of high quality and reliability leads users to rely on it more, which in turn means less critical thinking about the results. Critical-thinking skills atrophy as the processes used to check work are skipped or diluted when LLMs provide the output. Accepting “good enough” as the benchmark for LLM adoption further shrinks the space for rigorous checks, quality control, and individual engagement.

When an LLM is used in a process, there is pressure to accept what it returns at face value. Contributing factors include the belief that LLM use will boost productivity and the assumption that computerized processes are inherently more accurate and impartial. As human input is reduced, cognitive offloading onto LLM tools will rise. When issues such as hallucinations create problems, the people involved are reduced to quality-checking the provided output and troubleshooting, instead of wide-ranging problem solving.

Additionally, staff who cross-train or fill in for others in an LLM-heavy setting would not be able to engage with the full scope of that role. They wouldn’t learn the skills the LLM has replaced, and would come away with a different, reduced understanding of how the role fits into the larger whole of the organization.

When I’ve used LLMs to troubleshoot programming issues, I’ve felt the temptation to offload critical thinking myself.

Recently I was trying to add drag-and-drop to a QML app. I had an approach based on the official documentation, but it wasn’t quite working, so I fed the code to LLMs. This took a couple of hours, and after a while I wasn’t engaging in the same way: I’d stopped reviewing changes and was just copy-pasting them in.
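For context, the documented approach here centers on Qt Quick’s DropArea type. A minimal sketch of that kind of starting point (illustrative only, not the actual app code, and eliding the matching drag source) looks like this:

```qml
import QtQuick

Rectangle {
    width: 320; height: 240
    // Highlight the area while a drag is hovering over it.
    color: dropArea.containsDrag ? "lightsteelblue" : "lightgray"

    // DropArea receives drags; something else (a Drag-enabled item,
    // or an external application) must supply them.
    DropArea {
        id: dropArea
        anchors.fill: parent
        onDropped: (drop) => {
            if (drop.hasText)
                console.log("dropped text:", drop.text)
            drop.accept()
        }
    }
}
```

Even a small declarative block like this hides enough state (drag events, accepted actions, the drag source’s configuration) that it’s easy to stop reasoning about it once an LLM starts supplying edits.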

Eventually I got to a working solution, but I couldn’t explain the code nearly as well as functions I’d built from documentation, StackExchange threads, and the like.
As time went on, my diligence and critical thinking decreased; that could happen to anyone, even in environments that try to mitigate this risk.

As pressure to be productive continues to grow and overall stress levels rise, there’s a real possibility that the ability to engage critically with LLM output will diminish, potentially creating a feedback loop of ever-decreasing expertise and ability.

This snippet from one of their references stuck out (“The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers”):

… higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship.

The shift in the nature of critical thinking is noteworthy. It isn’t inherently good or bad, but it may not align with current roles and responsibilities, and if it became widespread it could make systems thinking and long-term thinking even rarer.

Risk 2: “short circuiting” decision making

When LLMs are in place for decision making, they reduce the space available for individuals to reflect critically on a decision, and the opportunity to discuss its pros and cons with others.

The aspect the authors discuss most is how decisions mediated through an LLM are opaque, with no way to know exactly why a conclusion was reached. This undermines legitimacy and erodes trust within, and between, organizations. Given how LLMs are built, this doesn’t appear to be fixable.

I was struck by how LLM use could reduce the space for people to critically discuss and explore options. People can imagine a great range of possible conditions and outcomes, but as Risk #1 points out, a capacity that isn’t used can be lost. While using an LLM can give the impression that various options are being examined, by their nature LLMs don’t (and can’t) anticipate how changes cascade, or entertain the more extreme counterfactuals a person could. This leaves organizations vulnerable to black swan events, or simply poorly positioned for evolving conditions.

Risk 3: isolation and atomization

As staff work more and more with LLMs, they will work less with other humans (unless work hours expand). This could reduce their ability to gain new skills or insights, grow their network, and understand the broader system they’re working within. In losing that broader context, they become less able to correct errors within their organization or build productive relationships with other organizations, leading to overall systemic decay. Information bubbles form, in which negative feedback is softened or removed as it is filtered through LLMs (which are somewhat sycophantic by default).

The main impact here is to culture and morale. It’s easy to imagine how a culture of quality could succumb to pressure to meet growing metric benchmarks and degrade into “workslop” without leadership recognizing it, while individual workers feel less engaged with the mission and have weaker ties to co-workers. This raises the risk of burnout and turnover, further damaging culture and raising costs.

Isolation would further compound Risk #1. A key way to hone skills is through conversation with, and learning from, others. As person-to-person interaction decreases, so would opportunities for skill exchange and outside perspectives on issues.

The final third of the paper discusses specific institutions, such as higher education and the legal system, in more detail. Unfortunately this section isn’t easily generalized to other institutional settings, but it’s worth reading if you’re interested in, or directly affected by, one of them.

Conclusion

These three risks, in isolation or together, could undermine the mid- to long-term viability of any organization. The authors argue these issues are core to current LLM design and cannot be fixed without fundamental changes; I can’t say one way or the other. But they certainly won’t be fixed if managers, up to C-suite executives, aren’t aware of them and actively working to reduce the potential damage.

I can say that my personal experience echoes the risk of losing expertise, and I recently saw a LinkedIn post and comments saying similar things across a range of fields. Deloitte also raised this risk in their State of AI in 2026 paper, though mainly in the context of new hires.

While all we have is circumstantial evidence at this point, it seems short-sighted to write these risks off completely.

If this continues and LLMs are embraced throughout the education system, we may reach a point where some critical-thinking skills won’t be available at any cost.

The second and third risks are harder to track and will have longer lead times before impacts are felt. However, they are also easier to address, with stringent rules and expectations for decision making and office culture.

Takeaway

Unfortunately there isn’t a simple solution or a universal way to approach these problems, but here are some jumping-off points:

  1. Review how you, and people you know, have been using LLMs to see if (and how) you might be losing expertise. If it’s expertise you value, try using LLMs less for that type of work, or be mindful to engage with their output critically. Some questions to ask:
    • Do you have work from an LLM that you can’t fully explain? What would you need to know to be able to?
    • Is there a task you used to do that is now done only by an LLM? Would you still be able to do it to the standard you could pre-LLM?
    • What questions can you ask, about what you get from an LLM or during the process of using one, that could build your skills and position you for future challenges?
  2. When a decision is needed, set the expectation that you will spend time reviewing it and examining as many potential outcomes as possible. Prioritize conversations with stakeholders and experts before making a final decision.
    • Do you understand why the proposed decision was reached? Could you determine how the decision would change if one or more factors were changed?
    • Does the decision adequately account for how its own effects would change circumstances?
    • Are less likely possibilities considered? How about factors outside your direct control, or without immediate, clear impacts (e.g. the broader economic environment, international relations, climatic shifts)?
  3. Strive to stay engaged with others in your organization beyond your immediate co-workers. If you have a leadership role, create low-stress opportunities for staff to meet and discuss ongoing work; it might not be enough to rely on a chance meeting in a hallway to spark a conversation.
    • For in-person environments, hold internal networking-style events that cater to both extroverted and introverted staff
    • Managers should try to understand the interests and goals of their staff and support them in building connections within their organization (or possibly externally) to grow in those areas
    • For remote environments, move beyond text channels to incorporate face-to-face communication as much as possible, while being mindful of timing and staff preferences
    • Organize optional, inclusive, non-work events and make them cross-departmental or inter-organizational when practical

Next Time

What do we know about the environmental impact of LLMs, and what should we make of everything we’ve learned in this series of articles?

Photo credit: Photo by Reed Geiger on Unsplash


  1. Institution here seems to refer to the informal rules, expectations, and behaviors which allow organizations to function. It is distinct from the formalized roles, responsibilities, etc. that exist over this informal superstructure and allow the day-to-day functions to proceed. ↩︎