@uriel @EmilyEnough
Nope. The whole IT sector uses about 3–5% of global electricity, so poor home insulation is a much bigger problem overall.
Source?
We call it a statistical method, or more precisely a stochastic system. Because, to a large extent, human behaviour itself can be modelled as a stochastic process.
Source? In fact this is false. Human behaviour involves more than a stochastic process, even though it may adopt stochastic heuristics to speed up some computations. This is also why LLMs are, technically speaking, not AI. An AI includes, as human reasoning does, an internal world model and a basic set of Boolean and probabilistic logic rules. See for instance Russell & Norvig’s Artificial Intelligence: A Modern Approach (http://aima.cs.berkeley.edu/global-index.html), or Pearl’s earlier Probabilistic Reasoning in Intelligent Systems (https://doi.org/10.1016/C2009-0-27609-4). LLMs, by contrast, are just Markov chains (https://doi.org/10.48550/arXiv.2410.02724). A modern robot vacuum cleaner is more “AI” than an LLM.
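To make the "Markov chain" comparison concrete, here is a toy next-token sampler: it learns nothing but which tokens followed which contexts in a corpus, then samples from those observed continuations. The corpus, the `order=2` context size, and the function names are illustrative choices, not anything from the paper cited above; real LLMs use far larger learned contexts, but the sampling structure being pointed at is the same.

```python
import random
from collections import defaultdict

def build_chain(tokens, order=2):
    """Map each context of `order` tokens to the tokens observed right after it."""
    chain = defaultdict(list)
    for i in range(len(tokens) - order):
        context = tuple(tokens[i:i + order])
        chain[context].append(tokens[i + order])
    return chain

def generate(chain, start, length=10):
    """Repeatedly sample the next token from the distribution seen in the corpus."""
    out = list(start)
    for _ in range(length):
        context = tuple(out[-len(start):])
        candidates = chain.get(context)
        if not candidates:
            break  # context never seen: the chain has nothing to say
        out.append(random.choice(candidates))
    return out

corpus = "the cat sat on the mat and the cat ran".split()
chain = build_chain(corpus, order=2)
print(" ".join(generate(chain, ("the", "cat"))))
```

The generator can only recombine continuations it has already seen for a given context; nothing in it models *why* one continuation should follow another.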
This is also the reason why the larger the software project you apply an LLM to, the more likely it is to fail. Such an application requires longer and longer string correlations, which are therefore increasingly uncertain and fault-prone, and those faults are in turn harder to spot. It may also require new or innovative solutions, which an LLM is again less likely to stumble upon.
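The compounding argument can be put in numbers: if each generated step were independently correct with probability p, the chance that a chain of n steps is entirely correct would be p**n, which falls off exponentially. The figures below are illustrative assumptions for the sake of the arithmetic, not measurements of any model.

```python
# Assumed, illustrative per-step accuracy; not a measured figure for any LLM.
def all_correct(p, n):
    """Probability that n independent steps, each correct with probability p,
    are all correct at once."""
    return p ** n

for n in (10, 100, 1000):
    print(f"steps={n:4d}  P(all correct)={all_correct(0.99, n):.6f}")
```

Even with a 99% per-step accuracy, a thousand-step chain is almost certainly wrong somewhere, and under the independence assumption there is no mechanism to notice where.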
The problems you face when communicating with LLMs are the same ones you face when communicating with people, because statistically speaking an LLM mimics how people communicate.
No, because humans, and also proper AI, have a “logic engine” underneath. It may take some effort to bring the logic engine to the fore instead of poor heuristics, but it can be done (related: Kahneman’s Thinking, Fast and Slow, and the research cited there). With an LLM it can’t be done, because there is no logic engine there at all.