@audioflyer79 @davidaugust @alisynthesis This is actually very important. LLMs do not "forget" the way humans do; they don't have memory lapses. Humans have lapses and difficulty recalling facts they know, but the nature of computers is to remember perfectly. LLMs exist because they remember perfectly and look for patterns in that memory.

If an LLM is to be useful at interpreting human commands and understanding human expectations (see the "now anyone can code" PR), it needs to be encoding concepts, not characters. People are making big claims that these machines are somehow conscious or intelligent and able to understand abstract concepts. But if an inference machine with perfect memory states unequivocally at one point that unripe bananas are yellow, and later states just as unequivocally that unripe bananas are green, then it is not storing and retrieving conceptual information. No claim of generalization can rest on concepts handled that loosely.

In the battle between Cog and Cyc, LLMs are Cyc writ large.

Cog: https://en.wikipedia.org/wiki/Cog_(project)
Cyc: https://en.wikipedia.org/wiki/Cyc
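To make the banana test concrete, here is a minimal sketch of the consistency probe I'm describing, in Python. query_model is a hypothetical stand-in for whatever LLM API you'd call; the canned answers here only simulate the inconsistency, they are not real model output.

```python
import collections
import random

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call. Here it just
    # simulates a sampler that sometimes completes "green" and
    # sometimes "yellow", mimicking the inconsistency described above.
    return random.choice(["green", "green", "yellow"])

def probe_consistency(prompt: str, trials: int = 10) -> collections.Counter:
    """Ask the same factual question repeatedly and tally the answers.

    A system retrieving a stored concept should give the same answer
    every time; a spread of answers suggests statistical pattern
    completion rather than conceptual recall."""
    return collections.Counter(query_model(prompt) for _ in range(trials))

print(probe_consistency("What color are unripe bananas? One word."))
# e.g. Counter({'green': 7, 'yellow': 3}): perfect memory, no stable concept
```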