@Robotistry @blterrible @alisynthesis @davidaugust …and anything non-human doesn’t have it because *reasons*. I believe consciousness/awareness/understanding is a continuum, not a binary, and that all of the failures and mistakes made by LLMs could just as easily be attributed to humans in another context. Or to put it another way, the failures of LLMs are *human* failures, mostly because they are trained on human data. 2/