@mirabilos @rl_dane @HeptaSean How can you tell a system not to "lie" when it has no idea what "truth" is? It has no concept of "fact" or "reality". How can you instruct it to "do no harm" if the concept of "harm" is meaningless to it?