In reply to Mathaetaes
@mathaetaes@infosec.exchange · 4d ago
@emilymbender You're far more an expert on this than me, so I will defer to your experience.
But in all these examples, "powerful" describes utility. A more powerful engine can do more things than a weaker one. More advanced spreadsheet software can help users calculate and track more things than less advanced software.
By that definition, wouldn't "powerful" for LLMs just mean less wrong, or wrong less often?
At the end of the day, a model is just predicting text. A model can't use a tool, but it can predict the text required to use a tool. A model can't write code, but it can predict the text that a compiler will turn into a program. We keep building integrations that allow tools to be driven by text, which allows text prediction models to 'use' them... but really it's still just predicting text.
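To illustrate what I mean, here's a toy sketch (all names and the JSON format are invented for the example): the "model" only ever produces a string, and it's the integration layer around it that parses that string and actually executes the tool.

```python
import json

def predicted_text() -> str:
    # Stand-in for a model's output: it is just text. The model never
    # "calls" anything; it only predicts a string shaped like a call.
    return '{"tool": "add", "args": [2, 3]}'

# The real capability lives in the harness, not the model.
TOOLS = {"add": lambda a, b: a + b}

def run_integration(text: str):
    call = json.loads(text)      # the harness parses the predicted text
    fn = TOOLS[call["tool"]]     # the harness looks up the actual tool
    return fn(*call["args"])     # the harness executes it

print(run_integration(predicted_text()))  # prints 5
```

If the predicted text is malformed or names a tool that doesn't exist, the harness fails — which is exactly the sense in which "more powerful" collapses into "predicts the right text more often."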
The only real metrics that apply to an LLM are size, speed, and accuracy. For frontier models, size and speed are always compensated for by throwing more hardware at the problem, so users never see them. Thus, the only reasonable measure of power, in the journalistic contexts you're talking about, is the accuracy of the text it's predicting.
Thus, a "more powerful AI model" is just one that is less wrong than the previous generations. No?
That said, I do agree with your points that journalists are doing the PR firms' jobs for them when they use "more powerful" as a stand-in for "less wrong."