kasperd
@kasperd@westergaard.social
Currently testing this platform to decide whether it's the future of social networking. Curriculum Vitae: PhD degree from Aarhus University Worked at Google Zürich and London Partner at Intempus Timeregistrering - until it was acquired by Visma Operating nat64.net/
6d ago
Sometimes I know enough about a topic to verify the correctness of an answer. In those cases there is less than a 10% chance that an AI can give me a useful answer to the question. But the chance is non-zero.
One example from my own experience: I wanted to know the git equivalent of the Mercurial command hg log -r 'children(.)'. I searched for an answer using search engines, and I asked several people who promote the use of git. All of that led me only to suggestions that produced incomplete output, and for years nobody I asked could suggest a fix.
Eventually I asked an AI, which initially gave me the same non-working answers I had seen before. After I pointed out two or three times that the suggested solution didn't work, the AI eventually managed to give me a useful suggestion. It hadn't produced a fully working solution, but it had offered a different approach that I was able to turn into something working by myself. The key point was to use the command git cat-file --batch-all-objects --batch-check as part of my solution.
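A rough sketch of that approach (this is a reconstruction for illustration, not the exact script I ended up with; it assumes a POSIX shell and a reasonably recent git): since git commits record parents but not children, you can emulate hg log -r 'children(.)' by scanning every commit object in the repository and keeping the ones that list the current HEAD among their parents.

```shell
#!/bin/sh
# Reconstruction sketch: list commits whose parent is the current HEAD.
# Unlike `git rev-list --all --children`, scanning with
# --batch-all-objects also covers commits not reachable from any ref.
head=$(git rev-parse HEAD)

git cat-file --batch-all-objects --batch-check='%(objecttype) %(objectname)' |
  awk '$1 == "commit" { print $2 }' |
  while read -r commit; do
    # `git rev-list --parents -n 1 X` prints "X parent1 parent2 ...";
    # $head preceded by a space means it appears as a parent of $commit.
    if git rev-list --parents -n 1 "$commit" | grep -q " $head"; then
      echo "$commit"
    fi
  done
```

This is slow on large repositories, since it spawns a rev-list per commit, but it matches the "enumerate everything, then filter" shape of the solution the AI nudged me toward.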
I wouldn’t have managed to find that solution without the AI’s help. But using the AI certainly wasn’t a quick and easy route to a solution either: I had to wade through loads of useless output to pinpoint the one tiny piece of information that was actually useful.
So my conclusion is that current LLMs are sometimes better than nothing, which is a pretty low bar to meet. And they are very far from the all-knowing, infallible oracles that proponents say they are.