In reply to
wonderingwanderer
@wonderingwanderer@sopuli.xyz
Wherever I wander I wonder whether I’ll ever find a place to call home…
sopuli.xyz
@wonderingwanderer@sopuli.xyz · 6d ago
If it's flagged as "assisted by ", then it's easy to identify where that code came from. If a commercial LLM is trained on proprietary code, that's on the AI company, not on the developer who used the LLM to write code — unless they can somehow prove that the developer had access to said proprietary code and personally exploited it.
If AI companies are claiming "fair use," and it holds up in court, then there's no way in hell open-source developers should be held accountable when closed-source snippets magically appear in AI-assisted code.
Granted, I am not a lawyer, and this is not legal advice. I think it's better to avoid using AI-written code in general. At most use it to generate boilerplate, and maybe add a layer to security audits (not as a replacement for what's already being done).
But if an LLM regurgitates closed-source code from its training data, I just can't see how that would be the developer's fault...