Comment on Lutris now being built with Claude AI, developer decides to hide it after backlash
Iheartcheese@lemmy.world 2 weeks agoYeah. Call me if he starts using AI artwork.
wholookshere@piefed.blahaj.zone 2 weeks ago
so you draw the line at stealing artists' work, but not programmers' work?
Dremor@lemmy.world 2 weeks ago
Being a developer, I don’t care if someone else uses my code. Code is like a brick. By itself it has little value; the real value lies in how it is used. If I find an optimal way to do something, my only wish is to make it available to as many people as possible. For the betterment of humanity.
wholookshere@piefed.blahaj.zone 2 weeks ago
Sure, but that’s just your view.
And also not how LLMs work.
They gobble up everything and produce unreadable code. That’s not learning.
Dremor@lemmy.world 2 weeks ago
That’s not how LLMs work either.
An LLM has no knowledge; it has the statistical probability of one token following another, and given an overall context it creates the statistically most likely text.
To calculate that probability as accurately as possible, you need as many examples as possible to determine how often word A follows word B. Hence the immense datasets required.
Luckily for us programmers, computer programs are inherently statistically similar, which makes LLMs quite good at them.
Now, the programs it creates aren’t perfect, but it lets you write long, boring code fast, and even explain it if you ask it to. This way I’ve learned a lot of new things that I wouldn’t have unless I had the time and energy to screw around with my programs (which I wish I had, but don’t), or dug through open source programs’ source code, which would take an average human years.
Now there is the problem of the ethical use of AI, which is a whole other aspect. I use only local models, which I run on my own hardware (usually using Ollama, but I’m looking into NPU-enabled alternatives).
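The token-probability idea described above can be sketched as a toy bigram model. This is purely a hypothetical illustration in Python: real LLMs use neural networks over huge token vocabularies and long contexts, not raw word-pair counts, but the core notion of "how often does B follow A" is the same.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count how often each word follows another -- a toy stand-in
    for the token statistics an LLM learns at vast scale."""
    words = text.split()
    counts = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def most_likely_next(counts, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Tiny made-up corpus; in practice this is why immense datasets are needed.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # "cat" (follows "the" twice, vs. "mat" once)
```

With only ten words of training data the model's "predictions" are trivial; scale the same counting idea up by many orders of magnitude (and replace counts with a learned network) and you get something LLM-shaped.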
Miaou@jlai.lu 2 weeks ago
Elon, Jeff, and Mark thank you for your service
Dremor@lemmy.world 2 weeks ago
I can live with helping some assholes if my contributions help others. At least I don’t make them richer, since I only use local AIs.
adeoxymus@lemmy.world 2 weeks ago
Tbh all programmers have been copy-pasting from each other forever. The middle step of searching Stack Overflow or GitHub for the code you want is simply removed.
galaxy_nova@lemmy.world 2 weeks ago
Exactly. If someone has already come up with an optimal solution, why the hell would I reimplement it? My real problems aren’t with LLMs themselves but with the sourcing of the training data and the power usage. If I could use an “ethically sourced” LLM locally, I’d be mostly happy. Ultimately, LLMs are also only good for code specifically. Architecture, or things that require a lot of thought like data pipelines, I’ve found AI to be pretty garbage at when experimenting.
wholookshere@piefed.blahaj.zone 2 weeks ago
That’s not what an LLM is doing, is it?
smeg@feddit.uk 2 weeks ago
Lutris is GPL-licensed, so isn’t it the opposite of stealing?
wholookshere@piefed.blahaj.zone 2 weeks ago
LLMs have stolen works from more than just artists.
At a minimum, ALL public repositories have been used as training data, regardless of licence, including licences that require all derivative work to be under the same licence.
So there’s more than just Lutris stolen.
lung@lemmy.world 2 weeks ago
So he’s a badass Robin Hood pirate who steals code from corporations and gives it to the people?
prole@lemmy.blahaj.zone 2 weeks ago
No, the LLM was trained on other code (possibly including Lutris, but also probably billions of lines from other things).