Your comment made me very curious, and I dunno if this is hilarious or disappointing.
Comment on ChatGPT's o3 Model Found Remote Zeroday in Linux Kernel Code
thomasembree@me.dm 6 days ago
@Kissaki In another thread, people are mocking AI because the free language models they are using are bad at drawing accurate maps. "AI can't even do geography". Anything an AI says can't be trusted, and AI is vastly inferior to human ability.
These same people haven't figured out the difference between using a language AI to draw a map, and simply asking it a geography question.
callouscomic@lemm.ee 5 days ago
2xsaiko@discuss.tchncs.de 5 days ago
Daniel Stenberg has banned AI-generated bug reports from cURL because they were exclusively nonsense and just wasted the maintainers’ time. Just because it gets a hit once doesn’t mean it’s good at this either.
Kissaki@beehaw.org 5 days ago
It does show that it can be a useful tool, though.
Here, the security researcher was evaluating it and stumbled upon a previously undiscovered security bug. Obviously, they didn’t let the AI create the bug report without understanding it. They verified the answer and took action themselves, presumably analyzing, verifying, and reporting in a professional and respectful way.
The cURL AI spam is an issue on the opposite side of that. But it doesn’t really tell us anything about capabilities. It tells us more about people. In my eyes, at least.
2xsaiko@discuss.tchncs.de 5 days ago
Yeah, that’s fair. When it’s verified beforehand, and what it discovered is an actual issue, why not. It does overwhelmingly attract people who have no idea what they’re doing and submit bogus reports because the output looks good to them, though.
apotheotic@beehaw.org 5 days ago
Like, I get that there are people who are mocking AI for the wrong reasons, and they’re silly for that, but there are very real reasons to dislike AI in many applications.
Would ChatGPT be able to do this if its dataset had consisted only of ethically obtained data, where the authors had provided consent? My money is on no, at least not yet. The technology is in its infancy and has powerful potential, but its progress is being boosted through highly unethical means.
I’m so very much for the concept of AI; it’s a monumental technology space at its core. But it needs to be done right, and I fear that it never will be, and we will have to live with the sins of the existing models forever. I hope I will be wrong.
If we can reach a future where models are trained on entirely consensual data and the environmental impact of their training and usage isn’t as dire, I’d be so happy.
thomasembree@me.dm 5 days ago
@apotheotic The issue with copyright is an inevitable misstep that was bound to happen while figuring out this technology. However, some of the criticisms aren't about ethical issues surrounding copyright; they are about the marketability of skills (such as painting) that you either had to learn yourself or otherwise needed to pay someone to do for you.
Now you can do that with an AI. Great for disabled people who can create freely now, bad for the artists who exploited that for financial gain.
The_Sasswagon@beehaw.org 5 days ago
I don’t think ‘disabled people’ need a computer to generate content to participate in art creation, and I don’t think artists making art is exploitation. The artists, meaning anyone who ever had their art posted online, are the ones being exploited here, their work was stolen and made to work for tech investors.
Even if these were tangible benefits they are a small compensation for the accelerated degradation of our shared planet, the mass robbery of nearly everyone on earth, and the further damage to our ability to critically think and create. And on top of that, the stuff it generates isn’t even very good.
thomasembree@me.dm 5 days ago
@The_Sasswagon AI is not destroying the planet; it literally didn't exist until a few years ago. The way we produce energy is the problem, and that won't go away if we ban AI.
AI is actually accelerating the timeline on a lot of important research, things that were decades away are now just years away. That alone might be what saves the climate.
If it was as simple as using less electricity by using less technology, it wouldn't be so hard to abandon your smartphone.
thomasembree@me.dm 5 days ago
@The_Sasswagon They do if they aren't physically capable of holding a brush, instrument, etc.
This allows people like that to paint, create music, etc. entirely on their own, by their own hand (or voice), without relying on the services of a skilled artist who might not be able to capture what that person is imagining.
People who don't have time to learn painting can now bring beauty into the world that would have otherwise never left their head.
Artists are complaining about that. Fuck them.
apotheotic@beehaw.org 5 days ago
I don’t disagree that it’s a misstep, but it feels like one that is not going to be corrected. It is going to be treated as the normal way to train AI.
I would hazard that there wouldn’t be nearly as many artists complaining about AI if it hadn’t been trained on immorally obtained inputs. The fact that it can effortlessly recreate the style of an artist that was added to the data without their consent is, I think, what gives most artists the visceral reaction that they have. “Not only is it doing what we can do (to some degree), it is doing so because our work was used without our consent”.
AI is a valuable tool for art if used correctly, but I don’t know if I agree that it is a disability aid. I can perhaps concede that someone who is entirely without fine motor ability can now make colours and shapes that vaguely resemble what they had in mind, where perhaps they couldn’t before, but it’s difficult for me to consider that case “creating”. It is creating in the same sense as describing to your friend what you want and them trying to draw what you describe. There’s an output that resembles your input description, which might be enough for some?
thomasembree@me.dm 5 days ago
@apotheotic As for things like creating images in the style of a specific artist, that is not plagiarism unless you are asking for a perfect replica of a specific art piece and claiming it as your own original work.
All artists imitate the styles they find appealing; if you paint a Van Gogh-style painting, it isn't plagiarism of Van Gogh. Likewise, if I were to imitate Van Gogh's style using an AI, the resulting image would be my original work and not Van Gogh's creation.
apotheotic@beehaw.org 5 days ago
I don’t agree with this argument at all, because if a human artist were to employ the same kind of algorithmic mimicry that an AI does, I would consider it plagiarism. There is a distinct difference between how a human observes and learns from other artists work, and how an AI does it.
Moreover, to take things out of the realm of plagiarism, if a human artist was mimicking the style of another artist and making bank off of it, and the original artist were to say “hey, that’s kinda not cool, I don’t appreciate this” you could have a conversation about how to accommodate both parties. With AI, there is no such conversation to be had, because it will replicate without barriers and do so in volumes that dwarf any sort of output the original artist could dream of, no matter how nicely you ask it not to, unless it was not trained on it in the first place.
Anyway, my pushback in my original message was not about the output being plagiarism or anything of the sort, it was about the usage of authors/artists work as training data (input) being non-consensual.
jarfil@beehaw.org 5 days ago
There are 10 kinds of people: those who think they understand neural networks, those who try to understand neural networks, and those whose neural networks can’t spot the difference.
It’s no coincidence how many people are bad at languages, communication, learning, or teaching. On the bright side, new generations are likely to be forced to get better.
thomasembree@me.dm 5 days ago
@jarfil I think it's an unavoidable instinct. In our ancestral environment, it was basic survival sense to fear the unknown and assume it could be dangerous. Caution just makes sense in that scenario.
There hasn't been enough time for our genes to adapt to our new, radically different environment. So people will continue to react to technological advances as if a tiger could leap out at any moment and maul them to death. Even I experience a vague unease, and I love technology.
FozzyOsbourne@lemm.ee 6 days ago
Searching for answers and creating maps are both completely unrelated to scanning source code for vulnerabilities. What is the point of this comment?
ChairmanMeow@programming.dev 6 days ago
I think the point is that even if LLMs suck at task A, they might be really good at task B. Just because code written by LLMs is often riddled with security flaws, doesn’t mean LLMs also suck at identifying those flaws.
FozzyOsbourne@lemm.ee 6 days ago
Yeah, exactly. A code scan is completely unrelated to generative AI; the only thing that even connects them is that someone used the chatbot as an interface to start the scan.