Comment on 'LLM-free' is the new '100% organic' - Creators Are Fighting AI Anxiety With an ‘LLM-Free’ Movement
lvxferre@mander.xyz 6 months ago
For writers, that “no AI” is not just the equivalent of “100% organic”; it’s also the equivalent of saying “we don’t let the village idiot write our texts when he’s drunk”.
Because, even once we shed all the paranoia surrounding A"I", those text generators still state things that are wrong, without a single shadow of doubt.
Occasionally. If you aren’t even proofreading its output, that’s dumb, but it can do a lot of heavy lifting in collaboration with a real worker.
For coders, there’s actually hard data on that: using Copilot or similar, you’re worth about a coder and a half.
Zaktor@sopuli.xyz 6 months ago
Sometimes. Sometimes it’s more accurate than anyone in the village. And it’ll reliably keep getting better. People relying on “AI is wrong sometimes” as the core plank of their opposition aren’t going to have a lot of runway before it’s so much less error-prone than people that the complaint is irrelevant.
The jobs and the plagiarism aspects are real and damaging and won’t be solved with innovation. The “AI is dumb” is already only selectively true and almost all the technical effort is going toward reducing that. ChatGPT launched a year and a half ago.
lvxferre@mander.xyz 6 months ago
So does the village idiot. Or a tarot reader. Or a coin toss. And you’d still be a fool if your writing relied on the output of any of those three. Or of an LLM bot.
You’re distorting the discussion from “now” to “the future”, and then vomiting certainty on future matters. Both things make me conclude that reading your comment further would be solely a waste of my time.
Zaktor@sopuli.xyz 6 months ago
You’re lovely. Don’t think I need to see anything you write ever again.
Ilandar@aussie.zone 6 months ago
Yes, I always get the feeling that a lot of these militant AI sceptics are pretty clueless about where the technology is and the rate at which it is improving. They really owe it to themselves to learn as much as they can so they can better understand where the technology is heading and what the best form of opposition will be in the future. As you say, relying on “haha Google made a funny” isn’t going to cut it forever.
Zaktor@sopuli.xyz 6 months ago
Yeah. AI making images with six fingers was amusing, but people glommed onto it like it was the savior of the art world. “Human artists are superior because they can count fingers!” Except then the models updated and it wasn’t as much of a problem anymore. It felt good, but it was just a pleasant illusion for people with very real reasons to fear the tech.
None of these errors are inherent to the technology; they’re just bugs to correct, and there’s plenty of money and attention focused on fixing bugs. What we need is more attention focused on either preparing our economies to handle this shock or greatly strengthening copyright enforcement (to stall development). A label like the one this post is about is a good step, but given how artistic professions already weren’t particularly safe, and “organic” labeling only has modest impacts on consumer choice, we’re going to need more.
sonori@beehaw.org 6 months ago
Except that when it comes to LLMs, the technology operates by probabilistically stringing together whichever word is most likely to appear next in the sentence, based on how frequently those words appeared in the training data, and that is a fundamental limitation of the technology.
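To make the point concrete, here is a deliberately minimal sketch of that idea: a toy bigram model that picks each next word purely by how often it followed the previous word in a training text. The corpus and function names are invented for illustration; real LLMs predict subword tokens with large neural networks rather than raw frequency tables, but the training objective is likewise next-token prediction.

```python
import random
from collections import Counter, defaultdict

# Toy training corpus (invented for this example).
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Count, for each word, which words followed it and how often.
followers = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """String words together by sampling each next word in
    proportion to its observed frequency -- no notion of meaning."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = followers[out[-1]]
        if not counts:
            break  # dead end: this word was never followed by anything
        choices, weights = zip(*counts.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

The output is locally plausible ("the cat sat on the…") yet the model has no idea what a cat or a mat is; it can only remix sequences it has seen, which is the limitation the comment above is describing.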
So long as a model has no regard for the actual, you know, meaning of a word, it definitionally cannot create a truly meaningful sentence. Instead, to get coherent output, the system must be fed training data that closely mirrors the context; this is why groups like OpenAI have found so much success not by refining the algorithm but by progressively scraping more and more of the internet into these systems.
I would argue that a similar inherent technological limitation applies to image generation: until a generative model can both model a four-dimensional space and conceptually understand everything it has created in that space, a generated image can only be as meaningful as the parts it has regurgitated from the work of the tens of thousands of people who do those things effortlessly.
This is not required to create images that can pass as human-made, but it is required to create ones that are truly meaningful on their own merits, and not just on the merits of the material they were created from. Nothing I have seen said by experts in the field indicates that we have found even a theoretical pathway to get there from here, much less that we are inevitably progressing along that path.
Mathematical models will almost certainly get closer to mimicking the desired parts of the data they were trained on as development continues, but it is important to understand that this is not a pathway to any actual conceptual understanding of the subject.