
AI content now outnumbers human-written articles on the internet, but the good news is that the slop seems to have plateaued, for now

69 likes

Submitted 20 hours ago by ryujin470@fedia.io to technology@beehaw.org

https://www.pcgamer.com/software/ai/ai-content-now-outnumbers-human-written-articles-on-the-internet-but-the-good-news-is-that-the-slop-seems-to-have-plateaued-for-now/


Comments

  • TranquilTurbulence@lemmy.zip 19 hours ago

    Since basically all data is now contaminated, there’s no way to get massive amounts of clean data for training the next generation of LLMs. This should make it harder to develop them beyond the current level. If an LLM isn’t smart enough for you yet, there’s a pretty good chance it won’t be for a long time.

    • Tollana1234567@lemmy.today 4 hours ago

      Law of diminishing returns: LLMs train on the AI slop of other LLMs, which were themselves trained on other LLMs, all the way down to “normal human-written slop”.

    • artifex@piefed.social 19 hours ago

      Didn’t Elon breathlessly explain how the plan was to have Grok rewrite and expand on the current corpus of knowledge so that the next Grok could be trained on that “superior” dataset, which would forever rid it of the wokeness?

      • Tollana1234567@lemmy.today 4 hours ago

        Trying to train it to be nothing but a Nazi LLM is difficult, even though he’s lobotomized it a couple of times.

      • TranquilTurbulence@lemmy.zip 4 hours ago

        That’s just Musk talk. I’ll ignore the hype and decide based on the results instead.

      • Naich@lemmings.world 17 hours ago

        It started calling itself MechaHitler after the first pass, so I’d be interested to see how much less woke it could get by training itself on that.

    • Xylight@lemdro.id 17 hours ago

      A lot of LLMs now use synthesized (i.e., AI-generated) training data. It doesn’t seem to affect them too adversely.

      • TranquilTurbulence@lemmy.zip 4 hours ago

        Interesting. In other models that was a serious problem.

    • fascicle@leminal.space 19 hours ago

      People will find a way somehow

      • TranquilTurbulence@lemmy.zip 4 hours ago

        Oh, I’m sure there is a way. We’ve already grabbed the low-hanging fruit, and the next one is a lot higher. It’s there, but it requires some clever trickery and effort.

  • TheFeatureCreature@lemmy.ca 17 hours ago

    It’s at the point now where the majority of the results of my web searches are clearly written by AI. You can look up the most obscure, difficult thing you can think of and you’ll miraculously find a 12-paragraph article about that exact topic that was “written” only just last month.

    And as with most AI “content”, those 12 paragraphs say absolutely nothing. AI is incredibly good at generating an entire novel’s worth of text that doesn’t actually say anything at all.

    • TranquilTurbulence@lemmy.zip 4 hours ago

      Just tried that, and I couldn’t even find a blog post that addresses exactly what I was looking for. Sure, there were many about adjacent topics, but not my niche interest. However, the AI answer at the top did much better because it was specifically tailored for me.

      That said, when I’m looking for more general information, I do tend to find blogs written by AI.

    • morto@piefed.social 16 hours ago

      I’ve been using the option to restrict results to older dates quite often in order to get good results.

      • TheFeatureCreature@lemmy.ca 14 hours ago

        That’s a good idea.

  • Darnton@piefed.zip 19 hours ago

    I don’t think it has plateaued; the reasons they give for why it should have done so make no sense. The main problem is their methodology for spotting AI-created content, which is highly dubious. The more straightforward explanation is that AI-created content has become harder to spot, especially for the tool the researchers used.

  • SnokenKeekaGuard@lemmy.dbzer0.com 20 hours ago

    News aggregator sites, especially in sports, are common offenders. Also full of fucking ads.

    • Tollana1234567@lemmy.today 4 hours ago

      Reddit is filled with it too; people often post AI articles without checking them.

    • Lembot_0004@discuss.online 19 hours ago

      They were this way before LLMs, so nothing has changed. The SEO shits just spend less time and resources now.

      • Sxan@piefed.zip 18 hours ago

        A classic example of late-stage enshittification: reduce the value and cost of content to maximize revenue. Although, technically, hurting users happens in the middle, but in this case advertisers are probably already getting screwed, so it’s at end-game.
