
Seems legit

459 likes

Submitted 16 hours ago by The_Picard_Maneuver@piefed.world to [deleted]

https://media.piefed.world/posts/pi/lp/pilpL91Tgtn5TK9.jpg

source

Comments

  • DarkCloud@lemmy.world 16 hours ago

    You can get offline versions of LLMs.
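
    And "offline" here really is offline: once the weights are on disk, nothing phones home. A minimal sketch with llama-cpp-python, assuming you grabbed a quantized GGUF file beforehand (the model path is a placeholder):

    ```python
    # Runs entirely offline once the weights are on disk.
    from llama_cpp import Llama

    # Hypothetical local file; any quantized GGUF model works.
    llm = Llama(model_path="models/qwen3-0.6b-q4_k_m.gguf")

    out = llm("Q: Do you need the internet to answer this? A:", max_tokens=64)
    print(out["choices"][0]["text"])
    ```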

    source
    • criss_cross@lemmy.world 14 hours ago

      And gpt-oss is an offline version of ChatGPT.

      source
    • utopianfiat@lemmy.world 15 hours ago

      Indeed huggingface.co/openai-community

      source
    • sp3ctr4l@lemmy.dbzer0.com 12 hours ago

      I’ve been toying with Qwen3.

      Open-source too!

      source
    • linkinkampf19@lemmy.world 15 hours ago

      First thing that came to mind: GPT4All

      source
    • SubArcticTundra@lemmy.ml 10 hours ago

      ollama.org

      source
    • Ghostalmedia@lemmy.world 11 hours ago

      I mean, most people have a local LLM in their pocket right now.

      source
  • tomiant@piefed.social 14 hours ago

    FCKGW-RHQQ2-YXRKT-8TG6W-2B7Q8

    source
    • Ghostalmedia@lemmy.world 11 hours ago

      CrAcKeD

      source
    • eager_eagle@lemmy.world 9 hours ago

      make sure to disconnect from the internet first

      source
  • uriel238@lemmy.blahaj.zone 8 hours ago

    Offline LLMs exist, but tend to have a few terabytes of base data just to get started (i.e. before LoRAs).

    source
    • nomorebillboards@lemmy.world 2 hours ago

      I thought it was more like 10-20GB to start out with a usable (but somewhat stupid) model.

      Are you confusing the size of the dataset with the size of the model?

      source
  • bjoern_tantau@swg-empire.de 15 hours ago

    It’s just audio of French cats farting.

    source
    • Lemmyoutofhere@lemmy.ca 14 hours ago

      Le pfffft.

      source
  • SSUPII@sopuli.xyz 15 hours ago

    If we assume a CD, you can probably fit a 256M model on it. But it will LOAD.

    source
    • MacNCheezus@lemmy.today 13 hours ago

      DVDs exist. They can fit approx. 7B params, enough to be somewhat productive.
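
      The arithmetic roughly works, ignoring file-format overhead: size is just parameter count times bits per weight. A quick sanity check in Python:

      ```python
      # Rough model size: parameters * bits-per-weight, in decimal GB.
      def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
          return params_billions * 1e9 * bits_per_weight / 8 / 1e9

      CD_GB, DVD_GB = 0.7, 4.7
      print(model_size_gb(0.256, 4))  # ~0.13 GB: a 256M model at 4-bit fits on a CD
      print(model_size_gb(7, 4))      # ~3.5 GB: a 7B model at 4-bit squeezes onto a DVD
      ```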

      source
  • khepri@lemmy.world 8 hours ago

    Could you crunch an LLM into 700 MB and still have it be functional? Cause this looks like a fun thing to actually do as a joke.

    source
    • yellowbadbeast@lemmy.blahaj.zone 5 hours ago

      Qwen3-0.6B is about 400 MB at Q4 and is surprisingly coherent for what it is.

      source
      • khepri@lemmy.world 5 hours ago

        That’s so crazy that an LLM capable of doing anything at all can be that small! That leaves room for like an entire .avi episode of Family Guy at DVD resolution on there, which is the natural choice for the remaining space, of course.

        source
      • khepri@lemmy.world 4 hours ago

        Wow, just popped it onto my very slow desktop and this little model rips, haha. I really think tiny LLMs with a good LoRA on top are going to be a huge deal going forward.
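
        For anyone who wants to try that pattern, a rough sketch with transformers + peft; the base model is real, but the adapter path is a placeholder for a LoRA you trained or downloaded yourself:

        ```python
        # Load a tiny base model, then stack a LoRA adapter on top of it.
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import PeftModel

        base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
        tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
        model = PeftModel.from_pretrained(base, "path/to/your-lora-adapter")  # hypothetical adapter

        inputs = tok("Hello", return_tensors="pt")
        print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))
        ```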

        source
    • lime@feddit.nu 7 hours ago

      yes! tinyllama is somewhere around 600MB. it’s hilariously inept. it’s like someone jpeg-compressed a robot.

      source
  • NullPointerException@lemmy.ca 15 hours ago

    That’s just Dr Sbaitso.

    source
  • SubArcticTundra@lemmy.ml 10 hours ago

    Does anyone know of any OSS LLMs that can search the web the way ChatGPT can?

    source
    • yellowbadbeast@lemmy.blahaj.zone 5 hours ago

      It’s not the LLM that does the web searching, but the software stack around it. On its own, an LLM is just a text completer. What you’d need is a frontend like OpenWebUI or Perplexica that would ask the LLM for, say, five internet search queries that could return useful information for the prompt, throw those queries into SearxNG, and then pipe the results into the LLM’s context for it to use.
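
      The whole loop fits in a few lines. A simplified sketch (it searches the prompt directly instead of asking the model for queries first, and assumes a local SearxNG instance with its JSON API enabled plus any OpenAI-compatible local server such as Ollama; the URLs and model name are placeholders):

      ```python
      import requests

      SEARXNG = "http://localhost:8080/search"            # assumed local SearxNG
      LLM = "http://localhost:11434/v1/chat/completions"  # assumed local OpenAI-compatible server

      def answer(question: str) -> str:
          # 1. Run the web search and keep the top few results.
          hits = requests.get(SEARXNG, params={"q": question, "format": "json"}).json()
          context = "\n".join(
              r["title"] + ": " + r.get("content", "") for r in hits["results"][:5]
          )
          # 2. Pipe the results into the LLM's context.
          resp = requests.post(LLM, json={
              "model": "qwen3",  # placeholder model name
              "messages": [
                  {"role": "system", "content": "Answer using this context:\n" + context},
                  {"role": "user", "content": question},
              ],
          })
          return resp.json()["choices"][0]["message"]["content"]

      print(answer("What is PieFed?"))
      ```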

      As for the models themselves, any decently-sized one that was released fairly recently would work. If you’re looking specifically for open-source rather than open-weight models (meaning that the training data and methodologies were also released rather than just the model weights), GPT-OSS 20B/120B and the OLMo models are recent standouts there. If not, the Qwen3 series are pretty good.

      source
      • SubArcticTundra@lemmy.ml 1 hour ago

        Thank you

        source
    • MonkderVierte@lemmy.zip 10 hours ago

      Depends. Does ChatGPT ignore robots.txt too?

      source
  • faizalr@piefed.social 15 hours ago

    It reminds me of the Encyclopædia Britannica on CD.

    source
    • KyuubiNoKitsune@lemmy.blahaj.zone 9 hours ago

      Encarta 95

      source
  • MidsizedSedan@lemmy.world 16 hours ago

    Isn’t it possible to download all of Wikipedia, and isn’t the file size surprisingly small? Can it fit on a CD?

    source
    • AmbiguousProps@lemmy.today 15 hours ago

      It could fit on a BDXL disc.

      source
      • masterspace@lemmy.ca 14 hours ago

        You can fit text-only Wikipedia on a normal Blu-ray, as it’s only about 24GB. You can also easily fit Llama 3.1 or any of the other open, offline-capable AI models, as they’re only about 4GB.

        source
      • gustofwind@lemmy.world 15 hours ago

        could also store it on a flash drive or microSD card

        source
    • Axolotl_cpp@feddit.it 16 hours ago

      No, you really can’t; the text-only version is like 43 GB.

      source
      • puppycat@lemmy.blahaj.zone 2 hours ago

        yes you really can; it’s like 20-25 GB depending on how recent a copy you have. I’ve been seeding Wikipedia for almost a year and it barely takes any space on my computer

        source
      • BanMe@lemmy.world 14 hours ago

        So gonna need like 2 CDs then

        source
    • SSUPII@sopuli.xyz 16 hours ago

      No

      (English) 24.05 GB without media. Adding media adds 428.36 TB.

      source
      • Axolotl_cpp@feddit.it 16 hours ago

        Can you give me the text-only version link? I found only a version that is like 43 GB.

        source
      • GregorGizeh@lemmy.zip 13 hours ago

        500TB is still surprisingly reasonable for what is essentially a library of human (surface-level) knowledge.

        It would be interesting to know how large the file would be including all text-form references (I’d imagine anything else, such as videos, would completely blow up the proportions).

        source
    • rain_worl@lemmy.world 14 hours ago

      kiwix? that’s compressed (afaik), and when i tried, it took up half of my disk space and needed ethernet

      source
  • SanctimoniousApe@lemmings.world 15 hours ago

    Maybe they meant GTA?

    source