
How did websites like TinEye recognize cropped photos of the same image (and otherwise similar pictures), without the low barrier to entry of today's LLM/AI models?

46 likes

Submitted 10 hours ago by bathing_in_bismuth@sh.itjust.works to [deleted]


Comments

  • over_clox@lemmy.world 10 hours ago

    JPEG works in 8x8 pixel blocks, and back in the day, most JPEG images weren’t all that big. Each 8x8 pixel block (64 pixels per block) could easily and quickly be processed as if it were a single pixel.

    So if you had a 1024x768 JPEG, the fast scanning technique would only need to scan a 128x96 grid of blocks, with no need to process every single pixel.

    Of course the results could never be perfectly accurate, but most images are unique enough that this would be more than sufficient for fast scanning.
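
    A rough sketch of that block-level scan, just to make the idea concrete. This is only an illustration, not TinEye's actual pipeline: it collapses each 8x8 tile of a grayscale image to its average value (roughly what the DC coefficient of a JPEG block encodes) and diffs the two reduced grids. Pillow and NumPy are assumed, and the filenames are placeholders.

    ```python
    from PIL import Image
    import numpy as np

    def block_signature(path, block=8):
        """Collapse each block x block tile to its mean, e.g. 1024x768 -> 128x96."""
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        h, w = img.shape
        h, w = h - h % block, w - w % block                # drop ragged edges
        tiles = img[:h, :w].reshape(h // block, block, w // block, block)
        return tiles.mean(axis=(1, 3))

    def block_distance(path_a, path_b):
        """Mean absolute difference of the signatures; lower means more alike."""
        a, b = block_signature(path_a), block_signature(path_b)
        if a.shape != b.shape:                             # same-size images only in this sketch
            return float("inf")
        return float(np.abs(a - b).mean())

    print(block_distance("photo.jpg", "reencoded_photo.jpg"))  # hypothetical files
    ```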

    • bathing_in_bismuth@sh.itjust.works 10 hours ago

      Okay, I’m not entirely a layman but also not exactly an expert: if a maximally pixelated version from Photoshop produces the same signature as the detailed comparison, it would match? And if that’s the case, I imagine all the human input and behavioral data would only improve the algorithm?

      • over_clox@lemmy.world 10 hours ago

        Looking past the days of old, while also setting aside modern artificial intelligence, the same techniques would still work if you just processed the thumbnails of the images, which, for simplicity’s sake, might as well be 1/8-scale images, if not even lower resolution.
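
        With thumbnails, the comparison is just a resize and a diff. A minimal sketch, assuming Pillow and NumPy with placeholder filenames (again, an illustration of the idea rather than any particular site's implementation):

        ```python
        from PIL import Image, ImageChops
        import numpy as np

        def thumb(path, scale=8):
            """Shrink to roughly 1/8 scale (or lower) in grayscale."""
            img = Image.open(path).convert("L")
            return img.resize((max(1, img.width // scale), max(1, img.height // scale)))

        def thumb_distance(path_a, path_b):
            a, b = thumb(path_a), thumb(path_b)
            b = b.resize(a.size)              # force a common size for the diff
            return float(np.asarray(ImageChops.difference(a, b)).mean())  # 0 = identical

        print(thumb_distance("original.jpg", "reupload.jpg"))  # hypothetical files
        ```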

  • Sleepkever@lemmy.zip 3 hours ago

    Looking up similar images and searching for crops are computer vision topics, not large language model (basically a text predictor) or image generation AI topics.

    Image hashing has been around for quite a while now, and there are crop-resistant image hashing libraries readily available, like this one: pypi.org/project/ImageHash/

    It’s basically looking for defining features in images and storing those in an efficient, searchable way, probably in a traditional database. As long as the features are close enough, or a partial match in the case of a crop, it’s a similar image.
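
    A minimal sketch with that ImageHash library (pip install ImageHash; the filenames and the distance threshold are placeholders): perceptual hashes of an original and an edited copy differ by a small Hamming distance, and in a real index you would store the hashes in a database and search by that distance.

    ```python
    from PIL import Image
    import imagehash

    original = imagehash.phash(Image.open("original.jpg"))      # 64-bit perceptual hash
    candidate = imagehash.phash(Image.open("edited_copy.jpg"))

    distance = original - candidate                             # Hamming distance
    print(distance, "likely the same image" if distance <= 8 else "different")

    # The library also ships a crop-resistant mode (imagehash.crop_resistant_hash)
    # that hashes segmented regions, so a cropped copy can still partially match.
    ```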

  • Nemo@slrpnk.net 10 hours ago

    They had the AI models of those days.

    • bathing_in_bismuth@sh.itjust.works 10 hours ago

      That’s cool, I didn’t know AI models were a thing in those days. Are they comparable (maybe more crude?) to today’s tech? Like, did they use machine learning? As far as I remember there wasn’t much dedicated AI-accelerating hardware back then. Maybe a beefy GPU for neural network purposes? Interesting though.

      • Zwuzelmaus@feddit.org 10 hours ago

        Models were a thing even some 30 or 40 years ago. Processing power makes most of the difference today: it allows larger models and quicker results.

      • cecilkorik@lemmy.ca 9 hours ago

        We didn’t call them AI because they weren’t (and aren’t) intelligent. Marketing companies eventually realized there were trillions of dollars to be made by convincing people they were intelligent, so they created models explicitly designed to persuade people that they are intelligent, that they can have genuine conversations and create real art like a real human, and that they aren’t just empty-headedly mimicking thousands of years of human conversation and art. Then they immediately used those models to convince people that the models themselves were intelligent (and of many other things besides). Given that marketing and advertising literally exist to convince people of things and have become exceedingly good at it, it’s really a brilliant business move and seems to be working great for them.

      • brucethemoose@lemmy.world 9 hours ago

        Oh, and to answer this specifically: Nvidia GPUs have been used in ML research forever, going back to 2008 and cards like the GTX 280, maybe earlier.

        So have CPUs. In fact, Intel made specific server SKUs for giant AI users like Facebook. See: servethehome.com/facebook-introduces-next-gen-coo…

  • Feyd@programming.dev 9 hours ago

    What you’re looking for is the history of “computer vision”
