over_clox@lemmy.world 1 day ago
JPEG works in 8x8 pixel blocks, and back in the day, most JPEG images weren’t all that big. Each 8x8 pixel block (64 pixels per block) could easily and quickly be processed as if it were a single pixel.
So if you had a 1024x768 JPEG, the fast scanning technique would only scan the 128x96 grid of blocks, with no need to process every single pixel (see the sketch below).
Of course the results could never be perfectly accurate, but most images are unique enough that this would be more than sufficient for fast scanning.
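A minimal sketch of the block-as-pixel idea, written in Python with NumPy (my own illustration, not necessarily the exact technique described above): it averages each 8x8 block of an already-decoded grayscale image, which is roughly what the DC coefficient of a JPEG block captures, so a 1024x768 image collapses to a 128x96 grid.

```python
# Sketch only: collapse each 8x8 block of a decoded image to one value,
# so a 1024x768 image becomes a 128x96 grid for fast comparison.
# Assumes the image is already decoded to a grayscale NumPy array;
# the function name block_downscale is just for illustration.
import numpy as np

def block_downscale(gray: np.ndarray, block: int = 8) -> np.ndarray:
    h, w = gray.shape
    # Trim so the dimensions are exact multiples of the block size.
    h, w = h - h % block, w - w % block
    trimmed = gray[:h, :w]
    # Reshape into (rows, block, cols, block) and average each block.
    blocks = trimmed.reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))

# Example: a 1024x768 image yields a 96x128 array (rows x cols).
img = np.random.randint(0, 256, size=(768, 1024), dtype=np.uint8)
print(block_downscale(img).shape)  # (96, 128)
```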
bathing_in_bismuth@sh.itjust.works 1 day ago
Okay, not entirely a layman but also not exactly an expert: if the Photoshop max-pixelated entry uses the same formula as the detailed comparison, it would match? And if that's the case, I imagine all the human input and behavioral data would only improve the algorithm?
over_clox@lemmy.world 1 day ago
Setting aside the days of old, and also dismissing modern artificial intelligence, the same techniques would still work if you just processed the thumbnails of the images, which for simplicity's sake might as well be 1/8-scale images, if not even lower resolution.
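Continuing the sketch above (again only an illustration; the threshold value is arbitrary, not from anything stated here), comparing two of those coarse grids can be as cheap as a mean absolute difference:

```python
# Sketch only: decide whether two thumbnails/coarse grids likely show the
# same image. Assumes both were produced by block_downscale() above and
# have the same shape; the threshold of 10.0 is an arbitrary placeholder.
import numpy as np

def thumbnails_match(a: np.ndarray, b: np.ndarray, threshold: float = 10.0) -> bool:
    if a.shape != b.shape:
        return False
    # Mean absolute difference over the coarse grids; a small value
    # suggests the images are probably the same.
    diff = np.abs(a.astype(np.float32) - b.astype(np.float32)).mean()
    return float(diff) < threshold
```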
bathing_in_bismuth@sh.itjust.works 1 day ago
That makes sense. I've seen it produce some amazing results, but also some painfully hard-to-make mistakes. Kinda neat; imagine going by that mindset, making the most of what you have, without a never-ending redundant hell of dependencies for even the most basic function/feature?!
brucethemoose@lemmy.world 1 day ago
That has, indeed, been the motto of ML research for a long time: just finding more efficient approaches.
It's people like Altman who introduced the idea of not innovating and just scaling up what you already have. Hence many in the research community know he's full of it.