De-duplicate the internet. You have your orders.
Comment on: New "symbolic image compressor" posted in r/computerscience turns out to be AI hallucinated nonsense
altkey@lemmy.dbzer0.com 5 hours ago
Reading OP's post and thinking about their misinformed understanding of what they are doing, I came upon an idea I propose to all of you: the almighty Babylonian Compression Algorithm.
As long as we have every possible (say, 256×256 px) image in the database, we can cut an image down to just a reference to a file in said database.
It produces a bit-by-bit copy of any image without any compression loss, so it puts OOP's project to shame. A little, almost non-existent problem is having access to said database, bloated with every existing but also every not-yet-existing image. But since OOP's solution depends on proprietary ChatGPT on someone else's server, we are on par there.
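For concreteness, a minimal sketch of what that reference would be, assuming 256×256 8-bit grayscale images and a database that enumerates them in lexicographic order of their pixel bytes (the function names are made up for illustration):

```python
# Babylonian "compression": with the database enumerated lexicographically,
# an image's reference is just its pixel bytes read as one enormous integer.

def babylonian_index(pixels: bytes) -> int:
    """Return the index of a 256x256 grayscale image in the database of all such images."""
    assert len(pixels) == 256 * 256
    return int.from_bytes(pixels, byteorder="big")

def babylonian_lookup(index: int) -> bytes:
    """'Decompress' by reconstructing the pixel bytes from the reference."""
    return index.to_bytes(256 * 256, byteorder="big")

image = bytes(range(256)) * 256  # a stand-in 256x256 grayscale image
assert babylonian_lookup(babylonian_index(image)) == image  # bit-for-bit round trip
```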
axexrx@lemmy.world 4 hours ago
Like a Library of Babel for images.
Armok_the_bunny@lemmy.world 3 hours ago
Funnily enough, that actually wouldn't be a more efficient compression algorithm: the file reference would, at best, be exactly the same size as the image it points to, because any fewer bits would force different images to share the same reference.
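A quick pigeonhole check of that, assuming 256×256 images with 24-bit RGB pixels:

```python
# Pigeonhole check: giving every possible 256x256 24-bit RGB image its own
# reference takes at least as many bits as the raw image data itself.
width, height, bits_per_pixel = 256, 256, 24

raw_bits = width * height * bits_per_pixel   # 1,572,864 bits (~192 KiB) per raw image
num_images = 2 ** raw_bits                   # count of distinct possible images

# A unique reference for each of num_images items needs this many bits:
reference_bits = (num_images - 1).bit_length()
print(raw_bits, reference_bits)              # 1572864 1572864 -- no saving at all
```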
qaz@lemmy.world 43 minutes ago
The funny thing is that it would probably still be more efficient than OOP's approach, since that one stores a word in a JSON map for each pixel.
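A rough size comparison, assuming a JSON layout along the lines of one "x,y": "word" entry per pixel (a guess at OOP's format, not quoted from the post):

```python
# Rough comparison of raw 24-bit RGB pixels versus a word-per-pixel JSON map
# (the exact JSON layout here is a guess, not OOP's actual format).
import json

width, height = 256, 256
raw_size = width * height * 3  # 196,608 bytes of raw 24-bit RGB

# One made-up "symbolic" word per pixel, keyed by its coordinate.
symbolic = {f"{x},{y}": "crimson" for y in range(height) for x in range(width)}
json_size = len(json.dumps(symbolic).encode("utf-8"))

print(raw_size)   # 196608
print(json_size)  # roughly 1.4 MB even with a single short word per pixel
```

So even the ~192 KiB "reference" from the Babel database comes out ahead.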