Have you tried compressing it again?
freamon@preferred.social 2 days ago
TV shows and movies are already compressed. If you try to compress something that's already compressed, it typically ends up bigger if anything.
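You can see it with a couple of lines of Python (just a toy sketch: zlib standing in for the compressor, random bytes standing in for already-compressed video):

```python
import os
import zlib

# Random bytes are a stand-in for video data, which is already dense and
# statistically close to noise after the codec has done its work.
already_compressed = os.urandom(1_000_000)

once = zlib.compress(already_compressed)
twice = zlib.compress(once)

print(len(already_compressed))  # 1000000
print(len(once))                # typically a few hundred bytes *larger*, not smaller
print(len(twice))               # slightly larger again
```

The extra bytes are basically the compressor's own headers, checksums and block bookkeeping; with no redundancy left to squeeze out, that overhead is all you get.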
Tungsten5@lemm.ee 2 days ago
EveryMuffinIsNowEncrypted@lemmy.blahaj.zone 2 days ago
Why does it get bigger? I’ve wondered that for a while now.
I would think that compressing something that’s already compressed would still compress it further but at diminishing returns.
MrNesser@lemmy.world 2 days ago
Once the files are added to the zip folder, you're also adding information about the files so they can be extracted again later.
EveryMuffinIsNowEncrypted@lemmy.blahaj.zone 2 days ago
Ohhh you know that makes sense. So, basically, what you’re saying is this?
cam_i_am@lemmy.world 1 day ago
There's more to it than that. Firstly, at a theoretical level you're dealing with the concepts of entropy and information density. A given file has a certain amount of information in it. Compressing it is sort of like distilling the file down to its purest form. Once you've reached that point, there's nothing left to "boil away" without losing information.
Secondly, from a more practical point of view, compression algorithms are designed to work nicely with "normal" real-world data. For example, as a programmer you might notice that your data often contains repeated digits. So say you have this data: "11188885555555". That's easy to compress by describing the runs: there are three 1s, four 8s, and seven 5s, so we can compress it to this: "314875". This is called "Run Length Encoding" and it just compressed our data by more than half!
But look what happens if we try to apply the same compression to our already-compressed data. There are no repeated digits: there's just one 3, then one 1, and so on, which encodes to "131114181715". That doubled the size of our data, almost back to the original length.
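Here's the whole thing as a tiny Python sketch, if anyone wants to play with it (digits only, made-up helper names, obviously not a real compressor):

```python
def rle_encode(data: str) -> str:
    """Encode a digit string as <run length><digit> pairs, e.g. '111' -> '31'."""
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out.append(f"{run}{data[i]}")
        i += run
    return "".join(out)

def rle_decode(encoded: str) -> str:
    """Reverse the encoding. Only works while runs stay shorter than 10."""
    return "".join(digit * int(count)
                   for count, digit in zip(encoded[::2], encoded[1::2]))

print(rle_encode("11188885555555"))              # 314875        (14 chars -> 6)
print(rle_encode(rle_encode("11188885555555")))  # 131114181715  (6 chars -> 12, bigger!)
print(rle_decode("314875"))                      # 11188885555555
```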
This is a contrived example but it illustrates the point. If you apply an algorithm to data that it wasn’t designed for, it will perform badly.