First just think about the logic of what I said before: if there is only a finite number of combinations in the link, how can you possibly link to a larger number of items? It’s just logically impossible.
Then how is it that I was able to link to 800 words with 5 characters (stripping aside the static portion of the links)?
The fact that you were able to link to 800 words doesn’t really mean anything. somesite.com/a could point to a file that was gigabytes in size. That doesn’t mean the file got compressed down to a. Right?
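To make that pointer-vs-compression distinction concrete, here’s a toy sketch (the site name, key, and stored text are all made up): the short key is just a lookup into data the server already holds, so the key’s length says nothing about the size of what it returns.

```python
# Toy sketch of a link shortener / paste site (everything here is hypothetical).
# The short key "a" is just an index into storage the server already has;
# none of the stored text was squeezed into the key itself.
storage = {
    "a": "an enormous 800-word (or multi-gigabyte) document lives here...",
}

def resolve(key: str) -> str:
    """Return whatever content the server has filed under this key."""
    return storage[key]

print(resolve("a"))  # long content comes back, but "a" never contained it
```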
There also might be fewer combinations for that site than it appears. For an 800 word chunk of grammatical English text, there are a lot fewer combinations than for the equivalent length of arbitrary characters. Instead of representing each character in a word, it could just use an id like dog=1, antidisestablishmentarianism=2 and so on. Even using tricks like that though, it’s pretty likely you’re only able to link to a subset of all the possible combinations.
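As a rough back-of-the-envelope check (the vocabulary size and link alphabet here are assumptions, not measurements of the actual site): even with the word-id trick, an 800-word English text needs far more bits than 5 link characters can carry, which is why only a subset can be reachable.

```python
import math

# Rough numbers, assumed purely for illustration:
vocab_size = 100_000   # words the encoder might know
words = 800            # length of the text being "linked"
link_chars = 5         # variable part of the link
alphabet = 62          # a-z, A-Z, 0-9 in the link

# Encoding each word as a vocabulary id instead of spelling it out:
bits_per_word = math.log2(vocab_size)   # ~16.6 bits per word
bits_for_text = words * bits_per_word   # ~13,300 bits for the whole text

# Information a 5-character link can actually carry:
bits_in_link = link_chars * math.log2(alphabet)   # ~29.8 bits

print(f"text needs ~{bits_for_text:,.0f} bits, link holds ~{bits_in_link:.1f} bits")
# A 5-character link can address at most 62**5 ≈ 9.2e8 items -- a tiny
# subset of all possible 800-word texts.
```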
Regarding compression in general, it’s a rule that you can’t compress something independent of its content. If you could do that, even if the compression only reduced the file by the tiniest fraction, you could just repeatedly apply the algorithm until you reach a point where the problem I described becomes obvious. If you could compress any large file down to a single byte, then that single byte can only represent 256 distinct values. However, there are more than 256 distinct files that can exist, so clearly something went wrong. This rule is kind of like breaking the speed of light or perpetual motion: if you get an answer that says you have perpetual motion or FTL travel, then you automatically know you did something wrong. Same thing with being able to compress without regard to the content.
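Here’s a quick sketch of both halves of that argument (nothing specific to the site in question): the counting shows there simply aren’t enough short outputs to go around, and running a real compressor repeatedly on random data shows the “just keep compressing” trick going nowhere.

```python
import os
import zlib

# Pigeonhole count: there are 256**n distinct n-byte files, but strictly
# fewer files of all shorter lengths combined, so no lossless scheme can
# shrink every n-byte file.
n = 4
files_of_length_n = 256 ** n                      # 4,294,967,296
shorter_files = sum(256 ** k for k in range(n))   # lengths 0..n-1: 16,843,009
assert shorter_files < files_of_length_n

# "Just keep re-compressing": random bytes have no structure to exploit,
# so repeated zlib passes never shrink them; each pass adds a little
# format overhead instead.
data = os.urandom(100_000)
for i in range(4):
    data = zlib.compress(data)
    print(f"pass {i + 1}: {len(data):,} bytes")
```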