If we consider information to be safe if we encrypt it (e.g., text in a file, encrypted with modern strong encryption), would it be safer (as in harder to crack) if we then encrypted the encrypted file, and encrypted the encrypted^2 file, etc.? Is this what strong encryption already does behind the scenes?
I would say, what is the point? If you encrypt something with AES-256, it would still take longer than the lifetime of the universe to brute force, and if a flaw in the algorithm is discovered, or computing power exceeds current projections (say, with quantum computing), double or triple encryption won’t help.
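To put a rough number on “lifetime of the universe”, here’s a back-of-the-envelope sketch, assuming a hypothetical attacker who can test 10^18 keys per second (far beyond any real hardware):

```python
# Back-of-the-envelope: expected time to brute-force a single AES-256 key.
# The 10**18 keys/second rate is a hypothetical assumption for illustration.
keyspace = 2**256                      # number of possible AES-256 keys
rate = 10**18                          # assumed keys tested per second
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace / 2 / (rate * seconds_per_year)   # on average you search half the keyspace
print(f"~{years:.1e} years")           # roughly 1.8e51 years; the universe is ~1.4e10 years old
```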
We tried this in the 90s with VPNs, using a layered variant of DES called 3DES (DES applied three times in an encrypt-decrypt-encrypt construction), and we have since created better algorithms. A sketch of that layering is below.
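Here’s roughly what that 3DES layering looks like, sketched with single-DES primitives from pycryptodome (the library choice is mine, not anything from the thread):

```python
# Sketch of the 3DES encrypt-decrypt-encrypt (EDE) construction on one 8-byte block.
from Crypto.Cipher import DES  # pip install pycryptodome

def des_ede_encrypt_block(block: bytes, k1: bytes, k2: bytes, k3: bytes) -> bytes:
    """Encrypt one 8-byte block as E_k3(D_k2(E_k1(block))) -- the 3DES layering."""
    step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
    step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
    return DES.new(k3, DES.MODE_ECB).encrypt(step2)

# The middle step is a *decryption* so that k1 == k2 == k3 degenerates to plain
# single DES, which kept 3DES backward compatible with existing DES hardware.
```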
solrize@lemmy.world 2 months ago
As people have said, the keys have to be completely independent of each other or else the layering can make the encryption weaker. And, if you’re worried about one of your layers being weak, you shouldn’t be using that layer in the first place.
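A minimal sketch of what independent-key layering looks like, assuming AES-256 in CTR mode via pycryptodome (the cipher, mode, and library are my choices for illustration):

```python
# Two encryption layers with independently generated keys and nonces.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def double_encrypt(plaintext: bytes):
    k1, n1 = get_random_bytes(32), get_random_bytes(8)   # inner layer key material
    k2, n2 = get_random_bytes(32), get_random_bytes(8)   # outer layer key material, drawn independently
    inner = AES.new(k1, AES.MODE_CTR, nonce=n1).encrypt(plaintext)
    outer = AES.new(k2, AES.MODE_CTR, nonce=n2).encrypt(inner)
    return outer, (k1, n1), (k2, n2)

def double_decrypt(ciphertext: bytes, inner_key, outer_key):
    k1, n1 = inner_key
    k2, n2 = outer_key
    inner = AES.new(k2, AES.MODE_CTR, nonce=n2).decrypt(ciphertext)   # peel outer layer first
    return AES.new(k1, AES.MODE_CTR, nonce=n1).decrypt(inner)

# Usage: ct, ik, ok = double_encrypt(b"hello"); assert double_decrypt(ct, ik, ok) == b"hello"
```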
I think SSL/TLS actually gained something from this, though. The initial key agreement phase generated (from my foggy memory) a “premaster secret”, then hashed it with both SHA-1 and MD5 and combined the two hashes in some way. Those were the two popular hash algorithms of that era. Later, weaknesses (practical collision attacks) were found in MD5, and even later in SHA-1. By combining both algorithms, SSL avoided any hint of compromise from those particular hash problems. SSL’s designer Paul Kocher later said he was very glad he had specified using both.
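The idea, very loosely (this is not the exact SSL/TLS construction, just the “use both hashes” principle):

```python
# Mix both MD5 and SHA-1 over the same secret material so that a break in
# either hash alone doesn't fully compromise the derived keys.
import hashlib

def combined_digest(premaster_secret: bytes, seed: bytes) -> bytes:
    md5_part = hashlib.md5(premaster_secret + seed).digest()    # 16 bytes
    sha1_part = hashlib.sha1(premaster_secret + seed).digest()  # 20 bytes
    # TLS 1.0/1.1 actually XORed two HMAC-based expansion streams (P_MD5 and
    # P_SHA-1); simple concatenation here just illustrates the principle.
    return md5_part + sha1_part
```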
I would say, though, that secure hashing (with a public algorithm and no secrets) has generally been considered a harder problem than secret-key encryption or authentication. And SHA-1 and MD5 both used design approaches now considered dubious.
ryannathans@aussie.zone 2 months ago
Many applications, like Signal, do combine a classical and a quantum-secure scheme: Signal’s PQXDH key agreement mixes an elliptic-curve exchange (X25519) with a post-quantum KEM (CRYSTALS-Kyber), so an attacker would have to break both.
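The pattern is to feed both shared secrets into one key-derivation step, something like this sketch (a generic HKDF-style combine with SHA-256; the stand-in secrets and the exact KDF are my assumptions, not Signal’s actual PQXDH spec):

```python
# Derive a session key from BOTH a classical and a post-quantum shared secret,
# so that breaking either exchange alone is not enough to recover the key.
import hashlib, hmac

def combine_shared_secrets(classical_ss: bytes, pq_ss: bytes,
                           info: bytes = b"hybrid handshake") -> bytes:
    # HKDF-style extract-then-expand over the concatenated secrets.
    prk = hmac.new(b"\x00" * 32, classical_ss + pq_ss, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()  # 32-byte session key
```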