Comment on: Phones have unique phone numbers, why don't computers have unique computer-numbers?
jeffhykin@lemm.ee 6 months ago

> no need for an endpoint to be directly exposed
If I were an engineer in the past trying to send a message back to an endpoint (e.g., a server response), I would've reached for giving everything a static IP, same as the EID system with phones, instead of the DHCP/multi-tier-NAT style system with temporary addresses.
I'm all but certain they didn't do it for privacy reasons at the time.
BearOfaTime@lemm.ee 6 months ago
Well, endpoints then were largely mainframe-type systems, long before PCs existed, let alone network-capable PCs and HTTP. So it was a different idea than what we have today.
Before the internet, you could connect two physically disparate systems using point-to-point, permanently switched connections (so a link always consumed a potential connection, even when no data was being transmitted). If you had Point A connected to Point B, you needed a third connection to communicate with Point C. The idea was: if B already had a connection to C, why not share that bandwidth/connection so A only needed one connection? And then apply a data-switching concept (e.g., packet switching) instead of circuit switching. The scaling argument is sketched below.
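To put rough numbers on that sharing argument (my own illustration, not from the original comment): with dedicated circuits, every pair of sites needs its own permanent line, so the count grows quadratically, while a shared packet-switched network needs roughly one link per site.

```python
# A quick sketch of the scaling argument above: dedicated point-to-point
# circuits between every pair of sites grow quadratically, while a shared
# packet-switched network needs roughly one link per site.
def full_mesh_circuits(n_sites: int) -> int:
    # Every pair of sites needs its own permanent circuit.
    return n_sites * (n_sites - 1) // 2

for n in [3, 10, 50]:
    print(f"{n} sites: {full_mesh_circuits(n)} dedicated circuits vs ~{n} shared links")
```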
We were still using P-to-P connections in the late 90s because internet capabilities weren't quite up to what some systems needed for latency, timing, and bandwidth.
At first, just getting two endpoint mainframes connected was a big deal, and individual user devices weren't much of a thought yet. Most stuff was still mainframe-based, so having those connected was sufficient for user communication/data sharing anyway. User connectivity wasn't the main concern; moving data from one system to another was. Say an entity has two locations and needs to sync the systems in those locations. You either use a circuit-switched P-to-P link, with downtime for users while the sync is happening, or you send physical tapes (magnetic, or even punched paper tape) cross-country to move the data, with that data being out of sync and requiring manual updates to re-sync.
Routing was necessary primarily for backbone transit, and secondarily for organizations with multiple systems, hence the classful IP approach (sketched below).
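For anyone who hasn't met classful addressing: the class, and therefore the network/host split, was determined entirely by the leading bits of the first octet. A minimal sketch (my illustration, using the standard class boundaries):

```python
# How classful IPv4 divided the 32-bit space by the leading bits
# of the first octet.
def ipv4_class(address: str) -> str:
    """Return the classful category of a dotted-quad IPv4 address."""
    first_octet = int(address.split(".")[0])
    if first_octet < 128:     # leading bit 0    -> /8 network, ~16M hosts
        return "A"
    elif first_octet < 192:   # leading bits 10  -> /16 network, ~65K hosts
        return "B"
    elif first_octet < 224:   # leading bits 110 -> /24 network, 254 hosts
        return "C"
    elif first_octet < 240:   # leading bits 1110 -> multicast
        return "D (multicast)"
    else:
        return "E (reserved)"

for addr in ["17.0.0.1", "172.16.4.2", "192.168.1.10"]:
    print(addr, "-> Class", ipv4_class(addr))
```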
DHCP is a local-network requirement; ask any admin about hand-managing static IP addresses, it's a nightmare. I don't even like it at home with a handful of devices. The bookkeeping DHCP automates is sketched below.
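To make that concrete, here's a toy sketch (my own illustration; real DHCP is a full protocol with DISCOVER/OFFER/REQUEST/ACK messages) of the bookkeeping it automates: handing out addresses from a pool with expiring leases, instead of an admin tracking every host by hand.

```python
# Toy lease pool: the core bookkeeping that DHCP automates.
import time

class LeasePool:
    def __init__(self, addresses, lease_seconds=3600):
        self.free = list(addresses)
        self.leases = {}          # MAC address -> (IP, expiry time)
        self.lease_seconds = lease_seconds

    def request(self, mac: str) -> str:
        now = time.time()
        if mac in self.leases and self.leases[mac][1] > now:
            # Renew an existing, unexpired lease for this client.
            ip = self.leases[mac][0]
        else:
            # Reclaim expired leases back into the pool, then hand out
            # the next free address (raises IndexError if exhausted).
            for client, (ip_addr, expiry) in list(self.leases.items()):
                if expiry <= now:
                    self.free.append(ip_addr)
                    del self.leases[client]
            ip = self.free.pop(0)
        self.leases[mac] = (ip, now + self.lease_seconds)
        return ip

pool = LeasePool([f"192.168.1.{n}" for n in range(100, 110)])
print(pool.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.100
print(pool.request("aa:bb:cc:dd:ee:02"))  # 192.168.1.101
```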
NAT is a result of the limited IP address space, not DHCP. There simply aren't enough addresses in 32 bits (about 4.3 billion) for every local device to have a public IP (nor would you want that), plus you have multiple services behind a router using local addressing. Even with static local addresses, you'd still need NAT. The core translation idea is sketched below.
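A minimal sketch of that core idea (my illustration of the general technique, not any particular router's implementation): many private (address, port) pairs share one public address, distinguished by translated ports. This is also why an outside host can't initiate a connection inward without an existing table entry.

```python
# Minimal NAT (port translation) sketch: many private addresses share
# one public address, distinguished by translated source ports.
class Nat:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}   # (private_ip, private_port) -> public_port

    def outbound(self, private_ip: str, private_port: int):
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.next_port += 1
        # The packet leaves rewritten with the shared public address.
        return (self.public_ip, self.table[key])

nat = Nat("203.0.113.7")
print(nat.outbound("192.168.1.10", 51000))  # ('203.0.113.7', 40000)
print(nat.outbound("192.168.1.11", 51000))  # ('203.0.113.7', 40001)
```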
Also, look at the time: if you had a LAN in the late 80s, it was something like Banyan Vines or Netware IPX (neither of which was routable originally), for local comms between local systems. Any internet/external network requirements were (again) for moving data between disparate locations. The idea that a workstation needed its own direct internet/non-local access (to reach what, exactly?) really didn't make sense. It would comm with a local data source (a mainframe, IBM 360, etc.), and that system would manage retrieving or syncing data from elsewhere. A workstation was largely a dumb terminal before PCs (other than actual "workstations", which are a different animal).