> Only the edge equipment would need to be IPv4+ aware.
"Only"? That's still the networking stack of every desktop, laptop, phone, printer, room presentation device, IoT thing-y. Also every firewall device. Then recompile every application to use the new data structures with more bits for addresses.
And let's not forget you have to update all the DNS code, because A records are hardcoded to 32 bits, so you need a new record type, plus a mechanism to deal with getting both long and short addresses in the reply (e.g., Happy Eyeballs). Then how do you deal with a service that only has an "IPv4+" address but application code that only speaks plain IPv4?
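For concreteness, here's roughly what that hardcoding looks like on the wire (a simplified sketch; the struct names are mine, but the sizes are per RFC 1035 and RFC 3596):

    /* The A record's RDATA is exactly 4 octets, so a longer address
       simply does not fit; any "IPv4+" needs a new RR type, just as
       IPv6 needed AAAA. */
    #include <stdint.h>

    struct dns_a_rdata {
        uint8_t addr[4];    /* 32 bits, full stop */
    };

    struct dns_aaaa_rdata {
        uint8_t addr[16];   /* the new record type IPv6 had to introduce */
    };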
Basically all the code and infrastructure that needed to be updated and deployed for IPv6 would have to be done for IPv4+.
But the desktop/laptop/phone/printer was the EASIEST thing to change in that 30-year history. And it would have been the easiest thing to demand from a vendor via a change request.
Yes: but the process would have been exactly the same whether for a hypothetical IPv4+ or the IPng/IPv6 that was decided on; pushing new code to every last corner of the IP universe.
How could it have been otherwise given the original network structures were all of fixed lengths of 32 bits?
If we have IPv4 address 1.2.3.4, and the hypothetical IPv4+ adds 1.2.3.4.1.2.3.4 (or longer), how would an IPv4-only router handle 1.2.3.4.1.2.3.4? If an IPv4-only host or application gets a DNS response with 1.2.3.4.1.2.3.4, how is it supposed to use it?
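To make the "fixed lengths" point concrete, this is the classic IPv4 header layout (a simplified sketch per RFC 791; the field names are mine):

    #include <stdint.h>

    struct ipv4_header {
        uint8_t  version_ihl;     /* version (4 bits) + header length (4 bits) */
        uint8_t  tos;
        uint16_t total_length;
        uint16_t identification;
        uint16_t flags_fragment;
        uint8_t  ttl;
        uint8_t  protocol;
        uint16_t checksum;
        uint32_t src_addr;        /* exactly 32 bits */
        uint32_t dst_addr;        /* exactly 32 bits */
        /* options may follow, but the address fields cannot grow */
    };

An unmodified router parses src_addr and dst_addr at fixed offsets; extra address octets would land in (or shift) whatever it thinks the payload is.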
As I see it, the transition mechanism for some IPv4+ that 'only' has longer addresses is exactly the same as for IPv6: new code paths that use new data structures, with a gradual rollout with tech refreshes and code updates where hosts slowly go from IPv4-only to IPv4-and-IPv4+ at different rates in different organizations.
If you think it's somehow different, can you explain how? What proposal on the table (especially when IPng was being decided in the 1990s) would have allowed for a transition different from the one described above (a gradual, uncoördinated rollout)?
The proposal is that IPv4+ would be interpretable as an IPv4 packet. Either the IP header is extended, or we add another protocol layer for the IPv4+ bits (IPv4+ is another envelope for the user payload).
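Something like this, as a purely hypothetical sketch (nothing here is a real protocol; the names and the use of an experimental protocol number are my invention):

    #include <stdint.h>

    /* 253/254 are reserved for experimentation (RFC 3692) */
    #define IPPROTO_V4PLUS 253

    /* Rides inside an ordinary IPv4 packet, after the normal header.
       Legacy routers forward on the outer 32-bit addresses and never
       look inside. */
    struct v4plus_ext {
        uint8_t  next_protocol;  /* the real payload protocol, e.g. TCP */
        uint8_t  reserved;
        uint32_t src_ext;        /* extra source address bits (assumed 32 here) */
        uint32_t dst_ext;        /* extra destination address bits */
    };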
DNS works like it does today: A and AAAA records for IPv4 and IPv4+ respectively.
Core routers do not need to know about IPv4+, and might never know.
The transition is similar to 6to4. The edge router does translation to allow IPv4+ hosts to connect to IPv4 hosts. IPv4 hosts are unable to connect to IPv4+ hosts directly (only via NAT). So it has a problem similar to IPv6's: you ideally want all servers to have a full IPv4 address.
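A rough sketch of that edge decision, assuming the hypothetical extension encoding above (all names invented, stubs standing in for real NAT and forwarding machinery):

    #include <stdint.h>

    struct v4plus_pkt {
        uint32_t outer_dst;  /* legacy 32-bit destination */
        uint32_t dst_ext;    /* extra address bits; 0 = plain IPv4 host */
    };

    /* Stubs in place of the real NAT/forwarding machinery. */
    static void strip_v4plus_layer(struct v4plus_pkt *p)      { p->dst_ext = 0; }
    static void nat_source_to_public_v4(struct v4plus_pkt *p) { (void)p; }
    static void forward_ipv4(struct v4plus_pkt *p)            { (void)p; }

    static void edge_route(struct v4plus_pkt *p) {
        if (p->dst_ext == 0) {
            /* IPv4-only destination: fall back to today's NAT behavior. */
            strip_v4plus_layer(p);
            nat_source_to_public_v4(p);
        }
        /* Either way the outer packet is plain IPv4, so unmodified core
           routers forward it without knowing the extension exists. */
        forward_ipv4(p);
    }

    int main(void) {
        struct v4plus_pkt p = { .outer_dst = 0x08080808u, .dst_ext = 0 };
        edge_route(&p);  /* extension stripped, source NATed, then forwarded */
        return 0;
    }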
What you don't have is a completely parallel addressing system, a requirement to upgrade all routers (only edge routers for 4+ networks), a requirement for your ISP to cooperate (they can just give you an IPv4 address and you handle IPv4+ with your own router), or a need for clients to run two stacks at once.
It's essentially a better NAT, one where the clients behind other NATs can directly connect, and where the NAT gradually disappears completely.
If you hand UTF-8 that actually uses anything beyond ASCII to something that can only render ASCII, the text will be garbled. People can read garbled text okay if it's a few missing accented characters in a Western language, but it's no good for Japanese or Arabic.
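A tiny demo of that failure mode (assuming an ASCII-only renderer that substitutes '?' for any byte it can't handle):

    #include <stdio.h>

    int main(void) {
        const unsigned char utf8[] = { 0xC3, 0xA9, 0 };   /* "é" in UTF-8 */
        for (const unsigned char *p = utf8; *p; p++)
            putchar(*p < 0x80 ? *p : '?');                /* ASCII-only renderer */
        putchar('\n');  /* prints "??": the character is lost, not approximated */
        return 0;
    }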
In networking terms, this is like a protocol which can reach ipv4 hosts fine but randomly loses packets to ipv4+ hosts depending on what they pass through. Who would adopt a networking technology that fails randomly?
v6 has nearly 3 billion users. How is that abysmal?
We've never done something like the v4->v6 migration before, on this sort of scale. It's not clear what the par time for something like this is. Maybe 30 years is a normal amount of time for it to take?
HTTP->HTTPS was this kind of scale, and it was smooth because they changed as little as possible while also being very careful about default behaviors.
3 billion people sorta use ipv6, but not really, cause almost all of those also rely on ipv4 and no host can really go ipv6-only. Meanwhile, many sites are HTTPS-only.
And because it's a layer 7 thing, it only required updating the server and client software, not the OS... and only the client and server endpoints, not the routers in between... and we only have two browser vendors, who between them can push the ecosystem around, and maybe half a dozen relevant web server daemons.
Layer 3 of the Internet is the one that requires support in all software and on all routers in the network path, and those are run by millions of people in hundreds of countries with no central entity that can force them to do anything.
HTTP->HTTPS is only similar in terms of number of users, not in terms of the deployment itself. The network effects for IP are much stronger than for HTTP.
They don't "sorta" use v6, they're properly using it, and you can certainly go v6-only. I'm posting from a machine with no v4. Also, if you want to go there: HTTPS was released before IPv6, and yet still no browser is HTTPS only, despite how much easier it is to deploy it.
I know they aren't very comparable in a technical way, but look at the mindset. IPv6 included decisions that knowingly made it more different from v4 than strictly needed, cause they wanted it to be perfect day 1. If they did HTTPS like this, it'd be tied to HTTP/2.
Most browsers now discourage plain HTTP with a warning. Any customer-facing server basically needs to use HTTPS now. And you're rare if you actually have no ipv4, not even via a tunnel.
The compromised "ipv4+" idea a bunch of people keep asking for wouldn't require changing the spec down the road. ISPs would just need to clean up their routes later, and SLAAC could still exist as an optional (rather than default) feature for anyone inclined to enable later. Btw, IPv6 spec was only finalized in 2017, wasn't exactly one-shot.
I don't know if HTTP's job is easier. Maybe on the client side, since there were never that many browsers, but you have load balancers, CDNs, servers, etc. HTTP/2 adoption is still dragging out because of how many random things don't support it, which might be a big reason gRPC isn't more popular, too.
> HTTP->HTTPS was this kind of scale, and it was smooth because they changed as little as possible while also being very careful about default behaviors.
HTTP->HTTPS is not equivalent in any way. The payload in HTTP and HTTPS is exactly the same; HTTPS simply adds a wrapper (e.g., stunnel can be used with an HTTP-only web server). Further, HTTP(S) lives only on the endpoints, and specifically in the application layer: your OS, switch, firewall, CPE, ISP router(s), etc., can all be left alone.
If you're not running a web browser or web server (e.g., FTP, SMTP, DNS, database), then there are zero changes that need to be made to any code on a system. This is not true for changing the number of bits in the address space: every piece of code that calls socket(), bind(), connect(), etc., has to be touched.
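E.g., the classic IPv4-only client pattern; every call site like this bakes the 32-bit sockaddr_in into the application (a sketch, error handling trimmed):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int connect_v4(const char *dotted_quad, uint16_t port) {
        struct sockaddr_in sa;   /* holds exactly 32 address bits */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) return -1;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port   = htons(port);
        if (inet_pton(AF_INET, dotted_quad, &sa.sin_addr) != 1 ||
            connect(fd, (struct sockaddr *)&sa, sizeof sa) != 0) {
            close(fd);
            return -1;
        }
        return fd;
    }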
Whereas the primary purpose of IPng was to expand the address space, which means your OS, switch, firewall, CPE, ISP router(s), etc, all have to be modified to handle more address bits in the Layer 3 protocol data unit.
Plus stuff at the application layer like DNS (since A records are 32-bit only, you need an entirely new record type): entirely new library functions had to be created (e.g., gethostbyname() replaced by getaddrinfo()).
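The family-agnostic replacement pattern looks like this (a sketch): getaddrinfo() returns whatever record types exist, and the caller tries each in turn, which is the crude sequential ancestor of Happy Eyeballs:

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int connect_any(const char *host, const char *service) {
        struct addrinfo hints, *res, *ai;
        int fd = -1;
        memset(&hints, 0, sizeof hints);
        hints.ai_family   = AF_UNSPEC;   /* v4 or v6: this code no longer cares */
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, service, &hints, &res) != 0) return -1;
        for (ai = res; ai != NULL; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0) continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0) break;  /* done */
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }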
I hear people say the IETF/IP Wizards of the 1990s should have "just" picked an IPng with a larger address space, but they don't explain how IPv4 and a hypothetical IPv4+ would actually interoperate. Instead of 1.1.1.1, a packet comes in with 1.1.1.1.1.1.1.1: how would a non-IPv4+ router know what to do with that? How would non-updated routers and firewalls handle longer addresses? How would non-updated DNS code handle new record types with >32 bits?
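For the parsing question specifically, existing code doesn't mis-handle a longer address; it rejects it outright. E.g.:

    #include <arpa/inet.h>
    #include <stdio.h>

    int main(void) {
        struct in_addr a;
        /* inet_pton() accepts only exactly four dotted octets for AF_INET */
        printf("%d\n", inet_pton(AF_INET, "1.1.1.1", &a));          /* 1: valid */
        printf("%d\n", inet_pton(AF_INET, "1.1.1.1.1.1.1.1", &a));  /* 0: rejected */
        return 0;
    }

And that's the friendly failure mode; a router parsing fixed header offsets (per the struct sketch above) wouldn't even get that far.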
HTTP->HTTPS looks easy in hindsight, but there were plenty of ways it could have gone wrong. They took the path of least resistance, unlike ipv6. I know they're different layers ofc.
To answer the last question, routers would need IPv4+ support, just like ipv6 which already happened. The key is it's much easier for users to switch after. No dual stack, you get the same address, routes, DNS, and middleboxes like NAT initially. ISPs can't hand out longer addrs like /40 until things like DNS are upgraded in-place to support that, but again those are pretty invisible changes throughout the stack.
> To answer the last question, routers would need IPv4+ support, just like ipv6 which already happened.
So exactly like IPv6: you need to roll out new code everywhere.
> The key is it's much easier for users to switch after. No dual stack, you get the same address, routes, DNS, and middleboxes like NAT initially. ISPs can't hand out longer addrs like /40 until things like DNS are upgraded in-place to support that, but again those are pretty invisible changes throughout the stack.
So exactly like IPv6: you need to roll out new code everywhere.
Would organizations have rolled out IPv4+ any differently than IPv6? Some early, some later, some questioning the need at all. It's the exact same coördination / herding-cats problem.
It's a simple toggle vs. asking orgs to redo their entire network. In both cases you need routers and network stacks to support the new packet format, but that isn't the hard part of ipv6; we already got there, and people still aren't switching.
Sorry, I'm still not seeing how an IPv4+ would be any less complicated than (or even as simple as) IPv6. In either case you would still have to:
* roll out new code everywhere
* enable the protocol on your routers
* get address block(s) assigned to you
* put those blocks into BGP
* enable the protocol on middleware boxes
* have translation boxes so new-protocol hosts can talk to old-protocol-only hosts
* enable the protocol on end hosts
And just because you do it does not mean anyone else would do so in the same timeframe (or ever). You're back in the chicken-and-egg problem of whether servers/services go first ("where are the clients?") or end-devices do ("where are the services?").
Redo all your addresses and routes, reconfigure or replace NAT and DHCP, reconfigure the firewall, and change your DNS entries, at minimum. If it's a home or small business and you don't want to fight the defaults, you go from NAT to NATless.
"Only"? That's still the networking stack of every desktop, laptop, phone, printer, room presentation device, IoT thing-y. Also every firewall device. Then recompile every application to use the new data structures with more bits for addresses.
And let's not forget you have to update all the DNS code because A records are hardcoded to 32-bits, so you need a new record type, and a mechanism to deal with getting both long and short addresses in the reply (e.g., Happy Eyeballs). Then how do you deal with a service that only has a "IPv4+" address but application code that is only IPv4-plain?
Basically all the code and infrastructure that needed to be updated and deployed for IPv6 would have to be done for IPv4+.