nanog mailing list archives

Re: Will a single /27 get fully routed these days?


From: Sander Steffann <sander () steffann nl>
Date: Sun, 26 Jan 2014 08:56:16 +0100

Hi Owen,

On 26 Jan 2014, at 05:36, Owen DeLong <owen () delong com> wrote:

On Jan 25, 2014, at 13:59, Sander Steffann <sander () steffann nl> wrote:

Hi,

[…] But when that happens, ARIN will only have the 'Dedicated IPv4 block to facilitate IPv6 Deployment' [1] left, 
and it will use 'a minimum size allocation of /28 and a maximum size allocation of /24' for that block. The block 
is meant for things like dual-stacked DNS servers, NAT64 and other IPv6 deployments where a bit of IPv4 is still 
necessary.

I wonder how reachable those systems will be... Will people adjust their filters, or will most usage of this block 
(and thereby all new entrants in the ISP market in the ARIN region) just be doomed?

That actually may not be the best question. That block will come from within a specific prefix, and I suspect that 
ISPs and the like will adjust their filters FOR THAT PREFIX.

Same question… Will people adjust their filters (even if only for that prefix)? All over the world? I think 'will 
adjust their filters for XYZ' is highly optimistic, but let's hope it works; otherwise the ISPs in the ARIN region 
will have a problem. (Or maybe not: existing ISPs (for whom a /2[4-8] is not a significant amount) might not mind if a 
new competitor only gets a /2[5-8] that they cannot route globally. But I really hope it doesn't come to that.)
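
For scale: the policy carves these allocations out of a /10, so some quick (purely illustrative) Python arithmetic 
on how many of them fit:

    # capacity of a /10 reserve at the policy's minimum and maximum allocation sizes
    smallest_allocations = 2 ** (28 - 10)  # 262,144 possible /28s
    largest_allocations  = 2 ** (24 - 10)  # 16,384 possible /24s
    print(smallest_allocations, largest_allocations)

So the block is tiny from an existing ISP's point of view, but it could serve a lot of new entrants if it stays 
routable.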

But more importantly: which /10 is set aside for this? It is not listed on https://www.arin.net/knowledge/ip_blocks.html
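
If operators do carve out an exception for that prefix, the filter logic would amount to something like this rough 
Python sketch (purely illustrative: ipaddress is from the standard library, Python 3.7+ for subnet_of, and 
192.0.2.0/24 below is just a documentation-prefix stand-in, not the real block):

    import ipaddress

    def accept_route(prefix, dedicated_block, default_max=24, dedicated_max=28):
        # Accept prefixes up to /24 anywhere, but allow up to /28 only
        # inside the dedicated IPv6-deployment block.
        net = ipaddress.ip_network(prefix)
        if net.subnet_of(dedicated_block):
            return net.prefixlen <= dedicated_max
        return net.prefixlen <= default_max

    # stand-in for the real block, which isn't published on that page
    block = ipaddress.ip_network("192.0.2.0/24")
    print(accept_route("192.0.2.16/28", block))     # True: /28 inside the block
    print(accept_route("198.51.100.16/28", block))  # False: /28 anywhere else
    print(accept_route("198.51.100.0/24", block))   # True: /24 is fine everywhere

In other words, the extra reach only materialises if every network that filters on prefix length adds (and keeps) 
that one exception.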

Consider the possibility of a policy change that allows the transfer of smaller blocks. (Current ARIN policy limits 
this to a /24 minimum, but ARIN policy is not immutable; we have a policy development process so that anyone who 
wants to can start the process of changing it.)

I’m well aware of that, but I’ll stick to RIPE policies for now :-)

Cheers,
Sander


