nanog mailing list archives

Re: IPv6 in enterprise best practices/white papers


From: Karl Auer <kauer () biplane com au>
Date: Wed, 30 Jan 2013 21:24:20 +1100

On Wed, 2013-01-30 at 09:39 +0200, Jussi Peltola wrote:
> High density virtual machine setups can have 100 VMs per host.

OK, I see where you are coming from now.

Hm. If you have 100 VMs per host and 48 hosts on a switch, methinks you
should probably invest in the finest switches money can buy, and they
will have no problem tracking that state. While it is certainly a hefty
dose *more*, it grows linearly, not exponentially, so it's not a
scalability issue IMHO. An ordinary old IPv4 switch tracks 4800 L2/port
mappings in the scenario you describe. If each VM had just two
addresses, it would be 9600...
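For concreteness, the arithmetic above is just a back-of-envelope product (the 48-host/100-VM figures are from the thread; the function name is my own):

```python
def table_entries(hosts: int, vms_per_host: int, addrs_per_vm: int) -> int:
    """Rough count of L2/neighbour entries a top-of-rack switch must
    track. The point: growth is linear in each factor, not exponential."""
    return hosts * vms_per_host * addrs_per_vm

print(table_entries(48, 100, 1))  # 4800, the scenario as described
print(table_entries(48, 100, 2))  # 9600, with two addresses per VM
```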

I wonder if there is hard information out there on how many multicast
groups (MLD snooping state, say) a modern switch can actually maintain.
Are you saying you have seen actual failures due to this, or are you
supposing? Serious question - is this a possible problem or an actual
problem?
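The reason extra addresses mean extra multicast state at all: per RFC 4291, every IPv6 unicast address implies membership in a solicited-node multicast group (ff02::1:ff00:0/104 plus the address's low 24 bits), which an MLD-snooping switch then has to track. A minimal sketch of the mapping, using only the standard library:

```python
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """Solicited-node multicast group for a unicast IPv6 address:
    ff02::1:ff00:0 with the low 24 bits of the address OR'd in.
    Each distinct low-24-bit suffix a host configures is another
    group it joins - hence another snooping entry on the switch."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return ipaddress.IPv6Address(base | low24)

print(solicited_node("2001:db8::1:2:3:4"))  # ff02::1:ff03:4
```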

> multicast groups - some virtual hosters give /64 per VM, which brings
> about all kinds of trouble not limited to multicast groups if the client
> decides to configure too many addresses to his server.

There is always some way to misconfigure a network to cause trouble.
It's a bit unfair to make that IPv6's fault.

As a matter of interest, what is the "all kinds of trouble" that a
client can cause by configuring too many addresses on their server?
Things that are not client-side problems, obviously ;-)

Regards, K.


-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Karl Auer (kauer () biplane com au)
http://www.biplane.com.au/kauer
http://www.biplane.com.au/blog

GPG fingerprint: B862 FB15 FE96 4961 BC62 1A40 6239 1208 9865 5F9A
Old fingerprint: AE1D 4868 6420 AD9A A698 5251 1699 7B78 4EEE 6017



