nanog mailing list archives

Re: scaling linux-based router hardware recommendations


From: Baldur Norddahl <baldur.norddahl () gmail com>
Date: Wed, 28 Jan 2015 16:35:52 +0100

10g transceivers are not overly expensive if you buy compatible modules.

SFP+ Direct attach cable is $16.
SFP+ multimode module is $18.
SFP+ singlemode LR module is $48.

That is nothing compared to what vendors are asking for a "real" router.

I believe there are many startups that are going for 2x 10G transit with
full tables. We are one of them for sure. And then you need a cheap way to
handle up to 20G of bidirectional traffic, because as a startup it is not a
good idea to fork over what amounts to a whole year of salary to Cisco or
Juniper. Even if you have that kind of money, you would want to spend it on
something that will get you revenue.

The obvious solution is a server (or two for redundancy) running Linux or
BSD. You will be getting an Intel NIC with two SFP+ slots, so you can
connect a transit connection directly to each server.

This works well enough. We used a setup just like that for a year, before
we upgraded to a hardware router. The weak point is that it will likely
have trouble if you get hit by a really big DDoS with small packets.
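
For a sense of scale, the small-packet worst case works out roughly like
this (a back-of-envelope sketch only; the frame and overhead sizes are the
usual Ethernet minimums, not measurements from our setup):

# Back-of-envelope: worst-case packet rate on a 10G link during a
# small-packet DDoS, assuming minimum-size Ethernet frames (64 bytes)
# plus the 20 bytes of preamble/SFD and inter-frame gap on the wire.

LINK_BPS = 10e9           # 10 Gbit/s
FRAME_BYTES = 64          # minimum Ethernet frame
OVERHEAD_BYTES = 20       # 8 B preamble/SFD + 12 B inter-frame gap

pps = LINK_BPS / ((FRAME_BYTES + OVERHEAD_BYTES) * 8)

print(f"{pps / 1e6:.2f} Mpps per 10G port")    # ~14.88 Mpps
print(f"{2 * pps / 1e6:.2f} Mpps for 2x 10G")  # ~29.76 Mpps

That is the rate a software router has to survive in the worst case, and
it is where the trouble starts.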

But back to the cost of things. If I use my own company as an example, we
are an FTTH provider. We use PON switches with 2x 10G ports on each switch.
You can get many PON switches for the price of one router with at least 4x
10G ports (equivalent to the Linux routers). The PON switches will earn you
revenue; they are what you connect your customers to. Better to build a
bigger network than to spend the money on a router.

The cost of SFP+/XFP and GPON C+ modules on the PON switch is only about
10% of the cost of the switch itself (again using compatible modules).

A switch with 24x 1G and 4x 10G ports can be bought for $3000. You can fill
it completely with optics for $300 - again about 10%.

My point is that if you are in an environment where every dollar counts,
you do not need to spend a majority of your funds on optics. And neither do
you need that expensive router until later in the game.

Regards,

Baldur





On 28 January 2015 at 15:35, Charles N Wyble <charles () thefnf org> wrote:

There is no free lunch. If you want "tools that end users can just use",
then buy Cisco.

Otherwise you need to roll up your sleeves and take the pieces and put
them together. Or hire people like me to do it for you.

It isn't overly complicated in my opinion. Also you'll find plenty of
reasonably priced Linux or BSD integration engineers out there across the
globe who are used to doing this sort of thing.

Now once you move beyond basic forwarding / high-PPS processing (which
seems mostly commodity now) and get into, say, 80 Gbps (40 Gbps full
duplex) IPS, IP reputation, data loss prevention, SSL MITM, AV... well,
that requires some very beefy hardware. Can that be done on x86? I doubt it.

Tilera seems the way to go here. Newer FPGA boards can implement various
CPU architectures on the fly. You also have CUDA. I hadn't seen Chelsio;
I'm very excited about that. I'll have one in my grubby little hands soon
enough.

Transceivers are still horribly expensive. This is a major portion of the
BOM cost on any build, no matter what software stack is putting packets
onto them.

It isn't so simple once you move beyond the 1 Gbps range and want a full
feature set. And not in one box, I think. Look at https://www.bro.org/ for
interesting multi-box scaling.

On January 28, 2015 7:02:34 AM CST, "Paul S." <contact () winterei se> wrote:
That's the problem though.

Everyone has presentations for the most part; very few actual tools that
end users can just use exist.

On 1/28/2015 8:02 PM, Robert Bays wrote:
On Jan 27, 2015, at 8:31 AM, Jim Shankland <nanog () shankland org> wrote:

My expertise, such as it ever was, is a bit stale at this point, and my
figures might be a little off. But I think the general principle applies:
think about the minimum number of x86 instructions, and the minimum number
of main memory accesses, to inspect a packet header, do a routing table
lookup, and enqueue the packet on an outbound interface. I can't see that
ever getting reduced to the point where a generic server can handle
40-byte packets at line rate (for that matter, "line rate" is increasing a
lot faster than "speed of generic server" these days).
Using DPDK it’s possible to do everything stated and achieve 10 Gbps line
rate at 64-byte packets on multiple interfaces simultaneously. Add ACLs to
the test setup and you can reach significant portions of 10 Gbps at
64-byte packets and full line rate at 128 bytes.

Check out Venky Venkatesan’s presentation at the last DPDK Summit for
interesting information on pps/CPU cycles and some of the things that
can be done to optimize forwarding in a generic processor environment.
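
For a rough feel of the cycle budget those numbers are working against
(assuming a single 3 GHz core and a ~100 ns trip to DRAM; ballpark figures,
not taken from the presentation):

# Ballpark per-packet cycle budget at 10G line rate with 64-byte frames,
# on a single core. Illustrative assumptions only.

CORE_HZ = 3.0e9           # assumed 3 GHz core
LINE_RATE_PPS = 14.88e6   # 64-byte frames at 10 Gbit/s incl. preamble + IFG
DRAM_LATENCY_NS = 100     # rough main-memory access latency

cycles_per_packet = CORE_HZ / LINE_RATE_PPS
dram_miss_cycles = DRAM_LATENCY_NS * 1e-9 * CORE_HZ

print(f"~{cycles_per_packet:.0f} cycles per packet on one core")  # ~202
print(f"~{dram_miss_cycles:.0f} cycles per trip to main memory")  # ~300

# A single cache miss to DRAM already exceeds the per-packet budget on
# one core, which is why DPDK leans on batching, prefetching and
# spreading queues across cores.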



http://www.slideshare.net/jstleger/6-dpdk-summit-2014-intel-presentation-venky-venkatesan





--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


