nanog mailing list archives

Re: Rack rails on network equipment


From: Mel Beckman <mel () beckman org>
Date: Mon, 27 Sep 2021 22:05:29 +0000

I think the primary issue for front- vs rear-mounted switches is cooling. As long as you use switches that can pull 
cooling air from either the front or the back, it’s feasible to mount the TOR switches in the back.

For example, I think these are the parts I used to order for Cisco Catalyst 3850-48XS switches:

FAN-T3-R= Fan module front-to-back airflow for 48XS

FAN-T3-F= Fan module back-to-front airflow for 48XS

But if the switch is hardwired to pull cooling air from the front, it’s going to be taking in hot air, not cold air, 
which could lead to overheating.

As far as rail mounting time goes, it’s just not enough of a time factor to outweigh more important factors such as switch feature set, management architecture, or performance. Dell is pretty much at the back of the line for all of those factors.

 -mel

On Sep 27, 2021, at 2:32 PM, Andrey Khomyakov <khomyakov.andrey () gmail com> wrote:

Folks,

I had kind of started to doubt my perception of our failure rates (we don't officially calculate them) until Mel provided this:
"That’s about the right failure rate for a population of 1000 switches. Enterprise switches typically have an MTBF of 
700,000 hours or so, and 1000 switches operating 8760 hours (24x7) a year would be 8,760,000 hours. Divided by 12 
failures (one a month), yields an MTBF of 730,000 hours." At least I'm not crazy and our failure rate is not abnormal.
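Mel's arithmetic can be sketched in a few lines. The numbers are the hypothetical ones from the thread (1000 switches running 24x7, roughly one swap per month), not a measured dataset:

```python
# Sketch of the MTBF estimate quoted above: total fleet device-hours
# per year divided by observed failures per year.
switches = 1000
hours_per_year = 8760        # 24x7 operation
failures_per_year = 12       # roughly one swap per month

fleet_hours = switches * hours_per_year    # 8,760,000 device-hours
mtbf = fleet_hours / failures_per_year     # observed MTBF estimate

print(f"fleet hours: {fleet_hours:,}")         # 8,760,000
print(f"observed MTBF: {mtbf:,.0f} hours")     # 730,000
```

That observed figure lines up with the ~700,000-hour vendor MTBF claim, which is the point: a swap or two a month across 1000 switches is normal, not a sign of a bad fleet.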

I really don't buy the claimed lack of failures over 15 years of operation, or whatever crazy long period that is longer than a standard depreciation cycle in an average enterprise. I operated small colo cages with a handful of Cisco Nexus switches - something would fail at least once a year. I operated small enterprise data centers with 5-10 rows of racks - something most definitely fails at least once a year. Fun fact: there was a batch of switches with the Intel Atom C2000 clock-signal bug. Remember that one a couple of years ago? The whole industry was swapping out switches like mad over a span of a year or two... While I admit that's an abnormal event, the quick rails definitely made our efforts a lot less painful.

It's also interesting that several folks dismissed the need for toolless rails because switching to them would not save much time compared to recabling the switch. That completely ignores that recabling has to happen regardless of the kind of rail kit, i.e. it's not a data point in and of itself. And since we brought up the time it takes to recable a switch at replacement time, how is tacking on more time to deal with the rail kit a good thing? You have a switch hard down and you are running around looking for a screwdriver and a bag of screws. Do we truly take that as a satisfactory way to operate? Screws run out, the previous tech misplaced the screwdriver, the screw was too tight and you stripped it while undoing it, etc., etc...

Finally, another interesting point was brought up about having to rack the switches in the back of the rack vs the front. In an average rack we have about 20-25 servers, each consuming at least 3 ports (two data ports for redundancy and one for idrac/ilo), and sometimes even more than that. Racking the switch with ports facing the cold aisle then means routing 60 to 75 patches from the back of the rack to the front. All of a sudden the cables need to be longer, heavier, and harder to manage. Why would I want to face my switch ports into the cold aisle when all my connections are in the hot aisle? What am I missing?
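The back-of-envelope patch count works out like this (assumed numbers from the paragraph above: 20-25 servers per rack, 3 ports each):

```python
# Per-rack patch count: two data ports plus one idrac/ilo port per server.
servers_low, servers_high = 20, 25
ports_per_server = 3

low = servers_low * ports_per_server      # 60 patches
high = servers_high * ports_per_server    # 75 patches
print(f"{low} to {high} patches per rack")
```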

I went back to a document my DC engineering team produced when we asked them to evaluate Mellanox switches from their point of view, and they report that it takes one person one minute to install a Dell switch, from cutting open the box to applying power. It took them two people and 15 minutes (hence my 30-minute statement: 2 people x 15 minutes = 30 person-minutes) to install a Mellanox switch on traditional rails (it was a full-width switch, not the half-RU one). Furthermore, they had to install the rails in reverse and load the switch from the front of the rack, because with 0-U PDUs in place the racking "ears" prevent the switch from going in or out of the rack from the back.

The theme of this whole thread kind of makes me sad, because summarizing it in my head comes off as "yeah the current 
rail kit sucks, but not enough for us to even ask for improvements in that area." It is really odd to hear that most 
folks are not even asking for improvements to an admittedly crappy solution. I'm not suggesting making the toolless 
rail kit a hard requirement. I'm asking why we, as an industry, don't even ask for that improvement from our vendors. 
If we never ask, we'll never get.

--Andrey


On Mon, Sep 27, 2021 at 10:57 AM Mel Beckman <mel () beckman org> wrote:
That’s about the right failure rate for a population of 1000 switches. Enterprise switches typically have an MTBF of 
700,000 hours or so, and 1000 switches operating 8760 hours (24x7) a year would be 8,760,000 hours. Divided by 12 
failures (one a month), yields an MTBF of 730,000 hours.

 -mel

On Sep 27, 2021, at 10:32 AM, Doug McIntyre <merlyn () geeks org> wrote:

On Sat, Sep 25, 2021 at 12:48:38PM -0700, Andrey Khomyakov wrote:
We operate over 1000 switches in our data centers, and hardware failures
that require a switch swap are common enough where the speed of swap starts
to matter to some extent. We probably swap a switch or two a month.
...

This level of failure surprises me. While I can't say I have 1000
switches, I do have hundreds of switches, and I can think of a failure
of only one or two in at least 15 years of operation. They tend to be
pretty reliable, and have to be swapped out for EOL more than anything.


