nanog mailing list archives

Re: private ip addresses from ISP


From: Robert Bonomi <bonomi () mail r-bonomi com>
Date: Tue, 23 May 2006 09:47:01 -0500 (CDT)



Date: Tue, 23 May 2006 09:36:30 -0400
To: nanog () nanog org
From: Daniel Senie <dts () senie com>
Subject: Re: private ip addresses from ISP


At 09:22 AM 5/23/2006, Robert Bonomi wrote:

Date: Tue, 23 May 2006 03:33:34 -0400
From: Richard A Steenbergen <ras () e-gerbil net>
To: nanog () nanog org
Subject: Re: private ip addresses from ISP


On Mon, May 22, 2006 at 04:30:37PM -0400, Andrew Kirch wrote:

  3) You are seeing packets with source IPs inside private space arriving
at your interface from your ISP?
...
Sorry to dig this up from last week but I have to strongly disagree with
point #3.
From RFC 1918
   Because private addresses have no global meaning, routing information
   about private networks shall not be propagated on inter-enterprise
   links, and packets with private source or destination addresses
   should not be forwarded across such links. Routers in networks not
   using private address space, especially those of Internet service
   providers, are expected to be configured to reject (filter out)
   routing information about private networks.

The ISP shouldn't be "leaving" anything to the end-user; these packets
should be dropped as a matter of course, along with any routing
advertisements for RFC 1918 space (from #1). ISPs who leak 1918 space
into my network piss me off, and get irate phone calls for their
trouble.

The section you quoted from RFC1918 specifically addresses routes, not
packets.

I quote, from the material cited above:
      "  ..., and packets with private source or destination addresses
       should not be forwarded across such links.  ...  "

There are some types of packets that can legitimately have RFC1918 source
addresses -- 'TTL exceeded', for example -- that one should allow across
network boundaries.

Really? You really want TTL-E messages with RFC1918 source addr? Even 
if they're used as part of a denial of service attack? Even though 
you can't tell where they actually came from?

"Can be" is not sufficient (in and of itself, that is) reason to block. 
_Anything_ "can be" used as part of a DOS attack.

TTL-E messages _do_ have a legitimate function in network management.
TTL-E messages _can_ originate from RFC1918 space, addressed to 'public
internet' addresses.  Usefully, and meaningfully.  Ever hear of 'traceroute'?
Ever use it where packets went across a network using RFC1918 internally?
Ever had a route die _between_ two RFC1918-addressed nodes on somebody else's
network?

If you don't like that example, substitute "host/network unreachable", or
'ICMP redirect'.  Or 'fragmentation needed' for a packet with the DF
(don't-fragment) bit set.  If you don't get those messages back, you can't
communicate.
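
As an illustration only (not part of the original exchange), here is a minimal
Python sketch of the traceroute point, using invented hop addresses: hops inside
a provider that numbers its internal links from RFC1918 space answer with
TTL-exceeded packets sourced from those addresses, and blanket-dropping every
RFC1918-sourced packet at the boundary erases them from the trace.

import ipaddress

RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr):
    """True if addr falls inside one of the three RFC1918 blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

# Hypothetical trace: (hop number, source address of the TTL-exceeded reply).
# 203.0.113.x and 198.51.100.x are documentation ranges standing in for
# public addresses.
hops = [
    (1, "192.168.1.1"),    # customer CPE
    (2, "10.20.30.1"),     # provider's internal, RFC1918-numbered hop
    (3, "203.0.113.17"),   # provider's publicly numbered router
    (4, "198.51.100.9"),   # a router in the next AS
]

for hop, src in hops:
    tag = "RFC1918 source" if is_rfc1918(src) else "public source"
    print("hop %2d  %-15s  (%s)" % (hop, src, tag))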

         If you're receiving RFC1918 *routes* from anyone, you need to
thwack them over the head with a cluebat a couple of times until the cluey
filling oozes out. If you're receiving RFC1918 sourced packets, for the
most part you really shouldn't care.

*I* care.

When those packets contain 'malicious' content, for example.

When the provider =cannot= tell me which of _their_own_customers_ originated
that attack, for example.  (This provider has inbound source-filtering on
their Internet 'gateway' routers, but *not* on their customer-facing
equipment, either inbound or outbound.)

So you really don't want ANY packets with RFC 1918 source addresses 
then, not even ICMP TTL-E messages, since they could be used in a 
malicious fashion, and you would not be able to determine the true origin.

You need to learn to read.  I said "malicious content", not 'used in a
malicious fashion'.

Packets that could have legitimate meaning should be passed on.

Packets that _cannot_ have legitimate meaning should not be.

It's even more comical when the NSP uses RFC1918 space internally, and does
*not* filter those source addresses from their customers.

You mean like Comcast using Cisco routers in their head-ends and 
having 10/8 addresses show up in traceroutes and so forth? 

Not at all.  You're either ignorant of network architecture, or trying
to pick fights.

I was talking about a situation where a customer machine of a network
that uses RFC1918 addresses internally starts sending malicious packets
with an RFC1918 source address that _matches_ one of the *in*use*
addresses on the service-provider network, AND the service provider does
not do ingress (from the customer) or egress (to the customer) filtering
of RFC1918 address-space.  Customer A's machine starts attacking Customers
B, C, D, E, F ...; those ill-informed customers don't null-route traffic
destined to RFC1918 addresses, so their replies converge on the provider's
own in-use address: an 'inadvertent' smurf-style attack on the NSP resource.
It's comical because the ISP's 'bad practice' facilitated the attack on
the ISP.

"Traceroute", by the way, *is* one of those 'legitimate' cases for which 
RFC1918 source-address packets should be allowed across network boundaries.

Proper "good net neighbor" egress filtering of RFC1918 source addresses 
takes a number of separate rules.  Several 'allows', followed by a default
'deny'.
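
As a sketch of that rule ordering (not the poster's actual filter; the Packet
type, field names, and the exact set of allowed ICMP types are illustrative
assumptions), in Python:

import ipaddress
from dataclasses import dataclass

RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

@dataclass
class Packet:
    src: str             # source IP address
    proto: str           # "icmp", "tcp", "udp", ...
    icmp_type: int = -1  # ICMP type, meaningful only when proto == "icmp"

def rfc1918_source(pkt):
    ip = ipaddress.ip_address(pkt.src)
    return any(ip in net for net in RFC1918)

# ICMP types that can still be meaningful with an RFC1918 source:
#   11 = time exceeded (traceroute), 3 = destination unreachable
#   (code 4 of which is 'fragmentation needed', used by path-MTU discovery).
# Exactly which types to allow is a policy choice; this set is illustrative.
ALLOWED_ICMP_TYPES = {3, 11}

def egress_permits(pkt):
    """Walk the rule list in order: specific allows, then a default deny."""
    if not rfc1918_source(pkt):
        return True                   # these rules only concern RFC1918 sources
    if pkt.proto == "icmp" and pkt.icmp_type in ALLOWED_ICMP_TYPES:
        return True                   # allow: ICMP that traceroute/PMTUD needs
    return False                      # default deny for anything else from RFC1918 space

# A TTL-exceeded reply from an internal hop passes; spoofed TCP from 10/8 does not.
print(egress_permits(Packet("10.1.2.3", "icmp", 11)))   # True
print(egress_permits(Packet("10.1.2.3", "tcp")))        # False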

                                                             Not sure 
to what degree it's the NSP's fault vs. the router vendors', but yes.

Can't imagine why you think it might be the fault of the router vendor.

Can't imagine why you think it is somebody's "fault".

                                     There are semi-legitimate reasons for
packets with those source addresses to float around the Internet, and
they don't hurt anything.

I guess you don't mind paying for transit of packets that _cannot_possibly_
have any legitimate purpose on your network.

Along with this goes the usual flamewar over RFC 2827, ingress 
filtering (of which URPF is a subset implementation).

If everybody did _egress_ filtering of 'cannot possibly be legitimate' traffic,
ingress filtering of that traffic would not be necessary.  Unfortunately, not
everybody does, so ingress filtering _is_ necessary.

'ingress' filtering is self-defense.
'egress' filtering is about 'being a good net neighbor'.
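
For illustration (a simplified model, not vendor code; the routing table and
interface names are invented), strict-mode uRPF, the "subset implementation"
of RFC 2827 mentioned above, accepts a packet only when the best route back to
its source points out the interface the packet arrived on:

import ipaddress

# (prefix, interface) pairs; the longest matching prefix wins, as in a real FIB.
ROUTES = [
    (ipaddress.ip_network("0.0.0.0/0"),       "uplink0"),    # default route
    (ipaddress.ip_network("198.51.100.0/24"), "cust-eth1"),  # the customer's assigned block
]

def reverse_path_interface(src):
    """Interface of the longest-prefix route covering the source address."""
    ip = ipaddress.ip_address(src)
    matches = [(net, ifc) for net, ifc in ROUTES if ip in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def urpf_strict_accept(src, arrived_on):
    return reverse_path_interface(src) == arrived_on

# On the customer-facing port, a packet from the customer's own block passes;
# a spoofed RFC1918 (or any foreign) source is dropped, since the route back
# to it points at the uplink instead.
print(urpf_strict_accept("198.51.100.7", "cust-eth1"))  # True
print(urpf_strict_accept("10.9.9.9",     "cust-eth1"))  # False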

Some of us, on the other hand, _do_ object.

And some of us pay for bandwidth, care about congestion problems 
caused by useless traffic, etc. Perhaps it makes the case a lot 
clearer for selling "better than equal" service to the highest bidder 
if your network is overrun with undesired traffic.



