Interesting People mailing list archives

Re: Researchers explore scrapping Internet - Yahoo! News


From: David Farber <dave () farber net>
Date: Mon, 16 Apr 2007 05:40:14 -0400



Begin forwarded message:

From: "Standeford, Dugie" <dstandeford () warren-news com>
Date: April 16, 2007 3:36:45 AM EDT
To: dave () farber net
Subject: RE: [IP] Re: Researchers explore scrapping Internet - Yahoo! News

Dave:

Dugie Standeford here, European Correspondent for Warren Communications News.

The Time article mentions that the EU is also looking at the possibility of scrapping the Internet through the FIRE program. I wonder if anyone on the list is familiar with that program or knows any of the people working on it who might be willing to discuss it with me for a story.


________________________________

From: David Farber [mailto:dave () farber net]
Sent: Sun 4/15/2007 7:23 AM
To: ip () v2 listbox com
Subject: [IP] Re: Researchers explore scrapping Internet - Yahoo! News





Begin forwarded message:

From: Bob Frankston <Bob2-19-0501 () bobf frankston com>
Date: April 14, 2007 9:10:58 PM EDT
To: dave () farber net, ip () v2 listbox com
Cc: "'Bob Hinden'" <bob.hinden () nokia com>, dpreed () reed com, "Vinton
G. Cerf" <vint () google com>, Waclawsky John-A52165 <jgw () motorola com>
Subject: RE: [IP] Re: Researchers explore scrapping Internet - Yahoo!
News

I was going to pass on commenting on this, but in reading through the
release I saw it as a learning opportunity, and I also think David,
Vint and others may be reluctant to defend something that
actually is working rather well despite the problems.



While I'm an advocate of reinventing the Internet, the gist of the
story is a failure to understand why the Internet has become what it
is. It's akin to attempts to fix the US Constitution by getting
rid of the First Amendment because we now know what speech is good
and what is not.



What seems to be missing from these efforts is a protocol in
the spirit of the end-to-end, opportunity-creating approach that
defines the Internet, but one that does less in the network rather than more.
I often refer to the existing implementation as having training
wheels because of the dependency on a single backbone, which I call
"Internet Inc".



Projects like GENI that attempt to make the Internet work better miss
this point. What we need are protocols that make any subset of
connected nodes a first-class network. These systems can then connect in any
way without being dependent upon the particulars of the path or a
backbone.



Some of this is nascent in P2P and Skype. But there is a tendency to
build atop the current Internet, as with using the @ to extend the
address in email and SIP, rather than starting from the edge with
self-coined GUIDs, which can be stable. Think of something like the MAC
address in XNS, but far more general and distributed.
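A minimal sketch of what a self-coined, stable edge identifier might look like, assuming nothing more than a source of random bits (the class and names here are illustrative, not any actual protocol):

```python
import uuid

# A self-coined, globally unique identifier minted at the edge with no
# registry or network involvement -- the endpoint names itself.
class EdgeEndpoint:
    def __init__(self):
        # 122 random bits: collision odds are negligible, so the
        # endpoint can coin its own stable name without coordination.
        self.guid = uuid.uuid4()
        self.attachment = None  # current path/locator, free to change

    def move(self, new_attachment):
        # Mobility changes only the locator; the identity is untouched.
        self.attachment = new_attachment

node = EdgeEndpoint()
name_at_home = node.guid
node.move("coffee-shop-wifi")
assert node.guid == name_at_home  # identity survives re-attachment
```

The point of the sketch is the separation: the GUID is the stable name, and whatever path happens to reach the endpoint today is merely its current attachment.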



It would be a "clean-slate" approach in that it is valid in its own
right but still takes advantage of the current Internet as just
another transport, in the same way that today's Internet used the then-
existing telecom infrastructure. Today's Internet still has path
dependencies that provide an opportunity for the current business
models, which depend on controlling the paths so they can be operated as
profit centers. By removing this vestige of path dependence we force
the issue and find we'll need to fund the transport as physical
infrastructure rather than as billable services.



This approach follows from viewing the End-to-End argument as a
constraint and solving for connectivity given that constraint. The
current Internet made some engineering compromises faced with the
constraints of the day - we should now move on rather than moving
backwards.



The challenge is that many people see the Internet in terms of its
accidental properties and still don't understand how it can work as
well as it does, let alone how it can work better with less governance
and control.



Skimming the story ...

Researchers Explore Scrapping Internet


The Internet "works well in many situations but was designed for
completely different assumptions," said Dipankar Raychaudhuri, a
Rutgers University professor overseeing three clean-slate projects.

"It's sort of a miracle that it continues to work well today."



! If this is what is driving the funding then we should be worried.
It's the end-to-end principle that still stands. The problem is with
the engineering compromises that left us with a path-dependent
Internet. Again, I'm not saying they were wrong so much as that the
approach was useful scaffolding. Yes, it is a miracle that it works today, but
that's testament to the success of end-to-end despite the compromises
in the initial implementation.



And it could take billions of dollars to replace all the software and
hardware deep in the legacy systems.



! Y2K all over again? Has no one learned the lessons of how to
maintain compatibility? If you aren't path-dependent then
compatibility is greatly simplified.



"The network is now mission critical for too many people, when in the
(early days) it was just experimental," Zittrain said.



The Internet's early architects built the system on the principle of
trust. Researchers largely knew one another, so they kept the shared
network open and flexible - qualities that proved key to its rapid
growth.



But spammers and hackers arrived as the network expanded and could
roam freely because the Internet doesn't have built-in mechanisms for
knowing with certainty who sent what.



! This sounds eerily like saying that things are too important for
the First Amendment, which was built on trust. Those who worked on
Multics had a very strong sense of distrust, and the Web happened
because it didn't require trust. The dynamic works because we can
trust but verify, thanks to digital protocols. We don't need to
predefine good behavior.



The network's designers also assumed that computers are in fixed
locations and always connected. That's no longer the case with the
proliferation of laptops, personal digital assistants and other
mobile devices, all hopping from one wireless access point to
another, losing their signals here and there.



Engineers tacked on improvements to support mobility and improved
security, but researchers say all that adds complexity, reduces
performance and, in the case of security, amounts at most to bandages
in a high-stakes game of cat and mouse.



Workarounds for mobile devices "can work quite well if a small
fraction of the traffic is of that type," but could overwhelm
computer processors and create security holes when 90 percent or more
of the traffic is mobile, said Nick McKeown, co-director of
Stanford's clean-slate program.



! Sure, the naïve assumption of immobility allowed the IP address
to commingle naming with path, but that's not a defining assumption of
end-to-end. It was an implementation compromise. If we composite from
the edge, mobility becomes the norm. The problem is indeed trying to
patch around it, but this description shows why the designers understood
trust very well and recognized that trust has to be end-to-end and
not a property of the network. Too bad the temptation of big funding
makes all problems seem to be network problems.



The Internet will continue to face new challenges as applications
require guaranteed transmissions - not the "best effort" approach
that works better for e-mail and other tasks with less time sensitivity.



! The alternative to best efforts is a very high-priced special
network. But we have one: it's the phone network, and it's too
expensive for anyone to use, including the phone companies. It's only
best efforts that allows us to use the available capacity and get voice
and video performance well above that afforded by the PSTN. Yet, just
as with WAP, people assume that we don't already have a solution that
works very well and instead create a crisis that demands their
favorite hack. John Waclawsky has a nice list of citations of failed
QoS (non-best-effort) experiments.



Think of a doctor using teleconferencing to perform a surgery
remotely, or a customer of an Internet-based phone service needing to
make an emergency call. In such cases, even small delays in relaying
data can be deadly.



! We have a term for time-critical remote surgery. It's called
homicide. Oh, if a packet gets delayed that's bad, but if the phone
wire breaks, well, that doesn't count.



And one day, sensors of all sorts will likely be Internet capable.



! One day? Why aren't they already? I'll admit that the current
Internet isn't as device-friendly because of the conflicting demands
on the IP address, but that's why we need simple edge identifiers and
protocols. We can still use the existing Internet as a transport
and do devices now.
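One way to read "use the existing Internet as a transport" is an identity/locator split: applications address a device by its stable edge identifier, and a mapping layer resolves that to whatever transport currently reaches it. A minimal sketch, with all names, identifiers, and addresses purely illustrative:

```python
# Hypothetical identity/locator split: stable edge GUIDs resolve to
# whatever transport address currently works. The existing Internet is
# just one transport among several; a local radio link is another.
locator_map = {
    "guid:sensor-42": ("ipv4", "203.0.113.7", 5000),    # via today's Internet
    "guid:sensor-43": ("ble", "aa:bb:cc:dd:ee:ff", 0),  # via a local radio link
}

def resolve(guid):
    # The application addresses the stable GUID; the mapping layer
    # picks the current locator, so devices keep working as paths change.
    return locator_map[guid]

transport, addr, port = resolve("guid:sensor-42")
assert transport == "ipv4"
```

In a real system the map would be a distributed, updatable service rather than a static dictionary, but the division of labor is the point: identifiers stay fixed at the edge while locators churn underneath.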



Even if the original designers had the benefit of hindsight, they
might not have been able to incorporate these features from the get-
go. Computers, for instance, were much slower then, possibly too weak
for the computations needed for robust authentication.



! They don't need hindsight - they had foresight. And they knew you
don't do authentication in the network itself because that's
meaningless. This security stuff is not at all new.



Kleinrock, the Internet pioneer at UCLA, questioned the need for a
transition at all, but said such efforts are useful for their out-of-
the-box thinking. "A thing called GENI will almost surely not become
the Internet, but pieces of it might fold into the Internet as it
advances," he said.



! He's right - we just need to learn our lessons and build on what we
have but not necessarily in a way that makes us more dependent upon
particulars.



Any redesign may incorporate mechanisms, known as virtualization, for
multiple networks to operate over the same pipes, making further
transitions much easier. Also possible are new structures for data
packets and a replacement of Cerf's TCP/IP communications protocols.



! Duh? Don't we have VPNs now?











-----Original Message-----
From: David Farber [mailto:dave () farber net]
Sent: Saturday, April 14, 2007 18:44
To: ip () v2 listbox com
Subject: [IP] Re: Researchers explore scrapping Internet - Yahoo! News







Begin forwarded message:



From: Bob Hinden <bob.hinden () nokia com>

Date: April 14, 2007 12:49:23 AM EDT

To: David Farber <dave () farber net>

Cc: Bob Hinden <bob.hinden () nokia com>

Subject: Researchers explore scrapping Internet - Yahoo! News

Reply-To: bob.hinden () nokia com



For IP if you like.



I like this "clean slate" approach.  However, if there is to be
a new Internet it has to be developed without the constraints of the
current business models.  Otherwise we just get small incremental
changes.  The current Internet was not designed around the business
models of the 1980's.  It created its own business models, but the
current business models lock us into the current Internet.



Bob



------------------



-------------------------------------------

Archives: http://v2.listbox.com/member/archive/247/=now

RSS Feed: http://v2.listbox.com/member/archive/rss/247/

Powered by Listbox: http://www.listbox.com










