Interesting People mailing list archives

Article on "Future of Security"


From: Dave Farber <dave () farber net>
Date: Thu, 01 Jan 2004 16:12:37 -0500


Delivered-To: dfarber+ () ux13 sp cs cmu edu
Date: Thu, 01 Jan 2004 15:22:48 -0500 (EST)
From: Raymcfarld () aol com
Subject: Article on "Future of Security"
To: dave () farber net
Dave,

Got this from a colleague. OK to post to IP if you feel it worthy. It has
something in it for everyone to get upset over, regardless of "one's persuasion"!
Unfortunately, I don't see anything changing until vendors can be held
civilly liable for losses incurred through use of their software, and/or
criminally liable for any loss of life or real property. Until you affect their
bottom line, or their freedom, nothing will really get fixed, in my view. The
financial bottom line seems to be the only real incentive that business
cares about.

Ray
------------
The Future of Security by Scott Berinato, Dec 30, 2003 ComputerWorld

Scenario One

After the Storm, Reform

There's no need to imagine a worst-case scenario for Internet security in the
year 2010. The worst-case scenario is unfolding right now.

Based on conservative projections, we'll discover about 100,000 new software
vulnerabilities in 2010 alone, or one new bug every five minutes of every hour
of every day. The number of security incidents worldwide will swell to about
400,000 a year, or 8,000 per workweek.

Windows will approach 100 million lines of code, and the average PC, while it
may cost $99, will contain nearly 200 million lines of code. And within that
code, 2 million bugs.
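
For readers who want to check the arithmetic, here is a minimal Python
sketch. The totals are the article's projections; the minutes-per-year
conversion and the 50-workweek year are my assumptions:

    # Back-of-the-envelope check of the article's 2010 projections.
    # Totals are from the article; the unit conversions and the assumed
    # 50-workweek year are mine.

    MINUTES_PER_YEAR = 365 * 24 * 60               # 525,600

    vulnerabilities_per_year = 100_000
    print(MINUTES_PER_YEAR / vulnerabilities_per_year)   # ~5.3 minutes per new bug

    incidents_per_year = 400_000
    workweeks_per_year = 50                        # assumption
    print(incidents_per_year / workweeks_per_year)       # 8,000 per workweek

    pc_lines_of_code = 200_000_000
    pc_bugs = 2_000_000
    print(pc_lines_of_code // pc_bugs)                   # one bug per 100 lines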

By 2010, we'll have added another half-a-billion users to the Internet. A few
of them will be bad guys, and they'll be able to pick and choose which of
those 2 million bugs they feel like exploiting.

In other words, today's sloppiness will become tomorrow's chaos.

The good news is that we probably won't get to that point. Most experts are
optimistic about the future security of the Internet and software. Between now
and 2010, they say, vulnerabilities will flatten or decline, and so will
security breaches. They believe software applications will get simpler and smaller,
or at least they won't bloat the way they do now. And they think experience
will provide a better handle on keeping the growing number of bad guys out of
our collective business. Some even suggest that by 2010, a software Martin
Luther will appear to nail 95 Theses--perhaps in the form of a class-action
lawsuit--to a door in Redmond, kicking off a full-blown security reformation.

The bad news is that this confidence, this notion of an industrywide
smartening up, is based on the assumption that there will be a security incident of
such mind-boggling scope and profoundly disturbing consequence--the so-called
digital Pearl Harbor--that conducting business as usual will become inconceivable.

The digital Pearl Harbor: What it's not

The phrase digital Pearl Harbor was first seen in print in 1991. D. James
Bidzos, then president of RSA, said the government's digital signature standard
provided "no assurance that foreign governments cannot break the system,
running the risk of a digital Pearl Harbor."

By 1998, the term's use was reasonably common, a dark, lowering cloud on the
horizon of the Internet revolution. Newsweek, in an article from that year,
suggested it would come in the form of a "sophisticated attack on our digital
workings [which] could create widespread misery: everything from power failures
to train wrecks."

Since then, the phrase has become bromidic to the point that former
cybersecurity czar Richard Clarke declared that "digital Pearl Harbors are happening
every day."

Whether conceived of as rare or quotidian, the digital Pearl Harbor's
definition has remained constant: It's a computer outage, a big one, a physically and
financially damaging one. More recently, it has become a shorthand way to
say, "Terrorists will take down the Internet."

In either case, this definition is wrong. Not only is it wrong, it's not even
useful.

"I hesitate to even use the term," says Jeff Schmidt, an elected member of
the FBI's InfraGard national executive board. "It's come to mean any attack
that's massively inconvenient. But I don't think they merit the term digital Pearl
Harbor."

"We need to distinguish between the mischievous and the malicious," says
Darwin John, who served recently (albeit briefly) as CIO of the FBI and is
considered one of the godfathers of the CIO profession. "We've tolerated the attacks
until now because they're mischievous. The malicious attack will be the one
that moves the public consciousness, and it's so much harder to know what that
attack will be."

It's much easier to know what a digital Pearl Harbor won't be. Taking down
the Internet or ATM networks, compromising the Social Security database, even
hacking into the electric grid--Schmidt and others argue that while each event
may be part of a digital Pearl Harbor, none qualifies in and of itself. None
would galvanize society, spurring it to action.

And it needn't be a terrorist attack. Open networks coupled with vulnerable
software make it more likely that a transformational event will arise from a
more banal source, like a motivated group of computer experts, a common thief
or, most fickle of all, an accident.

The coming digital Pearl Harbor doesn't even have to be a single event.
Thinking about the nature of disasters, Software Engineering Institute fellow Watts
Humphrey consulted nuclear power people. "I talked to one guy who did nothing
but review incidents," Humphrey says. "And typically, these kinds of
disasters result from a combination of many smaller events that each seem highly
unlikely. But they all happen at once to create unforeseeable consequences."

That's the "Perfect Storm" theory, and what makes an event perfect (in a
negative sense) is the apparent lack of relationship between systems in a complex
environment. The blackout last August was a Perfect Storm. Random, seemingly
unrelated factors--an aging power grid, certain corporate decisions, a heat
wave, a history of deregulation and some human errors--all came together to
darken a significant chunk of northeastern North America.

"That's how modern systems fail," says Humphrey. "And our networks are so big
and fast that things which seem damn near impossible happen every few days."
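
Humphrey's point is easy to quantify with a toy calculation. The numbers
below are hypothetical assumptions, not his: even a one-in-a-billion
combination of failures becomes routine at the scale and speed of modern
networks.

    # Toy illustration of Humphrey's observation; both numbers are
    # hypothetical assumptions, not figures from the article.
    p_freak_combo = 1e-9      # assumed odds one operation hits a "near impossible" combination
    operations_per_day = 5e9  # assumed operations across large, fast networks

    print(p_freak_combo * operations_per_day)   # expect ~5 such failures per day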

Not even loss of life necessarily means an event is a digital Pearl Harbor.
Three years ago, four Marines were killed after a hydraulics failure on a
V-22 Osprey. They took all the proper measures, but because of software bugs,
their plane still crashed. Few even heard of the event, never mind demanded
more secure software as a result.

Those scenarios, no matter how dire, didn't rise to the level of a Pearl
Harbor because they failed to inflict significant, collective psychological
damage. Before Internet security changes in fundamental ways, we will have to feel
as shocked and vulnerable as all Americans did reading the newspaper and
listening to the radio on the morning of Dec. 7, 1941 (or watching television on
Sept. 11, 2001).

In a sense, this should be obvious. If digital Pearl Harbors were happening
every day, they wouldn't be Pearl Harbors. They'd have a name that conveyed
their seriousness, but also their ubiquity and survivability. They'd have a name
like "virus outbreaks."

Still, no matter how nebulous the name, we're hurtling toward what many
experts keep referring to, darkly, as the "point."

"The more complex you get, the more vulnerable you are," says Peter Tippett,
CTO of TruSecure, a security services company, and noted security expert.
Tippett argues that if we simply extend the present situation into the future, the
level of complexity and vulnerability we would create will make a digital
Pearl Harbor inevitable--and before 2010.

"For seven years, we've had these negative events," says Howard Schmidt, vice
president and CISO of eBay and former vice chairman of the President's
Critical Infrastructure Protection Board, and, before that, CSO of Microsoft. "And every time there's an event, it's called a wake-up call. It's like those alarms
that crescendo to wake you up. We're getting to that point, where it's so
loud, you wake up."

TIPPING POINT: On Dec. 7, 2008, computer systems around the world go down
simultaneously. They do not come back up.

December 7, 2008: A moment that will live in cyber-infamy

The alarm goes off in 2008. Several security experts' composite picture of a
digital Pearl Harbor looks like this (although given that the event is by
definition unpredictable, it will, in fact, probably not look like this):

It is global and instantaneous. It is so fast--seconds long--that no one
knows about it until it's over. It does not attack PCs; it attacks the Internet infrastructure--such as domain name servers and routers--and industrial systems
connected to the Internet, like utility control systems. It exploits an
unknown or little-known vulnerability.

Five factors distinguish the digital Pearl Harbor from the virus attacks
we've suffered to date.

First, it disrupts backup systems. The fragility of networks has heretofore
been mitigated largely with backup. Disrupt that and badness follows.

Second, it leads to cascading failures. All of those massively inconvenient
attacks people previously referred to as Pearl Harbors pile up. With backups
disrupted, corporate earnings data is irretrievably lost. This panics Wall
Street and destabilizes the financial sector. People run to their banks, but the
banks cannot disburse funds; their networks are down. As are the credit card
networks and the ATMs.

If you don't have cash, you go hungry.

Then the lights wink out. Everywhere.

And it begins to get cold.

Panic is a key part of a digital Pearl Harbor. "If you can disrupt the flow
of money and resources, that's where I'd look for incidents to become bigger
than what we've experienced so far," says Michael Hershman, an international
security expert who has worked in military intelligence, and who was a senior
staff investigator on the Senate Watergate Committee. Hershman now runs Civitas
Group, a security consultancy, with Sandy Berger, the former national security
adviser to President Clinton, and Richard Clarke. "Where you see panic and
money, that's where I'd look for a digital Pearl Harbor."

Third, though the attack is instantaneous, its aftereffects linger for weeks.
People are hungry. Freezing. The old and the young begin to die. The strong
turn against each other.

Fourth, after it's over, the attack's origin is pinpointed and the
vulnerability it exploited is determined. That's another element that's been missing from most recent security events, especially virus outbreaks, and most notably in
the August 2003 blackout. Blame has not been assigned; no heads have rolled.
No one has even called for heads to roll. No heads can be found to roll.

Last, and perhaps most important, once the source of the event is determined,
it's revealed that the loss of property and life was completely and
absolutely and tragically avoidable.

2009: Recrimination, reconstruction, reformation

That moment--the exposure of negligence to the public--is when security will
start to get better. The senselessness of the incident and the profound losses
it leads to will generate outrage.

The first response is litigation. Lawyers will prosecute vendors, ISPs and
others based on downstream liability; that is, they will follow the chain of
negligence and hold people accountable all along it. Hackers, whether their
intent was malicious or not, will be arrested and prosecuted. If the event's nexus
is overseas, foreign governments will cooperate to bring the miscreants to
justice.

After litigation comes regulation. Historically, regulation always follows
catastrophe. In 1912, Marconi Co. operators aboard the Titanic were slow to
receive the iceberg warnings because relays were jammed by the crush of
unregulated amateur wireless users hogging the spectrum. The Radio Act of 1912 followed
and, eventually, the Federal Communications Commission was formed. The crash
of 1929 begat sweeping financial regulations and gave birth to the Securities
and Exchange Commission.

"In the past, IT would have argued that you can't regulate because
information technology is so different," says John. He doesn't buy it. "They said the
same about oil. Sure enough, regulation brought order to that developing
industry, and it will do the same here."

We've seen this quite a bit recently with HIPAA, Gramm-Leach-Bliley,
Sarbanes-Oxley and, most similarly, the Patriot Act, which was a sweeping reaction to
an attack that freaked us out.

"What follows regulation?" asks Jeff Schmidt. "Standards."

Internet security could use a lot of those, such as standard vulnerability
reporting processes, standard software patches, a single naming convention for
alert levels when viruses are discovered, and standard secure configurations of
software.

"Take any mature discipline and there are standards," Jeff Schmidt says. "If
I work in biological handling, I know what a Level 2 clean room is. It doesn't
matter who I work for. Standards will demystify security."

The final phase of the corrective response to the digital Pearl Harbor will
be a reformation, a cultural shift toward better, more proactive security. If
the first two stages represent our pound of cure, this is the ounce of
prevention.

Of course, to have a reformation, you need a Martin Luther, a leader who's
not only willing to push for radical change, but who also has a plan. Perhaps a
rebel within Microsoft who sacrifices his career to change the culture and
practices he's experienced firsthand. (Luther, it should be noted, was just such
an insider who was disgusted by the pope's practice of generating revenue by
selling indulgences--that is, pardons from purgatory.) Or maybe it's an
outsider with a lot of passion for the issue and money to support his cause.

In the case of a security reformation, this leader would borrow from the
ideas of experts who already have reformist ideas, like SEI's Humphrey. Known as
the W. Edwards Deming of software, he has implemented and proposed radical changes
to the way software is made. Humphrey is unsparing in his criticism of
contemporary software security. We're letting creative artists build bridges, he
says, then trying to stabilize them with unlicensed laborers while they're
collapsing.

Included in Humphrey's blueprint for a security reformation are new software
development processes that change the governance and structure of software
engineering to favor security. Called Team Software Process (TSP) and Personal
Software Process (PSP), they entail a fundamental shift in software development
practice from the regular army model--top-down command--to a special
operations model wherein a small group is given objectives and let loose to fulfill
them. "I want the technical community to become professionals," Humphrey says,
"to say, This is how we do our job."

TSP and PSP have already been found to reduce coding errors by factors of up
to 10 or more. Microsoft tried them and reduced bugs within a 24,000-line
program from more than 350 to about 25.
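
In the industry's usual defects-per-thousand-lines (KLOC) terms, that
Microsoft anecdote works out roughly as follows; the before/after counts
are the article's, the KLOC framing is mine:

    # Defect density implied by the article's Microsoft figures.
    kloc = 24_000 / 1000       # 24,000-line program
    print(350 / kloc)          # ~14.6 defects per KLOC before TSP/PSP
    print(25 / kloc)           # ~1.0 defect per KLOC after
    print(350 / 25)            # a 14x reduction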

Humphrey also has conceived of even more radical changes, including a
software engineering curriculum modeled on medical school, complete with professional
internships.

A full-blown security reformation would mark a triumph over the "tragedy of
the commons," the dilemma that bedevils Internet security today. A principle in
ecology, the tragedy of the commons states that individual short-term benefit
trumps collective long-term benefit. That is, I will let my sheep graze on
the commons to increase my personal wealth even if it contributes to the
degradation of the commons as a whole.

In security, individual companies make, buy and deploy software to gain a
competitive edge, even as the networking of that software degrades security for
everyone. There's no incentive for any single company to improve security for
everyone, especially if doing so threatens the company's competitive position
and wealth.
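
A toy payoff model (my own construction, with invented numbers, not the
article's) shows why no single firm breaks from this pattern: deploying is
individually rational no matter what others do, yet everyone ends up worse
off when all deploy.

    # Toy tragedy-of-the-commons payoff model; all numbers are invented
    # for illustration.
    N_FIRMS = 10
    PRIVATE_GAIN = 5.0   # competitive benefit to the firm that deploys insecure software
    SHARED_COST = 1.0    # security cost each deployment imposes on every firm

    def payoff(total_deployers: int, i_deploy: bool) -> float:
        """Net payoff to one firm, given how many firms deploy in total."""
        return (PRIVATE_GAIN if i_deploy else 0.0) - SHARED_COST * total_deployers

    print(payoff(1, True))        #  4.0: deploying alone pays off
    print(payoff(N_FIRMS, True))  # -5.0: when everyone deploys, everyone loses
    print(payoff(0, False))       #  0.0: collectively better, individually unstable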

"By 2010, there will be a growing general awareness, a link between what
individual users do and how that affects the national interest," says Tom
Longstaff, the manager of the CERT Analysis Center, which takes in data on the
Internet's swelling number of vulnerabilities and security incidents. "I think of World War II," he adds, "and rationing rubber and nylon. After a momentous event,
there's often a subjugation of the tragedy of the commons."

A security reformation will not take place overnight. Longstaff believes that
even with a digital Pearl Harbor in 2008, we'll be only 20% reformed by 2010.
Whit Diffie, Sun Microsystems' CSO, suggests a 10-year time frame before we
should mandate zero tolerance for insecure software and enforce strict
liability laws. Even Humphrey says, "I'm hopeful, but the issue is one of time."

This vision of security in 2010 is a rosy picture painted with cynical
strokes.

"God, we've been resilient," says Patrick Gray, the director of the Internet
Security Systems' X-Force National Emergency Response, "but the ugliness is
lurking. We're reaching our limit with the angst. Popeye once said, 'I've had
alls I can stands and I can't stands no more.' We're reaching that point."

And when we do, everything will fall apart. And then, and only then, will it
begin to get better.

Scenario Two

Welcome to the lockdown

After the Digital Pearl Harbor, one simple truth will become apparent to
everyone: The surest and fastest way to avoid another one, to save lives and to
make the world's computer systems secure, is to lock them up, freeze them in a
permanent status quo. Put functions into chips that not only won't integrate
with other applications but can't. Extensibility in 2010 is a liability, not a
feature.

"That [scenario] is appealing because it's one of the simplest things you can
do with computers: restrict their abilities," says Peter Tippett, CTO of
security vendor TruSecure and noted security expert.

Tippett can't bear to imagine such a world. But SEI fellow Watts Humphrey has
resigned himself to it. "If we force security restrictions, we'll dry up a lot
of innovation," he says. "That's a cycle we're likely to go through."

At the same time that the integration of applications becomes unethical as
well as physically impossible, there will be a human lockdown. After decades
spent making access to applications universal, computer scientists and software designers will focus on preventing access. Obviously, if bad guys can't get in,
they can't do damage. Even good guys will face broad strictures on what data
is available to them.

So there will be a surge in the development of software that blocks access to
applications such as chat rooms, the Web, databases, whatever. And even
features within programs, like the ability to forward e-mail messages, will be shut
off. Again, the thinking is that since openness got us into this mess, only a
lockdown will get us out of it.

Authentication applications will explode. The federal government will mandate
that users must authenticate their identity to access the Internet itself, a
sort of digital passport system for entering cyber-country.

However, as Dan Geer, former CTO of @Stake, notes, authentication can't
possibly keep up with the number of people who need it and the number of
transactions we try to control with it. Authentication doesn't scale.

But surveillance does. "The costs to observe are virtually zero, so it's not
a question of will it exist, but what will we do with it?" Geer asks.

Enforcement of the government's security policy will come from broad,
ubiquitous surveillance, both visual monitoring and keystroke logging. The
adoption of cheap wireless gadgets like RFID tags will make the tracking of
people and things simple, cheap and inevitable.

Some people, perhaps the majority, will accept this as the price that must be
paid to avoid another digital Pearl Harbor. Others will rue what the lockdown
has wrought: an utter lack of privacy, a digital iron curtain descending upon
innovation, economic stagnation, social calcification. Big Brother will
arrive fashionably late, but arrive he will. Security and privacy will become
dominant themes in the elections of 2010 and 2012.

Geer is convinced we're heading toward a broadly surveilled police state.
"I'm sad about this," he says, "but I'm trying to be realistic."
-------------------------------------
You are subscribed as interesting-people () lists elistx com
To manage your subscription, go to
 http://v2.listbox.com/member/?listname=ip

Archives at: http://www.interesting-people.org/archives/interesting-people/

