From: RISKS List Owner <risko () csl sri com>
Date: Wed, 16 Apr 2014 16:06:25 PDT

RISKS-LIST: Risks-Forum Digest  Wednesday 16 April 2014  Volume 27 : Issue 84

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/27.84.html>
The current issue can be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
Spider threat fixed by software (Martyn Thomas)
Whitehat hacker goes too far, gets raided by FBI, tells all (Sean Gallagher)
OpenSSL Mallocware = Malware (Henry Baker)
The Heartbleed Challenge (cloudflarechallenge via Monty Solomon)
Re: How Heartbleed Broke the Internet, And Why It Can Happen Again
  (Jonathan S. Shapiro)
"CRA loses 900 SIN numbers through Heartbleed bug" (Candice So via
  Gene Wirchenko)
Vicious Heartbleed bug bites millions of Android phones, other devices
  (Dan Goodin)
All sent and received e-mails in Gmail will be analyzed, says Google
  (Casey Johnston)
"Digital Privacy Act allows companies to hand over customer
  information without warrant or consent" (Brian Jackson)
Apple, Samsung, mobile carriers to debut anti-theft kill switch
 in 2015 (Cyrus Farivar)
Fingerprint lock in Samsung Galaxy 5 easily defeated by whitehat
 hackers (Dan Goodin)
Unintended Denial of Service by Banking Security (Toby Douglass)
"Microsoft confirms it's dropping Windows 8.1 support" (Woody Leonhard)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Sat, 12 Apr 2014 16:00:54 +0100
From: Martyn Thomas <martyn () thomas-associates co uk>
Subject: Spider threat fixed by software

"Petrol-sniffing spiders have forced Mazda to issue a voluntary recall
notice so it can apply a software fix to its cars."
  http://www.bbc.co.uk/news/technology-26921734

De-bugging was tried first, but failed.

------------------------------

Date: Fri, 11 Apr 2014 7:57:29 PDT
From: "Peter G. Neumann" <neumann () csl sri com>
Subject: Whitehat hacker goes too far, gets raided by FBI, tells all
  (Sean Gallagher)

Sean Gallagher, *Ars Technica*, 9 Apr 2014
[via "InfoSec News".  PGN]
http://arstechnica.com/tech-policy/2014/04/whitehat-hacker-goes-too-far-gets-raided-by-fbi-tells-all/

A whitehat hacker from the Baltimore suburbs went too far in his effort to
drive home a point about a security vulnerability he reported to a client.
Now he's unemployed and telling all on reddit.

David Helkowski was working for Canton Group, a Baltimore-based software
consulting firm on a project for the University of Maryland (UMD), when he
claims he found malware on the university's servers that could be used to
gain access to personal data of students and faculty. But he says his
employer and the university failed to take action on the report, and the
vulnerability remained in place even after a data breach exposed more than
300,000 students' and former students' Social Security numbers.

As Helkowski said to a co-worker in Steam chat, "I got tired of being
ignored, so I forced their hand." He penetrated the university's network
from home, working over multiple VPNs, and downloaded the personal data of
members of the university's security task force. He then posted the data to
Pastebin and e-mailed the members of the task force anonymously on 15 Mar.

One day later, the FBI obtained a search warrant for Helkowski's home.
While no charges have yet been filed against him, Helkowski's employment
with Canton Group has ended. And yesterday, he took to reddit to tell
everyone about it in a post entitled "IamA Hacker who was Raided by the FBI
and Secret Service AMAA!" To prove his identity, he even posted a redacted
copy of the search warrant he was served.

How did the FBI track him down so fast? It turns out that Helkowski told
just about everyone (including co-workers) about what he was doing. And
since the vulnerability he used was the same one Canton Group had reported
to UMD on 27 Feb, it didn't take a lot of sleuthing to follow a trail that
pointed straight back to Helkowski's home in the Baltimore suburb of
Parkville. [...]

------------------------------

Date: Thu, 10 Apr 2014 21:52:26 -0700
From: Henry Baker <hbaker1 () pipeline com>
Subject: OpenSSL Mallocware = Malware

This "heartbleed" bug is indeed heartbreaking to all us computer scientists
who have worked entire careers to provide computer languages and tools in
which these kinds of bugs simply can't happen.

Buffer overflows and memory allocation bugs stopped being funny when I was
still an undergraduate in the 1960's.  If you won't use a memory safe
programming language, perhaps you should return your Computer Science
degree.  Memory misuse bugs now cost companies fortunes, and with autos,
airplanes and medical appliances so dependent upon software, these bugs will
cost people their lives.
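
To make the point concrete, here is a minimal C sketch of the bug class in
question (this is not OpenSSL's actual code, and the function name is
invented for illustration): a length supplied by the peer is trusted when
copying out of a buffer, so whatever happens to sit in memory past that
buffer is sent back to the peer.  One missing bounds check is the entire bug.

  /* Sketch only: the peer controls claimed_len; actual_len is how many
   * bytes the peer really sent.  The missing check is marked below.  */
  #include <stdlib.h>
  #include <string.h>

  unsigned char *echo_payload(const unsigned char *payload,
                              size_t claimed_len, size_t actual_len)
  {
      unsigned char *resp = malloc(claimed_len);
      if (resp == NULL)
          return NULL;
      (void)actual_len;  /* only the missing check would use it */
      /* MISSING: if (claimed_len > actual_len) return NULL;
       * Without it, whatever lies in heap memory past the payload is
       * copied into the reply and handed to the attacker.           */
      memcpy(resp, payload, claimed_len);
      return resp;
  }

In the real protocol the length field is 16 bits, so up to 64KB of adjacent
heap memory can leak per request; a memory-safe language turns the same
mistake into an immediate bounds error rather than a silent disclosure.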

This use of unsafe programming languages is no longer just sloppy
programming; it's now negligent MALpractice.  Sooner or later, a software
"engineer" will lose his/her house as a result of such malpractice; juries
will simply lose patience with an industry that refuses to implement even
the most rudimentary safety precautions such as memory-safe programming.

There's a reason _malloc_ is called _MALloc_: it's MALware!

http://www.tedunangst.com/flak/post/analysis-of-openssl-freelist-reuse

analysis of openssl freelist reuse

About two days ago, I was poking around with OpenSSL to find a way to
mitigate Heartbleed. I soon discovered that in its default config, OpenSSL
ships with exploit mitigation countermeasures, and when I disabled the
countermeasures, OpenSSL stopped working entirely.  That sounds pretty bad,
but at the time I was too frustrated to go on.  Last night I returned to the
scene of the crime.

freelist

OpenSSL uses a custom freelist for connection buffers because long ago and
far away, malloc was slow. Instead of telling people to find themselves a
better malloc, OpenSSL incorporated a one-off LIFO freelist. You guessed
it. OpenSSL misuses the LIFO freelist. In fact, the bug I’m about to
describe can only exist and go unnoticed precisely because the freelist is
LIFO.
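
To see why the LIFO property hides the bug, consider a toy freelist (purely
illustrative; this is not OpenSSL's implementation): release a buffer, and
the very next allocation hands the same memory straight back, so code that
keeps using a buffer it has already released appears to work.

  /* Toy single-size LIFO freelist, for illustration only.          */
  #include <stdlib.h>

  #define BUF_SIZE 4096                 /* one fixed buffer size     */

  struct fl_node { struct fl_node *next; };
  static struct fl_node *freelist;      /* head of the LIFO list     */

  void *fl_alloc(void)
  {
      if (freelist != NULL) {           /* pop most-recently-freed   */
          struct fl_node *n = freelist;
          freelist = n->next;
          return n;                     /* old contents still there  */
      }
      return malloc(BUF_SIZE);
  }

  void fl_free(void *p)                 /* push; contents NOT cleared */
  {
      struct fl_node *n = p;
      n->next = freelist;
      freelist = n;
  }

Free a buffer, allocate again, and you get the identical pointer with its
contents essentially untouched (only the first few bytes, reused for the
freelist link, are overwritten).  A real malloc that unmaps, poisons, or
recycles freed memory elsewhere turns the same use-after-release into a
visible failure, which is exactly what the OPENSSL_NO_BUF_FREELIST build
does below.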

OpenSSL reads data from the connection into a temporary buffer, allocating a
new one if necessary. For reference, consult the ssl/s3_pkt.c functions
ssl3_read_n and ssl3_read_bytes. Be mindful you are not confused by the
difference between the record buffer s->s3->rrec and the read buffer
s->s3->rbuf. The buffer setup and release functions themselves live in
ssl/s3_both.c.

On line 1059, we find a call to ssl3_release_read_buffer after we have read
the header, which will free the current buffer.

if (type == rr->type) /* SSL3_RT_APPLICATION_DATA or SSL3_RT_HANDSHAKE */
    {
    [...]
    if (!peek)
        {
        rr->length-=n;
        rr->off+=n;
        if (rr->length == 0)
            {
            s->rstate=SSL_ST_READ_HEADER;
            rr->off=0;
            if (s->mode & SSL_MODE_RELEASE_BUFFERS)
                ssl3_release_read_buffer(s);
            }
        }

There's one small problem. We're not actually done with it yet. It still has
some interesting data in it that we will want to read later. Fortunately,
this is only a small problem because the LIFO freelist will give it right
back to us! It has to chill on the freelist for a few microseconds, but then
the next call to ssl3_read_n will call setup and start right back where we
left off. Same buffer, same contents.

rb = &(s->s3->rbuf);
if (rb->buf == NULL)
    if (!ssl3_setup_read_buffer(s))
        return -1;

left = rb->left;

Unless, of course, there is no freelist and releasing the read buffer
actually, you know, releases it, which is what happens when you compile with
OPENSSL_NO_BUF_FREELIST. Now that first buffer is gone forever, and it's a
different buffer that we start reading from. But this new, different buffer
isn't very likely to have the same data as the old buffer. OpenSSL gets very
confused when it can't find the data it expects and aborts the connection.

(Riddle: what is the value of rb->left?)

patch

The solution is quite simple. Don't release the buffer if we're not done
with it. There are probably many ways to shave this yak; here's one:

diff -u -p -r1.20 s3_pkt.c
--- s3_pkt.c    27 Feb 2014 21:04:57 -0000    1.20
+++ s3_pkt.c    10 Apr 2014 03:31:18 -0000
@@ -1054,8 +1054,6 @@ start:
             {
             s->rstate=SSL_ST_READ_HEADER;
             rr->off=0;
-            if (s->mode & SSL_MODE_RELEASE_BUFFERS)
-                ssl3_release_read_buffer(s);
             }
         }
     return(n);

analysis

This bug would have been utterly trivial to detect when introduced had the
OpenSSL developers bothered testing with a normal malloc (not even a
security focused malloc, just one that frees memory every now and
again). Instead, it lay dormant for years until I went looking for a way to
disable their Heartbleed accelerating custom allocator.

Building exploit mitigations isn't easy. It's difficult because the
attackers are relentlessly clever. And it's aggravating because there's so
much shitty software that doesn't run properly even when it's not under
attack, meaning that many mitigations cannot be fully enabled. But it's
absolutely infuriating when developers of security sensitive software are
actively thwarting those efforts by using the world's most exploitable
allocation policy and then not even testing that one can disable it.

Update: Turns out I'm not the first person to run into this. Here's a
four-year-old bug report. And another.

  [We need a malloculist to see through this mess!??  PGN]

------------------------------

Date: Sat, 12 Apr 2014 03:12:15 -0400
From: Monty Solomon <monty () roscom com>
Subject: The Heartbleed Challenge (cloudflarechallenge)

Can you steal the keys from this server?

So far, two people have independently solved the Heartbleed Challenge.  The
first was submitted at 4:22:01 PT by Fedor Indutny (@indutny). He sent at
least 2.5 million requests over the span of the challenge, which was
approximately 30% of all the requests we saw. The second was submitted at
5:12:19 PT by Ilkka Mattila of NCSC-FI using around 100 thousand requests.

We confirmed that both of these individuals have the private key and that it
was obtained through Heartbleed exploits. We rebooted the server at 3:08 PT,
which may have contributed to the key being available in memory, but we
can't be certain. [...]

https://www.cloudflarechallenge.com/heartbleed

------------------------------

Date: Tuesday, April 15, 2014
From: Jonathan S. Shapiro <shap () eros-os org>
Subject: Re: How Heartbleed Broke the Internet, And Why It Can Happen Again

  [Via Dave Farber's IP distribution.  PGN]

In the aftermath of Heartbleed, several opinions have been posed on the IP
list that are not supported by the evidence. I'd like to address some of
them briefly:

THE MANY EYEBALLS DEBATE

This discussion seems to have originated with Stephen Henson at Wired.
Henson proposes that professionally paid eyeballs and rigorous quality
assurance catch bugs like Heartbleed. It's hard to say whether this
*particular* issue would have been caught, but this proposition has been
tested repeatedly, and it's thoroughly discredited. Attackers have
consistently been able to slip bugs into the source code of programs even
when those applications use a very careful, disciplined, multi-party review
process. The attacker success rate is so close to 100% as makes no
difference.

Henson goes on to pursue a largely uninformed and unsubstantiated diatribe
about Open Source software that is mostly nonsense, but as much as I like
open source, Gumby Wallace's appeal to the so-called "many eyeballs effect"
(or as Eric Raymond put it originally: "With enough eyeballs, all bugs are
shallow.") is misguided. First, it depends a lot on the quality of the
eyeballs; as with everything else, reviewer quality and skill follows a
normal distribution. Mr. Raymond has long since acknowledged to me privately
that his slogan was created for the purpose of marketing open source, and is
technically very problematic. But second, refer back to what I said above:
when qualified reviewers have been consciously selected and are operating
under a careful, deliberate, and concerted review process, the bad guys
still exceed the Ivory Snow metric; they win more than 99 and 44/100% of the
time. It doesn't matter if the process involved is open source or closed
source.

The *important* advantages of Open Source in this discussion are (1) that a
repair for a critical bug, once found, can be locally applied in minutes,
and (2) if it's critical enough, you can dig in and fix it yourself. This is
in strong contrast to closed source products where the turnaround time from
bug report to patch is best measured in months or years and the customer is
essentially helpless while the repair deployment process slowly grinds
along.

Abe Singer responded to Mr. Wallace with some comments on the importance of
diversity. Intuitively this makes sense, but there is a disturbing doctoral
dissertation from Columbia that examined this proposition empirically and
found it lacking. Its authors showed that two groups of programmers operating from
the same specification in complete isolation from each other tend to make
highly correlated mistakes. Diversity would be very valuable if we could
actually achieve it, and there has been interesting work on achieving
diversity through automated means. In the mean time, a lot of our battle
could be eliminated by adoption of safe programming languages and type
systems. That would at least leave us time to think about the other half of
the problem.

LEGACY HARDWARE

Brett Glass chimed in about the benefits of segmented hardware, referencing
the Intel platforms of his youth. Matt Kauffman responded discussing the
Intel architecture's evolution, but he didn't really question the premise.

First, I note that Mr. Glass wants much more than mere segmentation, and
Intel's i432 did a decent job of architecting what he seems to want. I'd
refer him to the excellent post-hoc assessment on that machine by Colwell,
Gehringer, and Jensen: Performance Effects of Architectural Complexity in
the Intel 432. After reading that paper, it's instructive to follow up by
reading Fleisch: The Failure of Personalities to Generalize, and then see if
anybody has had the stones to publicly write up the failure of the
Itanium. The point being that when things get too complex, it becomes
inherently impossible either to manage their complexity or to limit their
pace of rework to the point where something actually gets finished.

There certainly *are* useful changes to hardware that would improve things,
but segmentation isn't one of them. There are three reasons for this:

1. Segmentation requires language level support that has long since gone the
way of the dinosaur. Ultimately, *this* is why segments were removed from
the Intel hardware. The programming languages that can use them no longer
exist. So it isn't just a matter of restoring this feature to the
hardware. It would require hundreds of *billions* of dollars worth of new
software. If we're going to undertake that kind of cost, there are better
yields to be had than what we can get from segmentation.

Ivan Godard has been working on an interesting variation on segments, but
I'm very skeptical that he'll be able to make his approach work with
mainstream languages.

2. Modern safe languages almost entirely subsume the benefits of
segmentation. When combined with strong type systems, they even let us do
cross-process object sharing with sane semantics, which paging+segmentation
has a very hard time supporting. There is an argument that hardware checking
holds up better against ambient background ionizing radiation, but that
argument becomes dubious when examined critically in modern
implementations. As a software-only approach, I'd point Brett and other
interested readers at the very successful work that Brad Chen and his team
have done on software component sandboxing in "Native Client".

3. From a formal verification perspective, segmentation introduces aliasing
in the ground memory model. This makes formal verification *much* harder
than a flat memory model.

WHAT WE NEED

At the risk of running afoul of "this is new, therefore better", I think
that there *are* some things hardware could give us with minimal and
compatible change that would make an enormous difference. The main
impediments to wide adoption of safe languages at this point are the cost of
conversion and the unpredictability of garbage collection performance. The
first is incrementally getting fixed, and the second seems to have given way
in the face of recent work on continuous concurrent collection. There are
two things we could use from the hardware, useful for a broad range of
applications, that would significantly improve and simplify the concurrent
collection problem:

 * Recursive virtualization, where an application can set memory page
    permissions on itself.
 * Support for scheduler activations, or at least user-mode fault delivery.

Recursive virtualization has mostly happened on the later 64-bit Intel
processors, and exists in preliminary form on the 64-bit ARM machines.

Implementing scheduler activations is possible on 64-bit Intel systems, but
turns out not to be possible on 32-bit ARM processors because of a flaw in
the design of processor status word handling. I talked to Richard
Grisenthwaite (ARM's principal architect) about fixing this in ARM64 at one
point, but I'm not sure what impact, if any, that conversation ultimately
had.

Please note that both of these are *general* architectural features that
have many applications. High-performance GC is just one of them. Better
still: both can be implemented without breaking current software in any way.
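
For concreteness, here is the software stand-in that page-based concurrent
collectors rely on today: write-protect a page with mprotect() and treat the
resulting SIGSEGV as a page-granularity write barrier.  Every barrier hit is
a round trip through the kernel's signal machinery, which is the cost that
recursive virtualization and user-mode fault delivery would remove.  The
sketch below is illustrative only (POSIX-style C, error handling omitted);
it is not tied to any particular collector.

  /* Minimal sketch of a page-protection write barrier (Linux/BSD style).
   * Handling SIGSEGV and re-enabling the page from the handler is the
   * standard trick used by page-based collectors; user-mode fault
   * delivery would let this happen without a trip through the kernel. */
  #include <signal.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  static unsigned char *page;
  static long pagesize;
  static volatile sig_atomic_t page_dirty;

  static void on_fault(int sig, siginfo_t *si, void *ctx)
  {
      unsigned char *addr = (unsigned char *)si->si_addr;
      (void)sig; (void)ctx;
      if (addr >= page && addr < page + pagesize) {
          page_dirty = 1;                          /* record the write */
          mprotect(page, (size_t)pagesize, PROT_READ | PROT_WRITE);
          return;                                  /* retry the store  */
      }
      _exit(1);                                    /* a genuine crash  */
  }

  int main(void)
  {
      struct sigaction sa;
      sigemptyset(&sa.sa_mask);
      sa.sa_sigaction = on_fault;
      sa.sa_flags = SA_SIGINFO;
      sigaction(SIGSEGV, &sa, NULL);

      pagesize = sysconf(_SC_PAGESIZE);
      page = mmap(NULL, (size_t)pagesize, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      mprotect(page, (size_t)pagesize, PROT_READ); /* arm the barrier  */
      page[0] = 42;                                /* faults once      */
      printf("dirty=%d value=%d\n", (int)page_dirty, page[0]);
      return 0;
  }

The store to page[0] faults exactly once; the handler records the page as
dirty and re-enables writes, after which the store retries and succeeds.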

Jonathan S. Shapiro, Ph.D., Managing Partner, PixelFab, LLC

------------------------------

Date: Tue, 15 Apr 2014 18:14:23 -0700
From: Gene Wirchenko <genew () telus net>
Subject: "CRA loses 900 SIN numbers through Heartbleed bug" (Candice So)

[CRA is the Canada Revenue Agency (the Canadian equivalent of the IRS), and
a SIN (Social Insurance Number) is the Canadian equivalent of the U.S. SSN.]

Candice So, *IT Business*, 14 Apr 2014
CRA loses 900 SIN numbers through Heartbleed bug
http://www.itbusiness.ca/news/cra-loses-900-sin-numbers-through-heartbleed-bug/48041

------------------------------

Date: Wed, 16 Apr 2014 02:28:08 -0400
From: Monty Solomon <monty () roscom com>
Subject: Vicious Heartbleed bug bites millions of Android phones, other
 devices (Dan Goodin)

Dan Goodin, Ars Technica, 14 Apr 2014
Not the exclusive province of servers, Heartbleed can hack end users, too.

The catastrophic Heartbleed security bug that has already bitten Yahoo Mail,
the Canada Revenue Agency, and other public websites also poses a formidable
threat to end-user applications and devices, including millions of Android
handsets, security researchers warned.

Handsets running version 4.1.1 of Google's mobile operating system are
vulnerable to attacks that might pluck passwords, the contents of personal
messages, and other private information out of device memory, a company
official warned on Friday. Marc Rogers, principal security researcher at
Lookout Mobile, a provider of anti-malware software for Android phones, said
some versions of Android 4.2.2 that have been customized by the carriers or
hardware manufacturers have also been found to be susceptible. Rogers said
other releases may contain the critical Heartbleed flaw as well. Officials
with BlackBerry have warned the company's messenger app for iOS, Mac OS X,
Android, and Windows contains the critical defect and have released an
update to correct it. ...

http://arstechnica.com/security/2014/04/vicious-heartbleed-bug-bites-millions-of-android-phones-other-devices/

------------------------------

Date: April 16, 2014 at 1:50:38 EDT
From: Monty Solomon <monty () roscom com>
Subject: All sent and received e-mails in Gmail will be analyzed, says Google
  (Casey Johnston)

Casey Johnston, Ars Technica, 15 Apr 2014
The new text might be a reaction to the e-mail scanning lawsuit.

Google added a paragraph to its terms of service as of Monday to tell
customers that, yes, it does scan e-mail content for advertising and
customized search results, among other reasons. The change comes as Google
undergoes a lawsuit over its e-mail scanning, with the plaintiffs
complaining that Google violated their privacy.

E-mail users brought the lawsuit against Google in 2013, alleging that the
company was violating wiretapping laws by scanning the content of
e-mails. The plaintiffs' complaints vary, but some of the cases include
people who sent their e-mails to Gmail users from non-Gmail accounts and
nonetheless had their content scanned. They argue that since they didn't use
Gmail, they didn't consent to the scanning. ...

<http://arstechnica.com/business/2014/04/google-adds-to-tos-yes-we-scan-all-your-e-mails/>

------------------------------

Date: Fri, 11 Apr 2014 10:21:29 -0700
From: Gene Wirchenko <genew () telus net>
Subject: "Digital Privacy Act allows companies to hand over customer
  information without warrant or consent" (Brian Jackson)

Brian Jackson, *IT Business*, 10 Apr 2014
http://www.itbusiness.ca/article/digital-privacy-act-allows-companies-to-hand-over-customer-information-without-warrant-or-consent

selected text:

Canada's Research Chair of Internet and e-commerce law is concerned that the
newly introduced Digital Privacy Act could actually result in the personal
information of more Canadians being given away without their consent or
knowledge, he writes in a new blog post.

Michael Geist combed over the legislation tabled in the Senate earlier this
week and discovered this nugget of legalese that expands warrantless
disclosure:

  ... an organization may disclose personal information without the
  knowledge or consent of the individual ... if the disclosure is made to
  another organization and is reasonable for the purposes of investigating a
  breach of an agreement or a contravention of the laws of Canada or a
  province that has been, is being or is about to be committed and it is
  reasonable to expect that disclosure with the knowledge or consent of the
  individual would compromise the investigation;

This means that companies, for example an Internet service provider (ISP),
will hand over personal details about customers without need for consent
from that customer or a court order being provided. Not only could legal
authorities take advantage of this new power, but so could any other
organization that's doing its own investigation of a possible contract
breach or legal violation.

------------------------------

Date: Wed, 16 Apr 2014 02:01:09 -0400
From: Monty Solomon <monty () roscom com>
Subject: Apple, Samsung, mobile carriers to debut anti-theft kill switch
 in 2015 (Cyrus Farivar)

Cyrus Farivar, Ars Technica, 15 Apr 2014
Voluntary industry move appears to get ahead of pending anti-theft bill.

Rather than waiting for pending legislation to mandate an anti-theft kill
switch, the leading mobile phone manufacturers and service providers
(including Apple, Samsung, Huawei, AT&T, T-Mobile, Verizon, and Sprint) came
together Tuesday to impose their own solution.

The new "Smartphone Anti-Theft Voluntary Commitment" stipulates that new
phones made after July 2015 will have a "preloaded or downloadable"
anti-theft tool.

Two months ago, Mark Leno, a California state senator, introduced a bill in
response to the rise of smartphone theft. More than 50 percent of all
robberies in San Francisco involve a smartphone, according to law
enforcement statistics Leno cites in his bill.  Sections of the bill also
note that smartphone theft was up 12 percent in Los Angeles in 2012, and
nationwide, 113 smartphones are lost or stolen each minute.

Should the California bill become law in the Golden State, it likely would
have a dramatic effect on sales of mobile phones across the US as companies
could not afford to ignore the country's most populous state. ...

http://arstechnica.com/tech-policy/2014/04/apple-samsung-mobile-carriers-to-debut-anti-theft-kill-switch-in-2015/

------------------------------

Date: Wed, 16 Apr 2014 01:58:10 -0400
From: Monty Solomon <monty () roscom com>
Subject: Fingerprint lock in Samsung Galaxy 5 easily defeated by whitehat
 hackers (Dan Goodin)

Dan Goodin, Ars Technica, 15 Apr 2014
Multiple weaknesses put devices and PayPal accounts within reach of attackers.

The heavily marketed fingerprint sensor in Samsung's new Galaxy 5 smartphone
has been defeated by whitehat hackers who were able to gain unfettered
access to a PayPal account linked to the handset.

The hack, by researchers at Germany's Security Research Labs, is the latest
to show the drawbacks of using fingerprints, iris scans, and other physical
characteristics to authenticate an owner's identity to a computing
device. ...

http://arstechnica.com/security/2014/04/fingerprint-lock-in-samsung-galaxy-5-easily-defeated-by-whitehat-hackers/

------------------------------

Date: Sun, 13 Apr 2014 02:53:01 +0100
From: Toby Douglass <trd () 45mercystreet net>
Subject: Unintended Denial of Service by Banking Security

I recently lived in New York City for a period.  I opened an account with
the Bank of America, an international account, sans SSN.  Upon departing, I
moved to Tunis and began using my Dutch bank account, for the lower card-use
charges, and so wished to transfer my US balance to the Dutch account.

Upon coming to make this transfer, I discovered that BoA, to prevent
accounts being emptied by attackers, imposes a transfer limit of one
thousand US dollars per day.  It is possible to raise the limit to ten
thousand dollars (which of course even then may or may not be enough) by
subscribing to a two-factor authentication scheme, but this scheme, although
usable outside the USA, can only be subscribed to within the USA.

It is fair to say BoA has indeed prevented accounts from being emptied, by
dint of applying this security mechanism, but emptying accounts is in fact
normal functionality and the mechanism to permit this normal functionality
is offered only to a subset of users.

As matters stand, customers who leave the US and then discover this security
mechanism must pay a 2.5% fee to remit their balance: each transfer is
capped at 1,000 USD and carries a 25 USD fee.

------------------------------

Date: Mon, 14 Apr 2014 11:13:50 -0700
From: Gene Wirchenko <genew () telus net>
Subject: "Microsoft confirms it's dropping Windows 8.1 support"
  (Woody Leonhard)

[Warning: AFAICS, the problem is that a new updating system has to be
installed, but many are having trouble installing it, and Microsoft is
cutting support for those who do not have the update.  The headline is
somewhat accurate and somewhat misleading.]

Woody Leonhard, *InfoWorld*, 14 Apr 2014
Microsoft TechNet blog makes clear that Windows 8.1 will not be patched;
  users must get Windows 8.1 Update if they want security patches
http://www.infoworld.com/t/microsoft-windows/microsoft-confirms-its-dropping-windows-81-support-240407

------------------------------

Date: Sun, 7 Oct 2012 20:20:16 -0900
From: RISKS-request () csl sri com
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest. Its Usenet manifestation is
 comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
 if possible and convenient for you.  The mailman Web interface can
 be used directly to subscribe and unsubscribe:
   http://lists.csl.sri.com/mailman/listinfo/risks
 Alternatively, to subscribe or unsubscribe via e-mail to mailman
 your FROM: address, send a message to
   risks-request () csl sri com
 containing only the one-word text subscribe or unsubscribe.  You may
 also specify a different receiving address: subscribe address= ... .
 You may short-circuit that process by sending directly to either
   risks-subscribe () csl sri com or risks-unsubscribe () csl sri com
 depending on which action is to be taken.

 Subscription and unsubscription requests require that you reply to a
 confirmation message sent to the subscribing mail address.  Instructions
 are included in the confirmation message.  Each issue of RISKS that you
 receive contains information on how to post, unsubscribe, etc.

=> The complete INFO file (submissions, default disclaimers, archive sites,
 copyright policy, etc.) is online.
   <http://www.CSL.sri.com/risksinfo.html>
 *** Contributors are assumed to have read the full info file for guidelines.

=> .UK users may contact <Lindsay.Marshall () newcastle ac uk>.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you NEVER send mail!
=> SUBMISSIONS: to risks () CSL sri com with meaningful SUBJECT: line.
 *** NOTE: Including the string `notsp' at the beginning or end of the subject
 *** line will be very helpful in separating real contributions from spam.
 *** This attention-string may change, so watch this space now and then.
=> ARCHIVES: ftp://ftp.sri.com/risks for current volume
     or ftp://ftp.sri.com/VL/risks for previous VoLume
 http://www.risks.org takes you to Lindsay Marshall's searchable archive at
 newcastle: http://catless.ncl.ac.uk/Risks/VL.IS.html gets you VoLume, ISsue.
   Lindsay has also added to the Newcastle catless site a palmtop version
   of the most recent RISKS issue and a WAP version that works for many but
   not all telephones: http://catless.ncl.ac.uk/w/r
 <http://the.wiretapped.net/security/info/textfiles/risks-digest/> .
==> PGN's comprehensive historical Illustrative Risks summary of one liners:
    <http://www.csl.sri.com/illustrative.html> for browsing,
    <http://www.csl.sri.com/illustrative.pdf> or .ps for printing
  is no longer maintained up-to-date except for recent election problems.
 *** NOTE: If a cited URL fails, we do not try to update them.  Try
  browsing on the keywords in the subject line or cited article leads.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

------------------------------

End of RISKS-FORUM Digest 27.84
************************

