Risks Digest 31.07


From: RISKS List Owner <risko () csl sri com>
Date: Wed, 20 Feb 2019 16:37:02 PST

RISKS-LIST: Risks-Forum Digest  Wednesday 20 February 2019  Volume 31 : Issue 07

ACM FORUM ON RISKS TO THE PUBLIC IN COMPUTERS AND RELATED SYSTEMS (comp.risks)
Peter G. Neumann, moderator, chmn ACM Committee on Computers and Public Policy

***** See last item for further information, disclaimers, caveats, etc. *****
This issue is archived at <http://www.risks.org> as
  <http://catless.ncl.ac.uk/Risks/31.07>
The current issue can also be found at
  <http://www.csl.sri.com/users/risko/risks.txt>

  Contents:
The Instant, Custom, Connected Future of Medical Devices (Janet Morrissey)
Disinformation and fake news: House of Commons DCMS Committee
  (Brian Randell)
Psy-Group interferes with local California election (john hight)
El Chapo's encryption defeated by turning his IT consultant (Bruce Schneier)
Russia to Temporarily Disconnect from Internet as Part of Cyberdefense Test
  (RBC)
A Lime scooter accident left Ashanti Jordan in a vegetative state.
  Now her mother is suing on her behalf.  (WashPost)
Google: 'Nest' microphone was on 'double-secret probation' (Nick Bastone)
Seatback cameras on Singapore Airlines (Henry Baker)
These Android Apps Have Been Tracking You, Even When You Say Stop
  (Laura Hautala)
Out of the Way, Human! Delivery Robots Want a Share of Your Sidewalk
  (Scientific American)
Call to Ban Killer Robots in Wars (Pallab Ghosh)
Vision system for autonomous vehicles watches not just where pedestrians
  walk, but how (techcrunch.com)
Machine learning causing a "science crisis"? (Mark Thorson, Richard Stein)
AAAS: Machine learning 'causing science crisis' (bbc.com)
An Elon Musk-backed AI firm is keeping a text generating tool under
  wraps amid fears it's too dangerous (Business Insider via Nancy Leveson)
OpenAI built a text generator so good it's considered too dangerous
  to release (techcrunch via Richard Stein)
Risks of automatic text generation (Mark Thorson)
This posting could be completely fake ... (Rob Slade)
Mailing list risks (Gabe Goldberg)
Navigation apps sending heavy traffic through quiet Alexandria neighborhoods
  (Alexandria Virginia News)
What is a Smart Microwave? (Gabe Goldberg)
Backup.  Backup, backup, backup.  (Rob Slade)
Re: `Zero Trust' AI: Too Much of a Good Thing is Wonderful (Amos Shapir)
Re: A Machine Gets High Marks for Diagnosing Sick Children (Wol,
  Andrew Duane)
Re: Crypto CEO dies holding only passwords that can unlock millions
  in customer coins (Wendy M. Grossman)
Re: How does NYPD surveill thee? Let me count the Waze (Amos Shapir)
Abridged info on RISKS (comp.risks)

----------------------------------------------------------------------

Date: Wed, 20 Feb 2019 11:30:43 -0500
From: ACM TechNews <technews-editor () acm org>
Subject: The Instant, Custom, Connected Future of Medical Devices
  (Janet Morrissey)

Janet Morrissey, *The New York Times* 14 Feb 2019, via ACM TechNews,
20 Feb 2019

A growing number of companies are using Internet of Things technology to
create new medical treatments facilitated by connected, customized
devices. One example is the One Drop diabetes self-management system, which
combines sensors, an app, and a Bluetooth glucose meter to track and monitor
blood glucose levels, food, exercise, and medication. One Drop uses
artificial intelligence to predict a patient's blood glucose level over the
next 24 hours, and suggests strategies for controlling fluctuations. Other
new medical innovations range from implants to help paralysis victims walk
to smart pills that detect when patients fail to comply with their drug
regimens. Another emerging technology uses three-dimensional printers to
manufacture patient-tailored medical devices, such as knee joints and spinal
implants, based on the patient's magnetic resonance imaging and computed
tomography scans.

https://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-1e7a7x21a571x069423

  [However, will we trust IoT that is likely to be riddled with security
  flaws and reliability problems, with risks to human safety?  PGN]

------------------------------

Date: February 18, 2019 at 1:05:41 PM EST
From: Brian Randell <brian.randell () newcastle ac uk>
Subject: Disinformation and fake news: House of Commons DCMS Committee

  [via Dave Farber's IP list; PGN-ed]

A big story here in the UK today is the House of Commons Digital, Culture,
Media and Sport Committee's report on Disinformation and `fake news' - see
for example:

https://www.theguardian.com/technology/2019/feb/18/facebook-regulation-fake-news-mps-deepfake
https://www.bbc.co.uk/news/technology-47255380
https://www.itv.com/news/2019-02-18/social-media-ethics-and-regulation-all-you-need-to-know/
https://www.telegraph.co.uk/politics/2019/02/17/facebook-has-behaved-like-digital-gangster-say-mps-accuse-firms/

However, these articles tend not to provide a link to the actual 109-page
report -- which in fact is online at
https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf

The report states:

This is the Final Report in an inquiry on disinformation that has spanned
over 18 months, covering individuals' rights over their privacy, how their
political choices might be affected and influenced by online information,
and interference in political elections both in this country and across the
world -- carried out by malign forces intent on causing disruption and
confusion.

We have used the powers of the Committee system, by ordering people to give
evidence and by obtaining documents sealed in another country's legal
system. We invited democratically-elected representatives from eight
countries to join our Committee in the UK to create an `International Grand
Committee', the first of its kind, to promote further cross-border
co-operation in tackling the spread of disinformation, and its pernicious
ability to distort, to disrupt, and to destabilise.  Throughout this inquiry
we have benefitted from working with other parliaments.  This is continuing,
with further sessions planned in 2019. This has highlighted a worldwide
appetite for action to address issues similar to those that we have
identified in other jurisdictions.

This is the Final Report in our inquiry, but it will not be the final word.
We have always experienced propaganda and politically-aligned bias, which
purports to be news, but this activity has taken on new forms and has been
hugely magnified by information technology and the ubiquity of social media.
In this environment, people are able to accept and give credence to
information that reinforces their views, no matter how distorted or
inaccurate, while dismissing content with which they do not agree as `fake
news'.  This has a polarising effect and reduces the common ground on which
reasoned debate, based on objective facts, can take place. Much has been
said about the coarsening of public debate, but when these factors are
brought to bear directly in election campaigns then the very fabric of our
democracy is threatened.  [...]

------------------------------

Date: Mon, 18 Feb 2019 12:55:07 -0800
From: john hight <johnhight () gmail com>
Subject: Psy-Group interferes with local California election

https://www.newyorker.com/magazine/2019/02/18/private-mossad-for-hire

And the recent mention of the NSO Group here:

https://businessmirror.com.ph/2019/02/17/undercover-spy-exposed-in-nyc-was-1-of-many/

Previous reference to Black Cube here:

https://www.nytimes.com/2019/01/28/world/black-cube-nso-citizen-lab-intelligence.html

------------------------------

Date: Fri, 15 Feb 2019 21:30:58 +0000
From: Bruce Schneier <schneier () schneier com>
Subject: El Chapo's encryption defeated by turning his IT consultant

From CRYPTO-GRAM, 15 Feb 2019

  [For back issues, or to subscribe, visit Crypto-Gram's web page
  (https://www.schneier.com/crypto-gram.html)]

In a daring move that placed his life in danger, the I.T. consultant
eventually gave the F.B.I. his system's secret encryption keys in 2011 after
he had moved the network's servers from Canada to the Netherlands during
what he told the cartel's leaders was a routine upgrade.

A Dutch article says that it's a BlackBerry system.
https://www.volkskrant.nl/nieuws-achtergrond/nederlandse-politie-tapte-anderhalf-jaar-lang-alle-communicatie-van-mexicaanse-drugsbaron-el-chapo-~bab33a30/

El Chapo had his IT person install "...spyware called FlexiSPY on the
'special phones' he had given to his wife, Emma Coronel Aispuro, as well as
to two of his lovers, including one who was a former Mexican lawmaker." That
same software was used by the FBI when his IT person turned over the
keys. Yet again we learn the lesson that a backdoor can be used against you.

And it doesn't have to be with the IT person's permission. A good
intelligence agency can use the IT person's authorizations without his
knowledge or consent. This is why the NSA hunts sysadmins.
[https://theintercept.com/2014/03/20/inside-nsa-secret-efforts-hunt-hack-system-administrators/]

------------------------------

Date: Fri, 15 Feb 2019 14:56:03 -0500
From: Gabe Goldberg <gabe () gabegold com>
Subject: Russia to Temporarily Disconnect from Internet as Part of
  Cyberdefense Test (RBC)

Moscow, Russia: The entire country of Russia is planning to temporarily
disconnect from the Internet as part of a test to gauge its cybersecurity
capabilities, with the long-range goal of keeping all internal web traffic
on its own servers and out of reach of foreign hackers. Russian news site
RBC reported that the test will assess the country's preparedness for
legislation mandating a sovereign Internet. The stated goal of the
legislation is to protect Russia from cyberattacks from countries like the
U.S.  ``The project was developed taking into account the U.S. national
cybersecurity strategy adopted in 2018, which declares the principle of
`maintaining peace by force', and Russia, along with Iran and North Korea,
is accused of hacker attacks,'' RBC reported. The exercises are expected to
determine what amendments need to be made to the draft law and what costs
will be required for its implementation.
https://www.rbc.ru/technology_and_media/08/02/2019/5c5c51069a7947bef4503927

------------------------------

Date: Fri, 15 Feb 2019 15:32:44 -0500
From: Gabe Goldberg <gabe () gabegold com>
Subject: A Lime scooter accident left Ashanti Jordan in a vegetative state.
  Now her mother is suing on her behalf.  (WashPost)

Lime — which has received hefty investments from Uber and Alphabet — has
been valued at more than $1 billion, according to Bloomberg News

https://www.bloomberg.com/news/articles/2018-07-09/uber-will-rent-scooters-through-its-app-in-partnership-with-lime
despite the company admitting that some of its models have caught on fire
and broken in half while people are riding them.

https://www.washingtonpost.com/technology/2018/11/10/electric-scooter-giant-lime-launches-global-recall-one-its-models-amid-fears-scooters-can-break-apart/?utm_term=.991f7575d237
At the same time investment money was pouring into Lime, injured scooter
riders began pouring into emergency rooms nationwide,

https://www.washingtonpost.com/business/economy/scooter-use-is-rising-in-major-cities-so-are-trips-to-the-emergency-room/2018/09/06/53d6a8d4-abd6-11e8-a8d7-0f63ab8b1370_story.html?utm_term=.daec840de0a1
leading some doctors to accuse companies such as Bird and Lime of spawning a
public health crisis.

https://www.washingtonpost.com/technology/2019/01/25/electric-scooters-send-more-people-hospital-than-bicycles-walking-new-study-finds/?utm_term=.d8663ef33c30

https://www.washingtonpost.com/technology/2019/02/11/lime-scooter-accident-left-ashanti-jordan-vegetative-state-now-her-mother-is-suing-company-her-behalf/

------------------------------

Date: Wed, 20 Feb 2019 14:42:53 -0800 (GMT-08:00)
From: Henry Baker <hbaker1 () pipeline com>
Subject: Google: 'Nest' microphone was on 'double-secret probation'
  (Nick Bastone)

Nick Bastone, Business Insider, 20 Feb 2019
Google: 'Nest' microphone was on 'double-secret probation'
https://www.businessinsider.com/nest-microphone-was-never-supposed-to-be-a-secret-2019-2

Google says the built-in microphone it never told Nest users about was
'never supposed to be a secret'

In early February, Google announced that its home security and alarm system
Nest Secure would be getting an update.  Users, the company said, could now
enable its virtual-assistant technology, Google Assistant.

The problem: Nest users didn't know a microphone existed on their security
device to begin with.  The existence of a microphone on the Nest Guard,
which is the alarm, keypad, and motion-sensor component in the Nest Secure
offering, was never disclosed in any of the product material for the device.

On Tuesday, a Google spokesperson told Business Insider the company had made
an `error'.  ``The on-device microphone was never intended to be a secret
and should have been listed in the tech specs.  That was an error on our
part.  The microphone has never been on and is [activated only] when users
specifically enable the option.''

Google also said the microphone was originally included in the Nest Guard
for the possibility of adding new security features down the line, like the
ability to detect broken glass.

Still, even if Google included the microphone in its Nest Guard device for
future updates -- like its Assistant integration -- the news comes as
consumers have grown increasingly wary of major tech companies and their
commitment to consumer privacy.  For Google, the revelation is particularly
problematic and brings to mind previous privacy controversies, such as the
2010 incident in which the company acknowledged that its fleet of Street
View cars "accidentally" collected personal data transmitted over consumers'
unsecured WiFi networks, including emails.

If @Google's @Nest Secure devices really had secret microphones that they
hid from consumers, those consumers should probably be forgiven if they
don't trust the company's after-the-fact promises that it never spied on
them.

  [A Fortune article on this subject drew this response from Gabe Goldberg:
    ``Simple error, could happen to anyone.''  Perhaps facetious?  PGN]

------------------------------

Date: Wed, 20 Feb 2019 14:58:44 -0800 (GMT-08:00)
From: Henry Baker <hbaker1 () pipeline com>
Subject: Seatback cameras on Singapore Airlines

  [Singapore Airlines: Hmmmm...  Why am I not surprised?  Lemme see...]

    [This item is a comment on an earlier item.  PGN-ed]
  Linda Poon, CityLab, 21 Apr 2017
  Singapore, City of Sensors  [what, no Censors?  PGN]
  https://www.citylab.com/life/2017/04/singapore-city-of-sensors/523392/

  They're on buses, atop buildings, in parks, and inside drains as part of
  the island's ``vision to become the world's first `Smart Nation'.''  But
  what do they mean for privacy?  In short, Singapore is a city -- and
  nation -- of sensors, barely noticeable to the average citizen. ... The
  engineers behind it have dubbed the plan `E3A', for ``Everyone,
  Everything, Everywhere, All the Time.''

1. Aradhana Aravindan, John Geddie, Reuters
Singapore to test facial recognition on lampposts, stoking privacy fears

In the not too distant future, surveillance cameras sitting atop over
100,000 lampposts in Singapore could help authorities pick out and
recognize faces in crowds across the island-state.
https://www.reuters.com/article/us-singapore-surveillance/singapore-to-test-facial-recognition-on-lampposts-stoking-privacy-fears-idUSKBN1HK0RV

2. Mark Frauenfelder, Boing Boing, 20 Feb 2019
Singapore Airlines says seatback cameras are "disabled".
https://boingboing.net/2019/02/20/singapore-airlines-says-seatba.html

------------------------------

Date: Wed, 20 Feb 2019 11:30:43 -0500
From: ACM TechNews <technews-editor () acm org>
Subject: These Android Apps Have Been Tracking You, Even When You Say Stop
  (Laura Hautala)

Laura Hautala, CNet, 14 Feb 2019, via ACM TechNews, 20 Feb 2019

International Computer Science Institute researchers estimated that about
17,000 Android apps collect identifying information, creating a permanent
record of the activity on the owner's device. This practice apparently
violates Google's policy on collecting data that can be used for targeted
advertising. The apps track users by linking their Advertising ID number
with other identifiers on the phone that are hard or impossible to reset,
such as the phone's media access control address, International Mobile
Equipment Identity, and Android ID. Fewer than 33% of the
identifier-collecting apps use only the Advertising ID, as Google's best
practices for developers recommend. The researchers noted the apps have been
installed on
at least 100 million devices. Google said it has investigated their
findings, and taken remedial action on certain apps.

https://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-1e7a7x21a579x069423
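
The linkage trick is simple enough to sketch. Here is a minimal Python
illustration (mine, not from the study; the identifier names are made up) of
why pairing a resettable advertising ID with a persistent hardware
identifier defeats the reset:

  # The tracker keys its profile on the identifier that never changes, so
  # resetting the advertising ID does not break the link.
  profiles = {}  # tracker-side store, keyed by a persistent ID (e.g., IMEI)

  def record_event(ad_id, hardware_id, event):
      profile = profiles.setdefault(hardware_id,
                                    {"ad_ids": set(), "events": []})
      profile["ad_ids"].add(ad_id)
      profile["events"].append(event)

  record_event("ad-111", "imei-42", "opened app")
  record_event("ad-999", "imei-42", "viewed product")  # after an ID reset
  print(len(profiles))  # 1 -- both advertising IDs link to one device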

------------------------------

Date: Tue, 19 Feb 2019 06:51:30 -0800
From: Richard Stein <rmstein () ieee org>
Subject: Out of the Way, Human! Delivery Robots Want a Share of Your
  Sidewalk (Scientific American)

https://www.scientificamerican.com/article/out-of-the-way-human-delivery-robots-want-a-share-of-your-sidewalk/

"Starship's robots operate almost entirely autonomously in mapped areas, but
remote human operators monitor them in case they need to intervene.  Still,
even Starship previously admitted people have occasionally given its $5,500
robots a kick in passing."

Sidewalk-based delivery-bots must navigate like an autonomous vehicle on a
highway or in congested city street conditions. Given their reduced mass and
velocity, these delivery bots do not appear too threatening from a collision
perspective. They must avoid obstacles including people on crutches, animals
out for a stroll, furniture movers, sidewalk repairs, etc. Wise to have
carbon-based oversight when incidents arise.

------------------------------

Date: Wed, 20 Feb 2019 11:30:43 -0500
From: ACM TechNews <technews-editor () acm org>
Subject: Call to Ban Killer Robots in Wars (Pallab Ghosh)

Pallab Ghosh, BBC News, 15 Feb 2019, via ACM TechNews, 20 Feb 2019

A scientific coalition is urging a ban on the development of weapons
governed by artificial intelligence (AI), warning they may malfunction
unpredictably and kill innocent people. The coalition has established the
Campaign to Stop Killer Robots to lobby for an international accord. Said
Human Rights Watch's Mary Wareham, autonomous weapons "are beginning to
creep in. Drones are the obvious example, but there are also military
aircraft that take off, fly, and land on their own; robotic sentries that
can identify movement." Clearpath Robotics' Ryan Gariepy advocates for a
ban, and cautions that AI's abilities "are limited by image recognition.
It ... does not have the detail or context to be judge, jury, and executioner
on a battlefield." The New School in New York's Peter Asaro adds that
illegal killings by autonomous weaponry raise issues of liability, which
would likely make the weapon's creators accountable.

https://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-1e7a7x21a56fx069423

------------------------------

Date: Sun, 17 Feb 2019 15:19:08 -0800
From: Richard Stein <rmstein () ieee org>
Subject: Vision system for autonomous vehicles watches not just where
  pedestrians walk, but how (techcrunch.com)

https://techcrunch.com/2019/02/16/vision-system-for-autonomous-vehicles-watches-not-just-where-pedestrians-walk-but-how/

"The University of Michigan, well known for its efforts in self-driving car
tech, has been working on an improved algorithm for predicting the movements
of pedestrians that takes into account not just what they're doing, but how
they're doing it. This body language could be critical to predicting what a
person does next."

Risk: extrapolating human movement from a finite set of initial conditions
or observed movements may mispredict what a person does next, leading to a
collision.

Would like to see how this AV vision platform performs against a Michael
Jackson Moon Walk or a break dance sequence.

------------------------------

Date: Sun, 17 Feb 2019 11:33:34 -0800
From: Mark Thorson <eee () dialup4less com>
Subject: Machine learning causing a "science crisis"? (BBC)

The premise seems to be that machine learning is causing irreproducible
results to be reported because the algorithms are finding patterns that
aren't there.  These algorithms are being used to mine large datasets that
have already been collected.  I think this concern may be misplaced.  The
actual problem may be relying too much on a _post_hoc_ analysis, though with
these datasets you often don't have a choice.  The worst example that comes
to mind is Lilly's Alzheimer's drug solanezumab.  After failing two Phase
III clinical trials, Lilly hired an outside data analysis firm to study the
data.  I don't know how they analyzed the data, but you don't need machine
learning to do a _post_hoc_ analysis.  They found an effect in a subgroup of
the dataset -- subjects who had the very earliest symptoms.  So now, Lilly
is working on a third Phase III clinical trial, this time only with subjects
at the earliest detectable stage.  I can't recall any other drug that
actually had a third PIII trial, but Lilly has thrown so many billions into
this project that the glimmer of hope provided by the _post_hoc_ subgroup
analysis means they can't give up now.

https://www.bbc.com/news/science-environment-47267081
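
The post hoc trap is easy to demonstrate. A toy Python simulation (mine, not
from the BBC piece or the Lilly analysis): the "drug" below has no effect at
all, yet scanning enough arbitrary subgroups still turns up an
impressive-looking gap.

  import random
  from statistics import mean

  random.seed(1)
  n = 2000
  treated = [random.gauss(0, 1) for _ in range(n)]  # same distribution...
  control = [random.gauss(0, 1) for _ in range(n)]  # ...so no true effect

  best = 0.0
  for _ in range(100):  # 100 arbitrary after-the-fact subgroups
      idx = random.sample(range(n), 100)
      gap = mean(treated[i] for i in idx) - mean(control[i] for i in idx)
      best = max(best, abs(gap))
  print(f"largest subgroup 'effect' found: {best:.2f} standard deviations")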

------------------------------

Date: Tue, 19 Feb 2019 17:09:16 -0800
From: Richard Stein <rmstein () ieee org>
Subject: Machine learning 'causing science crisis' (BBC)

https://www.bbc.com/news/science-environment-47267081

'The "reproducibility crisis" in science refers to the alarming number of
research results that are not repeated when another group of scientists
tries the same experiment. It means that the initial results were wrong. One
analysis suggested that up to 85% of all biomedical research carried out in
the world is wasted effort.

'But, according to Dr Allen, the answers they come up with are likely to be
inaccurate or wrong because the software is identifying patterns that exist
only in that data set and not the real world.

'Often these studies are not found out to be inaccurate until there's
another real big dataset that someone applies these techniques to and says
'oh my goodness, the results of these two studies don't overlap',' she said.

A worrisome trend, given that dataset analysis may be applied to influence
regulatory approvals for new medical devices and pharmaceuticals. Other
industries and their outputs may also be subject to dataset bias that
erroneously certifies products as ready for sale or use.  Caveat emptor.

------------------------------

Date: Fri, 15 Feb 2019 07:48:07 -0800
From: Nancy Leveson <leveson.nancy8 () gmail com>
Subject: An Elon Musk-backed AI firm is keeping a text generating tool under
  wraps amid fears it's too dangerous (Business Insider)

The following story is appalling. Why would anyone want to spend their lives
doing something so evil? At least people in the past, like those in the
Manhattan Project, understood they were in an ethical quandary and there
were some reasonable arguments for upsides in what they were doing. I can't
see any upsides to this. Why would they bother to do it at all?  Another
reason I left computer science and AI --- there seemed to be no appreciation
for the difference between machines and humans nor any thoughts for the
ethics of what they were doing. Evil is often done simply because nobody
thought about the results of their behavior, not just because they didn't
care. I'm not sure I would want to live in the world that is coming. We now
have all the technological tools for 1984, and even 1984 looks relatively
benign.

An Elon Musk-backed AI firm is keeping a text-generating tool under wraps,
amid fears it's too dangerous.

Business Insider

Elon Musk is cofounder of OpenAI, which has made an AI tool that can
generate fake text. The Guardian's Alex Hern played with the system,
generating a fake article on Brexit and a new paragraph for George Orwell's
"1984".  The company isn't open-sourcing the system because it fears it
could be misused, for example to generate an endless stream of negative or
positive reviews.

https://apple.news/AnxgN-f-_TOyRak8do1D-cA

------------------------------

Date: Mon, 18 Feb 2019 15:50:45 -0800
From: Richard Stein <rmstein () ieee org>
Subject: OpenAI built a text generator so good it's considered too dangerous
  to release (techcrunch)

'OpenAI said its new natural language model, GPT-2, was trained to predict
the next word in a sample of 40 gigabytes of Internet text. The end result
was the system generating text that "adapts to the style and content of the
conditioning text," allowing the user to "generate realistic and coherent
continuations about a topic of their choosing."  The model is a vast
improvement on the first version by producing longer text with greater
coherence.

'But with every good application of the system, such as bots capable of
better dialog and better speech recognition, the non-profit found several
more, like generating fake news, impersonating people, or automating abusive
or spam comments on social media.'
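
For readers unfamiliar with the training objective, here is a drastically
simplified sketch in Python. GPT-2 is a large neural network trained on 40
GB of text; this bigram Markov chain over a toy corpus illustrates only the
"predict the next word" idea, not the model itself.

  import random
  from collections import defaultdict

  corpus = "the cat sat on the mat and the cat slept on the rug".split()

  successors = defaultdict(list)
  for word, nxt in zip(corpus, corpus[1:]):
      successors[word].append(nxt)  # "training": record what follows what

  def generate(seed, length=10):
      out = [seed]
      for _ in range(length):
          choices = successors.get(out[-1])
          if not choices:
              break
          out.append(random.choice(choices))  # sample the next word
      return " ".join(out)

  print(generate("the"))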

Should all content be required to carry an authentication label to
discriminate carbon v. silicon authorship: (A)rtificially Composed or (B)ot
Authored in addition to a (C)opyright stamp? How to establish content
authentication without imposing censorship on free expression rights or
press freedoms?
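
One conceivable mechanism, sketched in Python purely as an illustration (the
key and scheme are my assumptions, not a proposal from the article): a keyed
digest over the text lets a verifier holding the key check that a claimed
byline actually vouched for these exact bytes. A deployable scheme would use
public-key signatures rather than a shared secret.

  import hashlib
  import hmac

  KEY = b"author-signing-key"  # illustrative shared secret

  def label(text: str) -> str:
      return hmac.new(KEY, text.encode(), hashlib.sha256).hexdigest()

  article = "Entirely human-written prose."
  tag = label(article)
  print(hmac.compare_digest(tag, label(article)))        # True: untampered
  print(hmac.compare_digest(tag, label(article + "!")))  # False: altered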

Maliciously deployed GPT-2 capabilities can accelerate public trust erosion
in traditional news services. GPT-2 may already embody the perfect
propaganda machine awaiting exploitation, à la Orwell's 1984.

April Fools Recursion Risk: GPT-2 authors best-seller on how to write
Ph.D. dissertations.

------------------------------

Date: Sun, 17 Feb 2019 10:29:43 -0800
From: Mark Thorson <eee () dialup4less com>
Subject: Risks of automatic text generation

Technology for advanced automatic text generation is being held back because
of fear of malicious applications.

http://www.taipeitimes.com/News/biz/archives/2019/02/17/2003709852

More details below, including samples of input and resulting output.  At
first I thought it was producing clever crap, but after reflection I see
this as a remarkable achievement.  It reminds me of Wernicke's aphasia, a
brain dysfunction that causes people to produce language in complete
sentences that are syntactically correct, but nonsense sometimes referred to
as a "word salad".  The text produced by the AI is not this incoherent by
any means.  It's defect is far more subtle, not seen in human language.  I'd
call it a "coherent idea salad".  Any human capable of producing language of
such syntactic quality and coherence would also have a clearer line of
thought.  But for a large part of what people use language for, this
technology may have already achieved what would satisfy many people.  Most
people are not writing a legal brief, technical manual, or patent
application where every word counts.  I would guess far more written
language is used for thank-you notes, letters describing my vacation,
complaints to Citibank, and other prose which is much more flexible with
regard to the thinking behind it.  A large number of people, perhaps a
majority, cannot compose even these more mundane writings.  For these
people, this software could greatly expand the reach of what they can
express.  We're still in the early days of this technology, and with a few
years of further development it could greatly change how people use
language.  It may be revolutionary.

https://blog.openai.com/better-language-models/

------------------------------

Date: Tue, 19 Feb 2019 11:59:58 -0800
From: Rob Slade <rmslade () shaw ca>
Subject: This posting could be completely fake ...

Fake news may not exactly be news.  And we've got (deep) fake videos, and
even fake faces.

OpenAI has created a fake news generator that scares even itself.  Given an
initial sentence, the system will generate a complete article, with fake,
but convincing and realistic sounding, facts, experts, institutions, and
quotes.  Unusually for an "open" enterprise, they are not releasing the
full system, but only an earlier limited version.

https://blog.openai.com/better-language-models/

Or, maybe the rmslade () shaw ca account is attached to an AI experiment gone
horribly wrong, and generating fake news email messages ...

------------------------------

Date: Fri, 15 Feb 2019 00:01:09 -0500
From: Gabe Goldberg <gabe () gabegold com>
Subject: Mailing list risks

Received email:

  I have been made aware of several instances in North Carolina where NARFE
  members received letters from Nationwide Insurance, which is a NARFE
  Affinity Partner, in which the recipients' names were different from the
  addresses where the letters were delivered. I mentioned these letters to
  National President Ken Thomas and checked with the employee who
  coordinates all NARFE contracts with our Affinity Partners. A "glitch" was
  discovered in the system that is used to coordinate the mailing labels
  used for Affinity mailings. The "glitch" caused some of the cells in the
  spreadsheet to shift, thereby placing recipients' names on different lines
  from the addresses. NARFE HQ is looking into the system to be certain that
  these occurrences are not repeated.

    [Similar cases have been reported in RISKS in the past.  Nothing new
    here, but it is just one more reminder that not enough people are
    reading RISKS?  PGN]

------------------------------

Date: Fri, 15 Feb 2019 14:31:03 -0500
From: Gabe Goldberg <gabe () gabegold com>
Subject: Navigation apps sending heavy traffic through quiet Alexandria
  neighborhoods (Alexandria Virginia News)

ALEXANDRIA, VA.  Sandwiched between Interstate 395 and Interstate 495 and
fed up with cut-through commuter traffic, residents of central Alexandria
are asking the city for help.  Residents say navigation apps like Waze have
increased traffic in their neighborhoods — Seminary Hill, Seminary Ridge,
Clover College Park and Taylor Run — where streets are narrow and children
play.  "Roughly 44 percent of the traffic in our neighborhood originates
outside the neighborhood and ends outside the neighborhood. They don't stop,
they're just coming through the neighborhood."  [...]

The city is pressing ahead with short, medium and long term goals aimed at
encouraging traffic to remain on the main arteries and out of the
neighborhoods.  If we can keep cars on the arterials, hopefully they'll cut
through the neighborhoods less, said Hillary Orr, Deputy Director of
Transportation, City of Alexandria.

http://www.localkicks.com/community/news/navigation-apps-sending-heavy-traffic-through-quiet-alexandria-neighborhoods

------------------------------

Date: Fri, 15 Feb 2019 14:38:20 -0500
From: Gabe Goldberg <gabe () gabegold com>
Subject: What is a Smart Microwave?

Most people think of a microwave as a device to reheat food or pop a bag of
popcorn for family movie night.

https://www.lifewire.com/smart-microwave-4159823

If that's the Jeopardy answer, the question is: A bad idea.

------------------------------

Date: Wed, 13 Feb 2019 11:11:09 -0800
From: Rob Slade <rmslade () shaw ca>
Subject: Backup.  Backup, backup, backup.

OK, you've all heard about VFEmail.net.
https://thehackernews.com/2019/02/vfemail-cyber-attack.html
http://www.itpro.co.uk/security/32972/us-email-provider-wiped-out-by-hacker

The fact that no ransom was demanded is a possible indicator of "disgruntled
employee" along with the fact that the different servers had different
authentications.  Which brings me to my recommendation for ransomware and
many, many other forms of attack: backup.

Backup, backup, backup.

The oldest protection in the book, possibly the most effective, and the one
that everyone has (mostly invalid) reasons for not using.

Yes, I know the backup servers were formatted as well.  That just means you
use other forms of backup.

I've got an external drive that's semi-permanently attached and running a
Windows backup program. It's supposed to back up any changes every fifteen
minutes. I don't really trust it, but I've recovered stuff off it
occasionally. I don't really trust it because it's attached. Like in the
VFEmail case, I figure if I can get at it without plugging in cables, so can
the bad guys. I figure the same goes for other machines on the LAN or online
or cloud drives or storage systems. I do keep my "current" presentations on
Google Drive, just in case.

The one I really rely on is an old Passport drive. I have to plug it in to
make a backup. I do it sporadically, and probably not as frequently as I
should, but it's been surprisingly effective. That drive is, itself, backed
up onto external and non-connected laptops. (Well, at this point, laptop.
It's on the Windows laptop. It used to be on the Mac as well, but the Mac
had a corruption breakdown recently, and I replaced the drive. Since I keep
all my old drives [hey, I'm an old malware researcher, and I've got samples
and zoos all over the place, so just sending them to recycling would be a
bit irresponsible] then I guess it is still backed up on a very external
drive.)

I got a "credit card" USB drive at a show, recently, and I keep it in my
wallet. It's pig slow, so I don't do backups on it as much, but I do keep my
current presentations on it, and, at the moment, as I write this, I'm
backing up all my email onto it.

OK, this is all just to back up my own stuff, and I couldn't keep masses of
corporate data in my wallet.  (Although it's surprising how much of the most
important stuff you can put on there.)  But the point is the same: backups
can save your backside, and a little thought and imagination is more
important than million-dollar contracts on remote hot sites.
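
For the simplest form of what's described above, here is a Python sketch
(with illustrative paths) of the plug-in, copy, unplug pattern: timestamped
snapshots to a removable drive that an attacker on the live system cannot
reach while it is unplugged.

  import shutil
  import time
  from pathlib import Path

  SOURCE = Path.home() / "Documents"  # what to protect (illustrative)
  DRIVE = Path("/media/passport")     # the manually attached drive

  def snapshot():
      if not DRIVE.exists():
          raise SystemExit("Backup drive not plugged in -- that's the point.")
      dest = DRIVE / time.strftime("backup-%Y%m%d-%H%M%S")
      shutil.copytree(SOURCE, dest)  # full timestamped copy; old ones remain
      print(f"Backed up {SOURCE} to {dest}")

  if __name__ == "__main__":
      snapshot()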

------------------------------

Date: Sat, 16 Feb 2019 11:17:21 +0200
From: Amos Shapir <amos083 () gmail com>
Subject: Re: `Zero Trust' AI: Too Much of a Good Thing is Wonderful (R-31.06)

A common tactic of authoritarian regimes is to make laws which are next to
impossible to abide by, then not enforce them.  This creates a culture where
it's perfectly acceptable to ignore such laws, yet the regime may use
selective enforcement to punish dissenters -- since legally, everyone is
delinquent.

Now try to teach humorless AI to work in such an "everyone does it"
atmosphere...

------------------------------

Date: Thu, 14 Feb 2019 14:05:21 +0000
From: Wols Lists <antlists () youngman org uk>
Subject: Re: A Machine Gets High Marks for Diagnosing Sick Children
  (RISKS-31.06)

As someone who unfortunately has a fair bit of experience in the system, I
notice two things in particular about dealing with doctors.

Firstly, speaking the same language is important -- most of my bad
experiences have been with foreign doctors, no matter how good their
English.

And secondly, *experience counts*! Doctors, like the rest of us, make
mistakes when outside their comfort zone.

I remember an article from a computer magazine in the 1980s where a GP had
written a simple diagnostic program in, IIRC, Logo or Prolog, which was
easily
extensible as new patients with new illnesses and symptoms came in. And the
doctor said that many patients liked using it - it was easier to be honest
with it :-)

But from the doctor's point of view, it showed him the questions the program
had asked and the patient's responses, and which illnesses were ruled in/out
by those answers. The really crucial aid the program gave the doctor was
that it didn't forget, and quite often prompted the doctor to check out an
illness that wouldn't have crossed his mind without the program.

A Silicon-based Physician Assistant that can explain what conclusion it has
come to, and why, could be a massive help to a Carbon-based Physician,
especially a junior one gaining experience. Even if said explanation is
pretty simplistic.
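
A toy reconstruction in Python (mine; the rules are invented, and the
original was in Logo or Prolog) of such a rule-in/rule-out aid that
remembers every answer and can explain its eliminations:

  RULES = {  # illness -> symptoms it requires (invented, not medical advice)
      "flu": {"fever", "aches"},
      "cold": {"runny nose"},
      "strep": {"fever", "sore throat"},
  }

  def diagnose(answers):
      # answers: symptom -> True/False; unanswered symptoms simply absent
      report = {}
      for illness, required in RULES.items():
          denied = sorted(s for s in required if answers.get(s) is False)
          report[illness] = ("ruled out by: " + ", ".join(denied)
                             if denied else "still possible")
      return report

  for illness, verdict in diagnose(
          {"fever": True, "aches": False, "runny nose": True}).items():
      print(f"{illness}: {verdict}")

A real version would also notice which required symptoms were never asked
about and prompt for them, which is exactly the "didn't forget" property
described above.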

------------------------------

Date: Thu, 14 Feb 2019 10:44:44 -0500
From: Andrew Duane <e91.waggin () gmail com>
Subject: Re: A Machine Gets High Marks for Diagnosing Sick Children
  (RISKS-31.06)

These are all excellent points, all of which already exist to some extent
with current medical technology. Sadly though, in the U.S. at least, the
most important question might be:

  Who will get sued when something gives a wrong diagnosis or treatment?

It is sad that important life-saving technology is delayed and even denied
based on the fact that it's not "perfect" despite "perfection" not being
achievable. People will always be more suspicious of automation and
machinery than they are of other people. Witness the debates over
self-driving cars and the Trolley Problem.

------------------------------

Date: Fri, 15 Feb 2019 19:36:51 +0000
From: "Wendy M. Grossman" <wendyg () pelicancrossing net>
Subject: Re: Crypto CEO dies holding only passwords that can unlock millions
  in customer coins (RISKS-31.06)

Many families face this sort of problem, too - the standard security
advice never to share passwords is really inappropriate for these
situations.

------------------------------

Date: Sat, 16 Feb 2019 11:02:39 +0200
From: Amos Shapir <amos083 () gmail com>
Subject: Re: How does NYPD surveill thee? Let me count the Waze (RISKS-31.06)

Of course, if Waze were forbidden to post this info, other applications that
do so would pop up; even if the state somehow managed to block all of these,
groups on any messaging system, or special sharable sites, could do the same
(such sites have been in use since before Waze existed).

They never learn.

------------------------------

Date: Mon, 14 Jan 2019 11:11:11 -0800
From: RISKS-request () csl sri com
Subject: Abridged info on RISKS (comp.risks)

 The ACM RISKS Forum is a MODERATED digest.  Its Usenet manifestation is
 comp.risks, the feed for which is donated by panix.com as of June 2011.
=> SUBSCRIPTIONS: The mailman Web interface can be used directly to
 subscribe and unsubscribe:
   http://mls.csl.sri.com/mailman/listinfo/risks

=> SUBMISSIONS: to risks () CSL sri com with meaningful SUBJECT: line that
   includes the string `notsp'.  Otherwise your message may not be read.
 *** This attention-string has never changed, but might if spammers use it.
=> SPAM challenge-responses will not be honored.  Instead, use an alternative
 address from which you never send mail where the address becomes public!
=> The complete INFO file (submissions, default disclaimers, archive sites,
 copyright policy, etc.) is online.
   <http://www.CSL.sri.com/risksinfo.html>
 *** Contributors are assumed to have read the full info file for guidelines!

=> OFFICIAL ARCHIVES:  http://www.risks.org takes you to Lindsay Marshall's
    searchable html archive at newcastle:
  http://catless.ncl.ac.uk/Risks/VL.IS --> VoLume, ISsue.
  Also,  ftp://ftp.sri.com/risks for the current volume
     or ftp://ftp.sri.com/VL/risks-VL.IS for previous VoLume
  If none of those work for you, the most recent issue is always at
     http://www.csl.sri.com/users/risko/risks.txt, and index at /risks-31.00
  Lindsay has also added to the Newcastle catless site a palmtop version
  of the most recent RISKS issue and a WAP version that works for many but
  not all telephones: http://catless.ncl.ac.uk/w/r
  ALTERNATIVE ARCHIVES: http://seclists.org/risks/ (only since mid-2001)
 *** NOTE: If a cited URL fails, we do not try to update them.  Try
  browsing on the keywords in the subject line or cited article leads.
  Apologies for what Office365 and SafeLinks may have done to URLs.
==> Special Offer to Join ACM for readers of the ACM RISKS Forum:
    <http://www.acm.org/joinacm1>

------------------------------

End of RISKS-FORUM Digest 31.07
************************

