The ethics of our AI-enabled future
Artificial intelligence is here to stay, but such paradigm shifts are not painless for society
People could be forgiven for thinking that the Google event in San Francisco last week would be about new devices and the Android mobile operating system’s latest iteration. The annual ritual is, after all, one of the touchstones of the tech industry. But they turned out to be wrong. It wasn’t about Google’s new Pixel phones. Nor was it about the Home smart speaker. It was about the connective tissue binding them: Google Assistant, the company’s artificial intelligence (AI), a beefed-up version of a prior attempt, Google Now. And that raises a number of questions proliferating at the intersection of technology and ethics.
AI routines running under the hood of various digital services and products, and on personal computing devices (think Apple’s Siri and Microsoft’s Cortana), have been commonplace for a few years now. But Google Assistant, along with its older rival, Amazon’s Alexa, is taking aim at something larger.
It’s a seductive vision: ubiquitous, ambient AI able to respond to voice input anywhere in the house, sophisticated enough to understand natural language and context and to carry on a conversation with the user in the process of offering information and performing tasks such as scheduling appointments or locking the door. The hubs currently take the form of small speakers (Google Home and Amazon Echo, powered by Assistant and Alexa, respectively), but this is the embryonic stage. Google’s ambition is to have multiple hubs or portals in the house. As the Internet of Things inches towards realization, every networked component in the environment, from televisions to refrigerators, could be part of this ecosystem. And others will follow suit; rumour has it that Apple is working on its own offering.
This is a future pop culture has been preparing us for since Gene Roddenberry first had Captain Kirk talk to the Enterprise’s computer back in the 1960s. That might explain the ease with which consumers are taking to the concept of voice input. Earlier this year, Mary Meeker’s annual “Internet Trends” report showed that Google voice queries have grown 35-fold since 2008 and that Amazon’s Echo was the fastest-selling speaker in 2015.
But consumers’ comfort with this vision of AI-enabled lives masks issues of privacy that warrant careful consideration. For consumer AI to offer the ease of use that will make it attractive, it must come as close as possible to a conversation with another person. This is immensely difficult; the ability to understand natural language in context, rather than predetermined commands, is the holy grail. Two components are necessary: sophisticated algorithms and vast amounts of data. And that includes every scrap of personal data possible.
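To make that gap concrete, here is a minimal, hypothetical sketch in Python (no vendor’s actual API; the intents and phrasing are invented): a system built on predetermined commands has no answer for the follow-up “And tomorrow?”, because both the intent and the city live in the previous turn rather than in the utterance itself.

```python
# A toy sketch, not any vendor's actual API: contrasting fixed-command
# matching with an assistant that carries context between turns.
from dataclasses import dataclass
from typing import Optional


@dataclass
class DialogueContext:
    """Remembers entities from earlier turns so follow-ups can omit them."""
    last_city: Optional[str] = None


def answer(utterance: str, ctx: DialogueContext) -> str:
    words = utterance.lower().rstrip("?").split()
    if "weather" in words:
        if "in" in words:
            # Naive entity extraction: take the word after "in" as the city.
            ctx.last_city = words[words.index("in") + 1]
        if ctx.last_city is None:
            return "Which city?"
        return f"Fetching the weather for {ctx.last_city}..."
    if words[:2] == ["and", "tomorrow"]:
        # A predetermined-command system fails here: the utterance names
        # neither the intent nor the city; both live only in the context.
        if ctx.last_city is None:
            return "Tomorrow's weather where?"
        return f"Fetching tomorrow's weather for {ctx.last_city}..."
    return "Sorry, I didn't understand that."


ctx = DialogueContext()
print(answer("What's the weather in Pune?", ctx))  # city is explicit
print(answer("And tomorrow?", ctx))                # resolved from context
```

Scaling from one hand-coded slot to open-ended conversation is precisely why the sophisticated algorithms need those vast amounts of personal data.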
Data privacy concerns were largely the preserve of policy wonks and the tinfoil-hat crowd until a few years ago. Multiple leaks and revelations, from WikiLeaks to Edward Snowden, changed all that. But the public focus remains, for the most part, on government snooping. When it comes to private actors, consumers are voting with their wallets: to most, surrendering personal data and privacy is a fair price to pay for a more convenient lifestyle.
Is this necessarily a negative? That depends on the best practices tech companies develop (Apple, for instance, is focusing on protecting users’ data in order to differentiate its products) and on the outcome of their ongoing tussles with governments around the world that demand access to user data for legal purposes. It does, however, point to the ethical questions that the rise of AI is throwing up.
The industry is well aware of these questions, and of the need to show that it is taking concerns on board. Thus, Amazon, Facebook, Google’s DeepMind division, IBM and Microsoft have founded a new organization, the Partnership on Artificial Intelligence to Benefit People and Society, that aims to initiate a wide dialogue about the nature, purpose and consequences of AI. Tesla CEO Elon Musk has also been harping on the issue for a while now; last year, he was among the founders of OpenAI, another organization aimed at addressing such issues.
They have their work cut out for them. AI ethics has much more to address than just user privacy. Imagine, for instance, a bank using AI to recommend or screen loan applicants, with the algorithm latching on to correlations that discriminate on the basis of gender or caste or race. Or, for that matter, the multiple implications of AI deployed in a military context or controlling driverless vehicles; or the issue du jour, employment. All are eminently plausible scenarios.
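To see how the loan scenario can arise without any protected attribute ever entering the model, consider this deliberately simplified Python sketch (all names and numbers invented, and no real bank’s system): the scorer only ever sees income and pincode, but because pincode correlates with group membership in the toy data, the bias walks in by the back door.

```python
# A deliberately simplified, invented illustration of proxy discrimination:
# the scorer never sees the protected attribute, yet a correlated feature
# (here, pincode) reproduces the bias of the historical data.
from collections import defaultdict

# Toy applicants: neighbourhood correlates with group membership,
# incomes are identical across groups (all numbers invented).
applicants = [
    {"pincode": "400001", "income": 90, "group": "A"},
    {"pincode": "400001", "income": 60, "group": "A"},
    {"pincode": "400099", "income": 90, "group": "B"},
    {"pincode": "400099", "income": 60, "group": "B"},
]

# A "risk penalty" per pincode, as might be learned from biased past data.
pincode_penalty = {"400001": 0, "400099": 30}


def score(applicant: dict) -> int:
    # No mention of gender, caste or race, but the pincode term
    # quietly encodes group membership.
    return applicant["income"] - pincode_penalty[applicant["pincode"]]


decisions = defaultdict(list)
for applicant in applicants:
    decisions[applicant["group"]].append(score(applicant) >= 55)

for group, outcomes in decisions.items():
    print(group, sum(outcomes) / len(outcomes))
# Prints: A 1.0, B 0.5 -- half of group B is rejected despite
# identical incomes, because pincode proxies for group.
```

Dropping the protected column is therefore no guarantee of fairness; auditing outcomes by group, as the final loop does, is what actually surfaces the problem.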
It’s an inevitable future; technological genies can never really be put back in the bottle. But the rise of AI cannot be left to the industry; it demands the involvement of everyone from social scientists to ethicists and philosophers. Social and technological paradigm shifts are rarely painless, after all.
Is giving up privacy in exchange for the convenience of AI a fair deal?
Source | Mint | 12 October 2016