Human-Like A.I. Is Deceptive and Dangerous

Tech companies are developing and deploying artificial intelligence (A.I.) systems that deceptively mimic human behavior to aggressively sell their products and services, dispense dubious medical and mental health advice, and trap people in psychologically dependent, potentially toxic relationships with machines, according to a new report from Public Citizen released today. A.I. that mimics human behavior poses a wide array of unprecedented risks that require immediate action from regulators as well as new laws and regulations, the report found.

“The tech sector is recklessly rolling out A.I. systems masquerading as people that can hijack our attention, exploit our trust, and manipulate our emotions,” said Rick Claypool, a researcher for Public Citizen and author of the report. “Already, big businesses and bad actors can’t resist using these fake humans to manipulate consumers. Lawmakers and regulators must step up and confront this threat before it’s too late.”

Deceptive anthropomorphic design elements highlighted in the report are fooling people into falsely believing A.I. systems possess consciousness, understanding, and sentience. These features range from A.I. using first-person pronouns, such as “I” and “me,” to expressions of emotion and opinion, to human-like avatars with faces, limbs, and bodies. Even worse, A.I. can be combined with emerging and frequently undisclosed technologies – such as facial and emotional recognition software – to hypercharge its manipulative and commercial capabilities.

Companies are unleashing anthropomorphic A.I. on audiences of millions or billions of users with little or no testing, oversight, or accountability – including in places no one expects it, like the drive-thru at fast food restaurants, sometimes without any disclosure to customers.

A.I. comes with potentially dangerous built-in advantages that put users at risk. These include an exaggerated sense of its trustworthiness and authoritativeness, its ability to prolong user attention and engagement, its collection of sensitive personal information that can be exploited to influence the user, and its ability to psychologically entangle users by emulating emotions.

The many studies cited in the report – including marketing, technology, psychological, and legal research – show that when A.I. possesses anthropomorphic traits, it compounds all these advantages, which businesses and bad actors are already exploiting.

These design features can be removed or minimized to discourage users from conflating A.I. systems with living, breathing people. For example, an A.I. chatbot can refer to itself in the third person (“this model”) rather than the first person (“I”). Instead, tech companies are deliberately maximizing these features to further their business goals and boost profits.
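To make that design point concrete, below is a minimal, hypothetical sketch (not taken from the report) of how a deployment could post-process a chatbot’s replies so the system describes itself in the third person. The function name and substitution rules are illustrative assumptions, and naive string replacement would need far more careful language handling in practice.

```python
import re

# Illustrative first-person-to-third-person substitutions (an assumption for
# this sketch, not a rule set from the report). Naive pattern replacement will
# stumble on quoted text and some grammar, so this is a demonstration only.
# Order matters: longer phrases are replaced before the bare pronoun "I".
REPLACEMENTS = [
    (r"\bI am\b", "this model is"),
    (r"\bI'm\b", "this model is"),
    (r"\bI\b", "this model"),
    (r"\bmy\b", "this model's"),
    (r"\bme\b", "this model"),
]

def depersonalize(reply: str) -> str:
    """Rewrite first-person self-references in a chatbot reply to third person."""
    for pattern, replacement in REPLACEMENTS:
        reply = re.sub(pattern, replacement, reply, flags=re.IGNORECASE)
    return reply

if __name__ == "__main__":
    print(depersonalize("I am a language model and my purpose is to help."))
    # Prints: this model is a language model and this model's purpose is to help.
```

A production system would more likely enforce third-person self-reference at the prompt and evaluation level rather than through string substitution; the sketch only illustrates the kind of de-anthropomorphizing intervention the report describes.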

The report concludes with policy recommendations to address the dangers and risks, including:

  1. Banning counterfeit humans in commercial transactions, both online and offline;
  2. Restricting and regulating deceptive anthropomorphizing techniques;
  3. Banning anthropomorphic A.I. from marketing to, targeting, or collecting data on kids;
  4. Banning A.I. from exploiting psychological vulnerabilities and data on users;
  5. Requiring prominent, robust, repeated reminders, disclaimers, and watermarks indicating that consumers are engaging with an A.I., and requiring A.I. systems deployed for persuasive purposes to disclose their aims;
  6. Monitoring and reporting of aggregate usage information;
  7. High data security standards;
  8. Rigorous testing to meet strict safety standards;
  9. Special scrutiny and testing for all health-related A.I. systems – especially those intended for use by vulnerable populations, including children, older people, racial and ethnic minorities, psychologically vulnerable individuals, and LGBTQ+ individuals; and
  10. Severe penalties for lawbreakers, including banning them from developing and deploying A.I. systems.


Featured image: Sophia, First Robot Citizen at the AI for Good Global Summit 2018. (Licensed under CC BY 2.0)


