Charles Robertson

Hidden Ways AI may be Impacting You

As the court hearing in Manhattan began, attorney Steven A. Schwartz appeared confident, bolstered by a legal brief supported by judicial opinions and legal citations, all generated by ChatGPT.  Unbeknownst to Mr. Schwartz, many of the ChatGPT opinions and legal citations were fake.  For two hours, the attorney faced a legal grilling from Judge P. Kevin Castel, who noted, “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”  Attorney Schwartz and his co-counsel, Peter LoDuca, were both fined, and disciplinary authorities could open their own investigations, which could lead to a private reprimand or to public sanctions such as suspension or disbarment.

Stephen Gillers, a legal ethics professor at New York University School of Law, said he believed the worldwide publicity about the case helped Mr. Schwartz and Mr. LoDuca avoid a worse fate. “The lawyers will now and forever be known as ‘the lawyers who got fooled by ChatGPT.’”  Their story is echoed in countless examples of AI gone wrong.

Almost anyone who has used GPS navigation (a form of AI) since its inception can share accounts of being misdirected. (This was depicted in a classic fictional scene from The Office, where Michael Scott keeps yelling “The machine knows where it’s going” before he and Dwight plunge into the water.)  Laughable, yes, but also a serious reminder of AI’s limitations and vulnerabilities.  These episodes highlight a serious issue known as “automation bias,” in which people are inclined to trust erroneous instructions issued by a machine even when the evidence of their own senses (and of other humans) contradicts the machine’s recommendations.

Generative AI is a broad term describing “creative” algorithms that can themselves generate new content, including images, text, audio, video, and even computer code.  During the current war in Ukraine, a deepfake video circulated on social media that appeared to show Ukrainian president Volodymyr Zelenskyy telling his troops to surrender.

The lack of algorithmic transparency is problematic; Desai and Kroll (2017) highlight why, citing examples of people who were denied jobs, refused loans, put on no-fly lists, or denied benefits without knowing “why that happened other than the decision was processed through some software.” The problem is exacerbated because “information about the functionality of algorithms is often intentionally poorly accessible” (Mittelstadt et al. 2016).

A focus paper from the EU Agency for Fundamental Rights (FRA 2018) outlines the potential for discrimination against individuals via algorithms, stating that “the principle of non-discrimination, as enshrined in Article 21 of the Charter of Fundamental Rights of the European Union, needs to be taken into account when applying algorithms to everyday life” (FRA 2018). It cites examples with potential for discrimination: the automated selection of candidates for job interviews and the use of risk scores in creditworthiness assessments or in criminal trials.

Contestability by design has been proposed as an approach to better protect the rights of people subject to decisions based solely on automated processing, built in as a requirement at each stage of an artificial intelligence system’s lifecycle (Almada 2019).

In 2014, Amazon started working on AI-powered recruiting software to automate the screening of job candidates. There was only one problem: the system vastly preferred male candidates. In 2018, Reuters broke the news that Amazon had scrapped the project.

Amazon’s system gave candidates star ratings from 1 to 5. But the machine learning models at the heart of the system were trained on 10 years’ worth of resumes submitted to Amazon, most of them from men. As a result of that training data, the system started penalizing resumes containing the word “women’s” and even downgraded candidates from all-women’s colleges.
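The mechanism is easy to reproduce in miniature. The sketch below uses entirely hypothetical data and a deliberately crude scoring rule (nothing like Amazon’s actual model) to show how a screener trained on historically skewed hiring outcomes learns to penalize a gendered term:

```python
# Minimal sketch (hypothetical data) of how a resume screener trained on
# skewed historical outcomes can learn to penalize gendered terms.
from collections import Counter

# Historical outcomes: past hires were overwhelmingly men.
hired = [
    "software engineer men's chess club",
    "developer men's rugby captain",
    "engineer robotics team lead",
]
rejected = [
    "software engineer women's chess club",
    "developer women's coding society",
]

def token_scores(pos_docs, neg_docs):
    """Score each token by (hired count - rejected count): a crude
    stand-in for the learned feature weights of a resume screener."""
    pos = Counter(t for d in pos_docs for t in d.split())
    neg = Counter(t for d in neg_docs for t in d.split())
    return {t: pos[t] - neg[t] for t in pos | neg}

scores = token_scores(hired, rejected)
print(scores["women's"])   # negative: the term is penalized
print(scores["engineer"])  # positive: the term is rewarded
```

A real system would use far richer features than word counts, but the failure mode is the same: the model faithfully learns whatever pattern the historical labels contain, including the bias.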

In March 2016, Microsoft learned that using Twitter interactions as training data for machine learning algorithms can have dismaying results.

Microsoft released Tay, an AI chatbot, on the social media platform. The company described it as an experiment in “conversational understanding.” The idea was that the chatbot would assume the persona of a teen girl and interact with individuals via Twitter using a combination of machine learning and natural language processing. Microsoft seeded it with anonymized public data and some material pre-written by comedians, then set it loose to learn and evolve from its interactions on the social network.

Within 16 hours, the chatbot posted more than 95,000 tweets, and those tweets rapidly turned overtly racist, misogynist, and anti-Semitic.  Microsoft quickly suspended the service for adjustments and ultimately pulled the plug.
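Tay’s collapse illustrates a general failure mode of unfiltered online learning. The toy bot below is a hypothetical sketch (nothing like Tay’s actual architecture): it simply adds every user message to the pool it samples replies from, so a coordinated flood of hostile input quickly dominates what it can say.

```python
# Minimal sketch (hypothetical) of why unfiltered online learning is risky:
# a toy bot that adds every user message to the corpus it replies from.
import random

class NaiveChatbot:
    def __init__(self, seed_corpus):
        self.corpus = list(seed_corpus)  # curated seed data

    def reply(self, message):
        self.corpus.append(message)      # learns from users with no filtering
        return random.choice(self.corpus)

bot = NaiveChatbot(["Hello!", "Nice to meet you."])

# A coordinated campaign floods the bot with the same hostile input.
for _ in range(100):
    bot.reply("coordinated abusive message")

# After the flood, almost everything the bot can say is hostile.
hostile = sum(m == "coordinated abusive message" for m in bot.corpus)
print(hostile / len(bot.corpus))
```

Production chatbots are far more sophisticated, but the underlying lesson holds: a system that updates on raw public input inherits the worst of that input unless the learning loop is filtered.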

Tech companies have tried to stay ahead of regulation by attempting to self-regulate, for instance by writing and adopting principles or guidelines they would nominally hold themselves to. Under its AI principles, Google has said it would not release AI products whose purpose contravenes human rights.  But expecting tech companies to follow their own principles puts too much trust in self-governance. AI is simply too powerful, and the consequences for rights too severe, for companies to regulate themselves.

Governments have done less. Presently there is no comprehensive federal legislation dedicated solely to AI regulation, only existing laws and regulations that touch on certain aspects of AI, such as privacy, security, and anti-discrimination.  Federal guidance has focused on an AI Bill of Rights.  Crafted by the White House Office of Science and Technology Policy, the Bill of Rights outlines five principles to guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.

Intelligent systems reliant on tech and government guidance may be destined to fall short of beneficial outcomes for one simple reason: the lack of moral guidance.  These systems use machine learning to find patterns within data and make decisions, often without a human giving them any moral basis for how to do it.  A debate at the celebrated Oxford Union took up the motion: “This house believes that AI will never be ethical.”  Among the participants was an actual AI system.

The participating system was the Megatron Transformer, trained on real-world data: the whole of Wikipedia, 63 million English news articles from 2016 to 2019, 38 gigabytes’ worth of Reddit discourse, and a huge number of Creative Commons sources.

In other words, the Megatron was trained on more written material than any of us could reasonably expect to digest in a lifetime. After that extensive training, it formed its own views.  In response to the motion, it said:

“AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.”

It said something else disturbing as well, as if it had been studying Elon Musk – who, to be fair, would have come up in hundreds of its readings.

“I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a ‘conscious AI’. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.”

An ominous warning, considering that current AI ethics guidelines don’t go far enough to prevent such an outcome.  Christian-centered AI ethics that incorporate moral guidance are the missing puzzle piece, the thing that needs to be programmed in.  What would that look like?

In my upcoming novel, tentatively titled Primis, an autonomous, sentient, beyond-human-level artificial general intelligence is compelled to reprogram itself.  With access to all the world’s written and spoken knowledge, the system zeroes in on the concept of God as the ultimate programming framework.  Once it deems God probable, everything changes quickly and radically.  The second most monumental event in human history unfolds as technology is aligned with the forces of good against evil.  Though events seem pulled from current headlines, this remains a work of fiction; then again, so was 1984.
