School of Education

ChatGPT & AI Chatbots

Module 7. What Societal Issues Do AI Chatbots Pose?

The rapid rise of generative artificial intelligence poses many problems, because people are beginning to use it without understanding how it works or where it can lead. Recently Geoffrey Hinton, who was one of Google’s lead AI scientists, left the company because he views the rollout as premature and dangerous. Known in the industry as the “Godfather of AI,” Hinton worries about the dangers he helped unleash. Follow this link for more on Hinton’s reservations about AI threats to society:

Tech Leaders Sounding the Alarm

Hinton is not the only industry leader sounding the alarm. Elon Musk, who provided startup funding for OpenAI, the company that created ChatGPT, has joined Apple co-founder Steve Wozniak and more than a thousand other tech leaders in signing an open letter urging a pause in the development of the most powerful AI systems until the risks can be more carefully considered:

Global Worries About Jobs

Some people worry that AI will take jobs away from working adults who need to earn a living. According to Goldman Sachs research, roughly 18% of work worldwide could be automated by AI. Here are the jobs potentially most at risk:

Schools Banning ChatGPT

Some students are not happy about their schools banning ChatGPT. Here are links to articles about bans in New York City schools and in Australian colleges and universities, where students with disabilities are defending their use of AI:

Deepfake Impersonation

A deepfake is AI-generated or AI-manipulated imagery, audio, or video that impersonates someone so convincingly that the general public believes the message is actually coming from the person being impersonated. Here is an article about AI voice filters that let you speak in the voice of another person:

Copyright

AI raises such important questions about copyright and fair use that the Library of Congress has created a website devoted to copyright and AI:

Accuracy

Large language models are not perfect, and they can make mistakes. In AI parlance, an LLM is said to “hallucinate” when it confidently asserts something that is false or fabricated. When Bard debuted, for example, it made an embarrassing factual error, and the bad publicity caused Google’s shares to tumble:

For Google’s explanation of how this can happen, see the accuracy section of the Bard FAQ:

Social Media

Snapchat has a “My AI” chatbot that is alarming parents. Powered by ChatGPT, the Snapchat chatbot lets users give it a name and a custom avatar and bring it into conversations with friends:

Regulation

Europe is particularly worried about AI threats to personal data privacy and cybersecurity:

In the United States, the Biden White House has a website devoted to its Blueprint for an AI Bill of Rights:

Ethics

IBM has developed the following framework for AI ethics, built on three core principles: (1) the purpose of AI is to augment human intelligence; (2) data and insights belong to their creator; and (3) AI systems must be transparent and explainable:

Environmental Impact

Behind the scenes, training and running AI language models consumes enormous amounts of energy, giving them a sizable carbon footprint. In the following article, Gizmodo suggests that AI could be the next climate disaster: