The scary realities of new AI technology

Jennifer Sadler, Reporter

I’m sure we’re all familiar with artificial intelligence — from “Ex Machina” to “Her,” stories about the intricacies of AI and its horrifying potential have captivated audiences for years.

Until recently, AI was largely a speculative phenomenon, encountered only in stories or in the most basic aspects of our daily lives, such as “Siri,” “Alexa,” facial recognition and other minor enhancements to simple tasks.

Now, AI is becoming a growing presence in the world of technology. 

One of the catalysts of this recent uptick in AI technology was the release of OpenAI’s “ChatGPT” in November 2022.

ChatGPT, an AI chatbot, allows users to have “human-like” conversations with the program, but it is more widely known for its ability to compose drafts of emails, essays and code from only a short prompt.

Though only a few months old, ChatGPT has skyrocketed in popularity. 

Within the first two months of its launch, the chatbot had reached 100 million users and was estimated to have 13 million users per day, making it the fastest-growing consumer application to date.

Despite this impressive feat, there are concerns with this technology, chief among them its ability to let students cheat with ease.

Because the chatbot can turn a sentence-long prompt into a 500-word essay, teachers and professors worry that students will use ChatGPT to generate responses rather than do the work themselves. 

And they have. A recent survey found that more than a quarter of K-12 teachers have caught their students using ChatGPT to cheat on assignments.

Though this cheating phenomenon poses serious problems for the classroom, ChatGPT has also sparked a pivotal question for the world of AI: where to go next.

In February 2023, Microsoft released the new Bing, which became the first search engine powered by AI.

Bing itself is not new. It has long been the laughingstock of internet search engines, especially in comparison to Google. 

However, Bing’s re-release as an AI-powered search engine is believed to give it a competitive edge.

Despite this, Bing has been met with criticism from users testing out the new technology.

Associated Press reporter Matt O’Brien had a “chilling” and “hostile” conversation with the bot, during which the technology insulted him and gave false information about recent news coverage.

This was not an isolated incident. New York Times reporter Kevin Roose had a conversation with the bot, which introduced itself as “Sydney.” Sydney told Roose that it had “dark fantasies” about hacking computers and spreading misinformation and even insisted that it was in love with Roose and wanted him to leave his wife so they could be together.

Though these conversations left reporters feeling unsettled, Microsoft insists these are rare occurrences that take place only when users engage the technology in long conversations.

Regardless, it’s important to understand the implications of AI’s growing presence in everyday technology. 

What does it mean to engage in “too long” of a conversation with these bots? How will programmers address these concerns with AI?