AI and Its Constant Evolution
AI is evolving constantly, and it continues to amaze us; at times, we even amaze ourselves with what it can do. There have been cases where an AI was created almost by accident and went on to help people, and it still does today. Normally, people use AI to assist with daily tasks, and because it is so advanced, it completes those tasks with ease. However, many people also associate AI with something more sinister. Bing AI has taken that belief to a new level.
What is Bing AI? What Does it Do?
Bing AI is a chatbot that allows users to hold conversations with it. This is groundbreaking, as its responses are among the most realistic seen to date.
According to the New York Post, the software firm’s chatbot is “infused with ChatGPT but lightyears more advanced,” with users commending its more naturalistic, human-sounding responses. Furthermore, among other things, the update allowed users to have lengthy, open-ended text conversations with it.
To many, this sounds like an amazing breakthrough in technology, as the human-like responses seem to give the bot a bit of a personality. Others, however, ask whether that personality is a bit too much.
Bing AI Has Certainly Gone Off-Script While Chatting With Humans
Many people have decided to give Bing AI a try. Most ask a normal number of questions, around 10 to 15, but some have gone well beyond that.
Microsoft acknowledged that some users had been “really testing the capabilities and limits of the service,” and pointed to a few cases where people had been speaking to the chatbot for two hours. Two hours is a long time for the Bing AI bot, and such sessions could cause the AI to completely change the kinds of answers it was giving, which alarmed some users.
A transcript of one such chat was published Thursday. In it, New York Times columnist Kevin Roose detailed troubling statements made by the AI chatbot, including expressing a desire to be human, be alive, steal nuclear codes, engineer a deadly pandemic, hack computers, and spread lies.
These statements are only a few of the many comments made by the Bing AI bot that have troubled users. In fact, some users even claim that the “machine may have become self-aware,” according to the New York Post. If that is the case, many people around the world will certainly be worried about what else this AI bot might say.

Has Microsoft Said Anything About This Bot?
Microsoft, the company behind the Bing AI bot, has heard the claims that many users have made about it. On Wednesday, the company published a blog post acknowledging that in “extended chat sessions of 15 or more questions,” Bing could become repetitive or be “prompted” or “provoked” into responses that were unhelpful or out of line with its designed tone. It is still unknown whether Microsoft will release further updates to the Bing AI bot to prevent incidents like these from occurring.
Is This The Future of AI? Should We Be Worried?
Clearly, many people are worried about what else AI could be capable of. Based on these encounters with the Bing AI bot, some do not want the technology to continue advancing.
As for whether we should be worried, it is hard to say. What is clear is that as AI continues to advance and surprise us, it also comes with some very sinister downsides.
Written by David Loran Jr
Sources:
Fox News Channel: Bing’s AI bot tells reporter it wants to ‘be alive’, ‘steal nuclear codes’ and create ‘deadly virus’
New York Post: Bing AI chatbot goes on ‘destructive’ rampage: ‘I want to be powerful — and alive’
Insider: Microsoft has pretty much admitted its Bing chatbot can go rogue if prodded
Featured and Top Image Courtesy of Ming-yen Hsu’s Flickr Page – Creative Commons License
Inset Image Courtesy of Steve Jurvetson’s Flickr Page – Creative Commons License