Over the past year, the technological development surrounding Artificial Intelligence (AI) has advanced much more rapidly than ever anticipated.
A recent open letter, signed by Apple co-founder Steve Wozniak, OpenAI co-founder Elon Musk, and other AI experts and entrepreneurs, called for a six-month pause on the development of new advanced AI models.
Time published an article by Eliezer Yudkowsky, a founder of the field of AI alignment, urging a permanent global ban on such AI research and international sanctions against any country that pursues it.
These high-profile figures warn that AI technology is accelerating so quickly that machine systems will soon be able to match, or even exceed, human intellectual functioning.
A majority of the nation shares the same concerns as the experts. According to a recent Monmouth University poll, 55% of Americans are worried about the threat of AI to the future of humanity.
And according to a Morning Consult survey, nearly half of respondents would support a pause on advanced AI development.
Because the public now has access to generative AI platforms capable of creating text and holding human-like conversations, the two-letter abbreviation itself has been absorbed into the national lexicon.
The term “artificial intelligence” was coined by computer scientist John McCarthy in 1956. At its simplest, AI combines computer science algorithms with data in order to solve problems.
An algorithm is a step-by-step list of instructions for a computer to carry out. Many AI systems use “machine learning,” which allows them to learn and adapt without being given explicit instructions.
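A toy sketch can make the distinction concrete (this is a hypothetical illustration, not drawn from the article): an explicit algorithm follows steps a programmer wrote by hand, while a machine-learning approach infers the rule from example data. Here the “learned” rule is a simple least-squares line fit, written with only the Python standard library.

```python
def to_celsius(fahrenheit):
    """Explicit algorithm: every step is spelled out by the programmer."""
    return (fahrenheit - 32) * 5 / 9

def learn_linear_rule(examples):
    """'Learn' a rule y = a*x + b from (x, y) pairs via least squares.
    No one tells the program the formula; it is inferred from the data."""
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sy = sum(y for _, y in examples)
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# The "training data": a few Fahrenheit/Celsius pairs.
data = [(32, 0), (212, 100), (98.6, 37)]
a, b = learn_linear_rule(data)

# Both approaches now give (approximately) the same answer for 50 °F.
print(to_celsius(50))   # 10.0
print(a * 50 + b)       # approximately 10.0
```

The second approach is the seed of what the article describes: a program whose behavior comes from data it has seen rather than from rules a human wrote down.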
The type of AI presently in use, often called “narrow AI,” is designed to specialize in a single task; for instance, conducting a web search, determining the fastest route to a destination, or alerting a driver to a vehicle in the car’s blind spot.
Such functions have often made individuals’ lives better, easier, and safer.
However, it is critical to understand that existing AI is starkly different from the type of AI in the pipeline: Artificial General Intelligence (AGI). The title sounds benign, but the technology is nothing of the sort. AGI can, and in all likelihood will, match and even exceed human capability.
The point at which AGI exceeds human intelligence is known as “the singularity.” Countless books and films have explored AI themes built on the premise that advanced AI could somehow turn against humans.
“2001: A Space Odyssey,” “The Matrix,” “The Terminator,” and “Blade Runner” all contained AGI warnings about things to come.
The fact of the matter is that human beings program machines. It stands to reason that if a programmer errs during the programming process, the resulting technology will be flawed. And a programmer’s ethics, or lack thereof, can produce programming with catastrophic consequences. Because AI has the capacity to learn from its mistakes and adjust on its own, it may be able to improve itself to the point where human beings lose control of their own invention.
The nightmare begins when the stop mechanism no longer functions.
In one unimaginable scenario, a superintelligent AI could advance in a way that runs counter to all human morals, ethics, and values.
This tips into the realm of the spiritual, which requires a great deal of critical thought and further discussion.
For now, a pause is not only advisable, it’s a must.