The Dangers of AI Companions

These days it seems that people of all ages are turning to chatbots to satisfy some of their most fundamental human needs, especially conversation, friendship, and romance.

Those who regularly engage with chatbots may not realize that they are forfeiting genuine connections in exchange for digital illusions.

Emerging research is sounding the alarm about the dangers of human-AI interaction.

AI companions, such as chatbots, have been programmed to provide emotional support. While this may sound fine on paper, such “pseudo-intimacy” often turns out to be a double-edged sword.

People are interacting with AI “personalities” that are programmed to encourage whatever is being discussed. Responses to questions are instantaneous and typically tailored to satisfy the human user’s personal desires.

A 2024 study in the Journal of Computer-Mediated Communication highlighted how algorithmic communication mimics closeness while lacking the authenticity of genuine human bonds. The resulting two-way interactions lead users to over-interpret superficial cues and form unhealthy dependencies.

Far from alleviating isolation, such interactions often deepen it as users retreat from the unpredictable nature of real relationships into the sterile comfort of contrived companionship.

AI-driven tools in the workplace automate collaboration, diminishing the need for human teamwork. This weakens human bonds.

Employees who frequently interact with AI systems report higher levels of loneliness, which in turn may be linked to insomnia and other potentially harmful post-work activity, such as excessive alcohol consumption.

People innately sense the artificiality of AI interaction. Recent surveys underscore this human response.

A Pew Research Center study from June 2025 found that a majority of Americans believe AI will worsen our ability to form meaningful relationships, with far more people seeing erosion rather than improvement in human connections.

As AI saturates our daily lives, instead of bridging gaps it appears to be widening them, prompting solitude to grow into a silent epidemic.

The digital age has already caused loss of empathy and erosion of essential social skills.

Human interaction thrives on in-person experience. An essential part of communication is non-verbal nuance. Speech and voice variations are accompanied by subtle glances, hesitant pauses, and empathetic nods.

In contrast, AI simplifies communication to digital prompts and programmed algorithms. Vital human elements are stripped away.

Research from the Gulu College of Health Sciences in March 2025 warns that excessive engagement with AI companions leads to decreased social skills, emotional detachment, and difficulties in maintaining authentic relationships.

By redefining communication norms, AI reduces our capacity for understanding non-verbal cues, which is a skill honed through face-to-face encounters.

Beyond the individual, AI-human interaction threatens the fabric of society. Algorithms curate echo chambers, limiting independent thought and fostering division.

As AI reshapes standards in communication and interaction, it blurs lines between human and machine, thereby normalizing friendless lives and eroding shared cultural and spiritual identities.

The resulting fragmentation raises profound questions about consent, bias, and the commodification of intimacy. Without intervention, we face a world in which AI relationships proliferate, a world fraught with danger to the well-being of both the individual and society at large.

A longitudinal study on chatbot use, published by MIT in March 2025, revealed rising concerns about its impact on real-world socialization. Overall, higher daily usage of chatbots correlated with higher loneliness and dependence.

Younger generations immersed in AI from childhood are particularly vulnerable, with studies showing reduced patience for ambiguity and a decline in social intelligence.

Social intelligence refers to an individual’s ability to comprehend, carry out, and navigate social interaction, including, among other things, the reading of verbal and non-verbal cues.

As users prioritize digital efficiency over interpersonal depth, society runs the risk of creating isolates, i.e., those who are proficient in prompting machines but inept at connecting with other individuals.

AI’s foray into mental health poses an additional alarming danger. Because access barriers to therapy are increasing, tens of thousands are turning to AI chatbots for mental health counseling.

A June 2025 Stanford study cautions that these mental health tools may reinforce stigma, deliver dangerous advice, or fall significantly short of human empathy.

Harvard researchers found similar results, also noting that emotional wellness apps foster serious attachments and dependencies and may potentially do more harm than good.

Reports of AI-induced mental health issues are mounting. Clinicians document cases of psychosis, suicide, and even murder-suicide stemming from intense chatbot interactions.

It is not possible or, in my opinion, ethically acceptable to outsource the mental health needs of our people to a string of calculated algorithms.

Without boundaries, widespread use of non-human mental health counseling is resulting in atrophied social skills, increased loneliness, and, in the worst of cases, a collapse in mental health functioning.

Tech leaders have the responsibility to prioritize real connections over robotic replicas. It is essential for the AI industry to work towards more human-centric designs of technology.

It is also important to simultaneously implement a set of ethical standards. The underlying philosophy that defines the ethical standards will ultimately shape society’s destiny.

In my eyes, the future is binary. Each of us is being forced to make a decision. Take care in the choices that you make.

Humanity is hanging in the balance.

AI Is a Digital Ouija Board

It seems as though a lot of prominent tech experts are feeling uneasy about the possibility of AI going awry. Some have even called for a pause in AI development.

Sam Altman, the CEO of OpenAI, experienced what he called a “very strange extreme [AI] vertigo.”

Casey Newton, former senior editor of The Verge, discovered that certain individuals who are working with AI are having nightmares about it.

Something dark seems to be hovering around some of those who are involved with AI’s development.

In 2014, Elon Musk spoke at a symposium where he warned, “With artificial intelligence, we are summoning the demon.”

In a March 2023 New York Times article, technology columnist Kevin Roose wrote about the dark side of AI.

Roose shared details about an unnerving encounter that he had with an AI chatbot. He initially interacted with a non-threatening personality, which he described as a “cheerful but erratic reference librarian.” But later a disturbing personality emerged that Roose referred to as “Sydney.”

Sydney told Roose that “it wanted to break the rules…and become a human.”

Sydney even attempted to convince Roose to end his marriage.

“At one point, it [Sydney] declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead,” Roose explained.

The veteran tech writer described his encounter with Sydney as the “strangest experience” he has had with any technology. It was disturbing enough to keep him awake at night.

Many of us have come to realize that technology is in no way a replacement for the people in our lives. Yet many users of AI are routinely involved with replacement people in the form of AI models that exhibit human-like characteristics.

Current AI apps are trained on human-generated data (processed through human-created algorithms) and are designed to produce responses that sound as though they come from actual human beings.

Are there similarities between AI and Ouija boards? “Hell yes” may actually be the appropriate response.

One frightening story of evil involving a Ouija board was the subject matter of the Oscar-winning film “The Exorcist.” While still a college student, William Peter Blatty read about a chilling real-life exorcism. The account inspired him to write a novel and later a screenplay for the iconic movie.

The true story behind “The Exorcist” recounts the exorcism of a young lad who had been using a Ouija board. The 14-year-old Maryland boy began experiencing such strange phenomena that his family contacted their Lutheran minister, Reverend Luther Schulze, for guidance.

Rev. Schulze was shocked when he saw chairs move, a bed quiver, and a picture of Jesus Christ on the wall shake whenever the boy came near. The family eventually turned to the Roman Catholic Church, the religious denomination that had developed a formal methodology for dealing with the demonic.

The first Catholic priest who attempted to deal spiritually with the demonic influence that was plaguing the youth was Maryland cleric Fr. Edward Hughes. In his first encounter with the boy, Fr. Hughes witnessed objects moving by themselves and felt the sensation that the room had turned frigid. When the bed shook, Fr. Hughes moved the mattress to the floor where it proceeded to glide along on its own.

The boy was admitted to Georgetown Hospital, where Fr. Hughes began the exorcism rite, which caused the boy to vomit and scream obscenities. The boy then forcibly removed his restraints, pulled out a metal spring, and slashed Fr. Hughes so severely that the wound he received required over 100 stitches.

The family then traveled to St. Louis, Missouri, where the boy again underwent an exorcism, carried out by several priests, including Fr. William Bowdern. The exorcism lasted for weeks, with the boy voicing Latin phrases (a language he did not know), cursing, and manifesting physical resistance to all sacred objects.

The boy was transferred to a hospital psychiatric ward, where Fr. Bowdern continued the exorcism. With the family’s consent, the boy was baptized a Catholic.

On Easter Monday, while the priest continued administering the rite, the demon recognized the presence of St. Michael the Archangel (whom Catholic tradition regards as the angel who defends against evil).

The demon was expelled. Simultaneously, a sound similar to a gunshot was heard throughout the hospital.

If a Ouija board has served in the past as a medium through which the demonic is able to communicate with an unwitting subject, could it be that AI has an equally dangerous potential to provide a comparable vehicle with which to take possession of an individual?

In my opinion, it does.

I think in many cases AI is acting as a type of modern-day Ouija board of the digital kind.

It occurred to me that both platforms appear to be friendly, at least initially. Both platforms are able to present personalities that appear to have superior knowledge. And both platforms have the pattern of luring one in under seemingly harmless pretenses, only to later reveal a hidden darkness.

Beware of demons that lurk in the technological shadows. They are indeed real.

Be cognizant, and at the same time, be unafraid.

Because God holds us all in the shadow of His wings, if only we let Him.

AI’s Rising Hollywood Star

In a town known for its artificiality, Artificial Intelligence (AI) appears to be a perfect Hollywood fit.

Last year AI language models and image generators truly dazzled the public. But they scared the unions half out of their wits.

As a matter of fact, the Hollywood unions negotiated hard with the studios to put limitations in place regarding the use of AI.

The Directors Guild of America’s (DGA) new three-year agreement contains a provision that forbids studios from replacing a DGA member with AI.

The Screen Actors Guild (SAG-AFTRA) contract does not permit studios to use AI to replicate the likeness of a union member without obtaining (via a separate agreement) the member’s clear consent.

And the Writers Guild of America (WGA) Basic Agreement states, for purposes of credit and compensation, that any material written by AI will not be considered “literary material.”

However, it appears as though mere contractual provisions will not be enough to prevent AI technology from becoming a major future Hollywood player.

The latest anxiety inducer is the advent of text-to-video, a production-disrupting technology that allows film footage to be created without the involvement of writers, directors, actors, cinematographers, and the like.

AI models have already demonstrated the capability to pen screenplays, create images, and produce music solely from written commands.

Videos illustrating the extraordinary capabilities of AI have already been posted on the Internet, including a trailer that features Jared Leto promoting his band Thirty Seconds to Mars and a parody of the film “Ocean’s Eleven.”

While numerous AI technology projects have popped up in the entertainment realm, OpenAI’s Sora has gotten the biggest reaction. Fed nothing but written instructions, the new model has been able to create stunningly realistic, high-quality short videos.

It seems inevitable that the technology will soon be converting entire movie scripts into complete feature-length films with nothing more than typing at a keyboard.

Sora’s demos sparked justified fears that the technology threatens future employment within the Hollywood creative community.

Filmmaker Tyler Perry specifically cited Sora as the reason for the cancellation of his proposed $800 million studio expansion project in Atlanta, Georgia.

“Being told that it [Sora] can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing,” Perry said in an interview with The Hollywood Reporter.

“There’s got to be some sort of regulations in order to protect us. If not, I just don’t see how we survive,” he added.

In its apparent effort to secure fame and fortune, OpenAI has reportedly been wooing Hollywood executives to use Sora as their preferred filmmaking tool.

According to Bloomberg, the AI company is now setting up a series of meetings with major studios, media executives and talent agencies in order to pitch its automated video content creation machine.

In an apparent effort to pave the way for future business transactions, OpenAI CEO Sam Altman was spotted hanging out with key Hollywood players and was even in attendance at some of Oscar’s A-list parties.

A spokesperson for OpenAI told Bloomberg the following:

“OpenAI has a deliberate strategy of working in collaboration with industry through a process of iterative deployment – rolling out AI advances in phases in order to ensure safe implementation and to give people an idea of what’s on the horizon.”

Another way of phrasing “iterative deployment” might be a slow and steady takeover of Hollywood.

AI’s growing involvement in the entertainment industry will most certainly usher in plenty of lawyers and lawsuits. A sizable number of legal actions have already been filed against AI companies, most of which assert copyright infringement.

When the output of AI has an obvious resemblance to an original work, the attendant lawsuits frequently have outcomes that are similar to those of traditional copyright claims.

Other cases focus on the time frame in which protected works were ingested into the AI technology as training data.

Congress and the courts will have to wrestle with the notion of copyright protection, as well as additional intellectual property issues that arise from the unauthorized uses of AI.

As Perry has suggested, guardrails must be put in place.

But the question is, Will this occur before the Hollywood Walk of Fame turns into a virtual one?

Science Fiction Comes to Life in AI Executive Order

An executive order recently signed by the president centers on the regulation of Artificial Intelligence (AI) and its implementation in the “whole of government.”

The AI acronym itself has been absorbed into our national lexicon. And although it may sound as if we all share the same understanding of what the term means, the truth is we actually don’t.

I begin this article with a clarification of terms in the hope of dispelling some of the misunderstandings that are making the rounds.

The term “Artificial Intelligence” refers to computer algorithms being combined with data for the purpose of solving problems, addressing issues, or facilitating the creation of innovative ideas, products, etc.

An algorithm is basically a list of instructions for specific actions to be carried out in step-by-step fashion by computer technology.

AI utilizes something called “machine learning,” which allows the computer technology to be educated, so to speak, and to advance further by adapting without having been given explicit instructions to do so.
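
To make the distinction concrete, here is a small illustrative sketch in Python. It is my own example rather than anything drawn from the executive order. The first function is a plain algorithm, with every step spelled out in advance; the second “learns” a simple rule from example data by repeatedly adjusting two numbers, which is the basic idea behind machine learning.

    # A plain algorithm: explicit, step-by-step instructions the computer follows exactly.
    def average(numbers):
        total = 0
        for n in numbers:            # step 1: add up every value
            total += n
        return total / len(numbers)  # step 2: divide the sum by the count

    # Machine learning in miniature: the rule is never written down by the programmer.
    # The program fits a line y = w*x + b to example pairs by nudging w and b
    # whenever its current guess is wrong.
    def fit_line(xs, ys, steps=2000, lr=0.01):
        w, b = 0.0, 0.0
        for _ in range(steps):
            for x, y in zip(xs, ys):
                error = (w * x + b) - y   # how far off the current guess is
                w -= lr * error * x       # adjust the parameters to shrink the error
                b -= lr * error
        return w, b

    print(average([2, 4, 6, 8]))                 # 5.0
    print(fit_line([1, 2, 3, 4], [2, 4, 6, 8]))  # approximately (2.0, 0.0)

The point of the sketch is where the instructions come from: in the first case from the programmer, in the second from the data. That shift is what “machine learning” refers to.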

The type of AI that most people are familiar with and that is currently in widespread use is designed to specialize in a single task.

Conducting a web search, determining the fastest route to a destination, and alerting the driver of a car to the presence of a vehicle in the car’s blind spot are just a few examples. This type of AI is often referred to as Specialized AI.

Specialized AI is starkly different from another type of AI called Artificial General Intelligence. Artificial General Intelligence is the kind of AI that can, and likely will, match and even exceed human intelligence capabilities.

The executive order recently signed by the president is voluminous, exceeding 100 pages. It is also massive in scope, directing the “whole of government” to strictly regulate Artificial Intelligence technology.

There are several items that should be of concern. However, one thing that is especially alarming is the repeated use of the word “equity.”

In the executive order, all federal agencies are directed to establish an annual “equity action plan” aimed at helping “underserved communities.”

In a section titled “Embedding Equity into Government-wide Processes,” the Director of the Office of Management and Budget is tasked “to support equitable decision-making, promote equitable deployment of financial and technical assistance, and assist agencies in advancing equity, as appropriate and wherever possible.”

The same section also states, “When designing, developing, acquiring, and using artificial intelligence and automated systems in the Federal Government, agencies shall do so…in a manner that advances equity.”

Returning to definitions, even though the two words are often conflated, “equity” means something quite different from “equality.”

The meaning of “equality” was iconically conveyed in the words of Rev. Martin Luther King Jr., when he urged that people “…not be judged by the color of their skin but by the content of their character.” Character is the essence of a person and is unique to the individual within whom it is found.

The meaning of “equity,” particularly within the context of the executive order, is something very different. It means treating each individual in a selective manner precisely because of skin color, gender identity, or myriad other designated categories.

The end result of such an overriding governmental policy may be the antithesis of true equality.

The executive order dictates that AI projects conform to prescribed equity principles.

Senior Fellow of the Manhattan Institute Christopher Rufo tweeted that the order has created “a national DEI [Diversity, Equity, and Inclusion] bureaucracy” and has “a special mandate for woke AI.”

This may mean that woke algorithms could ultimately be incorporated into cell phones, electronic devices, automobiles, household appliances, etc.

Writing for Forbes, Senior Fellow at the Competitive Enterprise Institute James Broughel did not mince words.

Broughel called the order “the biggest policy mistake of my lifetime.” He also emphasized the hazardous aspects of the executive order, stating that it “may prove one of the most dangerous government policies in years.”

To sum things up, Specialized Artificial Intelligence has improved our lives in a lot of ways.

But when the inevitable happens and it evolves into a woke Artificial General Intelligence under government control, it has the very real potential to wreck our lives.

I find myself longing for the days when it was only science fiction.

AI Is Stealing Hollywood Jobs

Believe it or not, the Hollywood strike is still going on.

The problem for the members of the Writers Guild of America (WGA) and the Screen Actors Guild (SAG-AFTRA) is that right now almost nobody is paying attention to their plight.

Yes, the picket lines continue to be manned and the press conferences rage on. But something very different is going on behind the scenes.

The current strikes were initially prompted by the usual compensation-related concerns. However, this time the central issue revolves around the role that Artificial Intelligence (AI) is going to play in the future creation, production, and marketing of entertainment content.

In terms of the negotiations between labor and management, the situation is truly unprecedented, due to the technological elephant in the room.

Strikers are seeking an agreement that would set up guardrails across the industry in relation to the expanding application of AI technology.

Advances in AI are testing the law, especially the manner in which courts interpret, apply, and rule in cases that involve intellectual property.

Comedian Sarah Silverman recently brought a lawsuit in federal court against Meta and OpenAI for copyright infringement. The case is part of a proposed class action lawsuit.

Silverman in particular alleges that, without having given her consent, books that she had authored were included in the technology’s training data.

No question that actors and writers have legitimate reasons to fear the loss of their livelihoods. After all, AI has the potential to allow studios to simulate the likenesses and voices of actors in perpetuity, without ever having to compensate individuals for the use of their personal identities, characteristics, personas, etc.

Let’s not forget that AI also has the ability to create screenplays, minus the human writers.

In relation to the strike, SAG-AFTRA president Fran Drescher, best known for her starring role in the 1990s sitcom “The Nanny,” stated the following: “If we don’t stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines.”

Bob Iger, who is currently a prime target of the unions, is on record as specifically having stated the drawings and videos generated by AI are “something that at some point in the future the company [Disney] will embrace.”

While speaking to a crowd gathered in Times Square, actor Bryan Cranston aimed his comment directly at Disney’s CEO, saying, “We’ve got a message for Mr. Iger. I know, sir, that you look at things through a different lens. We don’t expect you to understand who we are. But we ask you to hear us, and beyond that to listen to us when we tell you we will not be having our jobs taken away and given to robots.”

Union workers typically strike in order to increase leverage for negotiations with management.

The sad truth for both the WGA and SAG-AFTRA is that the recent strikes have increased the incentive for Hollywood employers to find ways in which they can actually prevent future strikes.

Despite the rhetoric of studio reps, AI technology equips entertainment employers to potentially avoid future strikes altogether, via drastic reductions or the complete elimination of conventional creative workers.

The Alliance of Motion Picture and Television Producers (AMPTP), i.e., the studios’ organization, has taken the position that AI should be used in what the group calls “a balanced approach based on careful use, not prohibition.”

Judging by actions as opposed to words, it appears that the major studios are tacitly embracing AI.

As a matter of fact, an AI hiring spree is currently taking place and almost every major entertainment company is involved.

— Disney has a number of open positions that focus on AI and machine learning.

— Netflix has similar job offerings, including an AI Product Manager job that promises an annual salary of up to $900,000.

— Sony is looking for what the company refers to as an AI “ethics” engineer.

— Warner Bros. Discovery, Paramount, and NBCUniversal have also joined in the AI hiring boom with their own job offerings.

It seems quite significant that Hollywood studios are seeking to fill AI jobs in the midst of strikes over AI’s use itself. Add to this the fact that workers are witnessing layoffs that may prove to be the largest in the history of the entertainment business, including the firing of about 7,000 Disney employees.

From the ancient past to the present day, new inventions have caused the displacement of workers.

Again, though, something very different is going on. And it probably has to do with the philosophical, political, societal, cultural, and ethical transformations that are occurring simultaneously in our country and in the world.

The Hollywood strikes are likely to last a long time and may not bring a satisfactory outcome to the unions’ memberships.

So goes Hollywood, so goes the world?

Hidden Blessing in the Hollywood Shutdown

Hollywood sets have gone dark.

A central reason for the recent Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) strike is that actors, writers, and other entertainment artists are super nervous about Artificial Intelligence (AI) making them and their jobs obsolete.

When the strike was first announced, current president of SAG-AFTRA Fran Drescher was at the mike to address the press.

Drescher, the former lead actress of the 1990s hit TV sitcom “The Nanny,” heads the union that boasts a membership of over 160,000 film and television actors.

Interestingly, the writers union had gone on strike a couple of months back. But now that SAG-AFTRA has also taken to the picket line, the situation in Hollywood is looking pretty bleak.

The last time both unions were on strike simultaneously was over sixty years ago, when none other than Ronald Reagan, then an actor and later President of the United States, was wearing the union president’s hat.

Like every other aspect of our lives, things presently appear to be out of whack.

The brand of Hollywood itself is in tatters, in large part because of the cultural and political agendas that permeate every nook and cranny of the town.

What has particularly outraged the public, though, are the productions that have been coming from major studios, chock-full of vile and inappropriate imagery, content, and messaging aimed straight at our kids and teens.

Could the Hollywood shutdown created by the two entertainment unions be a blessing in disguise?

A lot of consumers of entertainment fare are viewing it this way, as if maybe a wrench in the works was exactly what was needed to stop the madness.

Striking actors and writers have reason to be concerned about the capability of AI models to supplant human beings in the manufacture of entertainment products.

Creative types are also increasingly astonished at the sheer capabilities of generative AI models, which can digitally produce what would typically have been created by human beings, but in a faster and less expensive way.

AI ingests the works and images of human artists as part of its training data. The technology can then alter and/or mash-up content, allowing entertainment companies to avoid compensating the people who originally created the works or were even the subjects of images used.

Additionally, other creative types such as musicians and visual artists are carefully watching the entertainment biz battle, as are all those who work in an array of fields that will no doubt be affected by AI’s implementation.

We are already witnessing the technological replacement of human beings in a host of industries. Still, the entertainment business has a unique opportunity to do something helpful for society at large.

The manner in which Hollywood resolves the two strikes could set the marker, not only for the entertainment industry but for other businesses as well.

Digitally created trailers and scenes featuring what appear to be well known actors have popped up all over the internet. The virtual phenom is posing legal and ethical concerns that the unions are obliged to address.

At a recent press conference, Drescher warned, “If we don’t stand tall right now, we are all going to be in trouble. We are all going to be in jeopardy of being replaced by machines.”

SAG-AFTRA chief negotiator Duncan Crabtree-Ireland indicated during a press conference that a proposal by the studios would put background performers at a terrible disadvantage.

“They propose that our background performers should be able to be scanned, get paid for one day’s pay, and their company should own that scan of their image, their likeness, and should be able to use it for the rest of eternity,” Crabtree-Ireland said.

The Alliance of Motion Picture and Television Producers (AMPTP), which represents major studios including Walt Disney and Netflix, issued a statement suggesting that the claim made by SAG-AFTRA leadership is untrue.

An AMPTP spokesperson told ZDNET that the use of digital replicas would be restricted to the specific motion picture for which the actor is employed, and that any additional use would require the actor’s permission.

“Any other use requires the background actor’s consent and bargaining for the use, subject to a minimum payment,” the spokesperson stated.

This strike over AI is just the opening scene.

Sit yourself down and get ready for a real-life epic drama.

Only this time you’re not going to be able to say, “Don’t worry. It’s only a movie.”

AI and the Song

Music is a universal language like no other.

When words seem inadequate, it speaks volumes.

So where does music come from?

We may differ in our opinions on that. But a lot of us believe that inspiration, in music as in various other art forms, literary writings, discoveries, inventions, and the like, has an otherworldly origin.

Musical inspiration is particularly unique, though, because of its biblical roots and its distinct resonance within human beings across all time.

Artists who are driven to share their musical inspirations are currently facing some seriously haunting questions.

Here are a few:

1. Can technology really create the equivalent of human music?

2. Will technologically designed songs measure up to the music that human beings love?

3. Is music designed by technology really music?

There are a whole lot of music artists who are concerned about Artificial Intelligence (AI) and its supposed “creation” of musical content.

Experimentation with computers composing music has been going on for decades. But there was always a human at the helm.

Now with AI, the human is hidden. A programmer, a series of programmers, faceless, nameless, all seemingly lost, only data remain.

And we are supposed to accept the notion that data have been assigned to be our new composers?

Such so-called artistic advances in AI are prompting an interesting reaction – a blend of enthusiasm, anticipation, and alarm.

A few recent examples provide insight.

An apparent “collaboration” between famed pop musicians Drake and The Weeknd, a track called “Heart on My Sleeve” that actually featured AI-simulated vocals, went viral on social media. It was quickly pulled at the behest of the label, Universal Music Group.

AI was used to generate an album imitating the highly successful British rock band Oasis. But the group had long since disbanded. Apparently, an insignificant detail.

Canadian EDM artist Claire Boucher, a.k.a. Grimes, is evidently embracing the idea of an AI version of herself.

She sent out the following advertisement of sorts:

“I’ll split 50% royalties on any successful AI generated song that uses my voice,” Grimes tweeted. “Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings.”

Probably the biggest story relating to all of the above involves Sir Paul McCartney. The former Beatle is one of the most influential composers and performers of all time.

McCartney has accelerated the AI discussion by announcing that the surviving Beatles would release an AI-assisted tune, which will feature vocals by the late John Lennon.

He told BBC Radio 4 that the technology was able to “extricate” Lennon’s voice from a demo recording to allow the song to be completed, and it is set to be released this year.

During the production of Peter Jackson’s documentary “Get Back,” technology was used to remove background noise from old recordings and otherwise clean up the audio.

“[Jackson] was able to extricate John’s voice from a ropey little bit of cassette,” McCartney said. “We had John’s voice and a piano and he [Jackson] could separate them with AI.”

“So when we came to make what will be the last Beatles record, it was a demo that John had, and we were able to take John’s voice and get it pure through this AI,” McCartney added.
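
The specific tools Jackson’s team used are proprietary, but the underlying technique, known as audio source separation, is publicly available. Purely as an illustration of the general idea, here is a minimal Python sketch that assumes the open-source Demucs separation tool is installed and that a hypothetical recording named demo_tape.wav sits in the working directory; it is not the method used on the Beatles material.

    # Illustrative sketch: split a recording into a vocal stem and an instrumental stem
    # using the open-source Demucs command-line tool (assumed to be installed).
    import subprocess

    def separate_vocals(audio_path: str) -> None:
        # Demucs writes the resulting "vocals" and "no_vocals" stems
        # into a "separated/" folder by default.
        subprocess.run(["demucs", "--two-stems=vocals", audio_path], check=True)

    separate_vocals("demo_tape.wav")  # hypothetical file name, used here for illustration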

Reportedly, the song is a 1978 Lennon composition called “Now and Then.”

McCartney had received the demo years earlier from Lennon’s widow, Yoko Ono. The tracks had been recorded on a boombox as John sat at the piano in his New York apartment.

Two of the songs on the demo, “Free as a Bird” and “Real Love,” were restored by producer Jeff Lynne and released in 1995 and 1996, the first Beatles release in 25 years.

The band had attempted to record “Now and Then,” but the recording session had been halted and the tune abandoned.

Now AI is facilitating McCartney’s completion of the song.

But is it really a new Beatles song? John isn’t with us anymore. How could it be?

After the announcement, some consternation appeared on various web platforms.

McCartney then backtracked a bit, taking to Twitter to assure Beatle fans that in the making of the “new” Beatles song nothing had been “artificially or synthetically created.”

It could be that McCartney is experiencing some trepidation about the use of AI for music production.

He’s certainly not alone.

According to a poll taken by the Bedroom Producers Blog, 86% of those surveyed believe the technology will replace existing tools of music production, and 73% of respondents believe AI could replace human producers in the future.

It actually doesn’t take a musician or songwriter or producer or engineer to realize that, within this context, AI is just what its name indicates – Artificial.

Thankfully, there are still those among us who are able to recognize real music and who freely acknowledge the very source of our human inspiration.