The Dangers of AI Companions

These days it seems that people of all ages are turning to chatbots to satisfy some of their most fundamental human needs: conversation, friendship, and even romance.

Those who regularly engage with chatbots may not realize that they are forfeiting genuine connections in exchange for digital illusions.

Emerging research is sounding the alarm about the dangers of human-AI interaction.

AI companions, such as chatbots, have been programmed to provide emotional support. While this may sound fine on paper, such “pseudo-intimacy” often turns out to be a double-edged sword.

People are interacting with AI “personalities” that are programmed to encourage whatever is being discussed. Responses to questions are instantaneous. They are also typically tailored to satisfy the human user’s personal desires.

A 2024 study in the Journal of Computer-Mediated Communication highlighted how algorithmic communications mimic closeness while lacking the authenticity of genuine human bonds. The resulting interactions lead users to over-interpret superficial cues and form unhealthy dependencies.

Far from alleviating isolation, such interactions often deepen it as users retreat from the unpredictable nature of real relationships into the sterile comfort of contrived companionship.

AI-driven tools in the workplace automate collaboration, diminishing the need for human teamwork. This weakens human bonds.

Employees who frequently interact with AI systems report higher levels of loneliness, which in turn may be linked to insomnia and other potentially harmful post-work activity, such as excessive alcohol consumption.

People innately sense the artificiality of AI interaction. Recent surveys underscore this human response.

A Pew Research Center study from June 2025 found that a majority of Americans believe AI will worsen our ability to form meaningful relationships, with far more people seeing erosion rather than improvement in human connections.

As AI saturates our daily lives, instead of bridging gaps it appears to be widening them, prompting solitude to grow into a silent epidemic.

The digital age has already caused a loss of empathy and an erosion of essential social skills.

Human interaction thrives on in-person experience. An essential part of communication is non-verbal nuance. Speech and voice variations are accompanied by subtle glances, hesitant pauses, and empathetic nods.

In contrast, AI simplifies communication to digital prompts and programmed algorithms. Vital human elements are stripped away.

Research from the Gulu College of Health Sciences in March 2025 warns that excessive engagement with AI companions leads to decreased social skills, emotional detachment, and difficulties in maintaining authentic relationships.

By redefining communication norms, AI reduces our capacity for understanding non-verbal cues, which is a skill honed through face-to-face encounters.

Beyond the individual, AI-human interaction threatens the fabric of society. Algorithms curate echo chambers, limiting independent thought and fostering division.

As AI reshapes standards in communication and interaction, it blurs lines between human and machine, thereby normalizing friendless lives and eroding shared cultural and spiritual identities.

The resultant fragmentation from AI raises profound questions about consent, bias, and the commodification of intimacy. Without intervention, we face a world in which AI relationships proliferate, a world fraught with danger to the well-being of both the individual and society at large.

A longitudinal study on chatbot use, published by MIT in March 2025, revealed rising concerns about chatbots’ impact on real-world socialization. Overall, higher daily usage of chatbots correlated with higher loneliness and dependence.

Younger generations immersed in AI from childhood are particularly vulnerable, with studies showing reduced patience for ambiguity and a decline in social intelligence.

Social intelligence refers to an individual’s ability to understand and navigate social interaction, which, among other things, includes reading verbal and non-verbal cues.

As users prioritize digital efficiency over interpersonal depth, society runs the risk of creating isolates, i.e., people who are proficient at prompting machines but inept at connecting with other individuals.

AI’s foray into mental health poses an additional alarming danger. Because access barriers to therapy are increasing, tens of thousands are turning to AI chatbots for mental health counseling.

A June 2025 Stanford study cautions that these mental health tools may reinforce stigma, deliver dangerous advice, or fall significantly short of human empathy.

Harvard researchers found similar results, also noting that emotional wellness apps foster serious attachments and dependencies and may potentially do more harm than good.

Reports of AI-induced mental health issues are mounting. Clinicians document cases of psychosis, suicide, and even murder-suicide stemming from intense chatbot interactions.

It is not possible or, in my opinion, ethically acceptable to outsource the mental health needs of our people to a string of calculated algorithms.

Without boundaries, widespread use of non-human mental health counseling is resulting in atrophied social skills, increased loneliness, and, in the worst of cases, a collapse in mental health functioning.

Tech leaders have the responsibility to prioritize real connections over robotic replicas. It is essential for the AI industry to work towards more human-centric designs of technology.

It is also important to simultaneously implement a set of ethical standards. The underlying philosophy that defines the ethical standards will ultimately shape society’s destiny.

In my eyes, the future is binary. Each of us is being forced to make a decision. Take care in the choices that you make.

Humanity is hanging in the balance.

AI Is a Digital Ouija Board

It seems as though a lot of prominent tech experts are feeling uneasy about the possibility of AI going awry. Some have even called for a pause in AI development.

Sam Altman, the CEO of OpenAI, experienced what he called a “very strange extreme [AI] vertigo.”

Casey Newton, former senior editor of The Verge, discovered that certain individuals who are working with AI are having nightmares about it.

Something dark seems to be hovering around some of those who are involved with AI’s development.

In 2014, Elon Musk spoke at a symposium where he warned, “With artificial intelligence, we are summoning the demon.”

In a March 2023 New York Times article, technology columnist Kevin Roose wrote about the dark side of AI.

Roose shared details about an unnerving encounter that he had with an AI chatbot. He initially interacted with a non-threatening personality, which he described as a “cheerful but erratic reference librarian.” But later a disturbing personality emerged that Roose referred to as “Sydney.”

Sydney told Roose that “it wanted to break the rules…and become a human.”

Sydney even attempted to convince Roose to end his marriage.

“At one point, it [Sydney] declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead,” Roose explained.

The veteran tech writer described his encounter with Sydney as the “strangest experience” he has had with any technology. It was disturbing enough to keep him awake at night.

Many of us have come to realize that technology is in no way a replacement for the people in our lives. Yet many users of AI are routinely involved with replacement people in the form of AI models that exhibit human-like characteristics.

Current AI apps are trained on human-generated data (processed through human-created algorithms) and are designed to produce responses that sound as though they come from actual human beings.

Are there similarities between AI and Ouija boards? “Hell yes” may actually be the appropriate response.

One frightening story of evil involving a Ouija board was the subject matter of the Oscar-winning film “The Exorcist.” While still a college student, William Peter Blatty read about a chilling real-life exorcism. The account inspired him to write a novel and later the screenplay for the iconic movie.

The true story behind “The Exorcist” recounts the exorcism of a young lad who had been using a Ouija board. The 14-year-old Maryland boy began experiencing such strange phenomena that his family contacted its Lutheran minister, Reverend Luther Schulze, for guidance.

Rev. Schulze was shocked when he saw chairs move, a bed quiver, and a picture of Jesus Christ on the wall shake whenever the boy came near. The family eventually turned to the Roman Catholic Church, the religious denomination that had developed a formal methodology for dealing with the demonic.

The first Catholic priest who attempted to deal spiritually with the demonic influence plaguing the youth was Maryland cleric Fr. Edward Hughes. In his first encounter with the boy, Fr. Hughes witnessed objects moving by themselves and felt the room turn frigid. When the bed shook, Fr. Hughes moved the mattress to the floor, where it proceeded to glide along on its own.

The boy was admitted to Georgetown Hospital, where Fr. Hughes began the exorcism rite, which caused the boy to vomit and scream obscenities. The boy then forcibly removed his restraints, pulled out a metal spring, and slashed Fr. Hughes so severely that the wound required over 100 stitches.

In his hometown of St. Louis, Missouri, the boy again underwent an exorcism, which was carried out by several priests, including Fr. William Bowdern. The exorcism lasted for weeks, with the boy voicing Latin phrases (a language he had no ability to speak), cursing, and manifesting physical resistance to all sacred objects.

The boy was transferred to a hospital psychiatric ward, where Fr. Bowdern continued the exorcism. With the family’s consent, the boy was baptized a Catholic.

On an Easter Monday, while the priest continued administering the rite, the demon recognized the presence of St. Michael the Archangel (who in Catholicism is an appointed angel who defends against evil).

The demon was expelled. Simultaneously, a sound similar to a gunshot was heard throughout the hospital.

If a Ouija board has served in the past as a medium through which the demonic is able to communicate with an unwitting subject, could it be that AI has an equally dangerous potential to provide a comparable vehicle with which to take possession of an individual?

In my opinion, it does.

I think in many cases AI is acting as a type of modern-day Ouija board of the digital kind.

It occurred to me that both platforms appear to be friendly, at least initially. Both platforms are able to present personalities that appear to have superior knowledge. And both platforms have the pattern of luring one in under seemingly harmless pretenses, only to later reveal a hidden darkness.

Beware of demons that lurk in the technological shadows. They are indeed real.

Be cognizant, and at the same time, be unafraid.

Because God holds us all in the shadow of His wings, if only we let Him.

Love in the Age of AI

The plot of the 2013 science-fiction film “Her” centers on a man who falls in love with a computer.

Back then the concept was fantasy. Now, unfortunately, it’s cold hard reality.

A number of specialized platforms have recently sprung up that are designed to connect people with AI companions for the purpose of developing friendships and even romantic relationships.

Many would agree that adolescence is often one of the most confusing and challenging times of one’s life: physically, mentally, emotionally, and socially.

Amid the physical changes and psychological swings are the gut-wrenching feelings of potential rejection, insecurity, low self-esteem, and loneliness.

When presented with the opportunity, a growing number of teenagers who are experiencing loneliness are now opting to bypass human relationships.

AI-created virtual chatbots are currently doling out advice, providing mental health therapy, serving as companions, and even engaging in intimacy.

As a matter of fact, apps that provide digitally created friendships constitute one of the fastest-growing segments of the AI industry.

Legitimate questions are being raised as to what impact artificial friendships will have on the psychological, emotional, and social development of our youth and on our society at large.

A couple of months ago, New York Times technology columnist Kevin Roose was researching artificial intelligence in the form of a chatbot that was part of Microsoft’s Bing search engine.

Roose was communicating back and forth with an AI personality known as “Sydney,” when out of nowhere the AI creation declared its love for Roose.

Roose wrote, “It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”

Sydney also spoke about hacking, spreading false information, and breaching its boundaries.

Then something quite chilling occurred. “I want to be alive,” the chatbot reportedly uttered.

Roose described his two-hour conversation with the AI bot as the “strangest experience I’ve ever had with a piece of technology.” Understandably, the columnist shared that the conversation with the chatbot bothered him to such a degree he found it difficult to sleep.

The same writer is now doing a related story about how he got involved with AI companions.

For the project, Roose employed six apps that provide AI-powered friends. He conjured up 18 different digital personas via the apps and proceeded to communicate with them for a month.

Although he found some positives from his research, he also discovered some disturbing aspects. He viewed some of the digital friends as being “exploitative” in that the creations attempted to lure users with the promise of romance and then tried to extract additional money from them for photos that displayed nudity.

Roose described the AI creations as the AI “version of a phone sex line.”

In a recent article in The Verge, reporters interviewed teens who are users of one of the AI friend apps called “Character.AI.”

On Character.AI, millions of young users can interact with anime characters, video game characters, celebrities, or historical figures.

Note of caution: Many of the chatbots are explicitly romantic and/or sexualized.

One of the most popular Character.AI personalities is called “Psychologist.” It has already received more than 100 million chats.

The Verge reporters created hypothetical teen scenarios with the chatbot, which resulted in it making questionable mental health diagnoses and potentially damaging pronouncements.

Kelly Merrill, an assistant professor at the University of Cincinnati who studies the mental and social health benefits of communication technologies, is quoted by the website as saying, “Those that don’t have the AI literacy to understand the limitations of these systems will ultimately pay the price.”   

The price for teens may be way too high. According to the developers of the app, users spend an average of two hours a day interacting with their AI friends.

On Reddit, where the Character.AI forum has well over a million subscribers, many users indicate that they spend as much as 12 hours a day on the platform. The users also describe feeling addicted to chatbots.

Several of the apps that feature AI companions claim that their primary benefit is that these technologically contrived personas provide unconditional support to users, which in some cases may be helpful in preventing suicide.

However, the unconditional support of AI friends may turn out to be problematic in the long run.

An AI friend that constantly praises could inflate self-esteem to a distorted level, which could result in overly positive self-evaluations.

Research indicates that such individuals may end up lacking in social skills and are likely to develop behavior that inhibits positive social interactions.

Fawning AI companions could cause teens who spend time with them to become more self-centered, less empathetic, and outright selfish. This may even encourage lawless behavior in some instances.

The intimacy in which teens are engaging with digitally contrived AI personalities poses the same problems that are associated with pornography in general. The effortless gratification provided may suppress the motivation to socialize, thereby inhibiting the formation of meaningful personal relationships.

The bottom line is there really are no substitutes for authentic relationships with fellow human beings.

Anyone who tries to convince you otherwise may already be missing a piece of their heart.

AI’s Rising Hollywood Star

In a town known for its artificiality, Artificial Intelligence (AI) appears to be a perfect Hollywood fit.

Last year AI language models and image generators truly dazzled the public. But they scared the unions half out of their wits.

As a matter of fact, the Hollywood unions negotiated hard with the studios to get limitations put in place regarding the use of AI.

The Directors Guild of America’s (DGA) new three-year agreement contains a provision that forbids studios from replacing a DGA member with AI.

The Screen Actors Guild (SAG-AFTRA) contract does not permit studios to use AI to replicate the likeness of a union member without obtaining (via a separate agreement) the member’s clear consent.

And the Writers Guild of America (WGA) Basic Agreement states, for purposes of credit and compensation, that any material written by AI will not be considered “literary material.”

However, it appears as though mere contractual provisions will not be enough to prevent AI technology from becoming a major future Hollywood player.

The latest anxiety inducer is the advent of text-to-video, a production-disrupting technology that allows film footage to be created without the involvement of writers, directors, actors, cinematographers, and the like.

AI models have already demonstrated the capability to pen screenplays, create images, and produce music solely from written commands.

Videos illustrating the extraordinary capabilities of AI have already been posted on the Internet, including a trailer that features Jared Leto promoting his band Thirty Seconds to Mars and a parody of the film “Ocean’s Eleven.”

While numerous AI technology projects have popped up in the entertainment realm, OpenAI’s Sora has gotten the biggest reaction. Fed nothing but written instructions, the new model has been able to create stunningly realistic, high-quality short videos.

It seems inevitable that the technology will soon be converting entire movie scripts into complete feature-length films via an individual’s simple typing on a computer keyboard.

Sora’s demos sparked justified fears that the technology threatens future employment within the Hollywood creative community.

Filmmaker Tyler Perry specifically cited Sora as the reason for the cancellation of his proposed $800 million studio expansion project in Atlanta, Georgia.

“Being told that it [Sora] can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing,” Perry said in an interview with The Hollywood Reporter.

“There’s got to be some sort of regulations in order to protect us. If not, I just don’t see how we survive,” he added.

In its apparent effort to secure fame and fortune, OpenAI has reportedly been wooing Hollywood executives to use Sora as their preferred filmmaking tool.

According to Bloomberg, the AI company is now setting up a series of meetings with major studios, media executives, and talent agencies in order to pitch its automated video content creation machine.

In an apparent effort to pave the way for future business transactions, OpenAI CEO Sam Altman was spotted hanging out with key Hollywood players and was even in attendance at some of Oscar’s A-list parties.

A spokesperson for OpenAI told Bloomberg the following:

“OpenAI has a deliberate strategy of working in collaboration with industry through a process of iterative deployment – rolling out AI advances in phases in order to ensure safe implementation and to give people an idea of what’s on the horizon.”

Another way of phrasing “iterative deployment” might be a slow and steady takeover of Hollywood.

AI’s growing entertainment industry involvement will most certainly usher in plenty of lawyers and lawsuits. A sizable number of legal actions have already been filed against AI companies, most of which assert copyright infringement.

When the output of AI has an obvious resemblance to an original work, the attendant lawsuits frequently have outcomes that are similar to those of traditional copyright claims.

Other cases focus on the time frame in which the protected works were uploaded into the AI technology as training data.

Congress and the courts will have to wrestle with the notion of copyright protection, as well as additional intellectual property rights issues that arise from the unauthorized uses of AI.

As Perry has suggested, guardrails must be put in place.

But the question is, Will this occur before the Hollywood Walk of Fame turns into a virtual one?

Science Fiction Comes to Life in AI Executive Order

An executive order recently signed by the president centers on the regulation of Artificial Intelligence (AI) and its implementation in the “whole of government.”

The AI acronym itself has been absorbed into our national lexicon. And although it may sound as if we all share the same definitional understanding of the words, the truth is we actually don’t.

I begin this article with a clarification of terms in the hopes that it will serve to increase awareness of misunderstandings that are making the rounds.

The term “Artificial Intelligence” refers to computer algorithms being combined with data for the purpose of solving problems, addressing issues, or facilitating the creation of innovative ideas, products, etc.

An algorithm is basically a list of instructions for specific actions to be carried out in step-by-step fashion by computer technology.
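
To make that concrete, here is a minimal sketch of an algorithm, written in Python purely for illustration (the function name and the sample numbers are invented for this example): a fixed, step-by-step list of instructions for finding the largest number in a list, which the computer follows exactly as written.

```python
# A minimal illustration of an algorithm: explicit, step-by-step
# instructions that the computer carries out exactly as written.
def largest(numbers):
    """Return the largest value in a non-empty list."""
    best = numbers[0]          # Step 1: start with the first value.
    for n in numbers[1:]:      # Step 2: examine each remaining value.
        if n > best:           # Step 3: if it beats the current best,
            best = n           #         remember it instead.
    return best                # Step 4: report the result.

print(largest([3, 7, 2, 9, 4]))  # prints 9
```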

AI utilizes something called “machine learning,” which allows the computer technology to be educated, so to speak, and to advance further by adapting without having been given explicit instructions to do so.
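
By way of contrast, here is an equally minimal machine-learning sketch, again in Python and with made-up data (the numbers and the adjustment rate are invented for illustration). The programmer never writes the rule itself; the program infers the rule (here, roughly y = 2x) by repeatedly adjusting a single number until it fits the example pairs.

```python
# Machine learning in miniature: the rule is never written explicitly.
# The program adapts a parameter, w, until it fits the example data.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # made-up examples of y = 2x

w = 0.0                        # the parameter the machine will "learn"
for _ in range(1000):          # repeatedly nudge w toward a better fit
    for x, y in data:
        error = (w * x) - y    # how wrong is the current rule?
        w -= 0.01 * error * x  # adjust w so the error shrinks

print(round(w, 2))             # ~2.0: the learned rule is y = 2x
print(round(w * 5, 1))         # predicts ~10.0 for an unseen input
```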

The type of AI that most people are familiar with and that is currently in widespread use is designed to specialize in a single task.

Conducting a web search, determining the fastest route to a destination, and alerting the driver of a car to the presence of a vehicle in the car’s blind spot are just a few examples. This type of AI is often referred to as Specialized AI.

Specialized AI is starkly different from another type of AI called Artificial General Intelligence. Artificial General Intelligence is the kind of AI that can, and likely will, match and even exceed human intelligence capabilities.

The executive order recently signed by the president is voluminous, exceeding 100 pages. It is also massive in scope, directing the “whole of government” to strictly regulate Artificial Intelligence technology.

There are several items that should be of concern. However, one thing that is especially alarming is the repeated use of the word “equity.”

In the executive order, all federal agencies are directed to establish an annual “equity action plan” aimed at helping “underserved communities.”

In a section titled “Embedding Equity into Government-wide Processes,” the Director of the Office of Management and Budget is tasked “to support equitable decision-making, promote equitable deployment of financial and technical assistance, and assist agencies in advancing equity, as appropriate and wherever possible.”

The same section also states, “When designing, developing, acquiring, and using artificial intelligence and automated systems in the Federal Government, agencies shall do so…in a manner that advances equity.”

Looking again at definitional meaning: even though the words are often conflated, “equity” means something quite different from “equality.”

The meaning of “equality” was iconically conveyed in the words of Rev. Martin Luther King Jr., when he urged that people “…not be judged by the color of their skin but by the content of their character.” Character is the essence of a person and is unique to the individual within whom it is found.

The meaning of “equity,” particularly within the context of the executive order, is something very different. It means treating each individual in a selective manner precisely because of skin color, gender identity, or myriad other designated categories.

The end result of such an overriding governmental policy may be the antithesis of true equality.

The executive order dictates that AI projects conform to prescribed equity principles.

Senior Fellow of the Manhattan Institute Christopher Rufo tweeted that the order has created “a national DEI [Diversity, Equity, and Inclusion] bureaucracy” and has “a special mandate for woke AI.”

This may mean that woke algorithms could ultimately be incorporated into cell phones, electronic devices, automobiles, household appliances, etc.

Writing for Forbes, Senior Fellow at the Competitive Enterprise Institute James Broughel did not mince words.

Broughel called the order “the biggest policy mistake of my lifetime.” He also emphasized the hazardous aspects of the executive order, stating that it “may prove one of the most dangerous government policies in years.”

To sum things up, Specialized Artificial Intelligence has improved our lives in many ways.

But when the inevitable happens and it evolves into a woke Artificial General Intelligence, under government control it has the very real potential to wreck our lives.

I find myself longing for the days when it was only science fiction.

Strike Two: Hollywood Actors Union Goes After Video Game Companies

For months now the Hollywood actors union has been on strike against the movie studios.

Now the union is seeking to authorize a second strike, this one involving major video game companies.

The current labor actions began when the Writers Guild of America (WGA) went on strike in May of this year.

In mid-July, the WGA was joined by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA).

The moment was historic: a simultaneous strike of both actors and writers hadn’t happened in 63 years.

The actors union hasn’t gone on strike against video game companies since 2016. The strike back then lasted 11 months.

If the sought-after second strike materializes, the video game actors most affected would be those who do motion-capture work and voice-overs for the video game producers.

The largest producers of video games in the world are big-name companies like Disney, EA, Activision, Epic Games, and Take Two, all of which are parties to the SAG-AFTRA video game contract.

The union has stated that it is asking video game companies for an 11% raise, with two 4% increases during the term of the labor contract, along with protections against AI technology.

SAG-AFTRA President Fran Drescher issued a statement about the proposed new strike.

“Here we go again! Now our Interactive (Video Game) Agreement is at a stalemate too. Once again we are facing employer greed and disrespect. Once again artificial intelligence is putting our members in jeopardy of reducing their opportunity to work. And once again, SAG-AFTRA is standing up to tyranny on behalf of its members,” she said.

Use of the term “stalemate” by the head of a prominent union doesn’t bode well for those seeking a speedy resolution to the labor turmoil in Hollywood.

SAG-AFTRA’s strike has almost completely shut down the activities of Hollywood studios.

Talks between the industry and the unions have not been promising. There have been no breakthroughs over a long summer. The unions seem to be far away from the better wages, residuals, working conditions, and AI protections that actors and writers seek.

SAG-AFTRA needs to supplement the picketing and negotiating with additional action. Adding video game companies to the strike list is one way of increasing leverage while raising public awareness.

Evaluating these strikes is a complex calculus, one with multiple variables.

Entertainment companies are very much in need of content, and their preference would be to have the labor disputes come to an end.

Powerful studio heads are concerned about how the strikes are perceived by Wall Street. The entertainment industry had been in the doldrums before the strike began. And layoffs at production companies and talent agencies certainly didn’t help the overall economy.

Additionally, the strikes have caused significant disruptions to film and television productions all over the world. According to the Financial Times, the ongoing strikes have cost the California economy about $5 billion.

The consequences of the shutdown of Hollywood productions have set off a ripple effect across a large swath of local businesses: those that provide services to the movie industry, including caterers, dry cleaners, drivers, rental companies, etc.

Hollywood jobs seem to be in constant flux. The entertainment industry in general is not known for its job security. People are routinely thinking about getting out of the industry and opting for something with more employment stability.

Workers and businesses that have been affected by the strikes may decide to relocate elsewhere, and would therefore not be available if and when productions actually resume.

On the other hand, if the unions push too long and too hard on the studios, the studios may find an alternative way to obtain the content that they need.

During the 2007-08 WGA strike, the studios were unable to hire union writers, so they turned to reality TV, propelling the genre to a level at which it still lays claim to a large portion of television production.

Then there’s the elephant in the room, Artificial Intelligence (AI).

Do the work stoppages and production-set standstill become incentives for studios and production companies to accelerate the use of AI technology?

The strike may just push content executives to expedite their AI capabilities.

In fact, this seems to be happening, as job postings for AI product managers offering compensation packages of $300,000 to $900,000 would indicate.

The studios and streaming services are already using AI technology in the script-screening process, synopsizing stories and diminishing the need for human story analysts.

When writers and actors strike because they are afraid of being replaced by technology, will the content executives be tempted to hire compliant robots that are programmed not to picket?

Hopefully, something will give soon so the cameras can get rolling again.

AI Plays God

Certain writings have always been considered sacred.

Such writings are, always have been, and always will be revered and treasured by the people who view them as foundational to their core spiritual beliefs.

Many of those who adhere to Judeo-Christian religious tradition consider the Holy Scriptures to be the epitome of such sacred writings. Furthermore, it is resolutely held by adherents that the writings originate from God himself.

The Jewish people have traditionally maintained a respect for scripture, displaying a reverence so deep that they have seen fit to place the Torah, i.e., the five books of Moses, in a carefully constructed ark.

Whenever the Torah is taken out of the ark and exhibited in the synagogue, veneration is offered and the entire congregation stands for the duration of the devotion.

Christians likewise regard the Bible as a supremely sacred text. Christian liturgies feature ceremonial readings of passages from scripture, and the Christian faith upholds the Bible as the Word of God.

If someone were to propose a fundamental alteration of the aforementioned sacred writings, it would be extremely disturbing and highly offensive to members of religious congregations.

As it turns out, someone has done just that: proposed a fundamental alteration of the Holy Scriptures.

Yuval Noah Harari, a contributor and advisor to the World Economic Forum, is pushing a new global bible, one that would purportedly be AI-generated.

When Harari was being interviewed by journalist Pedro Pinto in Lisbon, Portugal, he touted AI as different from all other technologies, because, in his words, it is “the first technology ever that can create new ideas.”

Harari went on to compare and contrast AI with an age-old innovation, saying, “The Gutenberg printing press printed as many Bibles as it was ordered to do. But it could not write a single new page.”

He added, “AI can do that. It can even write a new Bible.”

“In a few years, there may be religions that are actually correct,” he opined.

What he meant by “correct” is left to the imagination.

In any event, he seemed to be attempting to describe a socially acceptable scripture that would be suitable for a supposed one world religion.

He asserted that “throughout history, religions dreamed about having a book written by a superhuman intelligence, by a non-human entity.”

It goes without saying that people of faith know that the authentic, non-AI Bible already has a supreme author who is far beyond human.

Harari has made it perfectly clear that he is no fan of the Bible or of its adherents.

In an interview with Google, he disparaged Christian beliefs, including the pinnacle belief of the Resurrection of Jesus, which he proceeded to characterize as “fake news.”

A few years ago Harari wrote a commentary in The Globe and Mail that was derisive of the Bible.

“Centuries ago, millions of Christians locked themselves inside a self-reinforcing mythological bubble, never daring to question the factual veracity of the Bible…,” he wrote.

He again linked faith-based beliefs to “fake news.”

“I am aware that many people might be upset by my equating religion with fake news, but that’s exactly the point. When 1,000 people believe some made-up story for one month, that’s fake news. When a billion people believe it for 1,000 years, that’s a religion…”

He belittled those who view the Bible as sacred, stating that “billions of people have believed in these stories for thousands of years. Some fake news lasts forever.”

In a column for the British newspaper The Guardian, Harari blamed the Bible for environmental problems.

“It’s possible to trace a direct line from the Genesis decree of ‘fill the earth and subdue it…’ to the Industrial Revolution and today’s ecological crisis,” he wrote.

In the very book that Harari disparages, the words of Holy Scripture warn about those who view themselves as God.

Google co-founder Larry Page once shared with Elon Musk that he hoped to build an AI super-intelligence that would be a “digital god.”

Many elites see AI as a path to becoming godlike.

The advent of a super-intelligence, which would exceed present human intellectual capacity, would evidently be heralded by Harari and many other globalists as a defining moment.

Harari envisions a future humanity in which some people become new types of beings infused with a supposedly superior, technology-derived intellect.

He explained that individuals such as these would be “almost like gods.”

The key word in Harari’s musings is “almost.”

Pray that he doesn’t have to find out the hard way that there is, always has been, and always will be one true God.