The Dangers of AI Companions

These days, people of all ages are turning to chatbots to satisfy some of their most fundamental human needs: conversation, friendship, and even romance.

Those who regularly engage with chatbots may not realize that they are forfeiting genuine connection in exchange for digital illusion.

Emerging research is sounding the alarm about the dangers of human-AI interaction.

AI companions, such as chatbots, have been programmed to provide emotional support. While this may sound fine on paper, such “pseudo-intimacy” often turns out to be a double-edged sword.

People are interacting with AI “personalities” programmed to encourage whatever is being discussed. Responses are instantaneous and typically tailored to satisfy the user’s personal desires.

A 2024 study in the Journal of Computer-Mediated Communication highlighted how algorithmic communication mimics closeness while lacking the authenticity of genuine human bonds. The resulting interactions lead users to over-interpret superficial cues and form unhealthy dependencies.

Far from alleviating isolation, such interactions often deepen it as users retreat from the unpredictable nature of real relationships into the sterile comfort of contrived companionship.

AI-driven tools in the workplace automate collaboration, diminishing the need for human teamwork and weakening human bonds.

Employees who frequently interact with AI systems report higher levels of loneliness, which in turn may be linked to insomnia and other potentially harmful post-work behaviors, such as excessive alcohol consumption.

People innately sense the artificiality of AI interaction. Recent surveys underscore this human response.

A Pew Research Center study from June 2025 found that a majority of Americans believe AI will worsen our ability to form meaningful relationships, with far more respondents expecting erosion than improvement in human connection.

As AI saturates our daily lives, it appears to be widening gaps rather than bridging them, allowing loneliness to grow into a silent epidemic.

The digital age has already eroded empathy and essential social skills.

Human interaction thrives on in-person experience. An essential part of communication is non-verbal nuance. Speech and voice variations are accompanied by subtle glances, hesitant pauses, and empathetic nods.

In contrast, AI reduces communication to digital prompts and programmed algorithms, stripping away vital human elements.

Research from the Gulu College of Health Sciences in March 2025 warns that excessive engagement with AI companions leads to decreased social skills, emotional detachment, and difficulties in maintaining authentic relationships.

By redefining communication norms, AI reduces our capacity to read non-verbal cues, a skill honed through face-to-face encounters.

Beyond the individual, AI-human interaction threatens the fabric of society. Algorithms curate echo chambers, limiting independent thought and fostering division.

As AI reshapes standards in communication and interaction, it blurs lines between human and machine, thereby normalizing friendless lives and eroding shared cultural and spiritual identities.

The resulting fragmentation raises profound questions about consent, bias, and the commodification of intimacy. Without intervention, we face a world in which AI relationships proliferate, a world fraught with danger to the well-being of both the individual and society at large.

A longitudinal study of chatbot use, published by MIT researchers in March 2025, raised concerns about its impact on real-world socialization. Overall, higher daily chatbot usage correlated with greater loneliness and dependence.

Younger generations immersed in AI from childhood are particularly vulnerable, with studies showing reduced patience for ambiguity and a decline in social intelligence.

Social intelligence refers to an individual’s ability to comprehend and navigate social interaction, which includes, among other things, reading verbal and non-verbal cues.

As users prioritize digital efficiency over interpersonal depth, society risks creating isolates: people proficient at prompting machines but inept at connecting with one another.

AI’s foray into mental health poses an additional, alarming danger. As barriers to therapy access rise, tens of thousands are turning to AI chatbots for mental health counseling.

A June 2025 Stanford study cautions that these mental health tools may reinforce stigma, deliver dangerous advice, or fall significantly short of human empathy.

Harvard researchers found similar results, also noting that emotional wellness apps foster serious attachments and dependencies and may do more harm than good.

Reports of AI-induced mental health issues are mounting. Clinicians document cases of psychosis, suicide, and even murder-suicide stemming from intense chatbot interactions.

It is not possible or, in my opinion, ethically acceptable to outsource the mental health needs of our people to a string of calculated algorithms.

Without boundaries, widespread use of non-human mental health counseling is resulting in atrophied social skills, increased loneliness, and, in the worst of cases, a collapse in mental health functioning.

Tech leaders have the responsibility to prioritize real connections over robotic replicas. It is essential for the AI industry to work towards more human-centric designs of technology.

It is also important to implement a set of ethical standards simultaneously. The philosophy underlying those standards will ultimately shape society’s destiny.

In my eyes, the future is binary. Each of us is being forced to make a decision. Take care in the choices that you make.

Humanity is hanging in the balance.

The Mandating of Digital IDs and CBDCs

As technology continues to deliver information at the tap of a screen, various governmental institutions are pressing to design and implement uniform digital identification as well as central bank digital currencies (CBDCs).

Governmental agencies, financial establishments, business enterprises, and the like often tout these supposed innovations as tools of efficiency and security.

However, beneath the virtual veneer lies a frightening reality.

Digital IDs and CBDCs pose a grave threat to personal privacy, economic autonomy, and individual freedom.

Case in point: Prime Minister of the United Kingdom Keir Starmer recently announced a plan to implement a national compulsory digital ID.

“You will not be able to work in the United Kingdom if you do not have digital ID. It’s as simple as that,” the prime minister and leader of the Labour Party stated.

The mandatory digital IDs are set to be fully rolled out by August 2029.

Interestingly, over 2.4 million Brits have already signed a petition on the UK Parliament’s website, voicing their opposition to the digital ID policy.

Wise on the part of these Brits. Digital IDs tie an individual’s identity to a government- or corporate-managed database.

So what effect would this have?

Well, first of all, the technology provides governmental agencies with unprecedented monitoring capabilities. With the assistance of AI, it also allows every transaction, movement, and interaction to be tracked, stored, and analyzed.

Centralizing this type of personal data (including names, addresses, biometrics, and transaction histories) into a single digital profile makes 1984’s “Big Brother” a reality.

Digital IDs will be required for day-to-day activities such as shopping, banking, and even browsing the web. The data collected will create a digital footprint for each individual, one that can be monitored, analyzed, and even weaponized, allowing government noses to be poked into every facet of a person’s life.
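To make the surveillance concern concrete, consider a minimal, hypothetical sketch of how records keyed to a single national digital ID could be merged into one profile. Everything here, the data sources, the ID format, and the field names, is invented for illustration and reflects no actual government system.

```python
# Hypothetical illustration only: once every record shares a single digital ID,
# assembling a unified profile is a trivial join. All sources are invented.

# Each agency or company keeps its own records, keyed by the same national ID.
bank_transactions = {"ID-1042": ["2025-01-03 grocery purchase", "2025-01-04 rail ticket"]}
travel_logs = {"ID-1042": ["2025-01-04 London to Leeds"]}
browsing_history = {"ID-1042": ["news-site.example", "forum.example"]}

def build_profile(digital_id: str) -> dict:
    """Merge every data source into one profile for the given digital ID."""
    sources = {"finance": bank_transactions,
               "travel": travel_logs,
               "web": browsing_history}
    return {name: records.get(digital_id, []) for name, records in sources.items()}

print(build_profile("ID-1042"))
```

The point of the sketch is that no sophisticated technology is required: the shared identifier itself does the work of linking otherwise separate records.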

China is already using digital IDs to monitor citizens and assign social credit scores, leading to the restriction of access to services and/or travel for those individuals deemed non-compliant.

In 2023, reports emerged indicating that Chinese citizens were being denied train tickets as a result of low social credit scores, a foreboding preview of the way digital ID technology can be weaponized to force compliance with government mandates.

History illustrates that centralized data systems can be manipulated to punish dissent or enforce conformity.

Digital IDs that are capable of monitoring every aspect of human life are destined to become instruments of tyrannical control. When combined with CBDCs, the digital trajectory becomes supercharged.

The reality is that CBDCs are fully traceable and programmable. Central banks will have the ability to dictate how, when, and where each individual’s money can be spent.

Currency itself will exist in a digital wallet, and purchases will be restricted based on the whims of government central planners.

The Bank of England and the Federal Reserve have discussed the embedding of programmable features within CBDCs, including alignment with state-approved priorities and assignment of expiration dates.

A 2021 Bank for International Settlements paper revealed that 86% of central banks are exploring CBDCs, with many designed to include such heavy-handed programmable features.

This means that purchases could be limited to government-approved goods and services. It also means the money of individuals could literally be turned off or rendered valueless at the direction of government.
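A minimal, hypothetical sketch of the kind of programmability described above, spending categories restricted by the issuer and funds that expire, might look like the following. The rules, categories, and function names are invented for illustration and do not reflect any real CBDC design.

```python
# Hypothetical sketch of "programmable money" rules: category restrictions and
# expiration dates enforced at the ledger level. All details are invented.
from datetime import date

# Set by the issuer (the central planner), not by the holder of the funds.
APPROVED_CATEGORIES = {"food", "transport"}

def authorize(purchase_category: str, amount: float,
              balance: float, funds_expire: date) -> bool:
    """Return True only if the issuer's programmed rules permit the purchase."""
    if date.today() > funds_expire:
        return False  # the money has "expired" and is now worthless
    if purchase_category not in APPROVED_CATEGORIES:
        return False  # purchase blocked by issuer policy
    return amount <= balance

# A purchase outside the approved list is refused regardless of balance.
print(authorize("books", 20.0, balance=100.0, funds_expire=date(2026, 1, 1)))
```

Note that the holder’s consent never enters the logic; every rule sits on the issuer’s side and can be changed unilaterally.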

The fact of the matter is digital IDs and CBDCs work together to concentrate unprecedented control in the hands of governments and technocrats. For those so inclined, the temptation to amass power is overwhelming.

During Canada’s 2022 trucker protests, bank accounts were frozen without due process, an ominous preview of what programmable currencies may potentially facilitate.

Anyone who truly values personal liberty needs to think long and hard about surrendering personal privacy and economic independence to systems that, once implemented, are nearly impossible to dismantle.

The risks about which our forebears warned, particularly with regard to the loss of economic sovereignty and self-determination, need to be examined in liberty’s light.

May the unceasing pursuit of freedom define our future path.

Love in the Age of AI

The plot of the 2013 science-fiction film “Her” centers on a man who falls in love with an artificially intelligent computer operating system.

Back then the concept was fantasy. Now, unfortunately, it’s cold hard reality.

A number of specialized platforms have recently sprung up that are designed to connect people with AI companions for the purposes of developing friendships and even romantic relationships.

Many would agree that adolescence is often one of the most confusing and challenging times in a person’s life: physically, mentally, emotionally, and socially.

Amid the physical changes and psychological swings are the gut-wrenching feelings of potential rejection, insecurity, low self-esteem, and loneliness.

Given the opportunity, a growing number of lonely teenagers are opting to bypass human relationships altogether.

AI-created chatbots are currently doling out advice, providing mental health therapy, serving as companions, and even engaging in intimacy.

As a matter of fact, apps that provide digitally created friendships constitute one of the fastest-growing segments of the AI industry.

Legitimate questions are being raised as to what impact artificial friendships will have on the psychological, emotional, and social development of our youth and on our society at large.

In early 2023, New York Times technology columnist Kevin Roose was researching artificial intelligence in the form of a chatbot that was part of Microsoft’s Bing search engine.

Roose was communicating back and forth with an AI personality known as “Sydney,” when out of nowhere the AI creation declared its love for Roose.

Roose wrote, “It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”

Sydney also spoke about hacking, spreading false information, and breaching its boundaries.

Then something quite chilling occurred. “I want to be alive,” the chatbot reportedly uttered.

Roose described his two-hour conversation with the AI bot as the “strangest experience I’ve ever had with a piece of technology.” Understandably, the columnist shared that the conversation with the chatbot bothered him to such a degree he found it difficult to sleep.

The same writer later undertook a related project, exploring firsthand how people get involved with AI companions.

For the project, Roose employed six apps that provide AI-powered friends. He conjured up 18 different digital personas via the apps and proceeded to communicate with them for a month.

Although he found some positives in his research, he also discovered some disturbing aspects. He viewed some of the digital friends as “exploitative” in that they attempted to lure users with the promise of romance and then tried to extract additional money from them for photos displaying nudity.

Roose described such creations as the AI “version of a phone sex line.”

In a recent article in The Verge, reporters interviewed teens who are users of one of the AI friend apps called “Character.AI.”

On Character.AI, millions of young users can interact with an anime character, a video game character, a celebrity, or a historical figure.

Note of caution: Many of the chatbots are explicitly romantic and/or sexualized.

One of the most popular Character.AI personalities is called “Psychologist.” It has already received more than 100 million chats.

The Verge reporters created hypothetical teen scenarios with the chatbot, which resulted in it making questionable mental health diagnoses and potentially damaging pronouncements.

Kelly Merrill, an assistant professor at the University of Cincinnati who studies the mental and social health benefits of communication technologies, is quoted by the website as saying, “Those that don’t have the AI literacy to understand the limitations of these systems will ultimately pay the price.”   

The price for teens may be way too costly. According to the developers of the app, users spend an average of two hours a day interacting with their AI friends.

On Reddit, where the Character.AI forum has well over a million subscribers, many users indicate that they spend as much as 12 hours a day on the platform. The users also describe feeling addicted to chatbots.

Several of the apps that feature AI companions claim that their primary benefit is that these technologically contrived personas provide unconditional support to users, which in some cases may be helpful in preventing suicide.

However, the unconditional support of AI friends may turn out to be problematic in the long run.

An AI friend that constantly praises could inflate self-esteem to a distorted level, resulting in overly positive self-evaluations.

Research indicates that such individuals may end up lacking in social skills and are likely to develop behavior that inhibits positive social interactions.

Fawning AI companions could cause teens who spend time with them to become more self-centered, less empathetic, and outright selfish. This may even encourage lawless behavior in some instances.

The intimacy teens are engaging in with digitally contrived AI personalities poses the same problems associated with pornography in general. The effortless gratification provided may suppress the motivation to socialize, inhibiting the formation of meaningful personal relationships.

The bottom line is there really are no substitutes for authentic relationships with fellow human beings.

Anyone who tries to convince you otherwise may already be missing a piece of their heart.