Hidden Blessing in the Hollywood Shutdown

Hollywood sets have gone dark.

A central reason for the recent Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) strike is that actors, writers, and other entertainment artists are deeply worried that Artificial Intelligence (AI) will make them and their jobs obsolete.

When the strike was first announced, SAG-AFTRA's current president, Fran Drescher, was at the mic to address the press.

Drescher, the former lead actress of the 1990s hit TV sitcom “The Nanny,” heads the union that boasts a membership of over 160,000 film and television actors.

Interestingly, the writers union had gone on strike a couple of months back. But now that SAG-AFTRA has also taken to the picket line, the situation in Hollywood is looking pretty bleak.

The last time both unions were on strike simultaneously was over sixty years ago, when then-actor Ronald Reagan, who would ultimately become President of the United States, was wearing the union president's hat.

Like every other aspect of our lives, things presently appear to be out of whack.

The brand of Hollywood itself is in tatters, in large part because of the cultural and political agendas that permeate every nook and cranny of the town.

What has particularly outraged the public, though, are the productions that have been coming from major studios, chock-full of vile and inappropriate imagery, content, and messaging aimed straight at our kids and teens.

Could the Hollywood shutdown created by the two entertainment unions be a blessing in disguise?

A lot of consumers of entertainment fare are viewing it this way, as if maybe a wrench in the works was exactly what was needed to stop the madness.

Striking actors and writers have reason to be concerned about the capability of AI models to supplant human beings in the manufacture of entertainment products.

Creative types are also increasingly astonished at the sheer capabilities of generative AI models, which can digitally produce what would typically have been created by human beings, but faster and at lower cost.

AI ingests the works and images of human artists as part of its training data. The technology can then alter and/or mash-up content, allowing entertainment companies to avoid compensating the people who originally created the works or were even the subjects of images used.

Additionally, other creative types such as musicians and visual artists are carefully watching the entertainment biz battle, as are all those who work in an array of fields that will no doubt be affected by AI’s implementation.

We are already witnessing the technological replacement of human beings in a host of industries. Still, the entertainment business has a unique opportunity to do something helpful for society at large.

The manner in which Hollywood resolves the two strikes could set the marker, not only for the entertainment industry but for other businesses as well.

Digitally created trailers and scenes featuring what appear to be well-known actors have popped up all over the internet. The virtual phenom is posing legal and ethical concerns that the unions are obliged to address.

At a recent press conference, Drescher warned, “If we don’t stand tall right now, we are all going to be in trouble. We are all going to be in jeopardy of being replaced by machines.”

SAG-AFTRA chief negotiator Duncan Crabtree-Ireland indicated during a press conference that a proposal by the studios would put background performers at a terrible disadvantage.

“They propose that our background performers should be able to be scanned, get paid for one day’s pay, and their company should own that scan of their image, their likeness, and should be able to use it for the rest of eternity,” Crabtree-Ireland said.

The Alliance of Motion Picture and Television Producers (AMPTP), which represents major studios including Walt Disney and Netflix, issued a statement suggesting that the claim made by SAG-AFTRA leadership is untrue.

An AMPTP spokesperson told ZDNET that the use of digital replicas would be restricted to the specific motion picture for which the actor is employed, and that any additional use would require the actor’s permission.

“Any other use requires the background actor’s consent and bargaining for the use, subject to a minimum payment,” the spokesperson stated.

This strike over AI is just the opening scene.

Sit yourself down and get ready for a real-life epic drama.

Only this time you’re not going to be able to say, “Don’t worry. It’s only a movie.”

AI and the Song

Music is a universal language like no other.

When words seem inadequate, it speaks volumes.

So where does music come from?

We may differ in our opinions on that. But a lot of us believe that inspiration, in music as in various other art forms, literary writings, discoveries, inventions, and the like, has an otherworldly origin.

Musical inspiration is particularly distinctive, though, because of its biblical roots and its resonance within human beings across all time.

Artists who are driven to share their musical inspirations are currently facing some seriously haunting questions.

Here are a few:

1. Can technology really create the equivalent of human music?

2. Will technologically designed songs measure up to the music that human beings love?

3. Is music designed by technology really music?

There are a whole lot of music artists who are concerned about Artificial Intelligence (AI) and its supposed “creation” of musical content.

Experimentation with computers composing music has been going on for decades. But there was always a human at the helm.

Now with AI, the human is hidden. A programmer, a series of programmers, faceless, nameless, all seemingly lost, only data remain.

And we are supposed to accept the notion that data have been assigned to be our new composers?

Such so-called artistic advances in AI are prompting an interesting reaction – a mixed blend of enthusiasm, anticipation, and alarm.

A few recent examples provide insight.

A “collaboration” between famed pop musicians Drake and The Weeknd, which was actually an AI-simulated version of “Heart on My Sleeve,” went viral on social media. The track was quickly pulled at the behest of the label, Universal Music Group.

AI was used to generate an album mimicking the highly successful British rock band Oasis. But the group had long since disbanded. Apparently, an insignificant detail.

Canadian EDM artist Claire Boucher, a.k.a. Grimes, is evidently embracing the idea of an AI version of herself.

She sent out the following advertisement of sorts:

“I’ll split 50% royalties on any successful AI generated song that uses my voice,” Grimes tweeted. “Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings.”

Probably the biggest story relating to all of the above involves Sir Paul McCartney. The former Beatle is one of the most influential composers and performers of all time.

McCartney has accelerated the AI discussion by announcing that the surviving Beatles would release an AI-assisted tune, which will feature vocals by the late John Lennon.

He told BBC Radio 4 that the technology was able to “extricate” Lennon’s voice from a demo recording to allow the song to be completed, and it is set to be released this year.

During the production of Peter Jackson’s documentary “Get Back,” technology was used to remove background noise from the track and otherwise clean up the audio.

“[Jackson] was able to extricate John’s voice from a ropey little bit of cassette,” McCartney said. “We had John’s voice and a piano and he [Jackson] could separate them with AI.”

“So when we came to make what will be the last Beatles record, it was a demo that John had, and we were able to take John’s voice and get it pure through this AI,” McCartney added.

Reportedly, the song is a 1978 Lennon composition called “Now and Then.”

McCartney had received the demo years earlier from Lennon’s widow, Yoko Ono. The tracks were recorded on a boombox as John sat at the piano in his New York apartment.

Two of the songs on the demo, “Free as a Bird” and “Real Love,” were restored by producer Jeff Lynne and released in 1995 and 1996, the first new Beatles releases in 25 years.

The band had attempted to record “Now and Then,” but the recording session had been halted and the tune abandoned.

Now AI is facilitating McCartney’s completion of the song.

But is it really a new Beatles song? John isn’t with us anymore. How could it be?

After the announcement, some consternation appeared on various web platforms.

McCartney then backtracked a bit, taking to Twitter to assure Beatle fans that in the making of the “new” Beatles song nothing had been “artificially or synthetically created.”

It could be that McCartney is experiencing some trepidation about the use of AI for music production.

He’s certainly not alone.

According to a poll taken by the Bedroom Producers Blog, 86% of those surveyed believe the technology will replace existing tools of music production, and 73% of respondents believe AI could replace human producers in the future.

It actually doesn’t take a musician or songwriter or producer or engineer to realize that, within this context, AI is just what its name indicates – Artificial.

Thankfully, there are still those among us who are able to recognize real music and who freely acknowledge the very source of our human inspiration.

AI Plays God

Certain writings have always been considered sacred.

Such writings are, always have been, and always will be revered and treasured by the people who view them as foundational to their core spiritual beliefs.

Many of those who adhere to Judeo-Christian religious tradition consider the Holy Scriptures to be the epitome of such sacred writings. Furthermore, it is resolutely held by adherents that the writings originate from God himself.

The Jewish people have traditionally maintained a respect for scripture, displaying a reverence so deep that they have seen fit to place the Torah, i.e., the five books of Moses, in a carefully constructed ark.

Whenever the Torah is taken out of the ark and exhibited in the synagogue, veneration is offered and the entire congregation stands for the duration of the devotion.

Christians likewise regard the Bible as a supremely sacred text. Christian liturgies feature ceremonial readings of passages from scripture, and the Christian faith upholds the Bible as the Word of God.

If someone were to propose a fundamental alteration of the aforementioned sacred writings, it would be extremely disturbing and highly offensive to members of religious congregations.

As it turns out, someone has done just that: proposed a fundamental alteration of the Holy Scriptures.

Yuval Noah Harari, a contributor and advisor to the World Economic Forum, is pushing a new global bible, one that would purportedly be AI-generated.

When Harari was being interviewed by journalist Pedro Pinto in Lisbon, Portugal, he touted AI as different from all other technologies, because, in his words, it is “the first technology ever that can create new ideas.”

Harari went on to compare and contrast AI with an age-old innovation, saying, “The Gutenberg printing press printed as many Bibles as it was ordered to do. But it could not write a single new page.”

He added, “AI can do that. It can even write a new Bible.”

“In a few years, there may be religions that are actually correct,” he opined.

What he meant by “correct” is left to the imagination.

In any event, he seemed to be attempting to describe a socially acceptable scripture that would be suitable for a supposed one world religion.

He asserted that “throughout history, religions dreamed about having a book written by a superhuman intelligence, by a non-human entity.”

It goes without saying that people of faith already know the authentic, non-AI Bible has a supreme author who is far beyond human.

Harari has made it perfectly clear that he is no fan of the Bible or of its adherents.

In an interview with Google, he disparaged Christian beliefs, including the pinnacle belief of the Resurrection of Jesus, which he proceeded to characterize as “fake news.”

A few years ago Harari wrote a commentary in The Globe and Mail that was derisive of the Bible.

“Centuries ago, millions of Christians locked themselves inside a self-reinforcing mythological bubble, never daring to question the factual veracity of the Bible…,” he wrote.

He again linked faith-based beliefs to “fake news.”

“I am aware that many people might be upset by my equating religion with fake news, but that’s exactly the point. When 1,000 people believe some made-up story for one month, that’s fake news. When a billion people believe it for 1,000 years, that’s a religion…”

He belittled those who view the Bible as sacred, stating that “billions of people have believed in these stories for thousands of years. Some fake news lasts forever.”

In a column for the British newspaper The Guardian, Harari blamed the Bible for environmental problems.

“It’s possible to trace a direct line from the Genesis decree of ‘fill the earth and subdue it…’ to the Industrial Revolution and today’s ecological crisis,” he wrote.

In the very book that Harari disparages, the words of Holy Scripture warn about those who view themselves as God.

Google co-founder Larry Page once shared with Elon Musk that he hoped to build an AI super-intelligence that would be a “digital god.”

Many elites see AI as a path to becoming godlike.

The advent of a super-intelligence, which would exceed present human intellectual capacity, would evidently be heralded by Harari and many other globalists as a defining moment.

Harari envisions the future of humanity as containing people who become new types of beings infused with a supposed technologically superior intellect.

He explained that individuals such as these would be “almost like gods.”

The key word in Harari’s musings is almost.

Pray that he doesn’t have to find out the hard way that there is, always has been, and always will be one true God.

AI Is Set to Take Over Hollywood

Generative AI is a type of Artificial Intelligence technology that has the capacity to almost instantly produce text, images, audio, and video.

Understandably, the entertainment community is in an uproar over the prospect of AI wiping out a huge chunk of the longstanding industry.

While a segment of Hollywood is actually enthused about the idea that AI might free creators from some of the typically tiresome tasks and also help to avoid the hefty price tag that frequently accompanies big budget projects, others are scared to pieces.

It’s fairly easy to convince a portion of the entertainment community that AI is an overall plus. Use of the technology has become common practice within the biz.

The late Carrie Fisher was digitally cast via AI (with permission from her daughter) in the film “The Rise of Skywalker.”

In another instance, in order to make it seem as if 80-year-old actor Harrison Ford were still in his thirties, Disney-owned Lucasfilm used images of Ford’s face, taken from the “Indiana Jones” films of the 1980s, and blended them into the fifth Indiana Jones film, “Indiana Jones and the Dial of Destiny.”

During an interview with late-night host Stephen Colbert in which he talked about his AI-restored on-screen image, Ford said, “It’s fantastic.”

Actor James Earl Jones, who is now 92 years old, authorized an AI version of his famous voice, which he had supplied for the Darth Vader character in the “Star Wars” franchise series, so that the character could continue on.

Reportedly, a digital version of the late actor Christopher Reeve will be included in a cameo appearance in the upcoming movie “The Flash.”

AI technology is routinely being used to alter mouth movements so as to more accurately sync words in films dubbed into another language. It is also regularly being used to create cinematic music and soundscapes.

Paul Schrader, screenwriter of “Taxi Driver” and director of “American Gigolo,” did a Facebook post about something that he called a “dirty little secret composers know.”

“AI is already scoring filmed entertainment and has been for some time,” Schrader wrote.

Lately the actors and writers unions have been forced to confront the dark side of AI, and they don’t like what they see coming.

Generative AI is one of the main reasons the Writers Guild of America (WGA) has been on strike for weeks and the Screen Actors Guild‐American Federation of Television and Radio Artists (SAG-AFTRA), of which this author is a member, has been threatening to strike as well.

Both unions are seeking to limit the use of AI in the industry.

Digital doppelgangers have been popping up in fake movie trailers, entertainment content made without the assistance of Hollywood creatives.

AI-generated trailers mimicking director Wes Anderson’s films have appeared on the Internet, typically featuring well-known actors such as Bill Murray and Scarlett Johansson. The trailers imitate Anderson’s characteristic style, and they feature fake adaptations of popular franchises such as “Star Wars,” “Harry Potter,” and “The Lord of the Rings.”

A video of Ryan Reynolds selling Teslas was recently shared on Twitter but has since been removed. Reynolds’s production company responded with another AI-generated video, with Twitter owner Elon Musk endorsing gin made by a Reynolds-owned company. This video has also been removed.

A-listers, including Tom Cruise and Keanu Reeves, have actually been the victims of unauthorized AI-generated deep fake videos.

Then there’s the world of voice actors, which has also been shaken in a major way. So-called voice cloning is easily conjured up by AI technology.

The reality is AI technology is capable of improving itself. The phenomenon is known as “emergence.” In the not-too-distant future, entertainment content will be created by simply giving prompts to AI technology without actors, writers, directors, or cameras having to be involved.

This means that an individual with minimal resources but with access to AI can create professional looking videos that feature famous actors and characters, minus their personal consent or involvement.

Actors already have a degree of legal protection, through existing prohibitions, from unauthorized use of their names, images, and/or likenesses.

However, things start getting really murky when it comes to AI technology’s training data. The rights to actors’ previous performances that are used as AI training material will likely be an issue in union negotiations.

Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, has spoken out about maintaining control over the AI-created lookalikes of actors and the issue of fairness when it comes to using personas.

“The performer’s name, likeness, voice, persona – those are the performer’s stock and trade,” Crabtree-Ireland said. “It’s really not fair for companies to attempt to take advantage of that and not fairly compensate performers when they’re using their persona in that way.”

Writers in turn possess intellectual property rights to their works. But under present law they face a difficult burden of proof.

In order to protect their rights in court, they must prove that the AI work is either a reproduction of their own work or a derivative of it.

In the real world, AI will likely be trained with a multitude of scripts, making this burden of proof all but impossible.

In Schrader’s opinion, “The WGA position on AI is a fascinating conundrum. The guild doesn’t fear AI as much as it fears not getting paid.”

Notwithstanding the dangers that the technology poses, the director predicts that AI “will become a force in film entertainment.”

Both SAG-AFTRA and the WGA want reasonable safeguards before AI capabilities proliferate within the industry.

“Family Ties” actress, computer science graduate, and former SAG board member Justine Bateman is unequivocally against the use of AI tech for entertainment content.

“I think AI has no place in Hollywood at all. To me, tech should solve problems that humans have,” Bateman said, adding that its use will “have an incredibly bad effect — disastrous effect on the entertainment business.”

The actress views the use of AI as a backward looking “automatic imitation” through which creativity will be stifled.

“What’s the next genre in film? What’s the next genre in music? You’re never going to see anything like that if we’re all using AI,” Bateman said.

She stated that she didn’t “want to live in that world,” echoing the sentiments of many actors, writers, directors, and musicians.

FYI: The above written article was created by means of the author’s un-Artificial Intelligence.

AI’s Potentially Fatal Flaw

Plenty of discussions have been taking place about the dangers surrounding Artificial Intelligence (AI) and its existing application, the positives and negatives, and possible misuses and/or abuses.

However, a problem has popped up that seems to be causing a real stir.

It turns out that AI can actually lie.

Tech experts refer to inaccuracies and falsehoods produced by AI as “hallucinations.”

This term typically describes incidents in which AI provides solutions to problems, but the solutions contain fictitious material that was not part of the original training data used during the programming process.

Tech experts don’t actually understand AI’s hallucination phenomenon.

When AI first became available in the form of so-called large language models (LLMs), a.k.a. chatbots, AI hallucinations just surfaced on their own.

Early users of LLMs noticed that the models seemed to “sociopathically” embed plausible-sounding fabrications in the generated content.

A number of experts have used the words “very impressive-sounding answer that’s just dead wrong” to describe an AI hallucination.

An early example of the phenom happened in August of 2022.

Facebook’s owner Meta warned that its newly released LLM, BlenderBot 3, was prone to hallucinations, which Meta described as “confident statements that are not true.”

In November of 2022, Meta unveiled a demo of another LLM, Galactica, which also came with the following warning: “Outputs may be unreliable! Language Models are prone to hallucinate text.”

Within days Meta withdrew Galactica.

December of 2022 saw the release to the public of OpenAI’s LLM, ChatGPT, in its beta-version. This is the AI that is most widely used and one with which the public has the greatest familiarity.

Wharton Professor Ethan Mollick seemed to humanize ChatGPT, when he compared the LLM to an “omniscient, eager-to-please intern who sometimes lies to you.”

Lies were exactly what ChatGPT generated when the Fast Company website attempted to use it to author a news piece on Tesla. In writing the article, ChatGPT just went ahead and made up fake financial data.

When CNBC asked ChatGPT for the lyrics to a song called “The Ballad of Dwight Fry,” instead of supplying the actual lyrics the AI bot provided its own hallucinated ones.

A top Google executive recently stated that reducing AI hallucinations is a central task for Bard, Google’s competitor to ChatGPT.

Senior Vice President of Google Prabhakar Raghavan described an AI hallucination as occurring when the technology “expresses itself in such a way that a machine provides a convincing but completely made-up answer.”

The executive stressed that one of the fundamental tasks of Google’s AI project is to keep the hallucination phenom to a minimum.

In fact, when Google’s parent company Alphabet Inc. first introduced Bard, the software shared inaccurate information in a promotional video. The gaffe cost the company $100 billion in market value.

In a recent “60 Minutes” interview, Google CEO Sundar Pichai acknowledged that AI hallucinations remain a mystery.

“No one in the field has yet solved the hallucination problems,” Pichai said.

Admitting that the phenomenon is very widespread in the AI world, he stated, “All models do have this as an issue.”

When the subject of the potential spread of disinformation was brought up, Pichai said, “AI will challenge that in a deeper way. The scale of this problem will be much bigger.”

He noted that there are even additional problems with combinations of false text, images, and even “deep fake” videos, warning that “on a societal scale, you know, it can cause a lot of harm.”

Twitter and Tesla owner Elon Musk recently alluded to the potential harm that AI poses to the political process.

In an appearance on Tucker Carlson’s prior Fox show, Elon explained to the host, “If a technology is inadvertently or intentionally misrepresenting certain viewpoints, that presents a potential opportunity to mislead users about actual facts about events, positions of individuals, or their reputations more broadly speaking.”

Elon then gave his perspective, taking into account the intellectual prowess of AI.

He asked, “…If AI’s smart enough, are they using the tool or is the tool using them?”

The answer is yes.

The Real Dangers of Artificial Intelligence

Over the past year, the technological development surrounding Artificial Intelligence (AI) has advanced much more rapidly than ever anticipated.

A recent open letter, signed by Apple co-founder Steve Wozniak, OpenAI co-founder Elon Musk, and additional AI experts and entrepreneurs, called for a six-month pause on the development of new advanced AI models.

Time published an article by AI alignment pioneer Eliezer Yudkowsky encouraging the implementation of a permanent global ban and international sanctions on any country pursuing advanced AI research.

The high-profile figures are warning that AI technology is accelerating so quickly, machine systems will soon be able to perform, or even exceed, human intellectual functioning.

A majority of the nation shares the same concerns as the experts. According to a recent Monmouth University poll, 55% of Americans are worried about the threat of AI to the future of humanity.

And according to a Morning Consult survey, nearly half of those who participated would support a pause on advanced AI development.

Because the public has been able to access generative AI platforms that are capable of creating text and participating in human-like conversations, the two-letter acronym itself has been absorbed into the national lexicon.

The term “AI” was coined by computer scientist John McCarthy back in 1956. At its simplest, Artificial Intelligence combines computer science algorithms with data in order to solve problems.

An algorithm is a list of instructions for specific actions to be carried out by computer technology in a step-by-step fashion. AI utilizes “machine learning,” which enables learning and adaptation to occur without explicit instructions being given.
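To make the distinction concrete, here is a minimal, purely illustrative Python sketch (my own example, not drawn from any system discussed here): the first function follows explicit step-by-step instructions written by a programmer, while the second is never given the formula at all and instead "learns" the same rule from example data.

```python
# 1) A classic algorithm: explicit instructions written by a programmer.
def to_celsius(f):
    return (f - 32) * 5 / 9

# 2) A toy "machine learning" approach: fit a line y = a*x + b to
#    example (input, output) pairs by least squares. The program is
#    never told the conversion formula; it infers the rule from data.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training data": a handful of worked examples.
examples_f = [32, 50, 68, 86, 104]
examples_c = [to_celsius(f) for f in examples_f]

a, b = fit_line(examples_f, examples_c)

# The learned rule now matches the explicit one on unseen input.
print(round(a * 212 + b, 2))   # learned prediction for 212°F
print(to_celsius(212))          # explicit algorithm's answer
```

The contrast is the whole point: the first function is a fixed recipe, while the second adapts itself to whatever examples it is shown, which is the sense in which machine learning works "without explicit instructions."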

The type of AI that is presently in use is designed to specialize in a single task; for instance, conducting a web search, determining the fastest route to a destination, or alerting the driver of a car to the presence of a vehicle in the car’s blind spot.

Such functions have oftentimes served to make the lives of individuals better, easier, safer, and so on.

However, it is critical to understand that existing AI is starkly different from the type of AI that is in the pipeline – Artificial General Intelligence (AGI).

This type has a benign sounding title, but it is nothing of the sort.

AGI can, and likely will, match and even exceed human capability.

The point at which AGI exceeds human intelligence is known as “the singularity.” There have been gobs of books and films that have featured AI themes, based on the assumption that advanced AI could somehow turn against humans.

“2001: A Space Odyssey,” “The Matrix,” “The Terminator,” and “Blade Runner” all contained AGI warnings about things to come.

The fact of the matter is human beings program machines. So it stands to reason that should a given programmer err during the programming process, the resultant technology that is created will be flawed.

When it comes to ethics, a programmer’s possession of them, or lack thereof, can result in the type of programming that may have catastrophic consequences.

This is because AI possesses the capacity to learn from its mistakes and adjust on its own. It may be able to improve itself to the point where human beings lose control of their own invention.

The nightmare begins when the stop mechanism no longer functions.

In one unimaginable scenario, we could have a super intelligent AI advance in a way that runs counter to all human morals, ethics, and values.

This tips into the realm of the spiritual, which requires a great deal of critical thought and further discussion.

For now, a pause is not only advisable, it’s a must.

Lessons Learned from the PayPal Debacle

PayPal is currently in an existential crisis.

The company recently issued an updated “Acceptable Use Policy” (AUP), which was set to go into effect on November 3 of this year.

Among other things, the AUP included a $2,500 fine, which was to be imposed on users of PayPal if said users transmitted speech that the digital financial service company deemed unacceptable.

The type of speech that would have triggered the policy included “the sending, posting, or publication of any messages, content, or materials” that “promote misinformation.”

Debits taken directly from users’ PayPal accounts would have been the means used to collect the hefty fines.

Having already suffered the loss of free expression at the hands of the reigning misinformation police currently patrolling our society’s virtual and real worlds, a whole lot of people reacted swiftly and forcefully.

A tsunami-sized backlash against the AUP ensued in the conventional media, social media and elsewhere. This forced PayPal to backtrack big time.

Up until now the multinational technology company had been the world’s preeminent online payment system, ranking 143rd in revenue on the 2022 Fortune 500 list.

Originally founded in 1998 by Max Levchin, Peter Thiel and Luke Nosek as a company called Confinity, it went through a merger in 2000 with X.com, an online financial services company co-founded in 1999 by Elon Musk.

Musk directed X.com to focus its resources on the online payment business. Musk was subsequently replaced by Thiel as CEO of X.com, which was renamed PayPal, and ultimately went public in 2002. The former wholly owned subsidiary of eBay became an independent company again in 2015.

Like way too many other large tech enterprises, PayPal’s management ultimately swerved into speech regulating territory, banning in 2018 radio host Alex Jones, along with Jones’s website.

Three years later PayPal announced a plan to collaborate with the Anti-Defamation League (ADL) and the League of United Latin American Citizens (LULAC), as well as other nonprofits, to scrutinize users’ transactions for purported investigative purposes relating to extremist groups. The results were intended to be shared with law enforcement and other entities.

ADL CEO Jonathan Greenblatt indicated that a better understanding of how extremist groups use PayPal could potentially “help disrupt those activities.”

In September of 2022, PayPal closed the accounts of a British social commentator and two related groups, the Free Speech Union and The Daily Sceptic website.

The accounts were apparently terminated because of alleged misinformation about the COVID-19 vaccine. A few days later, however, PayPal reversed its decision.

During the same month, the company threatened to withdraw its sponsorship of the Phoenix Suns, if the basketball team’s owner Robert Sarver failed to be removed from the franchise.

Sarver is presently under a one-year suspension from the Suns, following an internal investigation that found he allegedly used “hostile” words and slurs against women and minorities.

The company also recently banned Gays Against Groomers, a group composed of LGBT-identifying individuals. Simultaneously, PayPal’s subsidiary Venmo also blocked the organization.

Ian Miles Cheong, an independent journalist who reports on the promotion of transgenderism to minors, has also been banned.

After facing media scrutiny and a viral wave of criticism, including some chiding from its former president David Marcus and from one of the company’s founders, Elon Musk, the company stretched credibility by claiming, lo and behold, that the policy change had gone out to the public by mistake.

A PayPal spokesperson reportedly told the following to National Review:

“An AUP notice recently went out in error that included incorrect information. PayPal is not fining people for misinformation and this language was never intended to be inserted in our policy. Our teams are working to correct our policy pages. We’re sorry for the confusion this has caused.”

— Marcus had used his Twitter account to slam the original AUP policy change.

“It’s hard for me to openly criticize a company I used to love and gave so much to. But PayPal’s new AUP goes against everything I believe in,” PayPal’s former CEO tweeted. “A private company now gets to decide to take your money if you say something they disagree with. Insanity.”

— Musk tweeted a single word reply, “Agreed.”

— Another Twitter user, Andrea Stroppa, had shared an article on the policy change and added, “Worrying. That’s why we need the X platform more than ever.”

Musk responded with an emoji, “💯,” meaning total agreement. Stroppa appeared to be referring to a new platform that Musk recently said he wanted to create.

Professionals in tech and media know quite well that, when it comes to Terms of Service, these kinds of changes are reviewed and signed off on by skilled executives and attorneys before they are implemented.

— Intrepid reporter and social media influencer Jack Posobiec didn’t mince words when he posted, “#BankruptPaypal no one is buying their walkback. We know what their plan is. They’re just mad they got caught.”

— “Well, well… looks like PayPal spread misinformation about itself,” Christina Pushaw, campaign spokeswoman for Gov. Ron DeSantis, tweeted. “Maybe they should pay a $2,500 fine to all of us?”

Other furious users had simply closed their accounts and taken to Twitter to share their thoughts.

— Commentator and impactful influencer Candace Owens had tweeted, “Just moved all money I had in my PayPal account out of it. And I very much suggest you do the same. This is serious… #PayPal is dead.”

— Sen. Tim Scott, R – S.C., had posted his desire to investigate the matter.

“Allowing private companies to become thought police would be egregious and illegal overreach,” Sen. Scott tweeted. “My office will be looking into the validity of PayPal’s new policy and taking any necessary action to stop this type of corporate activism.”

It remains to be seen to what extent PayPal has damaged itself with its attempted curb on freedom of expression and its unconvincing walk-back.

In any event, the PayPal saga serves as an object lesson for corporations still wishing to dabble in viewpoint discrimination.

A big warning sign now hangs at the entrance to the internet, which reads,

CAUTION: Censor at your own risk.