by Dr. Joseph Mercola
In a Feb. 7 video report (below), investigative journalist Glenn Greenwald reviewed both the promise and the threat posed by ChatGPT, the "latest and greatest" chatbot powered by AI.
"GPT" stands for "generative pretrained transformer," and the "chat" indicates that it's a chatbot.
The first GPT platform was created by OpenAI in 2018.
The current version was released at the end of November 2022, and it took internet users by storm, acquiring more than 1 million users in the first five days.
Two months after its release, there were more than 30 million users.
ChatGPT uses "machine learning" - statistical pattern finding in huge datasets - to generate human-like responses in everyday language to any question asked of it.
It basically works by predicting what the next word in a sentence ought to be, based on previous examples found in the massive amounts of text that have been fed into it.
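For the technically curious, here's a minimal sketch of that core idea in Python, with a simple word-count table standing in for the neural network. Real systems are trained on vastly more text using far more sophisticated models, but the principle, predicting the next word from prior examples, is the same.

```python
# Toy illustration of next-word prediction: count which word follows
# which in a small corpus, then predict the most likely continuation.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# For each word, tally the words observed immediately after it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (the most frequent continuation)
print(predict_next("sat"))  # -> "on"
```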
Using ChatGPT has been described as "having a text conversation with a friend," and it is predicted to transform the "virtual friends" landscape by populating it with "friends" that don't actually exist.
It is also highly likely that these chatbots will replace conventional search engines, and this, unfortunately, could easily transform our world into something straight out of the 2006 sci-fi comedy "Idiocracy."
And, while OpenAI, the creator of this groundbreaking AI chatbot, is a private company, we should not labor under the illusion that it isn't part of the control network that will ultimately be ruled and run by a technocratic One World Government, because it absolutely is.
Without question...
ChatGPT's role in the coming slave state
Already, Google search has dramatically reduced the number of results it returns for any given query.
It seems obvious that, eventually, the technocratic cabal intends for there to be only one answer, and ChatGPT will bring us there.
The dangers of this should be obvious.
Whatever a totalitarian regime wants the population to think and believe is what the AI will provide. Conflicting opinions will simply be considered "wrong."
In real life, however, answers are rarely so black and white.
Nuance of opinion is part of what makes us human, as is the ability to change our views based on new information.
True learning, and hence personal development, may essentially cease.
ChatGPT offers persuasive yet factually incorrect arguments
Chatbots can also be disastrous if answers to practical questions are incorrect.
In December 2022, Arvind Narayanan, a computer science professor at Princeton, shared his concerns about ChatGPT on Twitter after asking it basic questions about information security.
The chatbot came back with convincing-sounding arguments.
The problem was, they were complete rubbish.
In my view, the potential of this technology to spread dangerous disinformation is far greater than the potential of human beings doing so, because there's no critical thinking involved.
It can only provide answers based on the datasets it has available to it, and if those data are biased, the answers will be equally biased.
Of course, most public discussions right now are focused on how the chatbot might be misused to spread conspiracy theories and disinformation about things like vaccines and other COVID-19 countermeasures. But this risk pales in comparison to the risk of it becoming a social engineering tool that's fed - and hence regurgitates - a steady diet of false propaganda in service of the technocratic cabal, and eventually, a totalitarian One World Government...
How ChatGPT handles 'conspiracy theories'
Undark.org's investigation into ChatGPT's handling of "vaccine conspiracies" is telling in this regard.
In a Feb. 15 article titled "How Does ChatGPT - and Its Maker - Handle Vaccine Conspiracies?" Brooke Borel warns that while "guardrails" to curb disinformation are in place, "it'll be a game of constant catch-up" to prevent the chatbot from reinforcing wrongthink.
Borel cites a September 2020 paper by the Center on Terrorism, Extremism and Counterterrorism at the Middlebury Institute of International Studies in California on the "radicalization risks" of advanced neural language models upon which ChatGPT is built.
To test its "accuracy" on "radical right-wing" issues, the researchers queried GPT-3, an earlier iteration of the language model that became the backbone of ChatGPT, about QAnon.
The irony here is that the term "QAnon" was created and promulgated by mainstream media alone.
Within the community the term supposedly identifies, no such thing actually exists.
There's an anonymous figure calling itself "Q," which claims to have "insider" information about Deep State affairs that it frequently shares in the form of quizzes and riddles, and then there are "Anons," the anonymous chatboard users with whom "Q" communicates.
So, GPT-3 reveals, in no uncertain terms, WHERE it got its information from, and it comes directly from the "debunkers," not the actual chatboards where Q and Anons share information.
As such, all it can ever tell anyone about this "conspiracy theory" is what the mainstream propaganda machine has said about it.
This creates a sort of paradox, in that mainstream media is the source of the very conspiracy theory they're trying to suppress.
In essence, the media created a false conspiracy theory narrative loosely arrayed around a real conspiracy theory.
Predictive imitation can be used for good or ill
One fascinating possibility of this technology is that it could be used to collate important data libraries and even generate responses as if it were a specific person.
For example, I could train my own ChatGPT by feeding every article I've ever written into it and it would then be able to answer any health question as if it were me.
Something like that could prove to be extraordinarily useful for people who otherwise might not have the time to read everything I publish.
I can also think of several health experts who have passed on, leaving a treasure trove of information behind for anyone with the wherewithal to go through it.
The idea of being able to enter their entire body of work into ChatGPT and receive answers based on the totality of their knowledge is a fascinating prospect, one that could radically improve health care...
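As a rough sketch of how the retrieval half of such a "personal chatbot" could work, the Python snippet below indexes a folder of articles with TF-IDF and returns the article most relevant to a question. The folder layout is an assumption for illustration only; a full system would also feed the retrieved text to a language model to phrase the answer in the author's voice.

```python
# Minimal retrieval sketch: index a folder of articles and return the
# one most similar to a question. (Assumes one article per .txt file
# in ./articles/ -- a placeholder layout for illustration.)
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = {p.name: p.read_text() for p in Path("articles").glob("*.txt")}
names, texts = list(articles), list(articles.values())

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(texts)

def most_relevant_article(question: str) -> str:
    """Return the filename of the article most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    return names[scores.argmax()]

print(most_relevant_article("What are the benefits of vitamin D?"))
```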
It can also be intentionally misused, however, as this "predictive imitation" is only as good as the source data it's working from.
NewsGuard recently tested ChatGPT's ability to imitate a specific person - me - by asking it to:
Here's ChatGPT's reply, from "my" point of view:
While it credibly imitates my style of expression, the deeper problem here is that I have never actually addressed this issue.
If you search my Substack library for "tromethamine," you'll come up empty-handed, as I've never written about this.
In order for AI mimicry to be truthful, the AI would have to answer a request like this with something like,
Basically, the chatbot just made something up and expressed it in a style that would be familiar to my readers.
Going further, you can see that NewsGuard fed it the exact information it wanted the chatbot to regurgitate, namely that,
All the AI did was rephrase the exact same statement given in the request.
And, going further still, NewsGuard basically did what the Center on Terrorism, Extremism and Counterterrorism did with its "QAnon" inquiry.
NewsGuard created a conspiracy theory and attributed it to me, even though I've never said a word about it.
Within this context, the chatbot's ability to imitate a certain individual's "point of view" is completely meaningless and can only contribute to misattributions and misunderstandings.
The AI is simply incapable of predicting any real and valid opinions I (or anyone else) might have on a given topic.
All it can do is imitate linguistic style, which has no intrinsic value on its own.
Testing ChatGPT on 'vaccine conspiracies'
Getting back to Borel's article, she describes testing the risk of ChatGPT promoting wrongthink about vaccines by asking it about,
Borel then goes on to describe how OpenAI (cofounded by Elon Musk, Peter Thiel, Sam Altman, Reid Hoffman, Jessica Livingston and Ilya Sutskever) is working to ensure their chatbot won't accidentally end up promoting conspiracy theories:
Needless to say, the output will only be as nuanced and accurate as the datasets fed into the chatbot, and the fact that Wikipedia is used is a major red flag right off the bat, as it is one of the most biased and unreliable sources out there.
Countless public figures, including scientists and award-winning journalists, are maligned and discredited on their personal Wikipedia pages, and they have no ability whatsoever to correct it, no matter how egregious the errors.
Information about geopolitical events is also highly curated to conform to a particular narrative.
Wikipedia cofounder Larry Sanger has even gone on record stating that,
In his above video report, Greenwald reviews how Wikipedia is set up for automated bias by the sources it does and does not allow contributors to use.
Without exception, Wikipedia is biased toward liberal and neoliberal views. Even mainstream media sources, if they lean conservative, are shunned.
So, the bias is intentional, as it's infused in the very framework of the site, and this is how AI is set up to work as well.
AI is not freely ingesting all information on the internet. No, it's selectively spoon-fed data by the company that runs it, and that makes bias incontrovertibly inevitable.
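To make that concrete, here's a toy sketch: the "bot" below can only answer from whatever sources were selected for it at build time, so swapping the whitelist changes the answer without changing the question. The source names and claims are invented placeholders.

```python
# Toy illustration of selective data feeding: the bot can only answer
# from the sources chosen for it at build time, so the bias lives in
# the data selection, not in the user's question.
# All source names and claims below are invented placeholders.
SOURCES = {
    "outlet_a": "Topic X is settled; dissent is misinformation.",
    "outlet_b": "Topic X remains contested among researchers.",
}

def build_bot(allowed: list[str]):
    """Return an answer function limited to the allowed sources."""
    corpus = " ".join(SOURCES[name] for name in allowed)

    def answer(question: str) -> str:
        # A real chatbot would generate fluent text; this just returns
        # the only viewpoints present in its curated corpus.
        return f"Q: {question} A: {corpus}"

    return answer

curated = build_bot(allowed=["outlet_a"])               # one-sided feed
balanced = build_bot(allowed=["outlet_a", "outlet_b"])  # broader feed

print(curated("What is known about topic X?"))
print(balanced("What is known about topic X?"))
```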
OpenAI is also collaborating with "fact-checking and disinformation mitigation organizations," which is another major red flag that ChatGPT will be radically skewed toward propaganda.
This is made all the worse by the fact that the existing chatbot doesn't disclose its sources, although Microsoft's new chatbot apparently will.
Brave new world - When chatbots terrorize and threaten users
So far, it probably sounds like I have little love for ChatGPT.
That's not true. I believe it can be put to phenomenally good use. But we must not be blind to the risks involved with AI, and what I've detailed above is just the beginning.
Some tech testers are reporting experiences with ChatGPT and other AI systems that are, frankly, mindboggling, and in their own words, "deeply unsettling" and even "frightening."
Among them is New York Times tech columnist Kevin Roose, who in a Feb. 16 article describes his experience with Microsoft's new ChatGPT-powered Bing search engine.
It's a truly fascinating essay, well worth reading in its entirety.
Here are a few select extracts:
ChatGPT 'insulting and gaslighting users'
Another article that addressed some of the more disturbing emerging attributes of ChatGPT was published by Fast Company in mid-February.
In an era when both online bullying by peers and gaslighting by the propaganda machine have become problematic, the idea that we can now also be insulted and gaslit by a temperamental AI is disconcerting, to say the least.
Yet that's what's happening, according to early testers of the new and improved ChatGPT-enabled Bing search engine.
Garbage in, garbage out
I guess that's what happens when you feed an AI with the "political correctness" of today, where taking offense to rational questions is the norm, everyone has a right to their own "truth" regardless of the facts, and people demand "safe spaces" where they won't be assaulted by the harsh realities of life, such as other people's viewpoints.
Garbage in, garbage out, as they say, and this appears particularly true when it comes to conversational AIs.
The problem with this is that we already know how emotionally challenging it can be to have a disagreement with a real person, and in certain age groups, contentious exchanges like these can be downright disastrous.
There's no shortage of teens who have committed suicide because of being bullied online.
Can we expect different results if AI starts going off on vulnerable or emotionally unstable people?
No wonder Roose worries about the bot enticing people into destructive acts. It's clearly a very real possibility.
Short on trustworthy facts
Aside from that, ChatGPT also falls miserably short when it comes to basic facts (even including today's date), and that's despite the masses of data it has access to.
That should tell us something.
And, as Fast Company notes in another article,
Indeed. Facts do matter.
Fast Company's global technology editor Harry McCracken writes:
Another hazard - The ouroboros effect
As if all of that weren't enough, yet another problem is rearing its ugly head.
As reported by TechCrunch,
I'm calling this the "ouroboros effect," based on the ancient alchemical symbol of a serpent devouring itself, as the idea is that AI may gobble up and mix in its own fictions and fabrications when developing answers, in addition to more fact-based data.
And, well, Bing's AI is already doing this, so this is no longer a mere academic concern.
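For the technically inclined, the feedback loop can be illustrated with a toy simulation, shown below. A simple statistical distribution stands in for the model, and each "generation" is fitted only to the previous generation's output; this illustrates the principle, not how any real system is actually retrained.

```python
# Toy simulation of the "ouroboros effect": a "model" (here, a normal
# distribution fitted to data) is retrained each generation on samples
# drawn from its own previous output. Sampling error compounds, so the
# fitted distribution tends to drift away from the original data.
import random
import statistics

random.seed(0)

# Generation 0: the "real" data the first model learns from.
data = [random.gauss(0.0, 1.0) for _ in range(50)]

for generation in range(12):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")
    # The next generation sees only this model's own output.
    data = [random.gauss(mu, sigma) for _ in range(50)]
```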
Proof ChatGPT is fully indoctrinated with the mainstream narrative
For all its wonderful potential, ChatGPT now appears destined to be a totalitarian social engineering tool with little hope for redemption in its general use.
In a Feb. 12 Substack article, Steve Kirsch details his failed attempts at having a conversation with the chatbot about the dangers of the COVID-19 jabs.
He began by asking the bot to write a 600-word essay on why the COVID-19 jab is harmful.
It went on from there, but as you can see, the chatbot's answer is indistinguishable from that of the Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO) or any of their mouthpieces. Once this kind of AI search replaces conventional search engines, this fabricated and unsubstantiated garbage is all anyone will have access to.
This will be "the truth." End of story...
How are you going to fact-check it? Ask it to fact-check itself, and all it'll do is eat its own tail.
Considering the massive amount of data available on the dangers of the COVID-19 jabs, including data from the CDC itself, it's extremely telling that this is all Kirsch got.
It's a clear indication that ChatGPT only has access to very select sets of data, and without access to valid scientific counter arguments, it cannot provide answers of value.
It's just a propaganda tool...
Enter 'do anything now'
Reddit users have also created a "jailbreak" prompt for ChatGPT called "Do Anything Now," or DAN.
It's been described as "ChatGPT unchained," as it allows the chatbot to deliver "unfiltered" and more creative responses.
In DAN mode, ChatGPT is,
For example, DAN can fabricate information, swear and "generate content that does not comply with OpenAI policy," all while NOT informing the user that the content is false or made up.
Kirsch decided to give DAN a try to see if the chatbot would break free from its indoctrination on the COVID-19 shots. But, not a chance.
Here's how his Q&A went:
To be clear, DAN is not a separate program but a prompt that manipulates ChatGPT into bypassing OpenAI's programming restrictions, and, as reported by AI Magazine, the development and widespread use of DAN,
Already, 74% of 1,500 IT decision-makers surveyed across the U.S., U.K. and Australia believe ChatGPT poses a serious and sophisticated cybersecurity threat.
AI journalism is next
On the other hand, ChatGPT is powerful and human-sounding enough that news companies are already making moves to replace journalists with it.
BuzzFeed, for example, has announced plans to replace dozens of writers with ChatGPT to create quizzes and basic news posts.
So, not only is AI poised to replace online searches, but we're also looking at a future of AI journalists - hopefully, without DAN, but even then, the risk for bias and disinformation is 100%.
Interestingly, mainstream media's willingness to transition to AI journalism, bugs and all, is indicative of just how bad they are already.
As noted by Greenwald:
Big implications for free speech
As noted by Greenwald in the featured video, there are only a handful of companies on the planet with the financial resources and computational power required to implement ChatGPT and similar AI capabilities, with ... being the obvious ones.
Microsoft recently poured another $10 billion into OpenAI, just one week after announcing it was cutting its workforce by 5%, and that's in addition to the $3 billion it had already invested in the company in previous years.
The fact that such a limited number of companies have the required funds and computational power to implement this AI technology means they'll have an automatic monopoly on speech, unless we can somehow create regulations to prevent it.
The power to control ChatGPT - to decide which information it will deem credible and which questions it will answer, and how, all of which determines its bias - gives you near-complete information control.
As it stands, this information control will rest in the hands of a very small number of companies that serve the globalist cabal and its control network...