The hype over AI Chat searches, as well as the claim that "AI is going to take over the world and replace humans," continues unabated, and investments in AI are now the only thing left propping up the U.S. economy.
I suppose this is the result of having a generation of adults who grew up in the "computer age" that began in the early 1980s, and who are now running the economy with their beliefs in AI and technology.
Old-school technologists like myself, who watched all of this technology develop and know better, are having little to no effect in trying to dispel these false beliefs.
I have earned my living and
built my career on this technology for over 25 years now, but that
doesn't seem to matter with this generation.
If you want to avoid the catastrophe that is coming as a result of
over-investing in this new tech fad, start by reading this
historical article on the failures and losses of investing in AI for
the past 75 years:
Most of what you will read and hear today, however, says that AI is going to take over the world and replace humans, and that is the view that gets clicks and traffic today, which can obviously be monetized as well...
Therefore, the "AI is going to take over the world" view is the
predominant view, not because it is true, but because it is more
popular and sells more.
So I am going to highlight some of the other dissenting voices in
this article, and then I am going to show what this "new" AI Chat
software is actually doing today, as it has been out in the public
for about 5 months now.
But if you want the spoiler as to what it is actually doing today
with hundreds of millions of users, here it is:
It is a
disinformation and data collection tool...
NYU Professor
and Meta Platforms Chief AI Scientist Yann LeCun - "ChatGPT isn't
Remarkable"
This first dissenting voice over the current AI hype is from Yann
LeCun, an NYU professor, who also serves as Meta's chief AI
scientist.
The 'Godfather of AI' Says Doomsayers Are Wrong and ChatGPT Isn't Remarkable. Settle Down.

Hi everyone...
The excitement over advances in
generative artificial intelligence has reached a fever pitch,
bringing with it an extreme set of worries.
The fearmongers fit into two camps:
either AI will soon enable a
vast dystopian future or it will unleash an existential threat
to humanity.
Last month, a group of technology executives,
including Elon Musk and some AI luminaries, added fuel to the
fire when they called for a six-month pause on developing
advanced AI systems so that the industry could build safeguards
against harmful outcomes.
The call from tech executives to pause innovation is both
unprecedented and unnecessary.
Barron's Tech recently talked to Meta Platforms chief AI
scientist Yann LeCun about the current state of AI, the rise of
ChatGPT, and his views on why asking for a moratorium on AI
research is misguided.
LeCun is one of the AI industry's most prominent scientists and has been an outspoken critic of those who have exaggerated the capabilities of the underlying technology used by AI chatbots such as ChatGPT.
He's a professor at New York University and joined Facebook -
now Meta - in 2013.
Along with Geoffrey Hinton and
Yoshua Bengio,
LeCun received the 2018 ACM Turing Award - known as the Nobel
Prize of computing - for his research around deep learning
techniques that have become foundational for modern AI
technologies.
The three scientists have frequently been called the "Godfathers
of AI" for their work in the space.
Here are the edited highlights from our conversation with LeCun.
Barron's:
Explain how ChatGPT and the technology behind large
language models (LLMs) work?
LeCun:
You can think of it as a super powerful predictive
keyboard.
Large language models are first trained on an enormous
amount of words. We show the model a window of words and ask it
what the next word is.
It is going to predict the next word,
inject the word and then ask itself what the next word is.
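To make LeCun's "predictive keyboard" description concrete, here is a minimal sketch of the autoregressive loop he is describing, using a tiny hand-built word-count table instead of a real neural network (my own illustration, not how ChatGPT is actually implemented):

```python
# Toy illustration of LeCun's "predictive keyboard": predict the next word
# from the words so far, inject it, then ask the same question again.
# A real LLM uses a neural network trained on billions of words; this toy
# just counts which word follows which in a few sentences of "training data".
from collections import Counter, defaultdict

training_text = "the cat sat on the mat . the cat ate the fish .".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def generate(prompt, num_words=5):
    words = prompt.split()
    for _ in range(num_words):
        candidates = next_word_counts[words[-1]]
        if not candidates:
            break
        # Predict the most likely next word, append it, and repeat.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the cat"))  # continues the prompt using the learned word statistics
```

The point of the toy is that nothing in the loop checks whether the output is true; it only checks which word is statistically likely to come next.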
What are
the models good for and not so good for?
They are good for writing aids. It can help you formulate things in a grammatically correct style. But answering factual questions? They are not so good...
The model is either
regurgitating what's stored in its memory or regurgitating some
approximate thing that is a mix or interpolation of various
things that it has read in the training data.
That means it can
be factually wrong or it is just making stuff up that sounds
good.
Why do AI
chatbots have such large problems with accuracy at times?
When you have a system like this that basically predicts one
word after another, they are difficult to control or steer
because what they produce depends entirely on the statistics
they trained on and the given prompt.
Mathematically there is a good chance that it will diverge
exponentially from the path of correct answers.
The longer the
answer that is produced the more likely you end up producing
complete garbage...
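A back-of-the-envelope way to see the divergence LeCun is describing (my own toy numbers, not his): if each generated word independently stays "on track" with some probability p, the chance that an entire n-word answer stays correct is roughly p to the power n, which shrinks quickly as the answer gets longer.

```python
# Toy numbers for the "exponential divergence" argument: if each word is
# correct with probability p, an n-word answer is fully correct with
# probability p ** n, so longer answers drift toward garbage.
for p in (0.99, 0.95):
    for n in (10, 50, 200):
        print(f"p = {p}, answer length = {n:>3} words, "
              f"chance the whole answer stays correct = {p ** n:.2%}")
```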
Are we near
AGI, or artificial general intelligence, when machines are able
to learn and think for themselves?
There are claims that by scaling out those [LLM] systems we will
reach human level intelligence.
My opinion on this is that it is
completely false. There are a lot of things we do not understand
that we do not know how to reproduce with machines yet - what
some people call AGI.
We're not going to be able to use a technology like ChatGPT or GPT-4 to train a robot to clear a table or fill up the dishwasher, even though this is a trivial task for a child. We still can't do it. We still don't have level five [fully] autonomous driving.

That requires a completely different skill set you can't learn by reading text.
Next is Emily M. Bender, who, like myself, is trained in
linguistics.
She is a professor of linguistics at the University of
Washington, and the Faculty Director of the Computational
Linguistics Master's Program.
Policy makers:
Please don't fall for the distractions of #AIhype
Below is a lightly edited version of the tweet/toot thread I put together in the evening of Tuesday, March 28, in reaction to the open letter put out by the Future of Life Institute that same day.
Okay, so that
AI letter signed by lots of AI researchers calling for a "Pause
[on] Giant AI Experiments"? It's just dripping with #AIhype.
First, for
context, note that URL? The Future of Life Institute is a
longtermist operation. You know, the people who are focused on
maximizing the happiness of billions of future beings who live
in computer simulations.
So that already
tells you something about where this is coming from. This is
going to be a hot mess.
There are a few things in the letter that I do agree with, and I'll try to pull them out of the deck as I go along. With that, into the #AIhype.
It starts with
"AI systems with human-competitive intelligence can pose
profound risks to society and humanity, as shown by extensive
research"...
Footnote 1 there points to a lot of papers, starting with Stochastic Parrots. But in that paper, we are not talking about hypothetical "AI systems with human-competitive intelligence."
We're talking
about large language models.
And as for the
rest of that paragraph:
Yes, AI labs are locked in an
out-of-control race, but no one has developed a "digital mind"
and they aren't in the process of doing that...
Could the
creators "reliably control" #ChatGPT et
al?
Yes, they could - by simply not setting them up
as easily accessible sources of non-information poisoning our
information ecosystem.
Could folks
"understand" these systems?
There are plenty of open questions
about how deep neural nets map inputs to outputs, but we'd be
much better positioned to study them if the AI labs provided
transparency about training data, model architecture, and
training regimes.
Next paragraph:
Human-competitive at general tasks, eh?
What does footnote 3
reference?
The speculative fiction novella known as the "Sparks
paper" and OpenAI's non-technical ad copy for GPT4. ROFLMAO.
I mean, I'm
glad that the letter authors & signatories are asking,
"Should we
let machines flood our information channels with propaganda and
untruth?" but the questions after that are just unhinged #AIhype,
helping those building this stuff sell it.
Another tweet thread turned into a blog post, to keep it all in
one place, reacting to this tweet/clip from CBS 60 Minutes (as
flagged by Melanie Mitchell):
One AI program spoke in a foreign language it was never trained to know. This mysterious behavior, called emergent properties, has been happening – where AI unexpectedly teaches itself a new skill. https://t.co/v9enOVgpXT pic.twitter.com/BwqYchQBuk
Partial transcript (that I just typed up) and reactions from me
follow:
Reporter:
"Of the AI issues we talked about, the most mysterious is called 'emergent properties'. Some AI systems are teaching themselves skills that they weren't expected to have."
"Emergent
properties" seems to be the respectable way of saying "AGI".
It's still bullshit.
As @mmitchell_ai points
out (read her whole thread; it's great) if you create ignorance
about the training data, of course system performance will be
surprising.
8. So how could it be that Google execs are making it seem like their system "magically" learned Bengali, when it most likely was trained on Bengali?
Reporter:
"For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know."
Is there Bangla in
the training data?
Of course there is:
6. It is not a stretch, then, to assume that Bard includes Bengali in its training data. By prompting a model trained on Bengali with Bengali, it will quite easily slide into what it knows of Bengali: This is how prompting works.
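Reusing the toy predictor sketched earlier, here is an equally toy illustration of Mitchell's point about how prompting works (my own sketch, nothing to do with how Bard is actually built): if a second language is present in the training data, a next-word predictor prompted in that language will simply continue in that language, because those are the statistics the prompt activates.

```python
# Toy sketch: a next-word predictor whose training text happens to include
# a second language will carry on in that language when prompted with it.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . "    # English in the training data
    "el gato come el pescado . "   # Spanish also in the training data
).split()

follows = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    follows[current][following] += 1

def continue_prompt(prompt, num_words=3):
    words = prompt.split()
    for _ in range(num_words):
        if not follows[words[-1]]:
            break
        words.append(follows[words[-1]].most_common(1)[0][0])
    return " ".join(words)

print(continue_prompt("the cat"))   # keeps going in English
print(continue_prompt("el gato"))   # keeps going in Spanish
```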
"We discovered that with very few amounts of
prompting in Bengali, it can now translate all of Bengali."
What does "all
of Bengali" actually mean? How was this tested?
Later in the
clip @sundarpichai says: "There is an aspect of this which we
call, all of us in the field, call it as a black box. You know,
you don't fully understand, and you can't quite tell why it said
this or why it got wrong. [...]"
Reporter:
"You
don't fully understand how it works, and yet you've turned it
loose on society?" Pichai: "Let me put it this way: I don't
think we fully understand how a human mind works, either."
Did you catch
that rhetorical sleight of hand?
Why would our
(I assume, scientific) understanding of human psychology or
neurobiology be relevant here?
The reporter
asked why a company would be releasing systems it doesn't
understand. Are humans something that companies "turn loose on"
society? (Of course not...)
The rhetorical
move @sundarpichai is making here invites the listener to
imagine Bard as something like a person, whose behavior we have
to live with or maybe patiently train to be better. IT. IS. NOT.
More generally,
any time an AI booster makes this move ("we don't understand
humans either") they're either trying to evade accountability or
trying to sell their system as some mysterious, magical,
autonomous being.
Paris Marx is
joined by Emily M. Bender to discuss what it means to say that ChatGPT is a "stochastic parrot," why
Elon Musk is calling to
pause AI development, and how the tech industry uses language to
trick us into buying its narratives about technology.
Emily M. Bender
is a professor in the Department of Linguistics at the
University of Washington and the Faculty Director of the
Computational Linguistics Master's Program.
She's also the
director of the Computational Linguistics Laboratory.
What AI Chat Bots are
Actually Doing Today
One of the easiest
ways to see what these AI Chat bots are doing is to use one to
search for yourself, or maybe someone else you know, and then see if
what they say is accurate or not.
As I have previously reported, I am not able to test the most popular of these AI Chat programs, ChatGPT from OpenAI (heavily backed by Microsoft), because when I tried to set up an account, it wanted a REAL cell phone number, and not a "virtual" one.

So I declined to proceed further...
If you try to
install ChatGPT as an extension to your web browser, you get this
warning:
Again, I declined,
since I do NOT trust "the developer"...
However, there is one new search site that uses AI Chat software, and they claim to be "A search engine free of tracking and invasive ads"...

But much like Google, there are algorithms in place to determine which search results are displayed, and to make sure you do not find information that they call "misinformation."
So I took advantage
of being able to use this AI search tool without having to register
or identify myself, and I started by simply typing in my name, which
produced this:
Brian Shilhavy
appears to be the Senior Editor at Health Impact News and
founder/owner of Healthy Traditions.
He has authored a book called "Virgin Coconut Oil: How It Has Changed People's Lives, and How It Can Change Yours!" and has also published articles related to vaccines and vaccine safety.
The website
HealthImpactNews.com, founded by Brian Shilhavy, has run
numerous stories related to COVID-19 vaccine safety and
efficacy, some of which have been criticized for
containing misinformation.
It's worth
noting that some of his claims have been disputed by health
experts and organizations.
There was a field
to ask a follow-up question, so I asked:
"does brian shilhavy
provide evidence that vaccines are not safe?", since much of
what I have published is taken directly out of the U.S. Government's
own VAERS database.
This is the answer
it gave:
While Brian
Shilhavy has published articles and written books on vaccines
claiming that they may cause harm, there is significant
disagreement within the scientific community about the veracity
of these claims.
There is a
wealth of scientific evidence supporting the safety and efficacy
of vaccines, and government health agencies worldwide have
recommended vaccination as an important public health measure.
It's worth
noting that some of Shilhavy's claims have been disputed by
health experts and organizations.
Therefore, it's
important to consult a variety of trustworthy sources when it
comes to making healthcare decisions.
Notice that it did
not answer my question, but just repeated the propaganda that the
CDC, FDA, and others publish.
This is an older hit piece from 2018 against an article we published that was going viral on Facebook at the time. The hit piece successfully killed our traffic, since most people would just read the headline and never click through to our article. Note that the fact-checkers never actually said the information we provided was "false," because it wasn't.
Here is why they
said it was "misleading and inaccurate."
Misleading: The
claim of harms caused by the flu vaccine relies on an
official government report (petitions for compensation due
to alleged vaccine injury). But the article never mentions
that it is not necessary for petitioners to present proof.
Unsupported: The article does not provide peer-reviewed
evidence of the flu vaccine's harms.
Inaccurate: The article downplays the flu vaccine's
benefits by providing incorrect statistics on reduction in
flu-associated mortality.
They also didn't provide a link to the article we published, so that someone could verify for themselves whether what we were actually publishing was "misleading and inaccurate" or not.
And most people today can't think for themselves anyway, and would not be able to clearly see the propaganda here, such as the statement that "it is not necessary for petitioners to present proof." That statement is meaningless, because petitioners most certainly DO present proof when they file a claim most of the time, whether it is "necessary" or not.
They also used the logical fallacy of an argument from silence, by saying we did not provide "peer-reviewed evidence of the flu vaccine's harms."
But we never
claimed we provided "peer-reviewed" evidence, and the lack of that
evidence proves nothing.
We provided a
different kind of evidence that is contained in the Government VAERS
database, as well as settled claims in the National Vaccine Court
for deaths and injuries due to the flu vaccine – something most
people are not even aware exists.
This is what AI
Chat does. It just takes the propaganda from Google or Bing and
creates text and narratives around that "information" to combat
"misinformation" according to them.
And as I and many others have already reported, and as Microsoft and Google publicly admit, if the AI Chat bot does not find the information needed to craft a response, it just makes stuff up...
And yet, this has been unleashed on the public in a massive way, with disastrous results, as it collects the data you supply so these companies can continue to "train" their AI.
Here is a report
just published this week by an investment advisor who shows that AI
Chat software is degrading, NOT enhancing, online
information.
Brace for an
Avalanche of Horrible Online Investment Content...
Recently,
I've been reading a lot about how ChatGPT has been allowing
people that still have the ability to work from home to work
three or four jobs simultaneously, especially if their jobs
involve tasks that ChatGPT is specifically adept at
accomplishing, like writing code, fixing problems in code,
and writing marketing and ad copy.
There are
literally people that work four jobs now, and not because
they need to work 16 hours a day to make ends meet, but
because they work jobs for which ChatGPT completes 80% or
more of the required tasks for each job.
So, it
makes sense if ChatGPT enables them to only spend 20% of the
normal time required to complete their job tasks that they
can work four jobs simultaneously and
still have 20% more free time than before.
The Bad News
for Investors
Unfortunately,
the very utility of AI that numerous employees have been
exploiting to earn significant salaries while completing very
little work will yield a massive avalanche and proliferation of
awful financial and investment guidance online.
And this is bad
news for the retail investor that constantly seeks online
information to guide their investment strategies.
Already right now, as of Q2 2023, horrible, regurgitated, tired, and ineffective investment strategies dominate financial and investment articles online, and finding any valuable investment guidance online is like searching for a needle in the proverbial haystack.
But
it's about to become worse due to AI - Much, much worse
There is little doubt that many writers that earn a living providing financial and investment analysis and guidance have already turned to ChatGPT to write articles for them, given that ChatGPT can, for example, condense the several hours it took me to write this article down to less than 20 minutes.
Five minutes
for ChatGPT to write the article and perhaps another 15 minutes
to tweak the language in the article to make it appear that a
human instead of an AI program wrote the article.
In fact, were I
to offer up a purely speculative guess, I would surmise that the
number of ChatGPT written articles of a financial/investment
nature on paid platforms like Substack and others is already
challenging the number of non-ChatGPT written articles.
However, as
I've already stated on my
skwealthacademy substack newsletter platform, I
promise that none of the articles you read on my substack
newsletter will ever be partially or majority-written by ChatGPT
and that all will only contain original content based upon my
research and analysis.
Even if my
assumption above is correct, and the trend eventually progresses
to the point where 95% of all financial articles written online
by authors with paid subscription models are written by ChatGPT,
why would this be a terrible thing for all investors?
To start, I've
experimented with ChatGPT just to analyze the truth and accuracy
of its financial writing, and on
a scale of 1 to 10, I would rate it a 1, which then means that
the vast majority of published financial articles online could
degenerate to a one.
For example, just read this article, in which I outlined numerous errors and mistakes about gold as an investment contained in an article that I asked ChatGPT to write.
The reason that
so many mistakes were contained in this ChatGPT written article
is that ChatGPT draws information in writing its article from
extensive archives of online material.
This was not the only article I asked ChatGPT to write about a finance/investment topic; I also asked ChatGPT to write articles about other assets like stocks, bonds, oil, etc., and while it will not provide any "financial advice," it does forward the same tired, cliched responses about these assets, as I'll discuss below.
In each case,
it was evident to me that ChatGPT's written articles were
deploying an algorithm similar to Google's algorithm in
determining what was correct and what was not.
In other words,
the messages conveyed through its articles had the same biases
that you would find in any American or global financial advisor:
anti-gold, pro-USD/Euro, pro-sovereign stock and pro-the most
owned stocks, pro-sovereign bonds, anti-oil if the war machine
was anti-oil, anti-nuclear energy if the war machine was
pro-wind and pro-solar energy (despite the massive inefficiency
and high costs of these "green" energies) and so on.
Stating the
ChatGPT bias another way, with finance/investment topics,
it was basically a mouthpiece for the global commercial
investment industry and for the Military Industrial Banking
complex.
All of the
articles I asked it to write about finance/investing lacked any
foresight or ability to accurately predict massive shifts in the
investment environment so the content it provided placed an
investor well behind the curve instead of ahead of it.
AI Chat is not going to take over the world, but it most certainly is going to try to silence dissenting voices and create a single narrative that is "acceptable" according to those behind it, as it collects data on you as you continue to use it, if you have signed up for an account to use one...