If ever there was an example of 'garbage in, garbage out', then this is it. And, ultimately, it has all been driven by the objective of censoring information that does not fit the politically correct narrative...
The Hunter Biden laptop story is just one of many stories which were deemed by the mainstream media (and most academics) to be 'misinformation' but which were subsequently revealed as true...
Indeed, Mark Zuckerberg has now admitted that Facebook (Meta), along with the other big tech companies, was pressured into censoring the story before the 2020 US election, and was subsequently pressured by the Biden/Harris administration to censor stories about Covid which were wrongly classified as misinformation.
The problem is that the same kind of people who decided what was and was not misinformation (generally people on the political Left) were also the ones who were funded to produce AI algorithms to 'learn':
Between 2016 and 2022, I attended many research seminars in the UK on using AI and Machine Learning to,
From 2020, the example of Hunter Biden's laptop was often used as,
Moreover, every presentation I attended invariably started with (and was dominated by) examples of 'misinformation' that were claimed to be based on "Trump lies" such as those among what the Washington Post claimed were the,
But many of these supposed false or misleading claims were already,
For example, they claimed that denying Trump had said,
...was disinformation, whereas even the far Left-leaning Snopes had debunked that in 2017.
Similarly, they claimed,
...was misinformation despite multiple videos showing exactly that - so, don't believe your lying eyes.
Indeed, as recently as one week before Biden's dementia could no longer be hidden during his live Presidential debate performance, the mainstream media were adamant that such videos were misinformation 'cheap tricks'...!
But the academics presenting these anti-Trump, pro-Biden, and other political examples ridiculed anybody who dared question the reliability of the 'self-appointed oracles' who determined what was and was not misinformation.
At one major conference, held on Zoom, I posted in the chat:
The answer was
Sadly, most academics do not believe in freedom of thought, let alone freedom of expression when it comes to any views that challenge the 'progressive' narrative on anything.
In addition to the Biden and Trump related 'misinformation' stories which turned out to be true, there were also multiple examples of Covid related stories (such as those claiming very low fatality rates and lack of effectiveness and safety of the vaccines) classified as misinformation that also turned out to be true.
In all these cases anybody pushing these stories was classified as a,
And it is these kinds of assumptions that drove how the AI 'misinformation' algorithms developed and implemented by organizations like Facebook and Twitter worked.
Let me give a simplified example.
The algorithms generally start with a database of statements which are pre-classified as either 'misinformation' (even though many of these turned out to be true) or 'not misinformation' (even though many of these turned out to be false).
For example, the following were classified as misinformation:
The converse of any statement classified as misinformation was classified as 'not misinformation'.
A subset of these statements is used to "train" the algorithm and the rest to "test" it.
So, suppose the laptop statement is one of those used to train the algorithm and the vaccine statement is one of those used to test the algorithm.
Then, because the laptop statement is classified as misinformation, the algorithm learns that people who repost or like a tweet with the laptop statement are 'misinformation spreaders'.
Based on other posts these people make, the algorithm might additionally classify them as, for example, 'far right'.
The algorithm is likely to find that some people already classified as 'far right' or 'misinformation spreader' - or people they are connected to - also post a statement like,
In that case the algorithm will have 'learnt' that this statement is most likely misinformation.
And, hey presto,
Moreover, when presented with a new test statement such as,
...the algorithm will also 'correctly learn' that this is 'misinformation' because it has already 'learnt' that the statement,
...is misinformation, and that people who claimed the latter statement - or people connected with them - also claimed the former statement.
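The circular 'learning' loop described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of guilt-by-association label propagation - all names and data are invented, and it is not any platform's actual algorithm:

```python
# Hypothetical sketch of the guilt-by-association 'learning' described above.
# All statement ids, user names, and data structures are invented for
# illustration; this is not any platform's actual implementation.

# Training data: statements pre-classified by human 'fact checkers'.
TRAINING_LABELS = {
    "laptop_statement": "misinformation",   # pre-classified, later shown true
}

# Who shared which statement (user -> set of statement ids).
SHARES = {
    "alice": {"laptop_statement", "new_statement"},
    "bob":   {"new_statement"},
}

def flag_spreaders(shares, labels):
    """Flag any user who shared a statement pre-labelled 'misinformation'."""
    return {
        user for user, statements in shares.items()
        if any(labels.get(s) == "misinformation" for s in statements)
    }

def propagate_label(statement, shares, spreaders):
    """A new statement is 'learnt' to be misinformation simply because an
    already-flagged 'misinformation spreader' also shared it."""
    sharers = {user for user, stmts in shares.items() if statement in stmts}
    return "misinformation" if sharers & spreaders else "not misinformation"

spreaders = flag_spreaders(SHARES, TRAINING_LABELS)
print(spreaders)                                            # {'alice'}
print(propagate_label("new_statement", SHARES, spreaders))  # misinformation
```

Note that the truth of "new_statement" is never examined: its label is inherited entirely from who shared it, which is exactly the circularity the text describes.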
The process I have outlined for how AI is designed to detect 'misinformation' is also the way that 'world leading misinformation experts' set up their experiments to "profile" the "personality type" that is susceptible to misinformation.
The same methods are also now used to profile and monitor people that the academic 'experts' claim are 'far right' or racist.
Hence, an enormous amount of research was (and is still) spent on developing 'clever' algorithms which simply censor the truth online or promote lies.
Much of the funding for these AI algorithms is justified on the grounds that 'misinformation' is now one of the greatest threats to international security.
Indeed, in January 2024 the World Economic Forum (WEF) declared that,
European Commission President Ursula von der Leyen also declared that,
In the UK alone, the Government has provided many hundreds of millions of pounds of funding to numerous University research labs working on misinformation.
In March 2024 the Turing Institute alone (which has several dedicated teams working on this and closely related areas) was awarded £100 million of extra Government funding - it had already received some £700 million since its inception in 2015.
Somewhat ironically, the UK HM Government 2023 National Risk Register includes as a chronic risk:
Yet it continues to prioritize research funding in AI to combat this increased risk of 'harmful misinformation and disinformation'...!
As Mike Benz has made clear in his recent work and interviews (backed up with detailed evidence), almost all of the funding for the universities and research institutes worldwide doing this kind of work, along with the 'fact checkers' that use it, comes from,
...who, in the wake of the Brexit vote and Trump election in 2016, were determined to stop the rise of 'populism' everywhere.
It is this objective which has driven the mad AI race to censor the internet.
Look at the video below, in which Mike Benz walks us through an event that took place in 2019.
It was hosted by the Atlantic Council (a NATO front organization) to.
Note how they make it clear that, for them, 'misinformation' includes 'malinformation', which they define as information that is true but which might harm their own narrative.
They explain how to muzzle such 'malinformation,' especially from the (then) President Trump's social media posts in advance of the 2020 election.
Despite claims that this did not happen (and indeed any such claims were themselves classified as misinformation...) the journalists involved in this subsequently boasted very publicly that they not only did it,