March 31, 2023

from RT Website

Welcome screen for the OpenAI "ChatGPT" app. © Getty Images / Leon Neal

The country's data protection authority has demanded that the chatbot's creator, OpenAI, take action or face hefty fines...

Italy's data protection watchdog has banned access to OpenAI's ChatGPT chatbot, due to alleged privacy violations.


The decision came after a data breach on March 20 that exposed users' conversations and payment information.

ChatGPT, which was launched in November 2022, has become popular for its ability to write in different styles and languages, create poems, and even write computer code.

However, the Italian National Authority for Personal Data Protection criticized the chatbot for not providing an information notice to users whose data is collected by OpenAI.

The watchdog also took issue with the "lack of a legal basis" that would justify the collection and mass storage of personal data intended to "train" the algorithms that run the platform.

Although the chatbot is intended for people over the age of 13, the Italian authorities also blasted OpenAI for failing to install any filters to verify user age, which they claim can lead to minors being presented with responses "absolutely not in accordance with their level of development."

The watchdog is now demanding that OpenAI "communicate within 20 days the measures undertaken" to remedy this situation or face a fine of up to 4% of its annual worldwide turnover.


The decision to block the chatbot and temporarily limit the processing of Italian users' data by OpenAI has taken "immediate effect," the watchdog added.


Meanwhile, over 1,100 AI researchers and prominent tech leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, have signed an open letter demanding a six-month moratorium on "giant AI experiments."

The signatories claim that "AI systems with human-competitive intelligence can pose profound risks to society and humanity" and that the rapidly advancing technology should be "planned for and managed with commensurate care and resources."

The group has strongly cautioned against allowing an "out-of-control race to develop and deploy even more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control."

The letter states that if AI developers cannot govern themselves, governments must step in, creating regulatory bodies capable of reining in runaway systems, funding safety research, and softening the economic blow when super-intelligent systems start taking over human jobs.