ChatGPT Banned in Italy Over Privacy Concerns

Italy has become the first Western country to block ChatGPT, the advanced chatbot created by US start-up OpenAI and backed by Microsoft, over privacy concerns. The Italian data-protection authority said it would ban ChatGPT “with immediate effect” and open an investigation into OpenAI. Millions of people have used ChatGPT since its launch in November 2022, and Microsoft has already integrated it into Bing and plans to embed it into its Office apps. However, concerns have been raised about the potential risks of artificial intelligence (AI), including its threat to jobs and the spread of misinformation and bias. The Italian watchdog also cited a data breach involving user conversations and payment information, as well as the fact that the app “exposes minors to absolutely unsuitable answers compared to their degree of development and awareness”. The ban highlights the importance of regulatory compliance for companies operating in Europe and the need for greater public scrutiny of AI systems.

Italy has become the first Western country to block the advanced chatbot ChatGPT, created by US start-up OpenAI and backed by Microsoft. The Italian data-protection authority has raised privacy concerns and will investigate OpenAI with immediate effect. ChatGPT, which responds in natural, human-like language and can mimic other writing styles, has been used by millions of people since its launch in November 2022. Microsoft has invested billions of dollars in OpenAI and recently added the chatbot to Bing. However, the potential risks of artificial intelligence (AI), including threats to jobs and the spread of misinformation and bias, remain a concern, and some tech figures have called for the development of these types of AI systems to be suspended.

The Italian watchdog has blocked OpenAI’s chatbot and will investigate whether it complied with the General Data Protection Regulation (GDPR), which governs how personal data can be used, processed, and stored. The app suffered a data breach involving user conversations and payment information, and the watchdog said there was no legal basis to justify the mass collection and storage of personal data for the purpose of training the algorithms underlying the platform. It also said the app exposed minors to answers unsuitable for their degree of development and awareness. Google’s rival AI chatbot, Bard, is currently available only to specific users over the age of 18.

The ban underscores the importance of regulatory compliance for companies operating in Europe. Dan Morgan of cybersecurity ratings provider SecurityScorecard said that “businesses must prioritize the protection of personal data and comply with the stringent data protection regulations set by the EU – compliance with regulations is not an optional extra.” Consumer advocacy group BEUC has called on EU and national authorities, including data-protection watchdogs, to investigate ChatGPT and similar chatbots. BEUC is concerned that it could take years before the AI Act takes effect, leaving consumers at risk of harm from a technology that is not yet sufficiently regulated. Ursula Pachl, deputy director general of BEUC, warned that society is currently not protected enough from the harm that AI can cause.

OpenAI said it had disabled ChatGPT for users in Italy at the request of the Italian data-protection regulator, the Garante. The company said it works to reduce personal data in training AI systems like ChatGPT because it wants its systems to learn about the world, not about private individuals. It also believes AI regulation is necessary and looks forward to working closely with the Garante and to explaining how its systems are built and used. OpenAI said it looked forward to making ChatGPT available in Italy again soon.

#ChatGPT #OpenAI #Microsoft #privacy #AI #GDPR #Bing #artificialintelligence #Italy #regulation #data #protection #consumeradvocacy #BEUC #datawatchdogs #technology #compliance #dataprotectionlaws #cybersecurity #dataprotectionregulation #dataprotection #personaldata #trainingAI #publicscrutiny #harmfulAI #protectingprivacy #Garante #educatingregulators #AIregulation