The Italian data protection watchdog, known as the Garante, has ordered a temporary ban on ChatGPT, the popular AI chatbot from US startup OpenAI, making Italy the first Western country to do so. The move comes after the Garante launched an investigation into a suspected breach of Europe’s strict privacy rules. The regulator cited a data breach at OpenAI that allowed some users to see the titles of other users’ conversations with the chatbot. It also raised concerns over the mass collection and processing of personal data to train the underlying algorithms, the lack of age restrictions on ChatGPT, and the chatbot’s potential to serve factually incorrect information in its responses.
OpenAI, which counts Microsoft as a backer, faces a potential fine of up to €20 million ($21.8 million) or 4% of its global annual revenue, whichever is higher, if it fails to remedy the issues within 20 days. Italy is not the only country grappling with the rapid advancement of AI and its societal implications. Other governments are drawing up their own AI rules, which will inevitably cover generative AI, a set of AI technologies that produce new content in response to user prompts. The large language models behind these tools are trained on vast amounts of data and are far more capable than previous generations of AI.
Although there have been longstanding calls for AI regulation, the pace of technological advancement has made it hard for governments to keep up. Computers can now create realistic art, write entire essays, or even generate code in seconds. Regulators’ concerns include job security, data privacy, equality, and the potential for advanced AI to distort political discourse by generating false information. Many governments are therefore weighing how to regulate general-purpose systems like ChatGPT, and some are considering joining Italy in banning the technology. Sophie Hackford, a futurist and global technology innovation advisor for John Deere, stressed the need for careful regulation to ensure that technology serves humanity rather than creating a world in which humans are subservient to machines.
Britain
The UK unveiled its plans for regulating AI last week. Rather than creating new legislation, the government has asked regulators in various sectors to apply existing rules to AI. The proposals do not mention ChatGPT by name, but they set out guidelines for companies using AI in their products, built around principles of safety, transparency, fairness, accountability, and contestability.
At present, Britain is not proposing any restrictions on ChatGPT or any other type of AI. Instead, the country aims to ensure that companies are developing and using AI tools responsibly and providing users with adequate information on how and why decisions are made. During a speech to Parliament last Wednesday, Digital Minister Michelle Donelan stated that the rapid rise of generative AI has brought about risks and opportunities that are emerging at an unprecedented rate. The government’s non-statutory approach will allow for quick responses to AI advancements and further intervention if needed. According to Dan Holmes, a fraud prevention leader at Feedzai, which employs AI to combat financial crime, the UK’s primary objective is to establish what good AI usage entails. He told CNBC that this involves adhering to principles of transparency and fairness when using AI.
The EU
The rest of Europe is expected to take a far more restrictive stance on AI than Britain, which has been increasingly diverging from EU digital laws since leaving the bloc. The European Union, often at the forefront of tech regulation, has proposed a landmark piece of AI legislation. Known as the European AI Act, the rules would heavily restrict the use of AI in critical infrastructure, education, law enforcement, and the judicial system, and would work in conjunction with the EU’s General Data Protection Regulation, which governs how companies can process and store personal data. When the AI Act was first drafted, officials had not accounted for the breakneck progress of AI systems capable of generating impressive art, stories, jokes, poems, and songs.
According to Reuters, the EU’s preliminary rules treat ChatGPT as a form of general-purpose AI used in high-risk applications. The European Commission defines high-risk AI systems as those that may affect fundamental rights or safety; such systems will face stringent risk assessments and must eliminate discrimination arising from the datasets that feed their algorithms. “The EU has a great, deep pocket of expertise in AI. They’ve got access to some of the top-notch talent in the world, and it’s not a new conversation for them,” said Max Heinemeyer, Chief Product Officer of Darktrace, while speaking to CNBC.
“It’s worthwhile trusting them to have the best of the member states at heart and fully aware of the potential competitive advantages that these technologies could bring versus the risks.” While Brussels develops its AI laws, some EU nations are evaluating Italy’s action against ChatGPT and considering whether to follow its lead. “In principle, a similar procedure is also possible in Germany,” Ulrich Kelber, Germany’s Federal Commissioner for Data Protection, told the Handelsblatt newspaper. The French and Irish privacy regulators have contacted their Italian counterpart to learn more about its findings, per Reuters, while Sweden’s data protection authority has ruled out a ban. Because OpenAI has no office in the EU, national regulators such as Italy’s can take action against it directly. Ireland is typically the most active regulator on data privacy, since most US tech giants, such as Meta and Google, base their European operations there.
U.S.
The U.S. hasn’t yet proposed any formal rules to bring oversight to AI technology. The country’s National Institute of Standards and Technology has put out a national framework that gives companies using, designing, or deploying AI systems guidance on managing risks and potential harms. But compliance is voluntary, meaning firms face no consequences for ignoring it. So far, there has been no word of any action to limit ChatGPT in the U.S. Last month, the Federal Trade Commission received a complaint from a nonprofit research group alleging that GPT-4, OpenAI’s latest large language model, is “biased, deceptive, and a risk to privacy and public safety” and violates the agency’s AI guidelines. The complaint could lead to an investigation into OpenAI and a suspension of the commercial deployment of its large language models. The FTC declined to comment.
China
ChatGPT isn’t available in China, or in countries with heavy internet censorship such as North Korea, Iran, and Russia. The service is not officially blocked in China, but OpenAI doesn’t allow users there to sign up. Several large Chinese tech companies are developing alternatives: Baidu, Alibaba, and JD.com, some of the country’s biggest tech firms, have announced plans for ChatGPT rivals.
China has been keen to ensure its technology giants develop products in line with its strict regulations. Last month, Beijing introduced first-of-its-kind rules on so-called deepfakes: synthetically generated or altered images, videos, or text made using AI. Chinese regulators had previously introduced rules governing how companies operate recommendation algorithms, including a requirement that companies file details of their algorithms with the cyberspace regulator. Such regulations could, in theory, apply to any ChatGPT-style technology.
Author: Stefani Reynolds