After Europe’s landmark provisional AI agreement, analysts and critics of artificial intelligence alike are warning that government use of AI remains under-regulated. The European deal does include some governance of how and when governments can use AI, especially on civilians. However, critics argue this isn’t enough.
World powers have lagged in their efforts to regulate AI, especially since ChatGPT was introduced on Nov. 30, 2022. Europe’s latest AI governance deal has already been touted as historic by many, though both those involved in AI development and those worried about the risks of machine intelligence have criticized it.
Still, the European Union is the biggest world power to attempt to regulate AI and put laws in place to keep bad actors from using artificial intelligence in harmful ways. While President Joe Biden has made a few moves to regulate AI, the U.S. government remains far behind in truly governing its use.
The debate over whether artificial intelligence can even be regulated well enough to protect people continues. Nokia chief Rolf Werner reportedly wrote recently, “Only AI can protect against AI.”
Many analysts worried about how governments could use machine intelligence against their own citizens have gone public with their objections to both the EU and United States legislation.
The EU’s deal currently allows governments to use AI, but only in specific situations. Governments cannot deploy these systems in an untargeted way; they can, however, use them to find severely dangerous criminals or to stop a potential terrorist attack.
EU members and lawmakers reached a preliminary agreement on a deal that aims to regulate AI technologies, including ChatGPT and governments' use of AI in biometric surveillance. https://t.co/IGredhwlsD
— DW News (@dwnews) December 9, 2023
Europe is still debating exactly what governments can and cannot do with AI. Many governments want to ensure their militaries are not banned from using artificial intelligence, which has led critics to argue that governments and militaries should not be able to use these systems without regulations of their own.
The use of biometric surveillance has greatly worried analysts, especially because the current European deal does not fully bar governments from using it. Instead, biometric surveillance is allowed, but only in real time and only when used in a targeted way.
For those concerned about countries surveilling ordinary citizens, this governance isn’t enough. However, the EU agreement isn’t finalized, and negotiations over the final legislation will likely continue.
Other critics of AI point to the lack of regulation in countries such as Russia and China, which have long acted against the U.S. through election propaganda and cyberattacks, as a worrisome gap.