03/18/2024 / By Laura Harris
The European Parliament has finally approved the world’s first comprehensive legislation that will set rules and restrictions for artificial intelligence (AI) developers after months of negotiations and political deliberations with European Union (EU) member states.
The legislation, known as the EU AI Act and first proposed in 2021, bans certain AI uses, introduces new transparency requirements and mandates risk assessments for AI systems deemed high-risk. AI tools are categorized by risk level, ranging from “unacceptable” practices, which face outright prohibition, down through lesser degrees of hazard. (Related: U.S., Canadian AI companies COLLABORATE with Chinese experts to shape international AI policy.)
“The AI Act has pushed the development of AI in a direction where humans are in control of the technology, and where the technology will help us leverage new discoveries for economic growth, societal progress, and to unlock human potential. The AI Act is not the end of the journey but, rather, the starting point for a new model of governance built around technology. We must now focus our political energy in turning it from the law in the books to the reality on the ground,” said Dragos Tudorache, one of the lawmakers who oversaw the EU negotiations.
Moreover, the law applies to all AI products within the EU market, regardless of their origin. Violations of the law could result in fines of up to seven percent of the company’s global revenue.
“Anybody that intends to produce or use an AI tool will have to go through that rulebook,” said Guillaume Couneson, a partner at law firm Linklaters.
The EU AI Act, which passed with 523 votes in favor, 46 against and 49 abstentions, is expected to become law in May or June, pending final legal checks and formal endorsement by EU member states.
However, different parts of the law will take effect at different times. Six months after it becomes law, member states must stop using banned AI systems. Rules for everyday AI systems, such as chatbots, will apply a year later. By mid-2026, the full set of rules, including those for high-risk AI, will be in force.
The legislation sets a new benchmark for comprehensive regulation in the AI industry. In turn, other countries have also introduced or are considering new rules for AI.
For instance, in the United States, President Joe Biden signed an executive order on AI in October 2023. Unlike legislation, the order took effect upon signing; it seeks to balance the needs of AI companies with national security and consumer rights, creating an early set of guardrails that could later be fortified by legislation and global agreements. In the meantime, lawmakers in at least seven U.S. states are working on their own AI regulations.
In China, President Xi Jinping proposed the Global AI Governance Initiative to ensure AI is developed and used fairly and safely. The Chinese government has also issued interim rules governing AI tools that generate text, images, audio, video and other content for users in China.
Other countries, including Brazil and Japan, along with international bodies such as the United Nations (UN) and the Group of Seven (G7), are also drafting rules to govern AI.
Watch the video below as Paul McGuire talks about how AI is taking over everything.
This video is from the PAUL McGUIRE channel on Brighteon.com.
AI and genetic engineering could trigger a “super-pandemic,” warns AI expert.
NSA launches AI security center to protect the U.S. from AI-powered cyberattacks.
Elon Musk announces creation of new AI company after spending YEARS criticizing rapid AI development.
Sources include:
Tagged Under:
AI, artificial intelligence, big government, Big Tech, computing, conspiracy, cyber war, cyborg, dangerous, EU, Europe, European Parliament, future science, future tech, Glitch, information technology, inventions, robotics, robots, tech giants, technocrats
This article may contain statements that reflect the opinion of the author
COPYRIGHT © 2017 INFORMATIONTECHNOLOGY.NEWS