08/27/2021 / By Mary Villareal
The rapid development of artificial intelligence poses risks of its own, which is why it should be strictly and ethically controlled. AI developers are now discussing how to place limits on the technology to ensure that robots always act in humanity’s interest, because giving them independent personalities of their own could be dangerous.
AI consultant Matthew Kershaw said that the technology may even reach worrying heights within this lifetime, “if you’re young enough.”
Professor Stephen Hawking, the theoretical physicist and cosmologist, said before his death in 2018: “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race.” (Related: EU proposing legislation to restrict facial recognition tech and “high-risk” artificial intelligence applications.)
SpaceX founder Elon Musk agreed with Hawking’s statement. He said, “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.”
AI in itself can be dangerous, but what can we expect from AGI? Put simply, artificial general intelligence is the ability of a machine to perform any intellectual task that a human can. Today’s machines can carry out certain tasks with greater efficacy than humans, but they are not generally intelligent: each system excels at a single function and has no capability in anything it was not programmed for. A system may be as effective as a hundred trained humans at one task, yet lose to a child at literally anything else.
However, Kershaw believes that true AGI will require computers powerful enough to hold a comprehensive model of the world, and that won’t happen anytime soon. “Given that we don’t really understand what it means to be conscious ourselves, I think it’s unlikely that AGI will be a reality anytime soon. We just don’t know what it actually means to be ‘conscious.’”
Further, an AI has to be trained for each function on massive volumes of data, while humans can learn from far less. “A human child doesn’t need to see more than five cars to learn how to recognize a car. A computer would need to see thousands,” Kershaw said.
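That data hunger can be illustrated with a toy experiment. The short Python sketch below is not from the article; it assumes the open-source scikit-learn library and uses its built-in handwritten-digit dataset as a stand-in for photos of cars. It trains the same simple classifier on 10, 100 and 800 labeled examples and reports how accuracy on unseen images grows with the size of the training set.

# Illustrative sketch (assumption: scikit-learn is installed). Shows how a
# simple classifier's accuracy depends on how many labeled examples it sees.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 small images of handwritten digits, 10 classes
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

for n in (10, 100, 800):  # how many training examples the model is allowed to see
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>3} examples -> test accuracy {model.score(X_test, y_test):.2f}")

Run as written, the model trained on a handful of examples misclassifies most unseen images, while the one trained on hundreds does far better, which is the gap Kershaw is pointing to between machine and human learning.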
In 1950, computing pioneer Alan Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions. Turing devised a test to determine whether an AI could pass for a human, and so far no system has passed it, although a few have come close. The closest was in 2018, when Google’s Duplex AI telephoned a hair salon and successfully made an appointment.
However, Duplex was handling one very specific task; a true AGI would also have been able to chat with the hairdresser about anything else.
Artificial intelligence is everywhere today, but while these systems are good at their dedicated missions, none of them has so far been able to learn to do something new without human help.
What scientists call “artificial general intelligence” remains theoretical for now. To achieve it, machines would have to learn from experience, adjust to new inputs and perform human-like tasks on their own.
Experts are developing new technologies that may bring us closer to the inflection point at which machines develop general intelligence. Most AI experts still believe we will see AGI by the end of the century, with the most optimistic estimates falling between 2040 and 2080. Others believe artificial consciousness will never be achieved because we, as humans, do not understand our own.
There are drawbacks to artificial intelligence as well, some of which include the following:
Job loss. Job automation is an immediate concern: it is no longer a matter of whether AI can replace certain types of work, but of the degree to which it can. Many industries, especially those built on repetitive tasks, are already automating, and before long everything from retail sales to market analysis to manual labor could be done by AI.
Privacy concerns. Malicious use of AI could threaten digital security through hacking, physical security through weaponized consumer drones, and even political security through surveillance and profiling. AI can erode privacy and security much as China’s “Orwellian” use of facial recognition technology in offices, schools and other venues already does.
Stock market instability. Algorithmic, high-frequency trading could destabilize entire financial systems. In algorithmic trading, a computer executes trades based on pre-programmed instructions, and it can place high-volume, high-frequency and high-value orders that lead to extreme market volatility. (A simple sketch of such a pre-programmed rule appears below.)
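To show what “pre-programmed instructions” means in practice, here is a minimal, self-contained Python sketch of a classic moving-average crossover rule. It is purely hypothetical: the price series is made up and no real market data, exchange or broker API is involved. The rule signals “buy” when the short-term average price rises above the long-term average and “sell” when it falls below.

# Hypothetical algorithmic trading rule: a moving-average crossover.
# All prices are invented for illustration only.

def moving_average(prices, window):
    """Average of the most recent `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, short_window=3, long_window=5):
    """Return 'buy', 'sell' or 'hold' based on a moving-average crossover."""
    if len(prices) < long_window:
        return "hold"  # not enough price history yet
    short_ma = moving_average(prices, short_window)
    long_ma = moving_average(prices, long_window)
    if short_ma > long_ma:
        return "buy"   # short-term trend has risen above the long-term trend
    if short_ma < long_ma:
        return "sell"  # short-term trend has fallen below the long-term trend
    return "hold"

# Feed the rule a made-up price series, one tick at a time.
history = []
for price in [100, 101, 103, 102, 105, 107, 104, 101, 99, 98]:
    history.append(price)
    print(f"price={price:<3}  signal={crossover_signal(history)}")

A single rule like this is harmless; the concern the article raises is what happens when thousands of such automated rules react to one another at machine speed and in enormous volume.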
Read more about artificial intelligence and the wonders of technology at Computing.news.