If you believe the fictional AI of films, the kind that seeks, destroys, and launches the apocalypse, you may question the continued development of the technology. Though many experts don’t expect super-intelligent or even human-level AI for centuries, some anticipate its arrival before 2050, and researchers have already made breakthroughs in AI technology decades ahead of earlier estimates. Apple’s 2015 acquisition of the AI company Vocal IQ indicates the extent to which the technology is becoming part of our lives.
This technology could benefit us with increased access to information, healthcare, finance, and assistance, but it also comes with certain risks. Luckily, we can take measures to avoid them.
Risk One: Conflict
Neil Jacobson, an AI consultant who has worked for the U.S. military, GM, and Ford, believes that AI technology could lead to an “abuse of power,” in which huge tech companies like Google and Apple hoard the technology and leave other countries and companies in the dust. This advantage could also make war extremely one-sided, creating one superpower that dominates the rest.
If the wrong people get their hands on weapons with artificial intelligence systems, they could devastate their enemies. Additionally, the desire to level the playing field with AI weapons might begin a terrifying arms race.
To prevent this, transparency is crucial as the technology continues to develop. The more information companies and countries share, the less likely they are to become alienated from one another. Organizations such as OpenAI, the non-profit co-founded by Tesla CEO Elon Musk, strive to keep AI research out in the open.
Risk Two: Trust
Very few AI experts worry about AI turning “evil”; instead, they concern themselves with how to align its goals with ours and improve its accuracy. AI can still make mistakes, even when its code is, to our knowledge, flawless. In 2015, when shown a series of yellow and black lines, an AI system identified a school bus and reported 99 percent confidence that it was right.
Additionally, when AI follows guidelines in designing products, does it create safe products or merely rule-abiding ones? In simulations of a fertilizer-design scenario, the AI bypassed safety protections once it was given delayed-release agents that would get its design through inspection.
Can we trust AI with power when one little mistake could have disastrous consequences?
The answer is simple: more safety research must be done before researchers can apply AI in these ways. Human experts must take precautions now, before the technology becomes completely intertwined with our lives.
Risk Three: Jobs
One widely cited report predicts that the rise of AI will contribute to a loss of 5.1 million jobs across 15 leading countries over the next five years. Over two-thirds of those losses will come from office and administrative jobs, which will increasingly rely on AI for routine operations.
The report predicted the greatest job losses in health care, energy, and financial services, with developing countries expected to take the biggest hit. Despite the losses, demand for data analysts and other specialized jobs will most likely increase. AI is also expected to benefit researchers by performing basic data-scraping tasks, quickly collecting information that would take humans hours.
A few countries have already taken preventive action by implementing basic income programs, and the movement continues to pick up steam. These programs give people a monthly allowance for basic needs like food, shelter, and clothing. In the United States, several Silicon Valley companies are also developing basic income programs. Even so, many workers may have to retrain for jobs that AI has not made redundant.
The increased involvement of AI in our daily lives now seems inevitable. But we can still prevent its negative consequences through expanded research, transparency, and social programs.
Written by Lindsey Patterson