Does Artificial Intelligence (AI) threaten the existence of humanity? Elon Musk thinks so
AI, or Artificial Intelligence, has long been seen as the stuff of science fiction, something nearly impossible. But humans have walked the earth for a long time, evolving to learn and to build, and AI may not be as impossible as we once thought.
AI is not like an ordinary computer. Rather, it is a machine intelligence that can think and act based on its own goals and assessments, unlike today's computers, which need our inputs to produce outputs and are programmed in ways we can control. At least some people in the science and technology sector have started to take the idea seriously, and they are also considering the threat it may pose to humanity. Entrepreneurs like Elon Musk even call it the biggest existential threat we face.
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”
Musk made these comments during an interview with students from the Massachusetts Institute of Technology (MIT) at the AeroAstro Centennial Symposium.
He went on to say that with artificial intelligence we are summoning a demon. “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out,” Musk added.
Given these warnings, Musk's own investments in AI research may seem puzzling. But he recently described those investments as “keeping an eye on what’s going on” rather than expecting a return on the capital.
Musk also spoke about his space exploration company, SpaceX, and about getting to Mars.
“What matters is being able to establish a self-sustaining civilisation on Mars, and I don’t see anything being done but SpaceX. I don’t see anyone else even trying,” said Musk.
Meanwhile, the Future of Humanity Institute at the University of Oxford, UK, specializes in studying the ‘big-picture’ future of the human race. The risks the Institute considers include the potential threat posed by artificial intelligence. Here is what Stuart Armstrong, a philosopher and Research Fellow at the institute, has to say about AI and its risks.
“One of the things that makes AI risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad. With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk,” Armstrong explains. “You take a nuclear war for instance, that will kill only a relatively small proportion of the planet. You add radiation fallout, slightly more, you add the nuclear winter you can maybe get 90%, 95% – 99% if you really stretch it and take extreme scenarios – but it’s really hard to get to the human race ending. The same goes for pandemics, even at their more virulent.”
“The thing is if AI went bad, and 95% of humans were killed then the remaining 5% would be extinguished soon after. So despite its uncertainty, it has certain features of very bad risks.”
So the point Elon Musk makes is serious, and certainly not an outcome anyone would welcome. However, there are also people like Google’s Director of Engineering, Ray Kurzweil, who see AI as part of a great human future and hold an optimistic view of organic and cybernetic lifeforms.
So what do you think about AI? Do you see it as part of a bright future, one where intelligent machines take over your job and work for you, or do you see it as a grave threat, as Elon Musk fears?