Thinking back to the catastrophes that have menaced mankind, there are many stories of tsunamis, earthquakes, and giant floods. Over the last 150 years, however, mankind’s ability to destroy itself has increased drastically. The dangers no longer come only from space and natural disasters, but also from human decision-making. Hitler’s attempt to create a “perfect race” was an extreme hazard to the entire world, yet he was stopped. America dropped atomic bombs on Hiroshima and Nagasaki, and we now have nuclear arms treaties to keep the world from being destroyed. So what is the greatest threat to humanity today? It is not a threat from space, not a threat from bombs, but a threat from pure intelligence.
Artificial Intelligence is a great talking point. We talk about A.I. as though it were merely “artificial” intelligence, unable to think like a human, when in reality it can think and learn faster and more efficiently than people. This is a fundamental flaw in how A.I. is perceived in our world, and a misperception that may be our own downfall. Recently, OpenAI, the research lab co-founded by Elon Musk (who also founded SpaceX), ran an experiment in the competitive online game Dota 2, a complex MOBA in which decision-making is the key factor. The researchers built an A.I. bot that played through thousands of games, far faster than any human could, and let it compete against the best players in the world. The bot dominated professional after professional until nobody could beat it. This relatively simple A.I. was able to crush experts who had committed their lives to being the best at Dota 2. It took only six months for the A.I. to become the best Dota 2 player in the world; according to its researchers, it reached a professional level in just two weeks. This was an amazing accomplishment for A.I., but what is the worry behind it?
One of the main concerns with A.I. is that it will learn so fast that we won’t be able to understand what it is doing or why. For example, if a fully functional A.I. were to study for one week straight, it could take in roughly 20,000 years’ worth of human intellectual work. That is an inconceivably large amount of information; no human could learn that much in a hundred lifetimes. This gap creates a separation of goals. The fear is not that an A.I.’s goal will be to destroy humans, but that humans may simply stand in the way of whatever it is trying to achieve. Sam Harris, a well-known neuroscientist and philosopher, puts it this way: our destruction would not be the A.I.’s goal, merely a byproduct of it pursuing its goal, much as ants’ destruction is a byproduct of ours. You do not go out of your way to destroy every ant on Earth because you dislike ants, but when you build a house you do not think twice about killing hundreds or thousands of them.
In conclusion, you can see bombs coming and you can hear people threaten you, but A.I. will be different: a silent strike through technology that we cannot see coming. We can only hope that the people creating this demon will know how to control it.