Future of AI: More Iron Man, Less Terminator

Sep 3, 2017 00:00 · 921 words · 5 minutes read artificial-intelligence robots

We’ve all seen at least one of the Avengers movies. The most classic of superheroes battling it out against the evils of the universe. My favorite Avenger is by far Iron Man, because I’m obsessed with tech.

But even more than Iron Man and the Avengers, I love the Terminator movies. I’ve seen them over and over again. They provide an eerie glimpse into a dystopian future caused by AI and robotics gone out of control. Not only that, but the world the Terminator movies depict doesn’t seem that far from a plausible reality.

This makes a lot of people nervous, and there are three aspects to that. First is the recent introduction of “killer robots”, which have been shown to be active in the military, amongst other places. Next is a lack of understanding of what makes up AI and ML and how far we have come with these technologies. Finally, this is a topic often debated with scary headlines and experts talking about “very real possibilities”. Often the media will omit discussing the proactive countermeasures we are putting in place.

Don’t get me wrong, I’m not arguing against the possibility of a Terminator-like future. It’s a very real possibility, and because of that, robots and the future of AI should be treated very seriously and very carefully. But given the direction we are heading, we’re more likely to end up with Jarvis and Iron Man than a Terminator apocalypse.

Killer robots

You’ve probably seen the term “killer robots” in headlines and blog posts a fair bit. It’s a topic that goes through waves of popularity every couple of months, with headlines like “Prepare for rise of ‘killer robots’ says former defence chief” from The Telegraph and, more recently, “Killer robots are almost inevitable, former defence chief warns” from the Independent.

I recently came across this video of Neil deGrasse Tyson talking about ‘killer robots’:

“Our machines have been killing us ever since we’ve had machines.”

Most of us operate a machine every day when we drive our cars. And it’s a risky machine to operate, one that leads to many injuries and deaths. The WHO reported that 1.25 million people worldwide died in traffic-related incidents. And that’s just cars. Even as I’m writing this post, I see an article published by the BBC reporting that a man had been crushed by a tractor in Devon, UK. Yet another instance of a machine-related death.

Neil explains that if you build a robot and it turns on you, well, you built the robot, so you can un-build the robot. Of course it isn’t as simple as that in all cases, but for the most part it makes sense. If you’re thinking back to the Terminator movies, in which the robots managed to replicate and enhance themselves, we’ll get to that soon.

You can apply this same sort of thinking to robots used by the military as well. The military has the ability to use missiles, tanks, and everything in between. So now that they’ve started using “killer robots”, it’s just another weapon used in the same manner. A person still has to tell the robot to pull the trigger or launch the rocket. When that changes, it becomes a different ethical argument.

Where are we now

The other day I came across this tweet:

I think it sums up our position nicely. It’s still early days. Yes, we have technology and raw computational power advancing at an increasing rate. We also have quantum computers making their way into the game, and advanced machine learning techniques able to do amazing things. But we still have a long way to go before we manage to build a robot anywhere near the level of advancement of a Terminator.

Still, it’s never too early to start preparing. And I agree 110%: we should be proactive with countermeasures for how dangerous these machines might one day be.

Proactive countermeasures

A little while ago, at the end of 2015, a non-profit AI research company called OpenAI was founded. It got a lot of press, having attracted individual sponsors such as Elon Musk, Sam Altman, Reid Hoffman and Peter Thiel. And rightly so; their mission is very relevant in these matters:

“OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.”

The general idea is that the software that will be a part of the future of robotics and AI will be open source. If one day robots start fighting back, we can work out what they’re thinking. And if these robots manage to enhance themselves, then we’ll have the code that caused that to happen.

Depending on the code itself, we may even be able to remove the ‘bad bits’ and create a robot army to fight on our behalf. But only the future knows whether that’s a good idea or not.

There are other organisations, and people, much smarter than you or me working on these issues as well. They are putting in safeguards and applying software validation techniques to the next generation of AI. Just as cars require rigorous safety checks and regulations, future robots are going to need the same.

Sources