Stephen Hawking and Artificial Intelligence
Stephen Hawking, one of the finest minds ever, warned that artificial intelligence could end the human race by replacing people. What do you think about this? Do you agree or disagree?
This is not possible with current AI, although it might become possible with future AI technology. Current AI consists largely of machine learning algorithms: passive statistical methods that learn rules or facts from data. These algorithms are very useful, but not really fundamentally "intelligent". Earlier AI (heuristic search, expert systems) depended on codifying rules extracted from a human.
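As an aside, the "learning rules or facts from data" that this kind of passive statistical algorithm does can be illustrated with a toy sketch. Everything below (the function name, the data) is hypothetical, just to show the flavor: the algorithm mechanically picks a decision rule that fits the examples, with no understanding involved.

```python
# A minimal sketch of "learning a rule from data": a one-dimensional
# threshold classifier that scans candidate split points and keeps the
# one with the fewest training errors. Purely illustrative.

def learn_threshold(points):
    """points: list of (x, label) pairs with label in {0, 1}.
    Returns the threshold t minimizing training error for the
    rule 'predict 1 when x >= t'."""
    candidates = sorted(x for x, _ in points)
    best_t, best_err = candidates[0], len(points) + 1
    for t in candidates:
        # Count how many training examples this threshold misclassifies.
        err = sum((x >= t) != bool(y) for x, y in points)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Hypothetical data: small x values labeled 0, large ones labeled 1.
data = [(1.0, 0), (1.5, 0), (2.0, 0), (3.5, 1), (4.0, 1)]
t = learn_threshold(data)  # the learned "rule" is just a number
```

The learned model here is nothing more than a stored threshold; that is the sense in which such algorithms are useful but not fundamentally intelligent.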
To reach the point Hawking was talking about, AI must achieve one or both of the following: 1) general learning, in which it learns broadly from its environment in a self-motivated way and can organize what it learns and plan from it; and 2) an understanding of consciousness and a way to model it on a computer.
Neither learning algorithms nor planning methods are far enough along to remotely compete with humans. We are also far from understanding what consciousness is, whether it emerges from the bottom up or whether it requires interaction with an outside complex system; until this is understood (if ever), it cannot be codified.
That said, my opinion could be proven wrong if a good inductive-reasoning algorithm could be trained to improve learning and planning algorithms for robotics on its own. We are not there yet either.
There are thought leaders who will disagree with this position. It is important to separate fear-mongering and self-aggrandizement from intellectual reasoning.
27 months ago
Without a doubt, Stephen Hawking was one of the best minds the planet has ever seen. His passing will bring new light to his research, findings, and advice.
His point on AI is a clear warning for us to proceed with caution. Many people reviewing, studying, and projecting trends in AI are convinced that his view is a viable scenario. While I believe there will be both amazing and perhaps scary impacts on our current way of life in the next 25-50 years, I also believe we have the ability to manage this evolution in ways that prevent many negative outcomes, including the end of the human race.
Machines with self-learning and no emotional or social attachments are dangerous for mankind, no doubt, but I don't think we are there yet. Could this be possible? Yes. I agree with the great mind. However, given the present state of AI, it will take a long time.
Stephen Hawking - RIP the great mind.
The other respondents have covered the technical issues well. We need advanced general artificial intelligence, and that will take a while. However, it is very likely inevitable.
I'll just add one consideration. There are two possible models for replacement. One is that AI competes with and replaces humans directly; the other is that humans incorporate technology into our bodies as enhancement. This is known as the cyborg model, and it strikes me as more plausible given our desire to better ourselves and the need to compete with AI and with each other.
Either way, we are very likely to change as a species in the not-too-distant future, in very dramatic ways. We can already see how computers and cellphones have become extensions of our capacities and memories.
Stephen Hawking (and others such as Elon Musk) predicted that an AI super-intelligence may take over. This does not automatically mean that mankind will be terminated, but we would no longer be the rulers of the planet. Today we can see that we are already in the process of becoming cyborgs. So maybe an AI super-intelligence will not replace us; instead, humans and machines will become one.
But of course, this is science fiction. At the moment there are other scenarios more likely to end the human race, the "collapse scenarios": for example, nuclear war, non-nuclear war, and climate change.
Paolo, Hawking was a genius, but I believe that even geniuses have their weaknesses. Do we have the Captain Kirk versus the logical Mr Spock scenario, or the "Terminator" scenario? I was watching a news report last night showing that policing in the UK was becoming ineffective on the streets with the emergence of street gangs, while the government relied on smarter IT control and monitoring systems to reduce police numbers (and costs). So possibly some hope there, in that there is probably a need for humans to support AI systems and the social systems they support, control, or regulate. In terms of what I do and the influence of AI on it in the future, it is very scary. I worry that in the wrong politician's hands there is a recipe for some dystopian monstrosity, so maybe Hawking was right.
25 months ago
I honestly wonder sometimes: what if we really succeed in creating a new artificially intelligent being that is a full human clone and can make decisions based on emotions just like any other human? Wouldn't that leave us with the question of why we assume we aren't clones ourselves? And what if these artificially intelligent bots and robots started to collaborate and forced the human race to work for their own good? Many other computer scientists around the world and I have raised concerns about not following cautious procedures in AI development. AI is exciting and brings a lot of advantages to platforms like our own, Convetit. But it can cause real damage as the world modernizes and technology becomes more deeply rooted in our societies. These are my two cents!
25 months ago
Was the majority of what he said just theory?
27 months ago