Elon Musk says Artificial Intelligence must be regulated. Should it be?


It would seem he believes that AI will outpace our ability to prevent it from taking over, which could pose a risk to humanity. Are his observations founded on good reasoning, or is this a little blown out of proportion?
See:
https://www.theverge.com/2017/7/17/15980954/elon-musk-ai-regulation-existential-threat

Assoc. Facilitator
79 months ago

6 answers


First thing to mention: every technology needs regulation; use and misuse are not far apart in the real world. It's not only Musk who has expressed this concern. Bill Gates, Stuart Russell, and Stephen Hawking, all greats of their respective fields, have expressed similar views to some degree. Some of the people voicing such opinions understand the technology better than anyone else; they brought about this renaissance. Their concerns are not wrong. If a machine starts to learn entirely on its own, it can go three ways: (1) good, (2) ugly, and (3) nonsense. Recently there was a problem at Facebook where their AI agents started communicating [https://goo.gl/HDNKUG] in a weird manner (using some words that did not make sense). The machines were immediately shut down, and it seemed more like a case of a glitch. It falls into the third category, as it did not seem to cause harm nor make any real sense (or pose a threat). But if left unchecked, things can go ugly.
However, the current level of AI is far, far from that. Most machines are good at doing only one or a few things; we haven't reached even a fraction of human intelligence. Having said that, this does not make the reasons and concerns invalid. The rate at which technology develops is only going to increase, and it's good to have regulations in place ASAP. But that does not mean no AI; every technology has pros and cons for society. We have regulations for drugs, weapons, chemicals, and biological agents, so why not for AI? Regulations are there to improve the technology and put it to better use; they might fractionally slow it down, but they are necessary.

Paurush Praveen
79 months ago

AI behaves similarly to human individuals. The software takes in information and, based on its perception (sensors and algorithms), processes it to make a decision, much the way a human makes decisions. So it is not surprising that some countries are thinking of giving such systems the status of an "electronic person". This is less because they assume awareness, and more to define legal responsibility.

Should AI behavior be regulated? Definitely yes, just like human behavior. The flip side is that AI can be responsible, but not accountable; accountability stays with the human programmers / owners.

Patrick Henz
76 months ago
Agree; in a way, AI is an expression of human behavior. - Paolo 76 months ago

Remember the days when AI was pretty much those problems in computer science that didn't (yet) work?
And now it seems apps with a for-next loop and some if-then statements are being sold as "AI"...
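As a tongue-in-cheek illustration, here is a minimal sketch of the kind of hard-coded rule program that sometimes gets marketed as "AI". Everything here (the function name, keywords, and replies) is made up for the example:

    # Hypothetical example only: a hard-coded rule "chatbot" with no learning at all.
    RULES = {
        "price": "Our premium plan is $9.99/month.",
        "refund": "Refunds are processed within 5 business days.",
        "hello": "Hi! I am your intelligent assistant.",
    }

    def totally_an_ai(message: str) -> str:
        for keyword, reply in RULES.items():   # the "for-next loop"
            if keyword in message.lower():     # the "if-then statements"
                return reply
        return "I'm sorry, I didn't understand that."

    print(totally_an_ai("How much is the price?"))  # -> "Our premium plan is $9.99/month."

Nothing in this sketch adapts or learns; it only pattern-matches keywords, which is exactly why regulating the label "AI" rather than what an application actually does would make little sense.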
So, I think we need to distinguish between the current state of AI and what Elon refers to as artificial general intelligence.
In the current state of AI (sometimes referred to as narrow AI), regulating the technology makes no sense, since regulation is about the application, not the underlying method.
Regulation applies to what the application does, not to its underlying programming method. The regulation of steel has nothing to do with a bomb built out of steel. We already have regulations for programs used in stock trading, but they apply to what those programs can execute, not to the underlying programming language.
So when people read this article, they should not conflate it with our current AI technology at all; the title of the article is highly misleading precisely because it mixes the two.
Artificial general intelligence is the concept of a completely self-learning machine. We do not understand this at all right now. Talking about today's AI in relation to AGI is like comparing writing about traveling to the moon with actually going there. The implications of AGI are very different from anything we have encountered; for example, the concepts of consciousness and mind suddenly become crucial. We are potentially talking about creating new beings. That is where Elon's worry comes from, and the need for regulations, in the same way we need to talk about regulating changes to the DNA of unborn babies.
Having said this, regulations for businesses that start employing new technologies might require modifications (for example, stock trading programs required regulation changes after the 1987 stock market crash, introducing automatic circuit breakers), but again, this applies at the level of the regulated business itself, when technology extends its capabilities beyond the current regulatory framework.
And this last bit might be a discussion point by itself, and a necessary one, but it has nothing per se to do with AI.

Ronny Fehling
79 months ago

The core concern raised in the article seems to be more about inadequacies in the law-making process, specifically the current mainstream "reactive" approach, which rules out the fast operational pace of proactive, multi-stage, adaptive regulation.
Without such smart laws, either socioeconomic stagnation tends to prevail (as innovation is likely hindered), or, vice versa, the increased risks arising from a failure to foresee the uncertain outcomes of new technologies, including any type of AI (generalised or not), are poorly managed, let alone understood.
One solution is to enable disintermediated, co-creative, arguably AI-assisted law-making processes, focused on policies rather than politics, away from the politicized analysis-paralysis that generally results in regulatory gaps, overlaps, inconsistencies, unwarranted legal complexity, and failure to cope with the pace of an ever-changing, fast-evolving, hyper-connected world.
Generalised or not, and as in many other areas, many of the technical, economic, and governance risks associated with AI (systemic failure risks in large-scale AI systems, self-induced biases in training and/or operational datasets, inadequate economic models under AI-driven mainstream automation) can be foreseen. And while these risks hardly exceed in magnitude any current human-induced calamities, the benefits brought by AI are likely to outweigh them.

Giovanni D
79 months ago

Yes, as AI could be used for any purpose, in a positive or a bad way. The discussion should be about how to regulate it, given that it is evolving extremely quickly: what we could regulate today may soon no longer be relevant, while something new will pop up in a second. At which level? With a single global position? If not, this would be regulated in just a few countries, while in others you could work without any limitation. The question, then, is who could be a global supervisor and authority for this?
To conclude, my reply is still yes, but a general agreement on a regulatory board is fundamental before starting.

Paolo Beffagnotti
76 months ago

Just like any other technology that has an impact on our lives, AI has the potential to affect us mostly in a positive way. Having said that, AI does need regulation. I can definitely picture a time when AI will behave similarly to human individuals. Good or bad, it is difficult to comprehend right now, but I agree with Musk when he pitches for better regulation. No regulation means no responsibility and no accountability. I am sure we cannot blindly trust human programmers to tackle AI the day it starts dominating human lives. If medical devices, pharma products, drugs, biologics, etc. fall under the realm of regulation, then so should AI.

Karan Verma
76 months ago
