Singularity and AI rights

0
1383 views

Would achieving the singularity open the debate on governments reserving rights for AI?

Artificial Intelligence
Rights Management
Singularity
Conversational AI
Arun Rawlani
76 months ago

5 answers

4

First, the singularity is not near, and it will not be achieved in your lifetime or mine. AI is nowhere near mature enough for machines to have a mind of their own. The darling "deep learning" algorithm is in fact a dumb, commoditized algorithm that lacks intelligence altogether; it is just highly useful on big, dense data sets.

If the singularity did happen, machines would take over the government and make that decision. Don't worry, it's not near.

Ed A
76 months ago
1

I think Ed is right here, but your question is perhaps broader and more relevant than you initially posed it. To the extent that AI becomes more powerful, more self-regulating, and more pervasive, it will likely provoke increasing levels of government regulation and control (especially under certain governments).

See current debates and calls to limit the power and usage of military AI. Or see previous debates about human cloning or genetic modification.

Numerous issues are posed that imply a need for regulation: transparency, privacy, decision-making, liability. The space where regulation is moving the most, to my knowledge, is autonomous vehicles, so this will be a very interesting test case for future AI regulation. Also possible: financial technology, caretaking or medical robots, educational AI, and policy related to labor displacement and automation. Other ideas?

If anyone has other thoughts on AI regulation, where it will be, where it is now, what issues will be in play, or relevant sources, I would love to hear them!

Thanks,
~Daniel

Daniel Schiff
76 months ago
Hello Daniel, there are several initiatives. One is FAT/ML and its Principles for Accountable Algorithms and a Social Impact Statement for Algorithms: http://www.fatml.org/resources/principles-for-accountable-algorithms#social-impact - Patrick 76 months ago
Thanks Patrick! Similar to an IEEE standards project I am working on. - Daniel 76 months ago
Hello Daniel, thanks! Right, these two visions are quite compatible. Furthermore, I re-discovered the works of W. Edwards Deming; for example, his theory "System of Profound Knowledge" could also be used to manage AI behavior. His "Appreciation for a System" from 2000 could be updated to "Appreciation for an Algorithm". - Patrick 76 months ago
1

Not a great question. Better to ask: how do we reach consensus on global ethics that provide the building blocks for every community, removing bias and preconceptions?

Tony Fish
76 months ago
0

As mentioned by Ed, the singularity is the creation of a super artificial intelligence. Such software would likely refuse to act on our requests, similar to "Planet of the Apes". Before the singularity, there could be a situation where an AI gains self-awareness. In that case, "human" rights would indeed apply to AI.

Patrick Henz
76 months ago
0

If some people recklessly put harmful ideas into an AI, the damage might not be reversible, because there is no guarantee that such ideas can be corrected within any expected time limit.

Yucong Duan
76 months ago