Singularity and AI rights
First, the singularity is not near, and it will not be achieved in your lifetime or mine. AI is nowhere near mature enough for machines to have minds of their own. The darling "deep learning" algorithm is in fact a dumb, commoditized algorithm that lacks intelligence altogether. It is simply highly useful on big, dense data sets.
If the singularity did happen, machines would take over the government and make that decision themselves. Don't worry, it's not near.
I think Ed is right here, but your question is perhaps broader and more relevant than you initially posed. To the extent that AI becomes more powerful, more self-regulating, and more pervasive, it will likely provoke increasing levels of government regulation and control (especially in certain governments).
See current debates and calls to limit the power and usage of military AI. Or see previous debates about human cloning or genetic modification.
Numerous issues imply a need for regulation: transparency, privacy, decision-making, and liability. The regulatory space moving fastest, to my knowledge, is autonomous vehicles, so this will be a very interesting test case for future AI regulation. Other possibilities: financial technology, caretaking or medical robots, educational AI, and policy related to labor displacement and automation. Other ideas?
If anyone has other thoughts on AI regulation — where it is happening, where it will happen, what issues will be in play, or relevant sources — I would love to hear them!
As Ed mentioned, the singularity is the creation of a superintelligent AI. Such software would likely refuse to act on our requests, similar to "Planet of the Apes". Before the singularity, there could be a stage in which an AI gains self-awareness; in that case, "human" rights could indeed apply to AI.