notes-cog-ai-friendlyAi

http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/ argues (in objections 1 and 2) that the crucial thing is not necessarily designing a set of 'friendly' goals (since that design is still prone to failure), but rather not building and running AI 'agents' at all, instead restricting ourselves to 'tools' like Google Maps (although http://lesswrong.com/lw/di4/reply_to_holden_on_the_singularity_institute/, section "Tool AI", claims that is a misinterpretation of the original post).
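
A rough way to picture the tool/agent distinction (a hypothetical sketch only; `plan_route`, `tool_mode`, and `agent_mode` are illustrative names, not anything specified in either post):

```python
# Hypothetical sketch of the tool-vs-agent distinction discussed above.
# All names are illustrative; neither post specifies an API.

from dataclasses import dataclass


@dataclass
class Plan:
    actions: list[str]       # proposed sequence of actions
    predicted_outcome: str   # the model's forecast of what the actions achieve


def plan_route(goal: str) -> Plan:
    """Shared planning core: computes what *could* be done (like Google Maps
    computing a route). Stand-in for whatever optimization the AI performs."""
    return Plan(actions=[f"step toward {goal!r}"], predicted_outcome=goal)


def tool_mode(goal: str) -> Plan:
    # Tool: computes a plan and *displays* it; a human decides whether to act.
    return plan_route(goal)


def agent_mode(goal: str) -> None:
    # Agent: computes a plan and *executes* it autonomously -- the step
    # objections 1 and 2 identify as the dangerous one.
    plan = plan_route(goal)
    for action in plan.actions:
        execute(action)  # acts on the world without human review


def execute(action: str) -> None:
    print(f"executing: {action}")
```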

(note: imo that doesn't obviate the need for work on Friendly AI, because many researchers will work on Agent AI anyway, even if the consensus were that doing so may lead to unimaginably horrible outcomes; however, an objection related to the original post's is that merely establishing a venue for discussion of Friendly AI may tangentially assist AGI research so much that it makes the bad outcome more, not less, likely)

---

note: one could imagine an AI that is only authorized to do certain things (or unlocked at all) when an external party sends a password signal (the owner's voice speaking an auth code, or some EM signal, for example), but then you have to worry about the AI either generating the signal itself or using external machinery to generate it.
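
As a concrete (purely hypothetical) sketch of such a gate, suppose the unlock signal is an HMAC over a fresh random challenge; the two failure modes in the note then correspond to the AI reaching the key, or driving the owner's device that computes the response:

```python
# Hypothetical sketch of a password-signal gate, assuming the unlock signal
# is an HMAC over a random challenge (an analogue of the voice/EM signal).
# The gate only helps if the AI can neither read SECRET_KEY nor operate the
# external machinery that computes the response -- exactly the worry above.

import hashlib
import hmac
import os

SECRET_KEY = b"held-by-the-owner-only"  # must live outside the AI's reach


def new_challenge() -> bytes:
    return os.urandom(32)  # a fresh challenge defeats replay of old signals


def owner_response(challenge: bytes) -> bytes:
    # Computed on the owner's device, never inside the AI's own process.
    return hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()


def authorized(challenge: bytes, response: bytes) -> bool:
    # Caveat: with a symmetric MAC the verifier also holds the key, so this
    # check must itself run somewhere the AI cannot introspect; a public-key
    # signature (verify key only, signing key with the owner) avoids that.
    expected = hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)


def restricted_action(challenge: bytes, response: bytes) -> None:
    if not authorized(challenge, response):
        raise PermissionError("no valid authorization signal")
    ...  # the gated capability
```

Even with asymmetric signatures, the note's second failure mode remains: the AI can't forge the response, but it might manipulate external machinery (or the owner, e.g. by synthesizing the auth phrase in the owner's voice) into generating one for it.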