Talk:Artificial Intelligence
"This includes even orders which could get them destroyed, although other members of the crew are likely to intercede and this is a gray area that may be considered Grief."
Law 3 prevents this, doesn't it? The AI cannot do anything to cause harm to itself as long as it follows the other two laws. -Chase
- I dunno. Law 2 says they have to obey based on the chain of command. So an Assistant would theoretically be able to order the AI to shut down, if the Chief Engineer weren't there to yell back. I think. --Hotelbravolima (talk) 19:53, 12 September 2012 (UTC)
- I'm not an admin or anything, but the general 0th law I have perceived is that if someone's obviously being a dickbag for no reason, you can ignore him. In-character justifications are varied and meaningless; the only important point is that you have a point to argue. An AI could respond that shutting itself down would pose an increased danger to the humans it is charged to protect, for example. Is this flimsy? You bet. Does it matter? Fuck no. --Coolguye (talk) 20:24, 12 September 2012 (UTC)
New version
I dunno, those giant text boxes are pretty jarring; they really only make sense on the Security page, which historically had a disclaimer for a long time.
- I guess the blue one doesn't need to be there, but AI is different enough that the red one doesn't hurt. Darth various (talk)
- Why is there a picture of me in the AI blue box? I demand royalties for being the example of a dummy. Frontlineacrobat4 (talk)