Difference between revisions of "Talk:Artificial Intelligence"
Revision as of 21:11, 4 February 2013
"This includes even orders which could get them destroyed, although other members of the crew are likely to intercede and this is a gray area that may be considered Grief."
Law 3 prevents this, doesn't it? The AI cannot do anything to cause harm to itself as long as it follows the other two laws. -Chase
- I dunno. Law 2 says they have to obey based on the chain of command. So an Assistant would theoretically be able to order the AI to shut down, if the Chief Engineer weren't there to yell back. I think. --Hotelbravolima (talk) 19:53, 12 September 2012 (UTC)
- I'm not an admin or anything, but the general 0th law I have perceived is that if someone's obviously being a dickbag for no reason, you can ignore him. In-character justifications are varied and meaningless; the only important point is that you have a point to argue. An AI could respond that shutting itself down would pose an increased danger to the humans it is charged to protect, for example. Is this flimsy? You bet. Does it matter? Fuck no. --Coolguye (talk) 20:24, 12 September 2012 (UTC)
new version

Hi, dunno, those giant text boxes are pretty jarring; they really only make sense on the security page, which historically had a disclaimer for a long time.