Difference between revisions of "Talk:Artificial Intelligence"

From Space Station 13 Wiki
Revision as of 20:24, 12 September 2012

"This includes even orders which could get them destroyed, although other members of the crew are likely to intercede and this is a gray area that may be considered Grief."

Law 3 prevents this, doesn't it? The AI cannot do anything to cause harm to itself as long as it follows the other two laws. -Chase

* I dunno. Law 2 says they have to obey based on the chain of command. So an Assistant would be able to order the AI to shut down theoretically, if the Chief Engineer weren't there to yell back. I think. --[[User:Hotelbravolima|Hotelbravolima]] ([[User talk:Hotelbravolima|talk]]) 19:53, 12 September 2012 (UTC)
** I'm not an admin or anything, but the general 0th law I have perceived is that if someone's obviously being a dickbag for no reason, you can ignore him. In-character justifications are varied and meaningless; the only important point is that you have a point to argue. An AI could respond that shutting itself down would pose an increased danger to the humans it is charged to protect, for example. Is this flimsy? You bet. Does it matter? Fuck no. --[[User:Coolguye|Coolguye]] ([[User talk:Coolguye|talk]]) 20:24, 12 September 2012 (UTC)