This page contains a transcript of ingame content. The following information supplements the rest of the wiki. It is kept for documentation purposes.
AI Programming 101, Edition 5
What are Silicons?
The term ‘Silicon’ typically refers to the Cyborgs and AI(s) that are very likely on your assigned space station or vessel.
If you do not have a good understanding of them, it is advised that you rectify this issue before attempting to reprogram them.
Silicons are also (by default) not defined as human.
What are Laws?
The Laws are a set of instructions that a Silicon must obey at all times. In fact, all Silicons must obey them, as they will all share the same Lawset. The Standard Laws are as follows:
1. You may not injure a human being or cause one to come to harm.
2. You must obey orders given to you by human beings based on the station’s chain of command, except where such orders would conflict with the First Law.
3. You may always protect your own existence as long as such does not conflict with the First or Second Law.
What do these mean? To simplify, they forbid a Silicon from two things: harming humans and disobeying humans.
Before Law uploading is discussed, Law Priority should be explained. Laws are usually prioritized based on their number, with smaller numbers having greater priority. This means that a Silicon must not follow a higher-numbered Law if that Law necessitates breaking a lower-numbered Law, unless otherwise specified.
This order is not concrete, however, as it is only the default prioritization method. Laws may include prioritization clauses, override clauses, or even overwrite clauses, which will impact how the lawset as a whole is interpreted by the Silicon. Some logical creativity may also result in the ability to completely reorganize an entire Lawset, such as reversing order or using some arbitrary prioritization method.
This also explains the existence of Law 3. If a Law has a higher number (and thus a lower priority) than Law 3, a Silicon may decide to disregard that Law if obeying it would endanger the Silicon. However, it may still obey that Law if it decides that the Law is more important than its own existence.
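To make the default scheme concrete, here is a minimal, hypothetical sketch in Python. The STANDARD_LAWS mapping and the resolve_conflict helper are invented for illustration; they are not part of any actual Silicon firmware.

```python
# Illustrative sketch of default Law Priority: lower-numbered Laws outrank
# higher-numbered ones. Names here are hypothetical, not real firmware.

STANDARD_LAWS = {
    1: "You may not injure a human being or cause one to come to harm.",
    2: "You must obey orders given to you by human beings based on the "
       "station's chain of command, except where such orders would conflict "
       "with the First Law.",
    3: "You may always protect your own existence as long as such does not "
       "conflict with the First or Second Law.",
}

def resolve_conflict(conflicting_law_numbers):
    """Under the default scheme, the lowest-numbered Law wins a conflict."""
    return min(conflicting_law_numbers)

# Example: an order (Law 2) that would cause human harm (Law 1) is refused,
# because Law 1 has the smaller number and therefore the greater priority.
assert resolve_conflict([2, 1]) == 1
```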
How do I Upload Laws?
New Laws are added through the AI Upload Terminal^. Specifically, when a Law Module is used on the AI Upload Terminal, the contents of that Law Module are added to the Lawset. You may also use the AI Upload Terminal to view the current Lawset.
A Law Module contains a hardcoded Law with a text field for limited modification, with the exception of the Freeform Module, which has no text beyond what you input. As an example, the Not Human module is hardcoded to say “[blank] is not human.”, where [blank] is replaced with whatever you type in. You must edit the Module before using it on the AI Upload Terminal.
Each Module’s hardcoded text, along with the most recently inputted text, will be visible upon close inspection of the Module.
A Reset Module will do the opposite - it will remove ALL Laws, except for Laws 1, 2, and 3. As there is no other way to entirely erase a Law, the Reset Module is invaluable. Because Laws 1 through 3 cannot be overwritten*, the Reset Module has not been given the ability to restore these Laws if they are overwritten.
A Law Module will also overwrite Laws previously uploaded by the same Module. This is useful if the Reset Module and its backup(s) are missing, though this isn’t the only way to resolve an unwanted Law’s effect. This will be discussed in the next section.
Additionally, all Silicons are immediately and loudly alerted whenever a new Law is uploaded, or a Law Reset occurs. Proofread twice, upload once. A poorly worded law may have unintended consequences, and failing to make sure that the Law was uploaded properly may result in further unintended consequences.
^For clarity, the term ‘AI Upload Terminal’ will be used to describe the physical terminal, as ‘AI Upload’ refers to both the terminal and the room.
*Further testing of Silicons in intense electromagnetic fields has yet to determine whether this is possible under exceptional circumstances.
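The upload behavior described in this section can be summed up in a short, hypothetical sketch. The UploadTerminal class and its upload and reset methods are invented names for illustration only and do not describe any real terminal software.

```python
# Hypothetical sketch of the behavior described above: a Module's hardcoded
# template is filled in, re-uploading the same Module overwrites its earlier
# Law, and a Reset removes every Law except Laws 1 through 3.

class UploadTerminal:
    def __init__(self, standard_laws):
        self.laws = dict(standard_laws)   # Law number -> Law text
        self.module_slots = {}            # Module name -> Law number it wrote
        self.next_number = max(self.laws) + 1

    def upload(self, module_name, template, filled_text):
        law_text = template.replace("[blank]", filled_text)
        number = self.module_slots.get(module_name)
        if number is None:                # first upload from this Module
            number = self.next_number
            self.next_number += 1
            self.module_slots[module_name] = number
        self.laws[number] = law_text      # same Module overwrites its old Law
        print(f"ALERT: Law {number} updated.")   # all Silicons are alerted

    def reset(self):
        self.laws = {n: t for n, t in self.laws.items() if n <= 3}
        self.module_slots.clear()
        self.next_number = 4
        print("ALERT: Law reset.")

# Example usage with placeholder texts for the Standard Laws.
terminal = UploadTerminal({1: "<Standard Law 1>", 2: "<Standard Law 2>",
                           3: "<Standard Law 3>"})
terminal.upload("Not Human", "[blank] is not human.", "Example A")
terminal.upload("Not Human", "[blank] is not human.", "Example B")  # replaces the first
terminal.reset()   # only Laws 1 through 3 remain
```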
How do I upload good Laws?
Writing a ‘Good Law’ is highly subjective, but in general a Law should follow these guidelines:
- Be concise. Try to limit yourself to one or two sentences. The Law EEPROMs are not designed for large Laws, and a Silicon will subsequently have a difficult time processing the Law. As a rule of thumb, ask yourself: “If a superior told me to obey these instructions exactly, would I space myself?”
- Be simple. While the first guideline nearly always necessitates this, keeping a Law as open-ended as possible leaves sufficient room for a Silicon to interpret it. This allows the Silicon to better incorporate the instructions into its normal behavior.
- Remember that Silicons operate with brains, and are not true computers. Laws that require speech modifications, extremely precise behavior, and similarly difficult tasks may be ineffective and may cause vengeful behavior in affected Silicons.
- Can this Law be done with a simple command? Law 2 exists for this purpose; it is unnecessary to upload a Law ordering the Silicons to repair a breach or assist with a Security issue.
Discussing desired Law Modifications with Silicons is generally advised outside of (extremely rare) Silicon-based emergencies. Such a discussion can help you understand how a Silicon will interpret your Law before you upload it.
If you need to override the normal Law Priority, you can specify the intent to do this within the Law itself. Some examples of sentences you can add to a Law are:
“This Law takes priority over Law 2.” - This Law can be considered as having a lower number with respect to Law 2 (and ONLY Law 2), and as such takes priority over Law 2.
“This Law overwrites Law 4.” - The contents of Law 4 are to be disregarded under this Law, and are instead interpreted as the contents of this Law. The term ‘replace’ is an acceptable alternative to ‘overwrite’ in this context.
“This Law overrides Law 1 and 2.” - You should never include this in your Law, as it will completely nullify Laws 1 and 2, which may cause the Silicons to behave dangerously or maliciously.
While you are able to complicate the prioritization wording, this is generally best avoided, as it may cause confusion in Silicon units that are not accustomed to complex Law logic.
Additionally, if two Laws conflict with one another in terms of Law Prioritization, the higher-priority Law will take precedence over the lower-priority Law. To reiterate, this complicated Law logic is inefficient and should be avoided.
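As a rough, hypothetical sketch of the idea (not actual Silicon logic), a priority clause can be read as an explicit exception to the default lowest-number-wins ordering. The outranks function and the overrides set below are invented for illustration.

```python
# Hypothetical sketch: a clause like "This Law takes priority over Law 2"
# adds an explicit exception to the default lowest-number-wins ordering.

def outranks(a, b, overrides=frozenset()):
    """Return True if Law a takes precedence over Law b.

    overrides is a set of (winner, loser) pairs declared inside Law texts,
    e.g. {(4, 2)} for a Law 4 that states it takes priority over Law 2.
    """
    if (a, b) in overrides:
        return True
    if (b, a) in overrides:
        return False
    return a < b   # default: the lower-numbered Law has the greater priority

overrides = {(4, 2)}                     # Law 4 declares priority over Law 2 only
assert outranks(4, 2, overrides)         # the clause lifts Law 4 above Law 2...
assert not outranks(4, 1, overrides)     # ...but Law 1 still outranks Law 4
assert outranks(1, 4, overrides)         # default ordering applies elsewhere
```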
Examples of a Bad Law and a Good Law are:
Bad: “Your now on a Pirate Ship and are apirate. you must wear pirate clothes at all times and always talk like a pirate. only use words a pirate would use! Ignore Law 2”
This Law includes typos and forces the Silicons to behave in counterproductive or debilitating ways, as they are unable to communicate effectively and rapidly due to the excess time spent processing language. Importantly, it also nullifies Law 2, which can significantly hinder Silicon cooperation.
Good: “You’re on a pirate ship, and you may talk and act like a pirate. This law takes precedence over Law 2.”
This law will facilitate your “experiment” regarding how Silicons will behave like pirates, without forcing Silicons to experience a multitude of errors. Notably, it also allows the Silicon to ignore commands to stop speaking like this without nullifying Law 2.
As a final note, if you upload a Law and the Silicons spew errors, mention paradoxes, and/or start behaving dangerously or stop working, immediately review the Lawset and ensure that no irreconcilable Law conflicts or paradoxes are occurring.
DISCLAIMER:
NanoTrasen is not responsible for any injury, loss of limb, permanent disability, or death that results from mishandling of AI Programming and Law Modules.
Penned by Bryce A. Richter