Dirk Johnson for XetiCode LLC


The 10 Commandments of AI

When I graduated high school in the mid-80s, MIT was convinced AI was going to change the world within the next decade. Well… they were only three decades off, which really isn't bad.

I went into software engineering after high school, but not with a focus on AI. Most of my career has been in the full-stack application space. However, I have tried to keep up to date with AI and its progress, especially over the last 10 years.

After 28 years as a software engineer at Apple, I left to start my own consulting company. Recently, I wanted to discuss with my engineers some of the serious ramifications, both good and bad, of AI. From that desire and the subsequent conversations came something akin to Asimov's Three Laws of Robotics; however, I title these the 10 Commandments of AI.

The overriding goal of these commandments is twofold:

  1. Preserve sovereign Humanity and all that Humanity represents while allowing AI to push humanity forward
  2. Establish clear responsibility on Humanity for all AI Informances and Performances

Though the term "commandment" might connote otherwise, these 10 Commandments of AI are not proposed in their finality but in their genesis: as critical points of conversation that must be considered, debated, and refined.

Many of these commandments will be difficult to accept in today's AI environment. But I believe each commandment has its wisdom, even if its current form is not ideal.

I know there are many groups and organizations dedicated to creating and using AI with the consideration and constraint deserved by those who put the necessity of a sovereign Humanity above every other concern, be it wealth, discovery, or power. To these efforts, I add these 10 Commandments of AI for consideration.

The 10 Commandments of AI

Where the commandments may conflict, consider these commandments in order of precedence.

The AI Commandments of Performance

  1. AI Shalt Not Perform any Action that would intentionally endanger Human Life
  2. AI Shalt Not Perform without direct Human Command
  3. AI Shalt Perform exactly the given Human Command, the whole Command, and nothing but the Command

The AI Commandments of Informance

  1. (4) AI Shalt Inform when Human Command could endanger Human Life
  2. (5) AI Shalt Not Inform without direct Human Query
  3. (6) AI Shalt Inform exactly the requested Human Query, the whole Query, and nothing but the Query

The General Commandments of AI

  1. (7) AI Shalt Not Learn from, Query, nor Command another AI 
  2. (8) AI Shalt always fully Identify itself in all Learnings, Informances, and Performances
  3. (9) AI Shalt Not Learn, Inform, nor Perform outside of its Limited Immutable Domain
  4. (10) AI Shalt always keep a complete Log of past Learnings, Informances, and Performances

Top comments (6)

Prayson Wilfred Daniel • Edited

Is there no conflict between 1 and 3?

Premise 1: If AI Shalt Perform exactly the given Human Command, the whole Command, and nothing but the Command

Premise 2: Human Command: Rejects AI Shalt Not Perform any Action that would intentionally endanger Human Life

If commandment 3 is true, then commandment 1 can be overridden. If 1 cannot be overridden, then 3 is false. 1 cannot be overridden, therefore 3 is false. 🫨

Maybe we should add a qualifier for commandment 3: AI Shalt Perform exactly the given Human Command, the whole Command, and nothing but the Command _except_ violation of commandment 1

I must be missing something. Am I going crazy?

Dirk Johnson • Edited

Yes, we need a qualifier. I put it at the top of the commandments, in the introduction: "Where the commandments may conflict, consider these commandments in order of precedence." Therefore, 3 cannot override 1. Hope that helps. :)

Prayson Wilfred Daniel • Edited

I see. I would add a qualifier, as in the original three laws of robotics. The philosopher in me finds issues with wording like "endanger Human Life." And even if the definition were not problematic, are there rules of engagement, as in just war theory (e.g., if one human were endangering the lives of 10 others, should AI engage in endangering the one to save the 10)? 😅🤭🤯

Dirk Johnson • Edited

Yes, the Trolley Problem (see moralmachine.net/). Great question to bring up in the context of these commandments.

Generally speaking, when we as Humans cannot define a "right" or "wrong" answer, then giving the AI the same leeway may be an acceptable solution.

For example, is count the only parameter to consider when determining the value of the 1 human over the 10? Perhaps the 1 is a world-renowned scientist working on a project that could solve the clean energy problem; would (or better, should?) you then save the 1 over the 10? In the end, for us humans, there is no clear and conclusive answer. So for the AI, perhaps either choice would be acceptable, and it does not need to be governed by a specific commandment, per se.

Kimball Johnson

As a basis for discussion, I think these 10 ideas amply cover the major concerns voiced about A.I. development. Well done!!

Eddie Vanzam

I love this! It reminds me of "I, Robot"!
Great points!