I have a really uneasy feeling these days. It comes from the people who are trying to convince me that one of the more serious existential threats we face is the possibility of dangerous, runaway AI. I get it, I do.

We were having a discussion of Asimov’s I, Robot the other day.

And I got more uneasy.

Our discussion included a mention of Asimov’s Laws. The laws go something like:

  1. A robot may not injure a human or, through inaction, allow a human to come to harm.
  2. A robot must obey humans (“the people”) unless such orders conflict with (1).
  3. A robot must protect its own existence insofar as doing so does not come at the expense of (1) and (2).

And the reason I am uneasy is that we spend a LOT of time on designing AI and robots to follow these laws, and far less on asking the same of the non-robot entities in our lives. Where is the daily conversation, or perhaps obsession, with that? Where are Asimov’s Three Laws of the American Presidency? Where are the Three Laws of Government? Can someone write the new novel, “I, Government”? I know there are dystopian political tomes out there, but I don’t see (1) through (3) being engaged as seriously as other things are.

Indeed, many of the most forceful discussions of the role of government celebrate times when (1) is violated by policy. If drone strikes or undeclared wars in the Middle East strike you as obeying (1), I don’t quite know what to say. But we can go much farther. Do we really think that government actions or inactions do not expressly injure or harm other humans? I am quite sure, for example, that ethanol subsidies kill people. As does exempting the Department of Defense from some of the environmental protection laws that the rest of us abide by. And so on.

So I ask again: why not I, Government? Or does it now make me some kind of atomistic zealot to hope for humanity that, first, our governments, like our doctors, do no harm?

