
There is a presumption in the planning vs. markets discussion, going back at least to the debates involving Abba Lerner and Oskar Lange, that planning an economy was simply a matter of computing power. That is a gross simplification, but if you think about Soviet-style input-output matrices (still in use, even in the US), the fundamental question/challenge seemed to be simply coming up with identifiable functional forms for consumption, investment, and macroeconomic growth, after which it was a matter of churning and burning an optimization algorithm to produce the series of Qs and Ps that “solved” such an economy, the same way a “market” would “solve” it.
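The input-output picture can be made concrete with a toy Leontief system: given a matrix A of technical coefficients and a vector d of final demand, the planner's required gross outputs x satisfy x = Ax + d, so x = (I − A)⁻¹d. A minimal sketch, where the two-sector coefficient matrix and demand vector are invented purely for illustration:

```python
import numpy as np

# Hypothetical 2-sector economy (say, agriculture and industry).
# A[i, j] = units of good i used up to produce one unit of good j.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])

# Final (household) demand for each good.
d = np.array([100.0, 200.0])

# Leontief solution: gross output x must satisfy x = A @ x + d,
# so we solve (I - A) x = d.
x = np.linalg.solve(np.eye(2) - A, d)

print(x)  # gross outputs required to meet final demand
```

This is the easy part of the planner's problem: the arithmetic is trivial once the coefficients are written down. The hard part, as the rest of this post argues, is where the functional forms and the demand vector are supposed to come from.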

For now I do not wish to rehash the zillion arguments regarding this. Right now I want to focus on two fundamental observations. We shall engage them more in the future.

ONE: I do not think that, for many people, the essence of life lies in making optimal decisions. And in any case, even if we want optimal decisions, having an AI figure out what to do so that we make the “best” choice every time ultimately takes away the very thing that makes being human worthwhile – human agency. (SIDEBAR: I find an extremely eerie analogy here, peddled by dogmatists of the left and right who demand total purity to their worldview as the price of acceptance into the tribe – for example, look at the way that woke activists condemn Clarence Thomas or even Thomas Sowell as sell-outs living a lie of false consciousness. Think about the removal of agency, and of course the outright racism, that such positions imply.)

TWO: Suppose we grant that we DO in fact want to optimize. Does the AI need to understand WHY different agents work, save, and invest? Presumably it does. The reason I have a coffee mug next to me right now is to keep my window shade pulled down. If the AI knew that I wanted my shades pulled down, is it “smart” enough to look around at all of the possible substitutes and pick the best way for me to keep the shades drawn? Does it know why I want my shade drawn right now (so I do not scare the deer out in my yard) as opposed to trying to keep sunlight out during the day? In any case, that is beside my point. My point here is that we consume for all kinds of reasons, often unknowable even to ourselves. One of the fantastic things about the world is that lots of our economic behavior is constructive and positive-sum. But on the other hand, many of us are obsessed with games of status competition and other zero- and negative-sum games. We want to be “the smartest,” we want to be “the most popular,” we want to get the best mates, we want to be remembered by history, and many people simply want to have power. There is no denying this, even for people like me who claim not to explicitly lust for those things. But think about this: how is the AI/ML entity going to “optimize” in a world where so many of our rawest instincts and emotions are dedicated to negative-sum, destructive competition? Who will write “the decider” algorithm that makes us settle for second, or third, or n-th best in a world where we seek status and power? How will we and the AI not destroy the world? Note that this goes well beyond the standard paper-clip problem that enlivens the conversation at AI dinner parties.

One Response to “AI, Supercomputers and Computational Socialism”

  1. Snorkel says:

    This reminds me of the Saturday Morning Breakfast Cereal webcomic #2569. You can search “SMBC Felix” if you don’t want to click the link.
