AI makes economic management (smoothing out market imperfections) much easier. Perhaps this will make its libertarian would-be accelerators think twice, but I won't hold my breath.
In the limiting case where superintelligence exists, everything becomes difficult to predict. However, it is likely that economic planning will become easier. I want to focus on a more interesting, though less likely, case: we enter a long era of unmetered, but roughly human-level, intelligence.
In this case, we get artificial intelligence that can do any job a human can do from home, but somehow it doesn't alter all that much. Our technology advances more quickly, but not massively so. This might be because the limiting factor in technological advance is resources for experiments, not intelligence. Regardless, in this scenario there is an extended period in which we have AI with roughly human-level intelligence, but that intelligence, for whatever reason, does not become qualitatively better than human intelligence for a long time.
The point I want to make is that in this scenario AI will greatly reduce almost all the major costs of regulating. This is because multiple points of the regulatory cycle (drafting, monitoring, enforcement, compliance, and so on) need a great deal of white-collar labor. These savings exist on both the government and the corporate side.
Consider two economies.
A) The economy is managed. In keeping with mainstream economic theory, where natural monopolies or monopsonies exist, the government either imposes price controls or takes control of the industry. Where artificial monopolies exist, the government breaks them up. The government buys public goods. Activities with large positive externalities are rewarded with subsidies. Activities with large negative externalities are subject to Pigouvian taxes or, where appropriate, simply banned. Where asymmetric information exists, appropriate steps are taken to correct its negative effects, etc.
B) The economy is largely unregulated. Monopolies and monopsonies abuse their position; that's just the way things are. The government does not buy public goods. Large positive externalities go unrewarded, and large negative externalities go untaxed. Consumers are told to resolve their own asymmetric information problems because that's just the way things are.
Note that the choice of A or B is at least theoretically distinct from questions about distribution. In practice, there are good technical and political reasons these questions are interlinked, but at least in theory B could have a generous tax-and-transfer system supporting the poor, and A could have a "fuck you, you're on your own if you can't make a market income" attitude.
Also note that we are talking about schemes that attempt to correct market failures, not changes that might be favored for ethical reasons but which reduce the total size of the economy. We are talking about regulations that we have reason to believe enhance the total output of the economy by internalizing externalities, removing market power, and so on: changes that benefit their winners so much that, should they choose to do so, the winners could compensate the losers completely and still come out ahead. I have written in the past about how I think this Kaldor-Hicks framework is an extremely naive and inherently pro-elite way to think about overall economic "improvement", but we will accept it here for the sake of argument. However, even changes that do not pass the Kaldor-Hicks test will reduce economic efficiency by less than they do now, because the administrative overhead of making and enforcing them will shrink.
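For readers who want the test spelled out, here is a minimal statement of the Kaldor-Hicks criterion in my own notation (the symbols are mine, chosen purely for illustration). Letting $\Delta v_i$ be person $i$'s willingness to pay for the change (negative for losers), a change passes the test when

$$\sum_{i \in \text{winners}} \Delta v_i \;>\; \sum_{j \in \text{losers}} \lvert \Delta v_j \rvert,$$

i.e., the winners gain enough that they could, hypothetically, fully compensate the losers and still come out ahead. The compensation never has to actually be paid, which is a large part of why I regard the framework as pro-elite.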
Note also that neoclassical welfare economics, read "straight off the page", is going to tell you that you should manage externalities and market power. It's probably fair to say that, for all that economists are associated with laissez-faire policies in the public imagination, on the whole they're probably closer to A than B.
However, there is an argument in favor of B, and few economists will want to go all the way to A. The strongest lines of argument against trying to correct all market failures are:
Although the improvements themselves, by definition, increase efficiency, there are costs associated with enforcement, etc., which may be greater than the improvement.
There are informational difficulties in:
Uncovering monopolies
Properly pricing positive and negative externalities
Etc.
These informational difficulties:
Make creating rules more difficult and expensive
Create the possibility of error, which could be more distortionary than the status quo.
There are also costs associated with conforming with regulations. The required changes themselves (e.g., paying for a new filter so you pollute less) are net positives, but:
Given the complexity of regulation, it can be difficult to keep up with and comply with all the rules. Doing so imposes costs, and uncertainty about the rules might make firms hesitate to act.
But also, the state cannot be assumed to be an impartial social planner:
Lobbyists manage to get their hobbyhorses and exceptions enshrined in law, simply by caring much more about the issues than the general public does and fighting harder for them. A large regulatory apparatus, then, can be "captured" to reduce economic efficiency for the sake of a few winners.
Bureaucracies might act in their own interests (e.g., trying to expand the work for themselves).
Government actors might impose values that don't really work in everyone's interest but which they favor for other reasons, for example because those values are less likely to generate negative press. They might be much more worried about a person dying of side effects from a medicine that was legalized a little early than about the countless people who might suffer and die because the medicine is delayed six months. They might be much more worried about occasional spectacular instances of fights or violence in an entertainment precinct than about the fun and memorable life events people accumulate there, and so on.
In practice, I think:
No one (not even market socialists) supports trying to remove all distortions from market economies.
Very few people (essentially just anarcho-capitalist ideologues) support leaving every distortion in place.
So we have a spectrum of actions on market imperfections, from no action to maximal action, with almost everyone falling somewhere in the middle. This spectrum is correlated with, but not identical to, the left-right spectrum. To see that they are not the same, consider this: someone who supported massive redistribution might register as fairly "left-wing" despite supporting relatively little market correction in the sense above. Conversely, someone who supported quite a bit of market correction might support fairly little redistribution.
So to state our thesis more exactly:
Good-as-a-good-human intelligence, available for a negligible fraction of the current cost of intelligence, almost certainly increases the optimal level of market correction substantially.
The reasons are as follows:
Quantifying the costs and benefits of externalities is easier with unlimited white-collar labor, enabling action through regulation or Pigouvian taxes (see the toy sketch after this list).
Quantifying the costs and benefits of regulatory actions is easier
Monitoring for monopolies and monopoly power is easier.
Regulatory compliance is easier: companies can access the legal expertise needed to comply with rules much more easily.
Regulatory review and maintenance are much cheaper. Presumably, humans will have to review proposed changes, but computers can play a role in drafting often enormously detailed rules.
Low-level adjudication could be much cheaper. Although I suspect people will be very reluctant to automate even low-level bureaucratic decisions, computers can certainly help.
Both low-level decisions and regulatory design can be much more impartial.
Relatedly, AI could enable much more specific and detailed contracts between companies, as well as (possibly) more transparent and cheaper contract arbitration.
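To make the first of these points concrete, here is a toy sketch in Python of the kind of calculation that becomes cheap when analytical labor is nearly free. Every number and functional form below is invented for illustration; the structure is just the standard textbook one: estimate the marginal external damage, set a tax equal to it at the social optimum, and the firm's privately optimal output coincides with the socially optimal output.

```python
# Toy Pigouvian-tax calculation with invented quadratic functions.
# Private benefit of producing q units:  B(q) = 10q - 0.5q^2, so B'(q) = 10 - q
# External damage of producing q units:  D(q) = 0.25q^2,      so D'(q) = 0.5q
# Left alone, the firm produces where B'(q) = 0, i.e. q = 10.
# The social optimum is where B'(q) = D'(q), i.e. 10 - q = 0.5q.

def marginal_benefit(q: float) -> float:
    return 10 - q

def marginal_damage(q: float) -> float:
    return 0.5 * q

# Solve B'(q) = D'(q) for this example: 10 - q = 0.5q  ->  q* = 10 / 1.5
q_star = 10 / 1.5

# The Pigouvian prescription: set the per-unit tax to marginal damage at q*.
tax = marginal_damage(q_star)

# Facing the tax, the firm produces where B'(q) = tax, i.e. q = 10 - tax.
q_with_tax = 10 - tax
assert abs(marginal_benefit(q_with_tax) - tax) < 1e-9  # firm's first-order condition holds

print(f"social optimum:           q* = {q_star:.2f}")
print(f"Pigouvian tax:     t = D'(q*) = {tax:.2f}")
print(f"firm's output under the tax:   {q_with_tax:.2f}")  # equals q*
```

The arithmetic is trivial; the expensive part in reality is estimating the damage function at all, which is exactly the kind of white-collar analysis that cheap human-level AI would supply.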
I note that, in theory, Coasean bargaining could be used in place of Pigouvian taxes, regulation, etc., to deal with externalities. In practice, even if the necessary assignments of property rights are made, I find it extremely unlikely that the transaction costs of these sometimes billion-party negotiations would fall so low as to be viable in the general case, especially with merely human-level intelligence.
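A back-of-the-envelope way to see the scale problem: if bargaining had to happen bilaterally, the number of negotiations among $n$ affected parties would grow as

$$\binom{n}{2} = \frac{n(n-1)}{2}, \qquad \binom{10^9}{2} \approx 5 \times 10^{17}.$$

Real-world bargaining would of course be aggregated through intermediaries rather than run pairwise, so treat this purely as an illustration of how fast coordination costs blow up relative to a centrally set tax.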