Expectations
Some people think that we’re on the cusp of a singularity. Others think machine learning is about to hit fundamental limits. Given the stakes, it would be incredibly stupid to bet everything on either of those propositions being true. We need to prepare for both scenarios.
Some people think that the main concern around AI is the possibility of human extinction or subjugation. Others think this is an absurd science-fiction scenario, and that the real concerns are job loss and sexist/racist bias. There is zero reason whatsoever that these two views need be in conflict. Research on both problems, as well as regulatory initiatives relating to both, likely shares many steps and sub-goals.
You can spend a bit of time thinking about this. You don’t need to jump to expressing an opinion one way or the other right now. Don’t lock yourself into supporting a position that may be wrong; this topic really matters. We can’t just experiment politically and muddle around: getting this right on the first pass could be absolutely vital.
Opposing this or that position on AI safety because some bad people support it is a terrible idea. There are bad people on every side of the debate. For example, Musk is a big fan of AI safety work and regulation in the area, while Peter Thiel loathes it.
The only thing stupider than thinking something will happen because it is depicted in science fiction is thinking something will not happen because it is depicted in science fiction.
Regulation
We need a lot more regulation, but jumping to specific regulatory demands on the basis of gut feelings would be silly. We need to think carefully about strategy: are we trying to slow down the advance of AI or to control it? Certainly a temporary slowdown is well worth considering, and on the principle of keeping options open, I endorse it.
From the point of view of the left, we need to think carefully about keeping AI engineers, who are generally to the left of the population as a whole, on side. Tech workers cannot be equated with tech CEOs; they are very different in every respect.
Technological possibilities with the capacity to shake the world’s roots should not be mandatory. We shouldn’t have to do things as a society, national or global, just because they are possible. If we do choose to do them, it should be at a time of our choosing.
Companies, with their direct economic interests, should not control the discussion on regulation. Apparent moves towards self-regulation are hard to assess and should not sway the debate.
At the moment we are not heading toward sensible regulation. As u/socschamp put it in relation to the company falsely known as “OpenAI”:
“AI alignment is a serious issue that we definitely have not solved. It’s a huge field with a dizzying array of ideas, beliefs, and approaches. We're talking about trying to capture the interests and goals of all humanity, after all. In this space, the one approach that is horrifying (and the one that OpenAI was LITERALLY created to prevent) is a singular or oligarchy of for-profit corporations making this decision for us. This is exactly what OpenAI plans to do.”
Maybe you object to my criticism of OpenAI. Suppose you buy the argument made by, for example, Eliezer Yudkowsky: that secrecy is essential to AI safety because otherwise people might just build rogue AIs in their backyards. Fine, that is probably true and I have no objection, but oversight, secrecy, control and access should not be set at the level of the individual corporation. Secrecy by individual corporations might actually slow down government involvement by concealing how far along this all is.
Equality and just transitions
If AI is going to cost a lot of jobs, we need a just transition. There is no need for anyone to be poor simply because productivity has increased. Poverty in the midst of increased output is an avoidable political failure.
Claims that we needn’t worry about technological unemployment because technology has always created more jobs than it destroyed in the past A) ignore the sheer ubiquity of the human capacities AI could substitute for, and B) miss the most important point: it is cold comfort that new jobs will be created, possibly years after you’ve lost yours.
To the extent AI is deployed, we should demand that it decreases, not increases, economic inequality. Workers made almost all the training data (who do you think wrote the internet, in the main?). Workers made the models; insofar as possible, capitalists shouldn’t get the profits. More directly, we will not accept increased income, wealth or consumption inequality as a result of artificial intelligence, or for that matter increased political inequality, even if human labor is made obsolete by AI. In fact, we should use artificial intelligence as an occasion to reduce income inequality, to zero if possible. Many of the traditional arguments in defense of income inequality (e.g. the need to create incentives to work) no longer apply in a world of AGI. We should make these demands on the basis of our inherent dignity, but also, again, on the basis that it was workers who made this possible. Why should workers “stand outcast and starving midst the wonders they have made”?
Let me reiterate: there is a coherence, and potential mutual support, between demands for ethical AI that doesn’t increase inequality and demands for safe AI that doesn’t kill us all. This is possibly true in the technical development sphere and definitely true in the regulatory sphere.
The singularity and our values
While many readers will be skeptical, accept, purely for the sake of argument, the possibility that we are on the cusp of a singularity. What would follow in terms of political practice? I don’t think we should sell our souls.
While complexities arise and temporary sacrifices can be necessary, we should reject the idea that injustice in the present is simply the price we must pay for a glorious human future. It is quite possible that AI could ‘lock in’ our values, e.g. by empowering surveillance, or through a singularity creating a political singleton. The kinds of values we insist on and fight for now therefore matter. We can’t just, for example, hand over power to some megacorp and hope they shepherd us through the singularity, because the kind of world they create will almost certainly not reflect our values. It may be better than nothing, but the loss of value inherent in the world they make could be massive, perhaps much more than half the possible value of “industrializing the lightcone”.
Forgive me, for I'm not a wordcel. I have a simple question:
If we do, or do not, choose to impose this proposed moratorium on the development of next-generation LLM AIs, then when the dust settles and we use them (as some reasonably hope) to cure most diseases and extend our lifespans, who will be blamed for the months, years and decades of lag, and the corresponding tens of millions of unnecessary human deaths?
Thank you in advance.
Imagine a convocation of gorillas asked to design their descendant, a super-gorilla: heavier, stronger, bigger fangs... not a human. Likewise, the step after us is... surprising.