
>AI might enable a kind of unassailable authoritarianism, a potentially eternal suppression of the people by elites

This last point, I think, usefully functions as a bridge between the very social-human concerns of the left and the very apocalyptic concerns of the AI safetyists.

This point is flawed, insofar as safety-from-AI and safety-from-other-humans-monopolizing-AI-against-you are both x-risks whose solutions are mutually contradictory. Safety from AI requires extensively restricting access to the AI and the commands given to it, to avoid anything it could possibly misinterpret, while safety from an AI monopoly requires giving everyone access to AI so that nobody can monopolize it.

author

Even if the technical solutions for both problems are difficult to achieve simultaneously (not entirely clear to me, for reasons I might write about later), the social movement that solved both problems might still appeal to the same people and form a sociological unity.


I encourage you to turn your predictions into prediction markets!


I want to believe a compact will be formed while it still has a chance to matter, but I am not optimistic. You can maybe get significant agreement on complaint B, but B just doesn't do enough work by itself to provoke an appropriately strong response: it's too nebulous to pin down, and on the surface it looks much like other social problems we're already trying to deal with. That means we need the left to adopt either A or C, and to believe either of those will manifest any time soon you first need to believe that recent AI progress has been substantial and will continue to be. The left has a problem with this, or rather two problems. We have a sort of nothing-new-under-the-sun disease, a desire to preserve the categories that got us some big wins last century, and a standard narrative around AI has already mostly materialized. So not only must the narrative be rewritten, which is hard in general, but we must also overcome our group inclinations to get the story right this time.


I agree this (if true) is bad. In safety particularly, you want consensus. Ideally you’d have a consensus around notkilleveryoneism and we could have furious competition around implementation.

The conservative UK government, and I say this as someone loath to complete a sentence starting that way with anything positive, seems to be very good on notkilleveryoneism. I feel like that cuts against your predictions, but we'll see.


Just two days ago (November 22), Robert Reich posted an article, "What's the real Frankenstein monster of AI?" I responded, and I think what I wrote is pertinent to your post, so I am re-posting it with some modifications. It was directed at two points he brought up: that AI might eliminate nearly all jobs, and that it could lead to autonomous warfare. My response:

As far as the latter point is concerned, autonomous warfare seems a "clear and present danger," and public efforts should unite to declare it unacceptable. Because it lies in the military domain, I expect there would be resistance to allowing it within the range of effective public input. Nevertheless, public pushback is essential.

The former point, I think, is different. It may be inevitable and, if handled correctly, not necessarily a bad thing. Economic growth has always been limited demographically by the number of qualified workers available to do the necessary jobs. AI plus Robotics (AI+R) has a potentially paradoxical effect here: economics is a cycle with two sides, supply and demand. AI+R can blow away the demographic limit on supply (via robots building robots) while simultaneously destroying demand by destroying jobs, the traditional source of the income on which personal demand is based.

It seems straightforward that a Universal Basic Income (UBI) will be essential for an AI+R economy to work, and the source of revenue to fund the UBI seems obvious: a tax on the commercial employment of AI+R in lieu of the wages it replaces. In time (and this could proceed far in a decade or so), the ratio of AI+R systems to workers replaced could be far greater than one, so the revenue source would be an abundant one. For the economic cycle to function, the "Basic" in UBI cannot imply "meager": a meager UBI would mean weak demand, and a growing demand is needed to match the growing supply.

This revenue source could also fund public demand: physical infrastructure (roads, bridges, power, communications, public structures, etc.); social infrastructure (education at all levels, cultural activities, AI+R to support medical care, day care for children and for seniors, etc.); and ecological work (completing the transformation from fossil fuels to renewables; repairing the atmosphere, the oceans and waterways, and the soil; and putting an end to the Anthropocene Extinction Event; see the Food Disruption described by the non-profit think tank RethinkX, which would radically decrease the amount of land devoted to food production and which will be greatly facilitated by the advance of AI). Keep in mind the possible variety of robots: as large as building cranes, as small as molecular robots making repairs in the body. The AI+R transition will be global, so the "Universal" in UBI must mean "Global". Diplomacy to fund a Marshall Plan for the developing world would be appropriate.
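To make the revenue arithmetic concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it is a hypothetical assumption chosen purely for illustration (the displacement count, systems-per-worker ratio, wage, tax rate, and population are not estimates from anyone, including RethinkX):

```python
# Toy back-of-envelope model of a UBI funded by a tax on AI+R deployments,
# taxing each commercially deployed system at a fraction of the wage-equivalent
# of the work it performs. ALL numbers below are hypothetical assumptions.

workers_replaced = 50_000_000   # jobs displaced by AI+R (assumed)
systems_per_worker = 3          # deployed AI+R systems per worker replaced (assumed ratio > 1)
avg_wage_equivalent = 45_000    # annual wage-equivalent per system, USD (assumed)
tax_rate = 0.40                 # tax on AI+R employment in lieu of wages (assumed)
population = 330_000_000        # people receiving the UBI (assumed)

num_systems = workers_replaced * systems_per_worker
annual_revenue = num_systems * avg_wage_equivalent * tax_rate
ubi_per_person = annual_revenue / population

print(f"Annual AI+R tax revenue: ${annual_revenue / 1e12:.2f} trillion")
print(f"UBI per person per year: ${ubi_per_person:,.0f}")
```

Plugging in different assumptions shows how much the outcome turns on the systems-to-workers ratio and the tax rate: whether the "Basic" ends up meager or ample is a policy choice, not a fixed consequence of the technology.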

Adam Dorr, Director of Research at RethinkX, draws the distinction between toil and work. Toil is what humankind has always endured to "make a living". Work can just mean activity and, ideally, that activity should be personally fulfilling. Education will need to be radically revamped to assist each individual student, of any age, to follow and develop his or her personal interests and talents. Teachers ("working" avocationally because it’s what fulfills their lives) assisted by AI+R should enjoy the transition.

Economics has been called the dismal science because traditionally it has been about scarcity and want. An economics of abundance could bring about a deep change in human psychology. A key component of politics has always been the effort of the wealthy to structure government to limit taxation of their wealth to support the welfare of the community. In an economy of abundance this would be pointless. Steven Pinker's book, The Better Angels of Our Nature, argues that, despite the daily news, humankind has grown less violent and more communal over the ages, and particularly since the Industrial Revolution radically increased the human living standard. AI+R could complete the transition begun, however imperfectly, by the Industrial Revolution.


I'm curious if you have any predictions about open-source LLMs, like LLaMA. I've noticed an outsize demand for LLaMA developers on Upwork.


Safe predictions:

1) open-source will continue to get better

2) open-source will continue to be dumber than proprietary at baseline

3) proprietary will continue to censor in various ways, some of which are necessary and others just annoying

I think an implication here is that many actors will have good reason to use both proprietary and OS models. Censorship means the proprietary models get dumber in random areas even if your goal doesn't include racism, spam, or terrorism; by the same token, even if you don't have those goals, the uncensored OS output is going to include some percentage of them. So with less human oversight (which, to be fair, means more spam anyway, so I don't necessarily celebrate this), you might have systems that look at a mix of proprietary and OS output and select the best from it, as in the sketch below.
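As a rough sketch of what that selection step might look like (the two query functions and the length-based scorer below are hypothetical placeholders, not real APIs; a real version would call an actual proprietary endpoint, a locally hosted OS model, and a task-specific grader):

```python
# Minimal sketch: gather one candidate completion per source (a mix of
# proprietary and open-source models), then keep the highest-scoring one.
# query_proprietary, query_open_source, and score are stand-ins to replace.

from typing import Callable

def query_proprietary(prompt: str) -> str:
    # Stand-in: replace with a call to your proprietary model's API.
    return f"[proprietary draft] {prompt}"

def query_open_source(prompt: str) -> str:
    # Stand-in: replace with a call to a locally hosted open-source model.
    return f"[open-source draft] {prompt}"

def score(prompt: str, completion: str) -> float:
    # Placeholder heuristic; in practice use a grader model or task checks.
    return float(len(completion))

def best_of_mixed(prompt: str, sources: list[Callable[[str], str]]) -> str:
    """Collect one candidate per source and return the highest-scoring one."""
    candidates = [src(prompt) for src in sources]
    return max(candidates, key=lambda c: score(prompt, c))

if __name__ == "__main__":
    print(best_of_mixed("Summarize this contract.",
                        [query_proprietary, query_open_source]))
```

The design point is that scoring is separate from generation, so candidates from censored and uncensored sources compete on the same quality check rather than being trusted by origin.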

OS models should, of course, also have utility for security reasons versus SaaS offerings. This will be particularly important to governments, but not only to them.
