I’m writing this piece not out of inspiration but out of obligation. As a result, it may not be good, but I feel it would be wrong of me to say nothing.
I don’t know what’s going to happen with AI, but we might posit the following stylized facts:
AI is currently improving rapidly, and has been for over a decade now. LLMs have been improving rapidly since before 2018. Many other strategies and model classes are in play.
No one has been able to convincingly articulate a precise, qualitatively distinct class of questions that current AI cannot answer in text form. The problems it cannot solve are usually just more difficult than the problems it can solve, not different in kind. This is worrying, because if we could characterize the weaknesses of current models in terms of some fundamental class of problems nigh insoluble for the method, we would have reason to think the method is bounded.
Many recent predictions that AI has hit, or will soon hit, a wall have been disproved.
AI is rapidly gaining the ability to control a computer. Agent benchmarks are becoming saturated.
All of AI’s existing problems, from context-length constraints to hallucinations, are coming up in fewer and fewer circumstances. It is not so much that they are being solved as that they are shrinking and being managed.
Because AI is improving so rapidly, there are doubtless many “20-dollar bills” still on the ground: strategies yet to be integrated for improvement. A lot of very promising strategies (e.g. reasoning models, distillation, the creation of higher-quality data sets, and tool use) still have a lot of powder in the chamber. Even scaling, which many have prematurely declared “over”, isn’t yet played out. There are probably still a lot of small changes that can be made leading to disproportionate gains.
Total research funding has reached staggering levels compared to the recent past.
Even if the fundamental technology froze and did not advance from here on out, the gradual rollout of AI would likely cost many writers, artists, musicians, teachers, and others their jobs. Even if AI is fundamentally worse at these things, that will not matter for many purposes, given how cheap it is.
Here is a range of futures I would find unsurprising should any of them eventuate:
AI massively devalues a bunch of creative and intellectual industries but doesn’t do much else.
AI kills us all.
Labor is massively devalued by AI, leading to dystopia and rule by the rich.
Labor is massively devalued by AI. Accompanying socialization of the means of production leads to a utopia.
All these outcomes have something in common: each marks the end of the necessity of certain kinds of human output, creative certainly, and perhaps even political. In the strongest case, all our choices may cease to have much power to shape the world as a whole.
So I am writing to tell you to pursue your dreams now: heroic, artistic, political, romantic, ethical, and so on. Fight for a better world and fight for the dignity your life deserves.
Knowledge, intelligence, and wealth are good. Vastly increasing them is wonderful. Open-source models prevent oligarchic dystopias. Collaborating with AI is likely to make anyone more productive, and their work more valuable. Consider https://supermemo.guru/wiki/Goodness_of_knowledge