So here's one more comment in evidence against the claim that no one reads these posts. Thank you (and I mean this genuinely) for an excellent and thoroughly depressing read. It really did hit like a ton of bricks, but that's a good thing, for all the little-to-no difference it will make to things overall.
You are absolutely right that a lot of the attention, particularly in the rationalist/LessWrong community, has been focused on the technological path to superhuman AI and alignment issues (I arrived here via a link from one of those posts), and the attention of us readers/lurkers along with it. That is in a way justified from an x-risk POV, but it does make us somewhat forget that things will come out of the AI sphere that we have to deal with as a society, things far more likely to go down during our lifetimes, some of which are, as you say, ready to go today.
I wish I had something more poignant or relevant, or at least positive, to add. Maybe I will once I've digested things more. I've been visiting my parents in Romania over the past two weeks, and after reading this article I took a mental walk back through what I've seen and experienced. All I can think is: what chance do we stand of rationally making the right choices and preparations as a society when an overwhelming number of people here are still selling their votes for a bag of cooking oil and a few kilos of flour, and can't really wrestle with any concept more complicated than making ends meet, due to the compounding effects of lack of time and lack of education? How do you even begin to explain to people that in five years' time the entire world could be a radically different place because of the speed and nature of technological change, and more to the point, how would you get them to care about that versus what's hurting them today?
Your suggestions for what we can do are very good, and I'm ready to support them wholeheartedly, but I really can't shake the feeling that it will, once again, be too little, too slow, too late.
Excellent, start to finish: thanks.
Three thoughts:
1. From the perspective of the majority shareholder class, we are *already* in a world where the gross majority (say, 80%?) of the human population is *un*necessary. Serviced by 15% of the population and using only current technology, the top 5% of wealth-holders globally could live functionally almost identical lives to those they presently live--including internecine competition--if the other 80% of us were gone. This is a unique fact of global human history. That bringing such a state of affairs about would "solve" the climate crisis, and that questions of competitive advantage rather than moral tissue stand in the way of collusion toward it only add to the threat this reality poses to the rest of us--now, and in your 3-5 yr near-term.
2. Your definitions of socialism, both vector and minimalist, have also to include some sense of collective control over what *counts* as social welfare. Else technocratic managerialist liberals--the very people most likely to usher in a durable authoritarianism--could with reason claim to be "socialists."
3. The production of bot-free-from-the-jump social networks is technologically feasible (I nearly registered a few nobotly. domains just now, but decided I couldn't be assed) through a combination of live, real-time-only membership uptake and periodic biometric check-ins (++ as workarounds evolved, obv). In the nearish-term future you describe, I suspect many people would be willing to trade (more) biometric data in exchange for the plausible assurance that their digital social network includes only human people as member-entities.
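The check-in scheme sketched in point 3 could be as simple as treating "verified human" as a status that decays. A purely illustrative sketch, assuming a hypothetical `Member` record and a 30-day re-verification window (both made up for illustration; a real network would tune the interval against workaround rates):

```python
import time

# Hypothetical policy: a live biometric check-in is trusted for 30 days.
CHECK_IN_INTERVAL = 30 * 24 * 3600

class Member:
    def __init__(self, user_id, verified_at):
        self.user_id = user_id
        self.verified_at = verified_at  # timestamp of last live biometric check-in

def is_in_good_standing(member, now=None):
    """A member counts as verifiably human only while their latest
    live check-in is fresh; stale accounts are suspended until they
    re-verify, so a farmed account can't stay active indefinitely."""
    now = time.time() if now is None else now
    return (now - member.verified_at) <= CHECK_IN_INTERVAL
```

The design choice here is the interesting part: the network never asserts "this account is human," only "this account passed a live check recently," which is the weaker but actually enforceable claim.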
Thanks again for writing--an excellent read.
> Something that I think most political philosophy misses about our political feelings is that they are mostly vectors rather than points in the space of possible political philosophies. What matters is not so much my ultimate preferred society as the direction I’m inclined to want to move things.
❤️
Additionally, expressing our political feelings should involve periodically re-orienting the vector (not necessarily pointing at the same destination; along the way we might learn more about what we want and what is possible).
As to the larger point of the post (its predictions and suggestions), thanks for the Murphy & Nagel link, I'll take a look.
Was this written by a bot?
Yes indeed... how am I to know whether Horny Philosophy Bear is an individual human, a collective political party, or an exercise in the current state of the art of AI bot composition? Then too, what should or should not concern me about the answer?
A live Issue in dialectic give & take over core reasons or lack thereof for contributing to an N-dimensional vector space expansion of issues & answers from a Hegelian Thesis Antithesis Synthesis iteration perspective... or So it appears to me as a human 80 yr old failed PhD candidate Retrograde Barstool Psychedelic Communist with a warped & wicked 🖤 aspiration to mind rot comedy performance...
Hopefully more people read this essay and others like it that consider the coming impacts of AI/AGI. It would be a shame to sleepwalk into a world we all could've had some say in but just didn't notice until it was too late. It is amazing to think how much of our society is defined by things that are ultimately arbitrary but still deemed important due to centuries or millennia of cultural influence, even basic things like why we have 7 days in a week, work 8 hours per day, whether or not we celebrate the change of seasons, etc. Had things gone differently anywhere along the way, what we take as normal here and now might not have been. Is anyone anywhere ready to sound the bell on who gets to decide what AI/AGI can do and what role it serves in our society? What about people who don't have access to the AI? Are they stuck living a diminished life? Will there be an opt-in/opt-out system? How will things like exams and competitions work in a world where people have what we might currently think of as mental steroids that can't be turned off?
Do you foresee a way to use AI to vet online interactions and weed out the bots from the real humans? Current bot farms tweet the exact same thing over hundreds of accounts, so I imagine that people using AI propaganda bots will also take a quantity over quality approach. That's not to say they would be easily recognizable as bots, but when there are many of them, patterns should become recognizable and give the AIs away. Could that save us from an internet dominated by bots?
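The pattern the comment describes, many accounts posting the same thing, is detectable with nothing fancier than grouping accounts by normalized message text. A minimal sketch (function and parameter names are my own invention, and real coordinated campaigns would need fuzzier similarity matching than this):

```python
from collections import defaultdict
import re

def normalize(text):
    # Lowercase and strip punctuation so trivial edits
    # ("Great post!!" vs "great post") map to the same key.
    return re.sub(r"[^a-z0-9 ]+", "", text.lower()).strip()

def flag_coordinated_accounts(posts, min_accounts=3):
    """posts: iterable of (account_id, text) pairs.
    Returns the set of accounts that posted a message shared,
    after normalization, by at least `min_accounts` distinct accounts."""
    accounts_by_message = defaultdict(set)
    for account, text in posts:
        accounts_by_message[normalize(text)].add(account)
    flagged = set()
    for accounts in accounts_by_message.values():
        if len(accounts) >= min_accounts:
            flagged |= accounts
    return flagged
```

This exact-duplicate version is the easy case the comment mentions; the arms-race worry in the reply below is that generative bots can cheaply paraphrase, defeating any fixed normalization.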
I think it'll help, for a while. However:
1. In this arms race, it's going to get harder and harder to detect bots over time.
2. I think that a lot of platforms will want content-generating bots. Remember, the content they generate will probably be of high quality, maybe even, in some sense, superhuman quality. There's every chance the platform owners will be the ones running the bots.
3. Even if the bots are kept off, say, Facebook and Twitter, content will be produced elsewhere by bots, and humans will see it and bring that content to Facebook and Twitter.
I agree.