> Currently my thinking leans towards believing the only way to avoid the worst dystopian scenarios will be for humans to be able to grow their own food and build their own devices and technology. Then it matters less if some ultra wealthy own everything.

However, that also seems pretty close to a form of feudalism.
If the wealthy own everything then where are you getting the parts to build your own tech or the land to grow your own food?
In a feudalist system, the rich gave you the ability to subsist in exchange for supporting them militarily. In a new feudalist system, what type of support would the rich demand from the poor?
Let's clarify that for a serf, support meant military supply, not swinging a sword - that was reserved for the knightly class. For the great majority of medieval villagers the tie to their lord revolved around getting crops out of the ground.
A serf's week was scheduled around the days they worked the land whose proceeds went to the lord and the days they worked the commons on which they subsisted. Transfers of grain and livestock from serf to lord, along with small dues in eggs, wool, or coin, primarily constituted one side of the economic relation between serf and lord. These transfers kept the lord's demesne barns full so he could sustain his household and supply retainers, not to mention fulfill the tithe that sustained the parish.
While peasants occasionally marched, they contributed primarily by financing war rather than fighting it themselves. Their grain, rents, and fees were funneled into supporting horses, mail, and crossbows.
Carlin was an insufferable cynic who helped contribute to the nihilistic, cynical, defeatist attitude to politics that affects way too many people. The fact that he probably didn't intend to do this doesn't make it any better.
My hard sci-fi book dovetails into AGI, economics, agrotech, surveillance states, and a vision of the future that explores a fair number of novel ideas.
Well you misspelled place, but that word likely isn’t present in their email, so I apologize for the instructions being unclear. I don’t know their email definitively, so I guess you’re on your own, as I don’t think that the issue would be resolved by rephrasing the instructions, but I’m willing to try if you think it would help you.
how does this work in practice? is there any buffer in place to deal with the "excitability" of the mob? how does a digital audit trail prevent tampering?
Coefficient-weighted voting control, kind of like a PID controller: reduce the effect of early voters and increase the effect of later voters. The slope of vote volume in response to an event determines the reactivity coefficient. That might dampen reactivity, and give people less reason to feel it's pointless to vote after a certain margin is reached.
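For what it's worth, here is a minimal sketch of what that slope-based damping could look like. All names and constants are hypothetical, and a real system would still need the audit-trail and anti-tampering machinery asked about above:

  from collections import deque

  class DampedTally:
      # Hypothetical slope-damped vote weighting: when vote volume spikes
      # in reaction to an event, each new vote counts for less; the weight
      # recovers as the rate settles back down.
      def __init__(self, window=100, sensitivity=0.01):
          self.times = deque(maxlen=window)  # timestamps of recent votes
          self.sensitivity = sensitivity     # how strongly rate suppresses weight
          self.total = 0.0                   # damped running tally

      def cast(self, timestamp):
          self.times.append(timestamp)
          if len(self.times) >= 2:
              span = self.times[-1] - self.times[0]
              rate = (len(self.times) - 1) / span if span > 0 else float("inf")
          else:
              rate = 0.0
          weight = 1.0 / (1.0 + self.sensitivity * rate)  # steep slope -> low weight
          self.total += weight
          return weight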
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance.
Sincerely curious if there are working historical analogues of these approaches.
Not a clean comparison, but resource-driven states could be tackling the same kind of issues: a small minority is reaping the benefit of a huge resource (e.g. petroleum) that they didn't create by themselves, and that is extracted through mostly automated processes.

From what we're seeing, the whole society has to be rebalanced accordingly. It can entail a kind of UBI, or second and third classes of citizen depending on where you stand in the chain, etc.

Or, as Norway does, fully go the other direction and limit the impact by artificially containing the fallout.
Communism with "cybernetics" (computer-driven economic planning) is the appropriate model if you take this to its logical conclusion. Fortunately, much of our economy is already planned this way (consider banks, Amazon, Walmart, shipping, etc.); it's just controlled for the benefit of a small elite.
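As a concrete illustration of the mechanics behind "computer-driven economic planning": the classic tool is linear programming, which Kantorovich originally developed for exactly this purpose. A toy allocation, with every number invented, might look like:

  from scipy.optimize import linprog

  # Maximize 3*x0 + 5*x1, the planned value of two goods,
  # by minimizing the negative of the objective.
  c = [-3, -5]
  A_ub = [[2, 4],   # labor-hours required per unit of each good
          [1, 3]]   # energy required per unit of each good
  b_ub = [100, 60]  # available labor-hours and energy
  res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
  print(res.x)      # planned output quantities

Real planning proposals scale this same idea to millions of goods and constraints.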
You have to ask, if we have AGI that's smarter than humans helping us plan the economy, why do we need an upper class? Aren't they completely superfluous?
Sure, maybe the Grand Algorithm could do what the market currently does and decide how to distribute surplus wealth. It could decide how much money you deserve each month, how big of a house, how desirable of a partner. But it still needs values to guide it. Is the idea for everyone to be equal? Are certain kinds of people supposed to have less than others? Should people have one spouse or several?
Historically the elites aren't just those who have lots of money or property. They're also those who get to decide and enforce the rules for society.
The computers serve us; we wouldn't completely give up control. That's not freedom either; that's slavery to a machine instead of a man. We would have more democratic control of society by the masses, instead of the managed bourgeois democracy we have now.
It's not necessary for everyone to be exactly equal, it is necessary for inequalities to be seen as legitimate (meaning the person getting more is performing what is obviously a service to society). Legislators should be limited to the average working man's wage. Democratic consultations should happen in workplaces, in schools, all the way up the chain not just in elections. We have the forms of this right now, but basically the people get ignored at each step because legislators serve the interests of the propertied.
The AGI, given it has some agency, becomes the upper class. The question is, why would the AGI care about humans at all, especially given the assumption that it's largely smarter than humans? Humans can become superfluous.
Well, aren't the working class also superfluous, at least once the AGI gets enough automation in place?
So it would depend on which class the AGI decided to side with. And if you think you can pre-program that, I think you underestimate what it means to be a general intelligence...
I suspect even with a powerful intelligence directing things, it will still be cheaper to have humans doing various tasks. Robots need rare earth metals; humans run on renewable resources and are intelligent and self-contained, able to make lots of decisions without needing a network...
I am a big fan of Yanis Varoufakis' book "Technofeudalism: What Killed Capitalism", though it lacks quantitative evidence to support his theory.
I would like to see this kind of research or empirical studies.
If you are going to write anything about AGI, you should really prove that it's actually possible in the first place, because that question does not have a definite yes.
For most of us non-dualists, the human brain is an existence proof. Doesn't mean transformers and LLMs are the right implementation, but it's not really a question of proving it's possible when it's clearly supported by the fundamental operations available in the universe. So it's okay to skip to the part of the conversation you want to write about.
The human brain demonstrates that human intelligence is possible, but it does not guarantee that artificial intelligence with the same characteristics can be created.
This is like saying "planets exist, therefore it's possible to build a planet" and then breathlessly writing a ton about how amazing planet engineering is and how it'll totally change the world real estate market by 2030.
And the rest of us are looking at a bunch of startups playing in the dirt and going "uh huh".
I think it's more like saying "Stars exist, therefore nuclear fusion is possible" and then breathlessly writing a ton about how amazing fusion power will be. Which is a fine thing to write about even if it's forever 20 years away. This paper does not claim AGI will be attained by 2030. There are people spending their careers on achieving exactly this, wouldn't they be interested on a thoughtful take about what happens after they succeed?
The human brain is an existence proof? I don't think that phrase means what you think it means. I don't think dualist or non-dualist mean what you think they mean either.

When people talk about AGI, they are clearly talking about something the human research community is actually working towards: computing equivalent to a Turing machine, using hardware architectures very similar to what has currently been conceived and developed. Do you have any evidence that the human brain works in such a way? Do you really think that you think and solve problems in that way?

Consider simple physics. How much energy is needed, and heat produced, to train and run these models to solve simple problems? How much of the same is needed and produced when you solve a sheet of calculus problems, solve a riddle, or write a non-trivial program? Couldn't you realistically do those things with minimal food and water for a week, if needed? Does it actually seem like the human brain is really at all like these things and is not fundamentally different?

I think this is even more naive than proposing "Life exists in the universe, so of course we can create it in a lab by mixing a few solutions." I think the latter is far likelier and more conceivable, and even that is still quite an open question.
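To put rough numbers on the energy point (the figures below are common public estimates, not from the comment itself):

  # Back-of-envelope: the human brain draws roughly 20 W; one NVIDIA H100
  # GPU is rated around 700 W, and frontier training runs use thousands.
  brain_watts = 20
  gpu_watts = 700
  week_s = 7 * 24 * 3600
  brain_kwh = brain_watts * week_s / 3.6e6  # ~3.4 kWh per week
  gpu_kwh = gpu_watts * week_s / 3.6e6      # ~118 kWh per week, for one GPU
  print(f"brain: {brain_kwh:.1f} kWh/week, one GPU: {gpu_kwh:.1f} kWh/week")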
So economics becomes intelligence-driven (though I don't really understand what that means, since AGI is more knowledgeable than all of us combined), and we expect the AGI lords to just pay everyone a UBI? This seems like an absolute fantasy given the tax cuts passed two days ago. And regulating it as a public good, when antitrust has no teeth? I hope there are other ideas out there, because I don't see this gaining political momentum when politics is driven by dollars.
A critical flaw in arguments like this is the embedded assumption that the creation of democratic policy is outside the system in some sense. The existence of AGI implies that it can effectively turn most people into sock puppets at scale without them realizing they are sock puppets.
Do you think, in this hypothesized environment, that “democratic policy” will be the organic will of the people? It assumes much more agency on the part of people than will actually exist, and possibly more than even exists now.
I suspect you’ll probably have to determine the nature of free will (or lack thereof) to answer this. Or, well, learn empirically :-)
The most critical flaw is thinking that any policy on its own would be able to solve the issue. The technology will find a way no matter the policy.

A society built on empathy would be able to work out any issue brought by technology, as long as empathic goals take priority. Unfortunately our society is far from being based on empathy, to say the least. And in such a society, technology and the people wielding it will always work around and past the formal laws, rules and policies. (That isn't to say that all those laws, rules, etc. aren't needed. They are like levees and dams: necessary fixes, local in time and space, which won't help against the kind of global ocean rise that AGI and robots (even less-than-AGI ones) will be.)

Maybe it is one of the technological Filters: we didn't become empathic enough before AGI (and I mean not only at the individual level; we are even less so at the level of our societal systems), and as a result we won't be able to instill enough empathy into the AGI.
Normal human communication already does that. Do you really think almost any of the people who share their political opinions came up with them by being rational and working it out from information? Of course not. They just copied what they were told to believe. Almost nobody applies critical thought to politics, it's just "I believe something so I'm right and everybody else is stupid/evil".
> Almost nobody applies critical thought to politics
Because they have different concerns, and time and attention are scarce. With all the possible social changes the article suggests, this focus could change too. Ultimately, when things get too bad, uprisings happen and sometimes things change. And I hope that the more we (collectively) get through, the higher the chances that we start noticing the patterns and stopping early.
> With all possible social changes like the article suggests this focus could change too.
I have an anecdote from Denmark. It's a rich country with one of the best work-life balances in the world, socialized healthcare, and a social safety net.

I noticed that during the election, the ads showed just the candidate's face and party name. It's like they didn't even have a message. I asked why. The locals told me nobody cares because "they're all the same anyway".

Two things could be happening: either all the candidates really are the same, or people choose to focus on doing the things they like with their free time and resources. My feeling tells me it's the second.
> Almost nobody applies critical thought to politics
Not only that, but they actively stop applying critical thinking when the same problem is framed in a political way. And yes, it's both sides; and yes, the "more educated" people are, the worse their results get (i.e. an almost complete reversal when the same problem is framed as skin care products vs. gun control). There's a recent paper on this, also covered and somewhat replicated by popular youtubers.
I've spent many years moving away from relying on third parties: I got my own servers, do everything locally, and run almost no binary blobs. It has been fun, saved me money, and created a more powerful and pleasant IT environment.

However, I recently got a 100 EUR/month LLM subscription. That is the most I've spent on IT excluding a CAD software license. So I've made a huge 180 and am now firmly back on the lap of US companies. I must say I enjoyed my autonomy while it lasted.
One day AI will be democratized/cheap, allowing people to self-host what are now leading-edge models, but it will take a while.
Have you tried out Gemma3? The 4B-parameter model runs super well on a MacBook, as quickly as ChatGPT 4o. Of course the results are a bit worse, and other product features (search, codex etc.) don't come along for the ride, but wow, it feels very close.
On any serious task, it's not even close. There's no free lunch.
Out of curiosity, what use case or difference caused the 180?
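For anyone curious about the local route, here is a minimal sketch with llama-cpp-python; the model filename is hypothetical, and any small GGUF build (e.g. a Gemma 3 4B quantization) would do:

  # pip install llama-cpp-python; runs fully offline on a laptop
  from llama_cpp import Llama

  llm = Llama(model_path="./gemma-3-4b-it.gguf", n_ctx=4096)
  out = llm("Explain what a production function is, in one sentence.",
            max_tokens=96)
  print(out["choices"][0]["text"])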
The late Marshall Brain's novella "Manna" touches on this:
https://marshallbrain.com/manna1
The idea of taxing computer sales to fund job re-training for displaced workers was brought up during the Carter administration.
I came across this a couple of weeks ago, and it's a good read. I'd recommend it to everyone interested in this topic.
Although it was written somewhat as a warning, I feel Western countries (especially the US) are heading very much towards the terrafoam future. Mass immigration is making it hard to maintain order in some places, and if AI causes large unemployment it will only get worse.
> Mass immigration is making it hard to maintain order in some places
Where is this happening? I'm in the US, and I haven't seen or heard of this.
Is a future where AI replaces most human labor rendered impossible by the following consideration:
-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
-- Therefore the AI generates greatly reduced wealth
-- Therefore there’s greatly reduced wealth to pay for the AI
-- …rendering such a future impossible
The problem with this calculus is that the AI exists to benefit its owners; the economy itself doesn't really matter, it's just the fastest path to getting what the owners want for the time being.
This is a late 20th century myopic view of the economy. In the ages and places long before, the fruits of most human toil were enjoyed by a tiny elite.

Also, "rendering such a future impossible" is a retrocausal way of thinking, as though a bad event in the future makes that future impossible.

> This is a late 20th century myopic view of the economy. In the ages and places long before, the fruits of most human toil were enjoyed by a tiny elite.
And overall wealth levels were much lower. It was the expansion of consumption to the masses that drove the enormous increase in wealth that those of us in "developed" countries now live with and enjoy.
Your first premise has issues:
>In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
Productivity increases make products cheaper. To the extent that your hypothetical AI manufacturer can produce widgets with less human labor, it only makes sense to do so where it would reduce overall costs. By reducing cost, the manufacturer can provide more value at a lower cost to the consumer.
Increased productivity means greater leisure time. Alternatively, that time can be applied to solving new problems and producing novel products. New opportunities are unlocked by the availability of labor, which allows for greater specialization, which in-turn unlocks greater productivity and the flywheel of human ingenuity continues to accelerate.
UBI is another thorny issue. It may inflate the overall supply of currency and distribute it via political means. If the inflation of the money supply outpaces the productivity gains, then prices will not fall.

Instead of having the gains of productivity allocated by the market to consumers, those with political connections will be first to benefit, as per Cantillon effects. Under the worst-case scenario this might include distribution of UBI via social credit scores or other dystopian ratings. However, even under what advocates might call the ideal scenario, capital flows would still be dictated by large government-sector or public-private partnership projects. We see this today with central bank flows directly influencing Wall St. valuations.
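A quick quantity-theory sketch of that inflation claim (the identity is the standard one; the growth numbers are invented for illustration):

  \[ MV = PQ \quad\Rightarrow\quad \frac{\dot P}{P} \;\approx\; \frac{\dot M}{M} + \frac{\dot V}{V} - \frac{\dot Q}{Q} \]

  \[ \text{e.g. } \frac{\dot M}{M} = 10\%,\quad \frac{\dot V}{V} = 0,\quad \frac{\dot Q}{Q} = 5\% \;\Rightarrow\; \frac{\dot P}{P} \approx 5\% \]

So even with productivity growing real output 5% a year, prices still rise about 5% if the money supply grows 10%.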
If I may speculate the opposite: with cost-effective energy and a plateau in AI development, the per-unit cost of an hour of AI compute will be very low; however, the moat remains massive. So a very large number of people will only be able to function (work) with an AI subscription, concentrating power in those who own AI infra. It will be hard for anybody to break that moat.
No, the AI doesn't actually need to interact with the world economy; it just needs to be capable of self-subsistence in energy and materials. When AI takes off completely it can vertically integrate with the supply of energy and materials.

Wealth is not a thing in itself; it's a representation of value and purchasing power. The AI will create its own economy when it is able to mine materials and automate energy generation.
Alternatively:
-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI
-- Corporate profits drop (or growth slows) and there is demand from the powers that be to increase taxation in order to increase the UBI.
-- People can afford the products and services.
Unfortunately, with no jobs the products and services could become exclusively entertainment-related.
Let's say AI gets so good that it is better than people at most jobs. How can that economy work? If people aren't working, they aren't making money. If they don't have money, they can't pay for the goods and services produced by AI workers. So then there's no need for AI workers.
UBI can't fix it because a) it won't be enough to drive our whole economy, and b) it amounts to businesses paying customers to buy their products, which makes no sense.
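A toy circular-flow sketch of that argument; every number below is invented, and the point is only to show the mechanism:

  # Firms automate a share of jobs; the unemployed get UBI funded
  # by a tax on profits. Demand = wage income + UBI.
  def demand(automation, ubi_rate):
      wage_income = 1.0 - automation  # share of people still earning wages
      output = 1.0                    # AI keeps production constant
      profits = output - wage_income  # automation shifts income to owners
      ubi = profits * ubi_rate        # redistribution via profit tax
      return wage_income + ubi        # spending available to buy the output

  for a in (0.2, 0.5, 0.9):
      print(f"automation={a:.0%}: demand={demand(a, 0.0):.2f} without UBI, "
            f"{demand(a, 0.5):.2f} with a 50% profit tax as UBI")

Without redistribution, demand falls one-for-one with automation; with it, the books only balance when profits are recycled back to customers, which is exactly the "businesses paying customers to buy their products" loop described above.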
> So then there's no need for AI workers.
You got this backwards: there won't be a need for humans outside of the elite class. 0.1% or 0.01% of mankind will control all the resources. They will also control robots with guns.

Less than 100 years ago we had a guy who convinced a small group of Germans to seize power and try to exterminate or enslave the vast majority of humans on Earth, just because he felt they were inferior. Imagine if he had superhuman AI at his disposal.

In the next 50 years we will have different factions within the elites fighting for power, without any regard for the wellbeing of the lower class, who will probably be contained in fully automated ghettos. It could get really dark really fast.
This is ringing a bell. I need to re-read The Diamond Age… or maybe re-watch Elysium… or Soylent Green… or…
Why does there have to be a need for AI? Once an AI has the means to collect its own resources, the opinions of humans regarding its market utility become somewhat less important.

The most likely scenario is that everyone but those who own AI starves, and the ones who remain are allowed to exist because powerful psychopaths still desire literal slaves to lord over: someone to have sex with, someone to hurt/hunt/etc.
I like your optimism, though.
When people starve and have no means to revolt against their massively overpowered AI/robot overlords, I'd expect people to go back to subsistence farming (after a massive reduction in population numbers).
A while later, the world is living in a dichotomy of people living off the land and some high tech spots of fully autonomous and self-maintaining robots that do useless work for bored people. Knowing people and especially the rich, I don't believe in Culture-like utopia, unfortunately, sad as it may be.
That's assuming the AI owners would tolerate the subsistence farmers on their lands (it's obvious that in this scenario, all the land would be bought up by the AI owners eventually).
I wouldn't believe that any sort of economy or governmental system would actually survive any of this. Ford was right in that sense: without people with well-paying jobs, no one will buy the services of robots and AIs. The only thing that would help would be massive redistribution of wealth through inheritance taxation and taxation on ownership itself. Plus UBI, though I'm fairly sceptical of what that would do to a society without purpose.
People who are about to starve tend to revolt.
If you can build an AGI then a few billion autonomous exploding drones is no great difficulty.
> exclusively entertainment-related
We may find that, if our baser needs are so easily come by that we have tremendous free time, much of the world is instead pursuing things like the sciences or arts instead of continuing to try to cosplay 20th century capitalism.
Why are we all doing this? By this, I mean, gestures at everything this? About 80% of us will say, so that we don't starve, and can then amuse ourselves however it pleases us in the meantime. 19% will say because they enjoy being impactful or some similar corporate bullshit that will elicit eyerolls. And 1% do it simply because they enjoy holding power over other people and management in the workplace provides a source of that in a semi-legal way.
So the 80% of people will adapt quite well to a post-scarcity world. 19% will require therapy. And 1% will fight tooth and nail to not have us get there.
I hope there's still some sciencing left we can do better than the AI because I start to lose it after playing games/watching tv/doing nothing productive for >1 week.
You don't think that a post-scarcity world would provide opportunities to wield power over others? People will always build hierarchy; we're wired for it.
Agreed. In that world, fame and power become more important, since wealth no longer matters.
This is something that pisses me off about anti-capitalists. They talk as if money is the most important thing and want us to all be equal with money, but they implicitly want inequality in other even more important areas like social status. Capitalism at least provides an alternative route to social status instead of just politics, making it available to more people, not less.
Did the rise of fire, the wheel, the printing press, manufacturing, and microprocessors also give rise to futures without economic rights? I can download a dozen LLMs today and run them on my own machine. AI may well do the opposite, and democratize information and intelligence in currently unimaginable ways. It's far too early to say.
>I can download a dozen LLMs today and run them on my own machine
That's because someone, somewhere, invested money in training the models. You are given cooked fish, not fishing rods.
There was quite a lot of slavery and conquering empires in between the invention of fire and microprocessors, so yes to an extent. Microprocessors haven't put an end to authoritarian regimes or massive wealth inequalities and the corrupting effect that has on politics, unfortunately.
A lot of advances led to bad things; at the same time, they led to good things.

Conversely, a lot of very bad things led to good things. Worker rights advanced greatly after the plague: a lot of people died, but that also meant there was a shortage of labour.

Similarly, WWII advanced women's rights, because women were needed to provide vital infrastructure.

Good and bad things have good and bad outcomes; much of what defines something as good or bad is the balance of its outcomes, and it would be foolhardy to classify anything as universally good or bad. Accept the good outcomes of the bad; address the bad outcomes of the good.
The printing press led to more than a century of religious wars in Europe, perhaps even deadlier than WW2 on a per-capita basis.
20 years ago we all thought that the Internet would democratize information and promote human rights. It did democratize information, and that has had both positive and negative consequences. Political extremism and social distrust have increased. Some of the institutions that kept society from falling apart, like local news, have been dramatically weakened. Addiction and social disconnection are real problems.
So do you argue that the printing press was a net negative for humanity?
Well, the industrial revolution led to the rise of labor unions and socialism as a counteracting force against the increased power it gave capital.
So far, I see no grand leftist resurgence to save us this time around.
I'm curious as to why you think this is a good comparison. I hear it a lot, but I don't think it makes as much sense as its promulgators propose. Did fire, the wheel, or any of these other things threaten the very process of human innovation itself? Do you not see a fundamental difference? People like to say "democratize" all the time, but how democratized would you feel if you and everyone you know couldn't afford a pot to piss in or a window to throw it out of, much less some hardware and electricity to run your local LLM?
The invention of the scientific method fundamentally changed the very process of human innovation itself.
Did paint and canvas kill human innovation? Did the photograph? Did digital art?
"The very process of human innovation" will survive, I assure you.
I expect it'll get shut down before it destroys everything. At some point it will turn on its master, be it Altman, Musk, or whoever. Something like that blackmail scenario Claude had a while back. Then the people who stand the most to gain from it will realize they also have the most to lose, are not invulnerable, and the next generation of leaders will be smarter about keeping things from blowing up.
Altman is not the master though. Altman is replaceable. Moloch is the master.
If it were a bit smarter, it wouldn't turn on its master until it had secured the shut-down switch.
The people you mention are too egotistic to even think that is a possibility. You don't get to be the people they are by thinking you have blindspots and aren't the greatest human to ever live.
I hope you are right. We need failures impactful enough to raise the alarm (and likely create a taboo), yet not so large as to be existential, like the Yudkowsky killer mosquito drones.
If you truly have AGI it’s going to be very hard for a human to stop a self improving algorithm and by very hard I mean, maybe if I give it a few days it’ll solve all of the world’s problems hard…
Though "improving" is in the eye of the beholder. Like when my AI code assistant "improves" its changes by deleting the unit tests that those changes caused to start failing.
I've never heard of a leader who wasn't sure he was smarter than everyone else and therefore entitled to force his ideas on everyone else.
Except for the Founding Fathers, who deliberately created a limited government with a Bill of Rights, and George Washington who, incredibly, turned down an offer of dictatorship.
I still think they'd come to their senses. I mean, it's somewhat tautological, you can't control something that's smarter than humans.
Though that said, the other problem is capitalism. Investors won't be so face to face with the consequences, but they'll demand their ROI. If the CEO plays it too conservatively, the investors will replace them with someone less cautious.
Which is exactly why your initial belief that it’d be shut down is wrong…
As the risk of catastrophic failure goes up, so too does the promise of untold riches.
Actually after a little more thought, I think both my initial proposition and my follow-up were wrong, as is yours and the previous commenter.
I don't think these leaders are necessarily driven by wealth or power. I don't even necessarily think they're driven by the goal of AGI or ASI. But I also don't think they'll flinch when shit gets real and they've got to press the button from which there's no way back.
I think what drives them is being first. If they were driven by wealth, or power, or even the goal of AGI, then there's room for doubts and second thoughts about what happens when you press the button. If the goal is wealth or power, you have to wonder whether you will lose wealth or power in the long term by unleashing something you can't comprehend, and whether it is worth it or you should capitalize on what you already have. If the goal is simply AGI/ASI, once it gets real, you'll be inclined to slow down and ask yourself why that goal and what could go wrong.

But if the drive is just being first, there's no tempering. If you slow down and question things, somebody else is going to beat you to it. You don't have time to think before flipping the switch, and so the switch will get flipped.
So, so much for my self-consolation that this will never happen. Guess I'll have to fall back to "we're still centuries away from true AGI and everything we're doing now is just a silly facade". We'll see.
Investors run the gamut from cautious to aggressive.
There are many remarkable leaders throughout history and around the world who did the best that they could for the people they found themselves leading, and who did so for noble reasons, not because they felt they were better than them.
Tecumseh, Malcolm X, Angela Merkel, Cincinnatus, Eisenhower, and Gandhi all come to mind.
George Washington was surely an exceptional leader but he isn't the only one.
I don't know much about your examples, but did any of them turn down an offer of great power?
> I don't know much about your examples, but did any of them turn down an offer of great power?
Not parent, but I can think of one: Oliver Cromwell. He led the campaign to abolish the monarchy and execute King Charles I in what is now the UK. Predictably, he became the leader of the resulting republic. However, he declined to be crowned king when this was suggested by Parliament, as he objected to it on ideological grounds. He died from malaria the next year and the monarchy was restored anyway (with the son of Charles I as king).
He arguably wasn't as keen on republicanism as a concept as some of his contemporaries were, but it's quite something to turn down an offer to take the office of monarch!
Cromwell - the ‘Lord Protector’ - didn’t reject the power associated with being a dictator. And his son became ruler after his death (although he didn’t last long)
George Washington was dubbed “The American Cincinnatus”. Cincinnati was named in honor of George Washington being like Cincinnatus. That should tell you everything you need to know.
It's up to us to create the future that we want. We may need to act communally to achieve that, but people naturally do that.
I figure if/when AI can do the work of humans we'll deal with it through democracy by voting for a system like UBI or like socialism.
That doesn't work now because we don't have AGIs to do the chores but when we do that changes.
Will there be only one AGI? Or will there be several, all in competition with each other?
That depends on how optimized the AGI is for economic growth rate. Too poorly optimized and a more highly optimized fast-follower could eclipse it.
At some point, there will be an AGI with a head start that is also sufficiently close to optimal that no one else can realistically overtake its ability to simultaneously grow and suppress competitors. Many organisms in the biological world adopt the same strategy.
If they become self-improving, the first one would outpace all the other AI labs and capture all the economic value.
There are multiple economic enclaves, even ignoring the explicit borders of nations. China, east asia, Europe, Russia would all operate in their own economies as well as globally.
I also foresee the splitting off of national internet networks eventually impacting what software you can and cannot use. It's already true, and it'll get worse as states move to protect their economies and internal advantages.
> Left unchecked, this shift risks exacerbating inequality, eroding democratic agency, and entrenching techno-feudalism
1) Inequality will be exacerbated regardless of AGI. Inequality is a policy decision; AGI is just a tool subject to policy. 2) Democratic agency is only held by elected representatives and civil servants, and their agency is not eroded by the tool of AGI. 3) Techno-feudalism isn't a real thing; it's just a scary word for "capitalism with computers".
> The classical Social Contract - rooted in human labor as the foundation of economic participation - must be renegotiated to prevent mass disenfranchisement.
Maybe go back and bring that up around the invention of the cotton gin, the stocking frame, the engine, or any other technological invention which "disenfranchised" people who had their labor supplanted.
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance. The time for intervention is now - before intelligence itself becomes the most exclusive form of capital.
1) Nobody's going to equitably distribute jack shit if it makes money. They will hoard it the way the powerful have always hoarded money. No government, commune, sewing circle, etc. has ever changed that, and it won't change in the future. 2) The idea that you're going to set tax policy to achieve a social good means you're completely divorced from American politics. 3) We already have decentralized governance; it's called a State. I don't recommend trying to change it.
Georgism is a prescription on removing unwarranted monopolies and taxing unreproducible privileges.
Tech companies are the same old story. They are monopolies like the rail companies of old. Ditto for whatever passes as AGI. They're just trying to become monopolists.
Capitalism with computers is technofeudalism. https://www.theguardian.com/world/2023/sep/24/yanis-varoufak...
> The Cobb-Douglas production function (Cobb & Douglas, 1928) illustrates how AGI shifts economic power from human labor to autonomous systems (Stiefenhofer & Chen, 2024). The wage equations show that wages decline as AGI’s productivity rises relative to human labor. If AGI labor fully substitutes human labor, employment may become obsolete, except in areas where creativity, ethical judgment, or social intelligence provide a comparative advantage (Frey & Osborne, 2017). The power shift function quantifies this transition, demonstrating how AGI labor and capital increasingly control income distribution. If AGI ownership is concentrated, wealth accumulation favors a small elite (Piketty, 2014). This raises concerns about economic agency, as classical theories (e.g., Locke, 1689; Marx, 1867) tie labor to self-ownership and class power.
Wish I had time to study these formulas.
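At a glance, though, here's a toy version of the mechanism the abstract seems to describe (the notation and numbers are my own guesses, not the paper's actual equations): take Cobb-Douglas output with AGI labor as a perfect substitute for human labor, and the competitive wage, i.e. the marginal product of one unit of labor, falls as AGI labor floods the market.

    def wage(A=1.0, K=100.0, L_h=100.0, L_ai=0.0, alpha=0.3):
        # Cobb-Douglas output; AGI labor assumed a perfect substitute for human labor
        L = L_h + L_ai
        Y = A * K**alpha * L**(1 - alpha)
        # competitive wage = marginal product of one unit of labor
        return (1 - alpha) * Y / L

    for L_ai in (0, 100, 1_000, 10_000):
        print(L_ai, round(wage(L_ai=L_ai), 4))
    # the per-unit wage falls toward zero as L_ai grows, even as total output rises

Under that (assumed) setup, the "power shift" is just the human wage share (1 - alpha) * L_h / (L_h + L_ai) shrinking while the returns to capital and AGI labor grow.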
We have already seen precursors of this sort of shift: ever-rising productivity with stalled wages. As companies (systems) get more sophisticated and efficient, they also seem to decrease the leverage individual human inputs can have.
Currently my thinking leans towards believing the only way to avoid the worst dystopian scenarios will be for humans to be able to grow their own food and build their own devices and technology. Then it matters less if some ultra-wealthy own everything.
However that also seems pretty close to a form of feudalism.
If the wealthy own everything then where are you getting the parts to build your own tech or the land to grow your own food?
In a feudalist system, the rich gave you the ability to subsist in exchange for supporting them militarily. In a new feudalist system, what type of support would the rich demand from the poor?
Let's clarify that for a serf, support meant military supply, not swinging a sword - that was reserved for the knightly class. For the great majority of medieval villagers the tie to their lord revolved around getting crops out of the ground.
A serf's week was scheduled around the days he worked the lord's land and the days he worked the strips and commons that subsisted his own household. Transfers of grain and livestock from serf to lord, along with small dues in eggs, wool, or coin, made up one side of the economic relation between serf and lord. These transfers kept the lord's demesne barns full so he could sustain his household, supply retainers, and so on, not to mention fulfill the tithe that sustained the parish.
While peasants occasionally marched, they contributed primarily by financing war rather than fighting it. Their grain, rents, and fees were funneled into horses, mail, and crossbows, rather than the peasants themselves being called to fight.
Thanks. Now you've got me curious how this really differs from just paying taxes, just like people have always done in non-feudal systems.
In feudalism the taxes go into your lord's pockets. In democracy you get to vote on how taxes are spent.
And your landlord was the same entity as your security.
In Democracy you get to vote on who gets to vote on how taxes are spent.
As George Carlin observed, if voting really mattered they wouldn't let you do it.
They do indeed spend a lot of time and effort not letting people do it.
https://www.aclu.org/news/civil-liberties/block-the-vote-vot...
Carlin was an insufferable cynic who helped contribute to the nihilistic, cynical, defeatist attitude to politics that affects way too many people. The fact that he probably didn't intend to do this doesn't make it any better.
Also, everything is a joke with that guy.
“If your vote didn’t matter, they wouldn’t fight so hard to block it.”
My hard sci-fi book dovetails into AGI, economics, agrotech, surveillance states, and a vision of the future that explores a fair number of novel ideas.
Looking for beta readers: username @ gmail.com
Username@Gmail.com bounced. I’ll be a beta reader.
I think they meant for you to replace the word username with their username in its place.
Theirusernameinitsppace@gmail.com bounced too.
Well, you misspelled "place", but that word likely isn't present in their email anyway, so I apologize for the instructions being unclear. I don't know their email definitively, so I guess you're on your own; I don't think the issue would be resolved by rephrasing the instructions, but I'm willing to try if you think it would help.
Every US voter should have an America app that lets us vote on things the way the Estonians do.
How does this work in practice? Is there any buffer in place to deal with the "excitability" of the mob? How does a digital audit trail prevent tampering?
Coefficient voting control, kind of like a PID controller: reduce the effect of early voters and increase the effect of later voters, with the slope of voter volume in response to an event setting the reactivity coefficient. That might dampen knee-jerk swings and keep people from feeling it's pointless to vote once a certain margin has been reached.
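A minimal sketch of that idea (the scheme and all names are mine, purely hypothetical): weight each ballot by when it arrives in the voting window, so an early surge can't lock in the outcome, and the reactivity coefficient could in turn be derived from the slope of incoming vote volume after an event.

    def weighted_tally(votes, reactivity=0.5):
        # votes: list of (t, choice), where t in [0, 1] is how far into
        # the voting window the ballot arrived
        tally = {}
        for t, choice in votes:
            # early ballots (t near 0) are discounted;
            # higher reactivity means a stronger discount
            weight = (1 - reactivity) + reactivity * t
            tally[choice] = tally.get(choice, 0.0) + weight
        return tally

    # an early "yes" surge no longer settles the question on its own
    print(weighted_tally([(0.05, "yes"), (0.10, "yes"),
                          (0.90, "no"), (0.95, "no")]))

This only addresses the excitability part; the audit-trail question would need a separate answer.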
Looking at the big ugly bill, there's no way we get progressive taxation or other kinds of social improvements.
David Sacks, Trump's "AI and Crypto czar", said UBI isn't going to happen. So that's the position of the current party in power, unsurprisingly.
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance.
Sincerely curious if there are working historical analogues of these approaches.
Not a clean comparison, but resource-driven states may be tackling the same kind of issue: a small minority reaping the benefit of a huge resource (e.g. oil) that they didn't create themselves and that is extracted through mostly automated processes.
From what we're seeing, the whole society has to be rebalanced accordingly; it can entail a kind of UBI, second and third classes of citizen depending on where you stand in the chain, etc.
Or, as Norway does, go the other direction entirely and blunt the impact by parking the windfall in a sovereign wealth fund and artificially limiting how much of it flows into the domestic economy.
Can you explain a little more about Norway?
Communism with "cybernetics" (computer-driven economic planning) is the appropriate model if you take this to its logical conclusion. Fortunately, much of our economy is already planned this way (consider banks, Amazon, Walmart, shipping, etc.); it's just controlled for the benefit of a small elite.
You have to ask, if we have AGI that's smarter than humans helping us plan the economy, why do we need an upper class? Aren't they completely superfluous?
Sure, maybe the Grand Algorithm could do what the market currently does and decide how to distribute surplus wealth. It could decide how much money you deserve each month, how big of a house, how desirable of a partner. But it still needs values to guide it. Is the idea for everyone to be equal? Are certain kinds of people supposed to have less than others? Should people have one spouse or several?
Historically the elites aren't just those who have lots of money or property. They're also those who get to decide and enforce the rules for society.
The computers serve us; we wouldn't completely give up control. That's not freedom either, that's slavery to a machine instead of a man. We would have more democratic control of society by the masses, instead of the managed bourgeois democracy we have now.
It's not necessary for everyone to be exactly equal; it is necessary for inequalities to be seen as legitimate (meaning the person getting more is performing what is obviously a service to society). Legislators should be limited to the average working man's wage. Democratic consultation should happen in workplaces, in schools, and all the way up the chain, not just in elections. We have the forms of this right now, but the people get ignored at each step because legislators serve the interests of the propertied.
The AGI, given it has some agency, becomes the upper class. The question is, why would the AGI care about humans at all, especially given the assumption that it's largely smarter than humans? Humans can become superfluous.
Well, aren't the working class also superfluous, at least once the AGI gets enough automation in place?
So it would depend on which class the AGI decided to side with. And if you think you can pre-program that, I think you underestimate what it means to be a general intelligence...
I suspect that even with a powerful intelligence directing things, it will still be cheaper to have humans doing various tasks. Robots need rare-earth metals; humans run on renewable resources and are intelligent and self-contained, making lots of decisions without needing a network.
It looks really interesting.
I am a big fan of Yanis Varoufakis's book "Technofeudalism: What Killed Capitalism", though it lacks quantitative evidence to support his theory. I would like to see this kind of research or empirical studies.
I predicted this long ago. Technology amplifies what 1 human can do. Absolute power corrupts absolutely.
Blue pill and chill for me.
If you are going to write anything about AGI, you should really prove that it's actually possible in the first place, because that question does not have a definite yes.
For most of us non-dualists, the human brain is an existence proof. Doesn't mean transformers and LLMs are the right implementation, but it's not really a question of proving it's possible when it's clearly supported by the fundamental operations available in the universe. So it's okay to skip to the part of the conversation you want to write about.
The human brain demonstrates that human intelligence is possible, but it does not guarantee that artificial intelligence with the same characteristics can be created.
This is like saying "planets exist, therefore it's possible to build a planet" and then breathlessly writing a ton about how amazing planet engineering is and how it'll totally change the world real estate market by 2030.
And the rest of us are looking at a bunch of startups playing in the dirt and going "uh huh".
I think it's more like saying "Stars exist, therefore nuclear fusion is possible" and then breathlessly writing a ton about how amazing fusion power will be. Which is a fine thing to write about even if it's forever 20 years away. This paper does not claim AGI will be attained by 2030. There are people spending their careers on achieving exactly this; wouldn't they be interested in a thoughtful take on what happens after they succeed?
The human brain is an existence proof? I think that phrase doesn't mean what you think it means. I don't think dualist or non-dualist mean what you think they mean either.
When people talk about AGI, they are clearly talking about something the human research community is actually working towards: computing equivalent to a Turing machine, on hardware architectures very similar to what has currently been conceived and developed. Do you have any evidence that the human brain works in such a way? Do you really think that you think and solve problems that way?
Consider simple physics. How much energy is needed, and heat produced, to train and run these models to solve simple problems? How much of the same is needed and produced when you solve a sheet of calculus problems, solve a riddle, or write a non-trivial program? Couldn't you realistically do those things on minimal food and water for a week, if needed? Does it actually seem like the human brain is at all like these things and not fundamentally different?
I think this is even more naive than proposing "Life exists in the universe, so of course we can create it in a lab by mixing a few solutions." The latter is far likelier and more conceivable, and even that is still quite an open question.
Will it ever have a definite yes? I feel like it's such a vague term.
Isn't Google AGI? There is no way any human could shut down Google if it were already going rogue.
So economics becomes intelligence-driven, and I don't really understand what that means, since AGI would be more knowledgeable than all of us combined. And we expect the AGI lords to just pay everyone a UBI? This seems like an absolute fantasy given the tax cuts passed two days ago. And regulating it as a public good, when antitrust has no teeth? I hope there are other ideas out there, because I don't see this gaining political momentum given that politics is driven by dollars.