One of the oft-repeated talking points erupting from the mouths of futurists and tech leaders alike is the claim that things will cost nothing in the future. As if we are to believe all of these people are in the business of making something for nothing. The entire claim is a gross absurdity that charlatans like Ray Kurzweil conjured out of thin air and that others parrot at every opportunity. The claim is made with such confidence that it is treated as self-evident, and to question it means you are an out-of-touch dolt lacking the religious fervor necessary to create the techno-utopia.
But these responses are a smokescreen meant to deflect the very rational questions this claim evokes. None of these people can explain exactly how this will work in practice or are willing to admit just how bad things will get, which seem like consequential details to omit, considering the plan is to rework the social contract of most of the world.
The claim promises us Fully Automated Luxury Communism (FALC), where all of our needs are not only met but we are propelled into a life of luxury. However comforting the concept, the reality may be closer to Fully Automated Digital Breadlines (FADB). I know, how dare I poo-poo the utopia.
The False Choice
We are often given a false choice. We are told that if we don’t allow companies carte blanche to raw-dog technology all the way to utopia, then humanity will vanish. Either grow or die, as the mantra goes. Given this, a minuscule number of people are trying to rework the social contract and reimagine society without society’s input.

We can have cures for cancer and other illnesses without destroying art, stealing people’s work, or removing humans from the creative process. However, curing cancer is a hard problem, and imitating humans is easy. So we get AI slop machines instead of cures for Alzheimer’s.

Maybe I’m just an idiot, but I fail to see how LLMs will make humans immortal. Immortality is one of the many things we’re promised if we only just let it happen, even though there’s absolutely no evidence for this.
Also, if you read Andreessen’s Techno-Optimist Manifesto from October of 2023, you may notice that he credits Filippo Tommaso Marinetti, a co-author of the Fascist Manifesto. Marinetti was a futurist, but his futurism and complete disregard for the past led him to embrace fascism as a logical vehicle for technocracy.
Don’t get me wrong, there’s plenty in Andreessen’s manifesto that I agree with. We are a society built on technology, and this has brought some of our greatest achievements. There certainly are regulations that seem pointless and get in the way. There are groups inside organizations that have become politicized and create unnecessary obstacles. I also agree with the critique of communism. These are all true. However, Andreessen’s mistake is assuming that multiple things can’t be true simultaneously.
Even though these are extreme views that some have labeled techno-authoritarianism, understand that they are the average view of the e/acc community. Andreessen also invokes the perils of communism multiple times while simultaneously driving humanity toward techno-communism, but to each their own, I guess.
I love technology and believe, as Andreessen does, that technology will deliver the best future. It’s because of technological advancement that we’ll cure cancer and reduce suffering around the world. However, I don’t believe a better society results from discarding ethics and principles and disregarding voices different from our own in the pursuit of generating a cornucopia of innovation porn. We in technology seem to constantly make this mistake, only to be disappointed by our ignorance of the complexities of the real world and the jobs and perspectives of others.
Ethics and principles aren’t obstacles or roadblocks. They are guideposts that ensure what we build aligns with our values and vision of the world we want to create. We, in this case, meaning society as a whole and not just a couple of dudes sharing technology with their friends.
The Claim
If you have somehow escaped these claims, here’s a recent example from Marc Andreessen below.

You read that right. We need to hurt you before we can help you. It’s the sort of pitch you’d hear from a sadistic boyfriend who insists he needs to tear a partner down before building them back up. The “we have to break it before we can fix it” mantra is applied to almost everything, including humans and the environment. This is the core premise of the Effective Accelerationist (e/acc) movement.
But Andreessen is hardly the only one making these claims.

That’s right, Google is in the business of giving you things for free. We learned this lesson a long time ago. Yes, Google makes its money off of ads for its “free” services. However, in a future where things are worth nothing and people don’t have an income stream, it seems likely that advertising budgets will be zero as well.
I blame much of this on Ray Kurzweil. For years, he’s been peddling this nonsense. I addressed this very same claim in the “Things Will Cost Nothing” and “Jobs and Wages” sections of my post on his latest book. Despite this, I wanted to explore the topic further.
These people claim we shouldn’t worry about losing our jobs to AI because AI will make companies so efficient that goods and services will essentially be cheap or free. But both on the surface and upon reflection, the claim is absurd.
Nobody can describe exactly how this is supposed to work other than sprinkling everything with AI magic. When someone does make an attempt, like Ray Kurzweil, for example, the explanations make no sense, don’t address the questions, and highlight how little about the real world these people know.
For years, I’ve been pushing back against the phrase, “AI won’t replace people. People with AI will replace people without.” This is just patently false. The moment AI is good enough to take our jobs, it will. I mean, it doesn’t even have to be that good.
So, no job, no income. This is our baseline. It doesn’t matter how cheap things get if you have zero.
But AI, Tho
Before we get too far, let’s address the counterargument. For all the issues I’m about to raise, the answer is, “But AI, tho.” The response involves invoking the name of AI like a magician conjuring a spell. We are told that AI will be so great and powerful, rising to the status of deity, that no matter what issue is encountered, AI will figure it out. But merely spouting an incantation doesn’t make it a reality.
This answer is a complete copout that leaves the questioner unsatisfied. Whenever someone invokes the But AI, Tho defense to real questions, continue to ask them for more specifics. Don’t allow the oversimplification of a vast and complex problem space. AI isn’t god, and they aren’t prophets.
The “Sucks To Be You” Gap
Remember, we need to be broken before we can be fixed. This means there will be a gap between the damage incurred and any mitigation strategies. I call this the Sucks To Be You gap. There is no telling how long this gap will stay open or what mitigations will be implemented to remedy it.
Unemployment is unlikely to hit something like 90% all at once. This means the people displaced earliest by automation will be harmed the most, since they will be unable to support themselves and their families and will have no real recourse for their situation. How long will this drag out? My guess is years, possibly a decade or more, depending on how slow adoption is and how difficult any mitigations are to implement.
The amount of harm caused by this gap is unfathomable. This gap brings pain, suffering, and death. If you think I’m being dramatic, think about it for a moment. Imagine the mental toll this takes on someone trying to provide for themselves and their family. This isn’t a matter of re-skilling. Even if people did re-skill, the competition for the remaining jobs would be astronomical, with thousands of applicants for a single position. This isn’t p(doom); it’s p(shit).
It’s easy to see how self-harm could result from this situation, but that’s not the only scenario where mortality is concerned. Not working leads to a lack of benefits, meaning you can’t make co-pays on doctor visits and prescriptions. This doesn’t include all of the potential harm from algorithmic decision-making mistakes. Deaths will result, and we know this because we’ve seen it happen on a smaller scale with people not being able to afford insulin.
No Intelligence Explosion
All of the claims of a near-future techno-utopia are predicated upon an intelligence explosion. This is the condition in which AIs will recursively improve, creating even better AIs that morph into superintelligence. Advocates claim this attainment of superintelligence fuels this world of comfort and abundance. But what if it doesn’t manifest this way? What if we get the Diet Coke of AGI? Just one calorie, not intelligent enough.

The assumption is that superintelligence brings massive productivity gains, but what if, instead, we get algorithms that are merely good enough, leading to human workers being displaced while productivity stays relatively flat? For example, an agent can work 24 hours a day, but what if that 24-hour-a-day agent produces the same output as a human working 8 hours a day? This could happen once you account for errors, wait times for additional reasoning, running tasks multiple times, and other issues arising from the complexities of seemingly simple tasks. It’s easy to see how this stretches out further when we factor in genuinely complex tasks.
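As a rough illustration of that arithmetic, here’s a back-of-envelope sketch in Python. Every number in it is hypothetical; the point is only to show how retries, rejected output, and rework can erase the always-on advantage.

```python
# Back-of-envelope sketch: how a 24-hour agent can net out to roughly the
# output of an 8-hour human. Every number here is hypothetical.

def tasks_per_day(hours, attempts_per_hour, attempts_per_completed_task, usable_fraction):
    """Completed, usable tasks per day after retries and rejected output."""
    attempts = hours * attempts_per_hour
    completed = attempts / attempts_per_completed_task   # retries eat attempts
    return completed * usable_fraction                   # some output still gets thrown away

# Hypothetical human: 8 hours, slower, but rarely needs a redo.
human = tasks_per_day(hours=8, attempts_per_hour=4,
                      attempts_per_completed_task=1.1, usable_fraction=0.95)

# Hypothetical "good enough" agent: runs all day and attempts more per hour,
# but errors, re-runs, and extra reasoning passes burn most of that edge.
agent = tasks_per_day(hours=24, attempts_per_hour=6,
                      attempts_per_completed_task=3.0, usable_fraction=0.60)

print(f"human: ~{human:.0f} usable tasks/day")   # ~28
print(f"agent: ~{agent:.0f} usable tasks/day")   # ~29
```

Tweak any of these made-up parameters and the gap opens or closes, which is the point: the 24-hour advantage only becomes a productivity explosion if error rates and rework stay low.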
This human replacement could result in cost savings but would be far from driving costs to zero. Also, this would be less of a complete human replacement and more of a human staff reduction. Now, you have a displaced workforce and a company with similar productivity. This doesn’t seem like a recipe for a utopia. It’s a recipe for problems.
This is a very real possibility, especially given all of the hype around LLMs. I know everyone is losing their minds about DeepSeek at the moment, but I don’t believe LLMs are a path to AGI, much less ASI. However, it’s important to realize that we don’t need this level of intelligence to apply these technologies to specific tasks successfully. It’s entirely feasible that a company would take a shitty LLM with repeated failures over a human worker if they could save money.
What’s The Point of Things Costing Nothing?
I’m not sure anyone gets out of bed in the morning with dreams of creating a company that delivers goods and services that cost nothing. It’s absurd even to say out loud, so you might wonder why people at the largest companies in the world are making this claim. Investors are the same way. Nobody is investing in a company so they can deliver zero-cost goods and services. In the Sucks To Be You gap, the first people affected suffer the most harm, but the opposite happens with companies.
Tech leaders and investors aren’t considering what happens to their companies after this so-called intelligence explosion. They are thinking of all the money they will make leading up to it. This is why these staunch capitalists are so comfortable forcing everyone into techno-communism. Now that I think of it, the thought of an algorithmic Stalin hunting kulaks is terrifying.

Stagnation
Counterintuitively, this condition could lead to stagnation, the very opposite of what proponents claim. Without incentives, this doesn’t strike me as a competitive environment where companies and people step up to create new solutions. I guess someone could argue that people’s lives will suck so badly that they’ll be incentivized to create something better. Fair enough, but bigger initiatives like that would cost more money, putting them out of reach of these very people. Not to mention, this is an odd flex for the techno-utopians: “Your life will suck so bad you’ll be dying to create something better.”
The price of stagnation for a majority of the population is that they remain in the mire of the Sucks To Be You gap for a much longer time. Even if basic necessities are met, it will be miles away from a good life, much less a luxurious one.
Things Will Still Cost Something
The core premise of the argument that things will be zero or low cost is absurd on its face, so much so that it’s remarkable that nobody seems to push back. A whole host of things won’t be free or low-cost. Consider rent and property, the means to generate electricity, medical treatments, and, most importantly, food. Even extracting and refining raw materials is going to cost something. Imagine being monitored every moment, with everything in your home subscription-based and a microtransaction required for nearly everything you do. Now, that’s the utopia we’ve all dreamed of!
Regarding food, Kurzweil claims that advancements in vertical farming will make abundant, nutritious food freely available. This highlights Kurzweil’s cluelessness on a variety of topics. Vertical farming took a hit last year, making MIT Technology Review’s list of the worst tech failures of 2024. Score another “L” for Kurzweil.
As I mentioned, companies and investors aren’t in the business of giving things away for free. These companies will adjust to the conditions imposed upon them. When have we ever seen a company that gets hit with higher taxes or additional tariffs respond with, “Well, sucks to be us. I guess we’ll have to make less money now”?
This condition may level out at some point. After all, if nobody has any money to buy your products, that’s not a good business strategy either. I’m just saying that this leveling out could take some time, especially if a segment of the population remains employed.
New Risks
New architectures, technologies, and automated processes will bring new risks. Due to our complete dependence on these systems, these risks will have a much larger direct impact. The vertical farming example is instructive because it raises new risks and considerations. For example, damage can spread quickly in these new architectures, creating cascading failures.
In reality, the company’s lettuce was more expensive, and when a stubborn plant infection spread through its East Coast facilities, Bowery had trouble delivering the green stuff at any price.
And this is just one of the many potential examples. Whenever potential challenges such as this are raised, the But AI, Tho defense is invoked as some sort of benevolent deity here to deliver our salvation and absolve us from our sins. “AI will just figure it out.” This is not an answer.
Techno-Welfare
Let’s acknowledge that these companies aren’t willing to part with their money. It’s not like they will be so successful that they’ll start sharing their profits with us. Even if they halve the cost of goods and services, or even reduce it by 90%, we’ve got zero dollars, which keeps these cheap necessities out of reach. This raises a couple of questions.
How do companies make money from people who don’t have any?
It seems unlikely to be profitable in this environment, so companies will raise prices for those who can still afford their products to cover the gaps. This actually makes things worse for displaced workers, as I mentioned earlier when discussing how companies adjust to market conditions.
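As a toy illustration of that dynamic, with invented numbers: if a company wants to hold the same revenue while its paying customer base shrinks, the price for each remaining customer has to climb.

```python
# Toy example: same revenue target, fewer paying customers. Numbers are invented.

revenue_target = 100_000_000      # annual revenue the company wants to maintain
customers_before = 1_000_000
price_before = revenue_target / customers_before          # $100 per customer

for share_displaced in (0.1, 0.3, 0.5):
    remaining = customers_before * (1 - share_displaced)
    price_needed = revenue_target / remaining
    print(f"{share_displaced:.0%} of customers priced out -> "
          f"price rises from ${price_before:.0f} to ${price_needed:.0f}")
```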
What’s the remedy?
Some have proposed an automation tax that funds a Universal Basic Income (UBI) program. This sounds good on paper but may not be so great in practice. We would be taxing an economy where people are making less money; hence, there will be less recovered in taxes. Not to mention, I’m only considering the United States here. What about goods and services from other countries? After all, we have a global economy. This requires tariffs on goods and increased taxes on digital goods, which will push companies to raise prices even more.
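To make the “less recovered in taxes” point concrete, here’s a deliberately crude back-of-envelope sketch. None of these figures come from any real proposal or dataset; they’re invented solely to show how the arithmetic can fail to close.

```python
# Crude sketch of the funding gap for an automation-tax-funded UBI.
# Every number here is invented for illustration.

workforce = 160_000_000          # hypothetical working-age population
displacement_rate = 0.30         # share of workers displaced by automation
avg_lost_wage = 50_000           # hypothetical average annual wage lost
ubi_per_person = 18_000          # hypothetical annual UBI benefit

displaced = workforce * displacement_rate
ubi_cost = displaced * ubi_per_person                     # what the program must pay out

lost_income_tax = displaced * avg_lost_wage * 0.15        # income tax no longer collected
automation_tax_base = displaced * avg_lost_wage * 0.5     # assume firms keep half the wage savings as profit
automation_tax_revenue = automation_tax_base * 0.25       # hypothetical 25% automation tax

shortfall = ubi_cost + lost_income_tax - automation_tax_revenue
print(f"UBI cost:              ${ubi_cost / 1e9:,.0f}B")
print(f"Lost income tax:       ${lost_income_tax / 1e9:,.0f}B")
print(f"Automation tax raised: ${automation_tax_revenue / 1e9:,.0f}B")
print(f"Annual shortfall:      ${shortfall / 1e9:,.0f}B")
```

Under these made-up assumptions, the automation tax covers only about a quarter of the program’s cost plus the lost income tax, and that’s before we even touch imports and digital goods from abroad.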
There is the impression that the techno-welfare provided by some universal basic income will have us jet-setting around the globe. This is the premise of Fully Automated Luxury Communism (FALC). It’s flat-out bullshit when you consider the realities on the ground. UBI is a social welfare program, and its benefits will be commensurate with similar programs.
Nobody on a social welfare program lives it up on their yacht, sipping champagne and wondering when their Ferrari will be out of the shop. These people worry about basic necessities constantly. Any small hiccup can result in major consequences. This future techno-welfare program will be far more like today’s social welfare than some government-funded luxurious lifestyle. So, yes, it is much more like Fully Automated Digital Breadlines (FADB) than FALC.
Not to mention, this very same social welfare program will be administered by the very systems that displaced these workers in the first place, leaving the door open to a whole host of technical issues and challenges that will affect the people in the program, adding to the risks.
The thing that pisses me off about people like Kurzweil is that the very foundation of their arguments is not only so disconnected from reality that it doesn’t make sense; it’s also dehumanizing. But for people like Kurzweil, this is a feature, not a bug.
The response to hungry children comes off as, “Just shut up and eat your amino acid paste, you ungrateful little shits. Don’t you realize how much more compute you have access to? You couldn’t even run stable diffusion locally when I was a kid!” When you are hungry, it’s hard to eat your computer.
Reduced Agency and Helplessness
What does it mean to be human in an age without work and agency? Do we resign ourselves to being helpless and needy? This is hard to pin down in advance. Humans are indeed incredibly adaptable creatures, but there’s a limit to this adaptability. But more importantly, why should we settle for this vision of the future?
These systems turn us into robots, shoving us into predictable buckets, reducing our agency, and making us dependent. This is necessary to increase the accuracy of predictions. The result is we end up as helpless schmucks standing on the sidelines, waiting to be told what to do and where to go at the mercy of every algorithmic decision. Technology should work for us, not the other way around, a point that gets lost in the shuffle and hype.
With every new risk that surfaces, we’ll be helpless to intervene. We will need to take it on faith that what we built will automatically do something about it, as the world we construct becomes far too complex for us to understand. In some instances, humans may not even be informed of impending dangers because there is nothing they can do about them. We remain blissfully unaware until the asteroid strikes.
We should insist on better. We deserve technology that works for us, not us working for technology.
Technological advancements require tradeoffs, and those tradeoffs should benefit humans as a whole. For example, suppose self-driving cars worked as advertised and delivered on their promises. In that case, giving up manual driving for the benefit of safer roads may be a worthwhile tradeoff that most of society accepts. However, today we are being asked to pre-purchase a tradeoff where it’s unclear what we get and what we lose.
Does This Sound Like Utopia?
I don’t know about you, but this scenario doesn’t sound like a slam dunk in the utopia basket. At best, it sounds like forced retirement with a monumental cut in income and benefits. At worst, it’s suffering and death, far from the promised life of luxury. It likely won’t be either of these extremes, but it will be something like the Fully Automated Digital Breadlines scenario I mentioned, where the role of humans is to be needy and dependent.

I’m not sure exactly where I fall on the utopia scale above, except to say I am probably not in the upper half. It’s not a precise measure, other than to say it’s far from the luxury lifestyle.
Can we achieve artificial superintelligence quickly and solve the world’s problems by creating a world of abundance? Yes, it’s certainly possible that everything snaps into place perfectly, and governments and corporations work hand in hand to create a world of abundance free from suffering. Possible, just not probable, or at least not probable in a reasonable amount of time. For this to be the winning scenario, things must work perfectly the first time, with advancements free from issues. We should know from history that this is rarely the case.
Even if we eventually reach a reasonable utopia, we’ll have years, if not decades, of pain and misery as humans do their best to adapt and deal with less-than-perfect technology, governments, and companies. All of these challenges will be borne by humans while we are simultaneously stripped clean of our agency and purpose.
By some estimations, communism is responsible for 100 million deaths in the twentieth century. Although some dispute this number, even on the lower side, we’re still talking about 50 million people. But hey, what’s 50 million deaths among friends? Something about one death being a tragedy and a million being a statistic. And yes, I know Stalin didn’t say that, but it’s relevant here.
Although I don’t think techno-communism will cut that wide a path, I do believe that some will view resulting deaths and misery as the cost of progress. However, progress is subjective, and despite often being linked, innovation and progress aren’t the same thing.
Conclusion
I hope that none of my predictions come true, that I am wrong, that some fluke happens, and everything magically snaps into place without issue. Thankfully, many of the hot takes on social media can be written off as bros sharing vibes. I also don’t think the current crop of LLMs will cause mass unemployment, create large destabilizing effects in the workforce, or grant immortality. However, I’m not as confident about this prediction, well, other than the immortality piece.
The real question for LLMs is how much better this buggy, insecure, black-box technology needs to get to start disrupting a larger part of the workforce. We’ve seen this happen in the creative domains, but the cost of failure is low in these use cases. Let’s hope there are no plans to hook ChatGPT up to air traffic control or the nuclear arsenal, but there are still plenty of other jobs without such high failure costs. Only time will tell.
The attempt by a few to change the social contract raises many questions: Who sets the rules? Who changes the rules? Who or what makes the important decisions affecting humanity? These are good questions to have answers to before wading into the slough.
This situation can’t be described as a Faustian bargain since most people won’t gain any true advantage. At least Robert Johnson received amazing guitar skills. Many of us will get digital breadlines and an endless feed of slop.