Microsoft CEO Admits That AI Is Generating Basically No Value


ca.finance.yahoo.com/news/microsoft-ceo-admits-…

"The real benchmark is: the world growing at 10 percent," he added. "Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."

Needless to say, we haven't seen anything like that yet. OpenAI's top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail's pace and requires constant supervision.



120 Comments

JC Denton said it best in 2001:

I'm convinced the devs actually time traveled back from like 2035

That would be worrying

Unlikely, all time travel technology will have been destroyed in the war, before 2035

Like all good sci-fi, they just took what was already happening to oppressed people and made it about white/American people, while adding a little misdirection by extrapolating from existing tech research. It only took about 20 years for Foucault's boomerang to fully swing back around. And keep in mind that all the basic ideas behind LLMs had been worked out by the '80s; we just needed 40 more years of Moore's law to make computation fast enough and data sets large enough.

> Foucault's boomerang

Fun fact: that idea predates Foucault by a couple of decades. Ironically, it was coined by a Black man from Martinique. I think he called it the imperial boomerang?

Ah yes, same with Boolean logic: it only took a century for Moore's law to pick up, with a small milestone along the way when the transistor was invented. All of computer science was already laid out by Boole from day one, including everything that AI already does or ever will do.

/S

That is not at all what he said. He said that creating some arbitrary benchmark for the level or quality of the AI (e.g., is it smarter than a 5th grader, or as intelligent as an adult?) is meaningless, and that the real measure is whether value is created and put out into the real world. He also mentions global growth going up by 10% as the benchmark. He doesn't provide data correlating that growth with the use of AI, and I doubt such data exists yet. Let's not twist what he said into "Microsoft CEO says AI provides no value" when that is not what he said.

I think that's pretty clear to people who get past the clickbait. Oddly enough, though, if you read through what he actually said, the takeaway is basically a tacit admission: he's trying to level-set expectations for AI without directly admitting that the strategy of massively investing in LLMs is going bust and delivering no measurable value, so he can deflect with "BUT HEY, CHECK OUT QUANTUM".

AI is the immigrants of the left.

Of course he didn't say this. The media want you to think he did.

"They're taking your jobs"

That's because they want to use AI in a server scenario where clients log in. Translated to American English and spoken with honesty, that means they are spying on you. Anything you do on your computer is subject to automatic spying. You could be totally under the radar, but as soon as you say the magic words together, bam! ...I'd love a sling thong for my wife... bam! Here's 20 ads, just click to purchase, since they already stole your wife's boob size, body measurements, and preferred lingerie styles. And if you're on McMaster... hmm, I need a 1/2 pipe and a cap... better get two caps in case you cross-thread one... ding dong! FBI! We know you're in there! Come out with your hands up!

The only thing stopping me from switching to Linux is some college software (won't need it when I'm done) and one game (which no longer gets updates and is thus on the path to a slow, sad demise).

So I'm on the verge of going Penguin.

Just run Windows in a VM on Linux. You can use VirtualBox.

Yeah, use Windows in a VM and your game will probably just work too. I was surprised that all the games I have on Steam now just work on Linux.

Years ago, when I switched from OSX to Linux, I just stopped gaming because of that. But I started testing my old games and suddenly there are no problems with them anymore.

What software / game is that? It could still run in Wine or Bottles.


You're really forcing it at that point. Wine can't run most of what I need to use for work. I'm excited for the day I can ditch Windows, but it's not any time soon unfortunately. I'll have to live with WSL.

But..... I am still curious.. what are you trying to run 😆

Not the same person with the program. Just another person making an excuse.


MobaXTerm, namely. It will start up, but it misses a lot of features when used in Wine, to the point that it's not usable for work purposes.

Plants vs. Zombies: Garden Warfare 2. It used to run on Linux, but then they added Easy Anti-Cheat, which broke compatibility.

They don't update the game anymore, and they've essentially abandoned the franchise after the flop that was Battle for Neighborville, so I'm not too sad leaving it behind. It was a very fun game, though.

That's sad... I am not able to play Battlefield 2042 after switching to Linux either, for the same reason: their anti-cheat is not compatible.

You can just install Linux alongside Windows. It's easy; I did it last month.

Correction, LLMs being used to automate shit doesn't generate any value. The underlying AI technology is generating tons of value.

AlphaFold 2 has advanced biochemistry research in protein folding by multiple decades in just a couple of years, taking us from 150,000 known protein structures to 200 million in a year.

Well sure, but you're forgetting that the federal government has pulled the rug out from under health research and thereby made it so there is no economic value in biochemistry.

How is that a qualification on anything they said? If our knowledge of protein folding has gone up by multiples, then it has gone up by multiples, regardless of whatever funding shenanigans Trump is pulling or what effects those might eventually have. None of that detracts from the value that has already been delivered, so I don’t see how they are “forgetting” anything. At best, it’s a circumstance that may play in economically but doesn’t say anything about AI’s intrinsic value.

Was it just a facetious complaint?

OK yeah but… we can’t have nice things soooo

I think you're confused, when you say "value", you seem to mean progressing humanity forward. This is fundamentally flawed, you see, "value" actually refers to yacht money for billionaires. I can see why you would be confused.

Yeah, tbh, AI has been an insanely helpful tool in my analysis and writing. Never would I have been able to thoroughly investigate appropriate statistical tests on my own. After following the sources and double-checking, of course, but still, super helpful.

Thanks. So the underlying architecture that powers LLMs has applications in things besides language generation, like protein folding and DNA sequencing.

Image recognition models are also useful for astronomy. The largest black hole jet was discovered recently, and it was done, in part, by using an AI model to sift through vast amounts of data.

https://www.youtube.com/watch?v=wC1lssgsEGY

This thing is so big, it travels across the voids between the filaments of galactic superclusters and hits the next one over.

AlphaFold is not an LLM, so no, not really.

You are correct that AlphaFold is not an LLM, but both are possible because of the same breakthrough in deep learning, the transformer, and so they do share similar architectural components.

And all that would not have been possible without linear algebra and calculus, and so on and so forth... Come on, the work on transformers is clearly separable from deep learning.

That's like saying the work on rockets is clearly separable from thermodynamics.

A Large Language Model is basically a translator: all it did was bridge the gap between us speaking normally and a computer understanding what we are saying.

The actual decisions all these "AI" programs make come from machine learning algorithms, and those algorithms have not fundamentally changed since we created them and started tweaking them in the '90s.

AI is basically a marketing term that companies jumped on to generate hype because they made it so the ML programs could talk to you, but they're not actually intelligent in the same sense people are, at least by the definitions set by computer scientists.

What algorithm are you referring to?

The fundamental ideas, matrix multiplication plus a nonlinear function, deep learning (i.e., backpropagating derivatives), and gradient descent in general, may not have changed, but the actual algorithms sure have.

For example, the transformer architecture (utilized by most modern models), based on multi-headed self-attention, optimizers like AdamW, and the whole idea of diffusion for image generation are, I would say, quite disruptive.
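If it helps to see it, here is a minimal NumPy sketch of the scaled dot-product attention that multi-headed self-attention is built from. The toy shapes and the reuse of one matrix as Q, K, and V are my own simplifications for illustration, not how a trained transformer is actually wired:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)
    return softmax(scores) @ V

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# In a real model, Q, K, and V come from separate learned projections of x;
# reusing x directly keeps the sketch short.
print(attention(x, x, x).shape)  # (4, 8)
```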

Another point is that generative AI was always belittled in the research community, until around 2015 (a subjective impression; it would need a meta-study to confirm). The focus was mostly on classification, something not much talked about today in comparison.

Wow, I didn't expect this to upset people.

When I say it hasn't fundamentally changed from an AI perspective, I mean there is no intelligence in artificial intelligence.

There is no true understanding of self, just what we expect to hear. There is no problem solving; the step-by-step answers the newer bots put out are still just ripped from internet search results. There is no autonomous behavior.

AI does not meet the definitions of AI, and no amount of long-winded explanations of fundamentally the same approach will change that, and neither will spam downvotes.

Btw, I didn't downvote you.

Your reply raises the question of which definition of AI you are using.

[Image: a table of eight definitions of AI, organized into four categories: Thinking Humanly, Acting Humanly, Thinking Rationally, and Acting Rationally.]

The table above is from Russell and Norvig's "Artificial Intelligence: A Modern Approach", 3rd edition.

I would argue that of these 8 definitions, 6 apply to modern deep learning stuff. Only the category titled "Thinking Humanly" would agree with you, but I personally think those definitions are self-defeating, i.e., they define AI in a way so dependent on humans that a machine could never have AI, which would make the word meaningless.

It's always important to double-check the work of AI, but yeah, it excels at solving problems we've been using brute force on.

AI is just what we call automation until marketing figures out a new way to sell the tech. LLMs are generative AI: hardly useful or valuable, but new and shiny, with a party trick that tickles the human brain in a way that makes people give their money to others. Machine learning and other forms of AI have been around longer, and most have value-generating applications, but they aren't as fun to demonstrate, so they never got the traction LLMs have gathered.

I'm afraid you're going to have to learn about AI models besides LLMs

Very bold move, in a tech climate in which CEOs declare generative AI to be the answer to everything, and in which shareholders expect the line to go up faster…

I half expect to next read an article about his ouster.

My theory is that it's only a matter of time until the firing sprees generate enough of a backlog of actual work that isn't being covered by the minor productivity gains from AI, and then the investors will start asking hard questions.

Maybe this is the start of the bubble bursting.

I’ve basically given up hope of the bubble ever bursting, as the market lives in La La Land, where no amount of bad decision-making seems to make a dent in the momentum of “line must go up”.

Would it be cool for negative feedback to step in and correct the death spiral? Absolutely. But I advise folks not to start holding their breath just yet…

If it seems odd for him to suddenly say that all this AI stuff is bullshit, that's because he didn't. He said it hasn't boosted the world economy on the order of the Industrial Revolution - yet. There is so much hype around this, and he's on the hook to deliver actual results. So it's smart for him to take a little air out of the hype balloon. But the article headline is a total misrepresentation of what he said. He said we are still waiting for the hype to become reality, in the form of something obvious and impossible to miss, like the world economy shooting up 10% across the board. That's very, very different from "no value."

> He said we are still waiting for the hype to become reality, in the form of something obvious and impossible to miss, like the world economy shooting up 10% across the board.

That’s such an odd turn of phrase. “We’re still waiting for the hype to become a reality…” and “…something obvious and impossible to miss…”

So, like, do I have time to go to the bathroom and get a drink before I sit down and start staring at the empty space, or…?

Don’t get me wrong. I work with this stuff every day at this point. My job is LLMs and model training pipelines and agentic frameworks. But, there is something… off, about saying the equivalent of “it’ll happen any day now…”

It may well, but making forward-looking decisions based on something that doesn't exist, and may *not* come to pass, feels like madness.

Again, you’re twisting the words.

He said “we’re still waiting” and you’ve twisted that into “any day now.” As if still waiting for something to materialize is the same thing as being certain it will.

Hype always precedes reality. That’s the nature of hype. If someone says hype is nice but we’re still waiting to see it become reality that is not blind faith in the hype. The complete opposite. It’s restraint.

Anyway. People get some kind of emotional catharsis from hating AI and shitting on CEOs so I think a fair reading of this is just going to go out the window at any opportunity. I won’t spend any more energy trying to contain that.

And crashing the markets in the process...
At the same time they came out with a bunch of mumbo jumbo and sci-fi babble about having a million-qubit quantum chip.... 😂

Is he saying it's just LLMs that are generating no value?

I wish reporters could be more specific with their terminology. They just add to the confusion.

Edit: he's talking about generative AI, of which LLMs are a subset.

But the hype isn’t about any other form of AI. It’s just OpenAI and a few other companies trying to copy them.

microsoft rn:

✋ AI

👉 quantum

can't wait to have to explain the difference between asymmetric-key and symmetric-key cryptography to my friends!

Forgive my ignorance. Is there no known quantum-safe symmetric-key encryption algorithm?

I'm not an expert by any means, but from what I understand, most symmetric-key and hashing cryptography will probably be fine, but asymmetric-key cryptography is where the problems will be. Lots of stuff uses asymmetric-key cryptography, like HTTPS for example.

Oh, that's not good. I thought TLS was quantum-safe already.
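For what it's worth, the usual back-of-the-envelope reasoning behind that comment goes like this; a sketch of the textbook claims about Grover's and Shor's algorithms, not a security analysis:

```python
def grover_effective_bits(key_bits: int) -> float:
    # Grover's algorithm searches N possibilities in ~sqrt(N) steps,
    # so a k-bit symmetric key offers roughly k/2 bits of quantum security.
    return key_bits / 2

for k in (128, 192, 256):
    print(f"AES-{k}: ~{grover_effective_bits(k):.0f}-bit quantum security")

# Asymmetric crypto is different in kind, not just degree: Shor's algorithm
# factors integers and computes discrete logs in polynomial time, so RSA
# and ECC at any practical key size are broken outright, not merely weakened.
# Hence the push for post-quantum key exchange in TLS rather than bigger keys.
```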

LLMs in non-specialized application areas basically reproduce search. In specialized fields, most do the work that automation, data analytics, pattern recognition, purpose-built algorithms, and brute force did before. And yet the companies charge n× the amount for what are essentially these very conventional approaches, plus statistics. Not surprising at all. I'm just in awe that the parallels to snake oil weren't immediately obvious.

I think AI is generating *negative* value ... the huge power usage is akin to speculative blockchain currencies. Barring some biochemistry and other very, very specialized uses, it hasn't given us anything other than, as you've said, plain-language search (with bonus hallucination bullshit, yay!)
... snake oil, indeed.

It's a little more complicated than that, I think. LLMs and AI are not remotely the same thing, and they have very different use cases.

I believe in AI for sure in some fields, but I understand the skepticism around LLMs.

But the difference AI is already making in the medical industry and hospitals is no joke. X-ray scanning and early detection of severe illness are the uses being deployed today, and they will save thousands of lives and millions of dollars/euros.

My point is, it's not that black and white.

On this topic, the vast majority of people seem to think that AI means the free tier of ChatGPT.

AI isn't a magical computer demon that can grant all of your wishes, but that doesn't mean that it is worthless.

For example, AlphaFold essentially solved protein folding, and diffusion models built on that discovery let us generate novel proteins with specific properties as easily as we can make a picture of an astronaut on a horse.

Image classification is massively useful in manufacturing. Instead of custom programs purpose-built for each client ($$$), you can fine-tune existing models with generic tools, using labor that doesn't need to be a software engineer.

Robotics is another field. The amount of work required for humans to design and code control systems was enormous. Now you can use standard models, give them arbitrary limbs and configurations, and train them in simulated environments. This massively cuts down on the engineering work ($$$) required.

It is fun to generate some stupid images a few times, but you can't trust that "AI" crap with anything serious.

I was just talking about this with someone the other day. While it’s truly remarkable what AI can do, its margin for error is just too big for most if not all of the use cases companies want to use it for.

For example, I use the Hoarder app, which is a site-bookmarking program; when I save any given site, it feeds the text into a local Ollama model, which summarizes it, conjures up some tags, and applies them. This is useful for me, and if it generates a few extra tags that aren't useful, it doesn't really disrupt my workflow at all. So this is a net benefit for me, but this use case will not be earning these corps any amount of profit.
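I haven't read Hoarder's source, but the pattern it describes is easy to sketch against Ollama's local REST API; the model name, prompt, and JSON shape below are my own assumptions, not Hoarder's actual code:

```python
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def summarize_and_tag(page_text: str, model: str = "llama3") -> dict:
    """Ask a local model for a one-paragraph summary plus a handful of tags."""
    prompt = (
        "Summarize the following page in one paragraph, then suggest up to "
        "five short tags. Respond as JSON with keys 'summary' and 'tags'.\n\n"
        + page_text
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False, "format": "json"},
        timeout=120,
    )
    resp.raise_for_status()
    # Ollama returns the generated text in the "response" field.
    return json.loads(resp.json()["response"])

result = summarize_and_tag("Example page text about self-hosted bookmarking.")
print(result.get("summary"), result.get("tags"))
```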

On the other end, you have Google's Gemini, which now gives you an AI-generated answer to your queries. The point of this is to aggregate data from several sources within the search results and return it to you, saving you the time of having to look through several search results yourself. And like 90% of the time it actually does a great job. The problem lies with the goal, which is to save you from having to check individual sources, and with its reliability rate. If I google 100 things and Gemini answers 99 of them accurately but completely hallucinates the 100th, then all 100 times I have to check its sources and verify that what it said was correct. Which means I'm now back to just… you know… looking through the search results one by one, like I would have anyway without the AI.

So while AI is far from useless, it can't now, and maybe never will, be relied on for anything important, and that's where the money to be made is.

Even your manual search results may turn up incorrect sources, selection bias for what you want to see, heck, even AI-generated slop, so the AI-generated results are just another layer on top. Link-aggregating search engines are slowly becoming useless at this rate.

While that's true, the thing that stuck out to me is not even that the AI was misled by finding AI slop, or by somebody falsely asserting something. I googled something with a particular yes-or-no answer: "Does X technology use Y protocol?" The AI came back with "Yes it does, and here's how it uses it," and upon visiting the reference page for that answer, it was documentation for that technology explaining very clearly that X technology does NOT use Y protocol, and then going into detail on why it doesn't. So even when everything lines up and the answer is clear and unambiguous, the AI can give you an entirely fabricated answer.

What's really awful is that it seems like they've trained these LLMs to be "helpful", which means to say "yes" as much as possible. But, that's the case even when the true answer is "no".

I was searching for something recently. Most people with similar searches were trying to do X; I was trying to do Y, which differed in subtle but important ways. There are tons of resources out there showing how to do X, but none showing how to do Y. The "AI" answer gave me directions for doing Y by showing me the procedure for doing X, with certain parts changed so that they match Y instead. It doesn't work like that.

Like, imagine a recipe that not just uses sugar but that relies on key properties of sugar to work, something like caramel. Search for "how do I make caramel with stevia instead of sugar" and the AI gives you the recipe for making caramel with sugar, just with "stevia" replacing every mention of "sugar" in the original recipe. Absolutely useless, right? The correct answer would be "You can't do that, the properties are just too different." But, an LLM knows nothing, so it is happy just to substitute words in a recipe and be "helpful".

Ironically, Google might be accelerating its own downfall as it tries to copy the “market”, considering LLMs are just a hole in its pocket.

He probably saw that SoftBank and Masayoshi Son were heavily investing in it and figured it was dead.

I've been working on an internal project for my job, a quarterly report on the most bleeding-edge use cases of AI, and the stuff being achieved is genuinely really impressive.

So why is the AI at the top end *amazing* yet everything we use is a piece of literal shit?

The answer is the chatbot. If you have the technical nous to program machine learning tools, it can accomplish truly stunning processes at speeds not seen before.

If you don't know how to do, for example, a Fourier transform, you lack the skills to use the tools effectively. That's no one's fault; not everyone needs that knowledge. But it does explain the gap between promise and delivery. It can only help you do what you already know how to do, faster.

Same for coding: if you understand what your code does, it's a helpful tool for unsticking *part* of a problem, but it can't write the whole thing from scratch.

For coding it's also useful for doing the menial grunt work that's easy but just takes time.

You're not going to replace a senior dev with it, of course, but it's a great tool.

My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.

LLMs can be useful for translation between programming languages. I recently asked one for server code given client code in a different language, and the LLM-generated code was spot on!

I remain skeptical of using *solely* LLMs for this, but it might be relevant: DARPA is looking into their use for C-to-Rust translation. See the TRACTOR program.
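As a toy illustration of that workflow (again assuming a local Ollama endpoint and model as stand-ins for whatever tool you'd actually use, with the C snippet invented for the example), the whole trick is a well-framed prompt, and the output still needs human review, which is exactly why the skepticism is warranted:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumes a local Ollama instance

C_SNIPPET = """
int sum(const int *xs, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) total += xs[i];
    return total;
}
"""

prompt = (
    "Translate this C function into idiomatic, safe Rust. "
    "Return only the Rust code.\n\n" + C_SNIPPET
)
resp = requests.post(
    OLLAMA_URL,
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # a candidate translation, not a verified one
```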

Exactly - I find AI tools very useful and they save me quite a bit of time, but they're still tools. Better at some things than others, but the bottom line is that they're dependent on the person using them. Plus the more limited the problem scope, the better they can be.

Yes, but the problem is that a lot of these AI tools are very easy to use, but the people using them are often ill-equipped to judge the quality of the result. So you have people who are given a task to do, and they choose an AI tool to do it and then call it done, but the result is bad and they can't tell.

True, though this applies to most tools, no? For instance, I'm forced to sit through horrible presentations because someone was given a task, created a PowerPoint (badly), and gave a presentation (badly). I don't know if this is inherently a problem with AI...

> So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

Just that you call an LLM "AI" shows how unqualified you are to comment on the "successes".

Not this again... LLMs are a subset of ML, which is a subset of AI.

AI is very, very broad, and all of ML fits into it.

No, and if you label statistics as AI, you contribute to the destruction of civil rights by lying to people.

This is the issue with current public discourse, though. AI has become shorthand for the current GenAI hype cycle, meaning that for many, AI has become a subset of ML.

Deleted by author

I'm a researcher in ML, and LLMs absolutely fall under ML. "Learning" in the term "machine learning" just means fitting the parameters of a model; it's just an optimization problem. In the case of an LLM, this means fitting the parameters of the transformer.

A model doesn't have to be intelligent to fall under the umbrella of ML. Linear least squares is considered ML; in fact, it's probably the first thing you'll do if you take an ML course at a university. Decision trees, nearest neighbor classifiers, and linear models all are machine learning models, despite the fact that nobody would consider them to be intelligent.
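To make that concrete, here is about the smallest "machine learning" there is: ordinary linear least squares, with the data made up for illustration. The "learning" is nothing more than solving for the parameters:

```python
import numpy as np

# Synthetic data: y = Xw + noise, with w the parameters to be "learned".
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

# Fitting = solving the optimization problem min_w ||Xw - y||^2.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # close to [2.0, -1.0, 0.5]; no intelligence required
```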

LLMs are a type of machine learning. Input is broken into tokens, which are then fed through a type of neural network called a transformer model.

The models are trained with a process known as deep learning, which involves probabilistic analysis of unstructured data and eventually enables the model to recognize distinctions between pieces of content.

That's like textbook machine learning. What you said about interpreting sentiment isn't wrong, but it does so with machine learning algorithms.

LLMs are deep learning models that were developed off of multi-head attention/transformer layers. They are absolutely Machine Learning as they use a blend of supervised and unsupervised training (plus some reinforcement learning with some recent developments like DeepSeek).

What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. As long as you know the equivalent of undergrad statistics, calculus, and algebra, *anyone* can read them; you don't need a qualification, you could just Google each term you're unfamiliar with.

While I understand your objection to the nomenclature, in this particular context all major AI-production houses including those only using them as internal tools to achieve other outcomes (e.g. NVIDIA) count LLMs as part of their AI collateral.

The mechanism of machine learning based on training data, as used by LLMs, is at its core statistics without contextual understanding; the output is therefore only statistically predictable, not reliable.
Labeling this as "AI" is misleading at best, and directly undermines democracy and freedom in practice, because the impressively intelligent-*looking* output leads naive people to believe the software knows what it is talking about.

People who condone the use of the term "AI" for this kind of statistical approach are naive at best, snake oil vendors, or straight-out enemies of humanity.

Can you name a company who has produced an LLM that doesn't refer to it generally as part of "AI"?

Can you name a company that produces AI tools and doesn't have an LLM as part of its "AI" suite of tools?

How do those examples *not* fall into the category "snake oil vendor"?

what would they have to produce to not be snake oil?

YES

YES

FUCKING YES! THIS IS A WIN!

Hopefully they curtail their investments and stop wasting so much fucking power.

I think the best way I've heard it put is: "If we absolutely have to burn down a forest, I want warp drive out of it. Not a crappy Python app."

AI is burning a shit ton of energy and researchers’ time though!

That’s not the worst. It is burning billions for the companies with no signs of them ever becoming close to profitable.

You say this like it’s a bad thing?

Indeed, we just have to wait until venture capital realizes no one's getting their money back from this. It'll all crumble down like the world is ending.

Eh, the entirety of training GPT-4, plus the whole world using it for a year, turns out to be about 1% of the gasoline burnt by the USA every single day. It's barely a rounding error when it comes to energy usage.

R&D is always a money sink

It isn't R&D anymore if you're actively marketing it.

Uh... it used to be, and should be. But the entire industry has embraced treating production as test now. We sell alpha-release games as mainstream releases. Microsoft fired QC long ago. They push out world-breaking updates every other month.

And people have forked over their money with smiles.

> Microsoft fired QC long ago.

I can't wait until my cousin learns about this, he'll be so surprised.

I'd tell him but he's at work. At Microsoft, in quality control.

Make sure to also tell him he's doing a shit job!

He's probably been fired long ago, but due to non-existent QC, he was never notified.

Especially when the product is garbage lmao

Joseph Weizenbaum: "No shit? For realsies?"

Makes sense that the company that just announced its qubit advancement would be disparaging the only "advanced" thing other companies have shown in the last 5 years.

That’s standard for emerging technologies. They tend to be loss leaders for quite a long period in the early years.

It’s really weird that so many people gravitate to anything even remotely critical of AI, regardless of context or even accuracy. I don’t really understand the aggressive need for so many people to see it fail.

For me personally, it's because it's been so aggressively shoved in my face in every context. I never asked for it, and I can't escape it. It actively gets in my way at work (GitHub Copilot) and has already re-enabled itself at least once. I'd be much happier to just let it exist if it would do the same for me.

Because there have already been multiple AI bubbles (e.g., ELIZA; I had a lot of conversations with FREUD running on an Apple IIe). It's also been falsely presented as basically "AGI."

AI models trained to help doctors recognize cancer cells - great, awesome.

AI models used as the default research tool for every subject - very, very, very bad. It's also so forced, and because it's forced, I routinely see that it has generated absolute, misleading horseshit in response to my research queries. But your average Joe will take that on faith, and your high schooler will grow up thinking that Columbus discovered Colombia or something.

I just can't see AI tools like ChatGPT ever being profitable. It's a neat little thing that has flaws but generally works well, but I'm just putzing around in the free version. There's no dollar amount that could be ascribed to the service that it provides that I would be willing to pay, and I think OpenAI has their sights set way too high with the talk of $200/month subscriptions for their top of the line product.

For a lot of years, computers added no measurable productivity improvements. They sure revolutionized the way things work in all segments of society for something that doesn’t increase productivity.

AI is an inflating bubble: excessive spending, unclear use cases. But it won't take long for the pop, clearing out the failures, making the successful use cases clearer, and letting the winning approaches emerge. This is basically the definition of capitalism.

What time span are you referring to when you say "for a lot of years"?

Vague memories of many articles over much of my adult life decrying the costs of whatever the current computing trend was as being higher than the benefits.

And I believe it; it's technically true. There seems to be a pattern of bubbles where everyone jumps on the new hot thing and spends way too much money on it. It's counterproductive, right up until the bubble pops, leaving the transformative successes behind.

I believe it was also a long-term thing with electronic forms and printers. As long as you were just adding steps to existing business processes, you didn't see productivity gains. It took many years for businesses to reinvent the way they worked before they really saw the productivity gains.

Deleted by author

Comments from other communities

How have I seen this article posted on here like eleven times

crossposts?

Cross post? That sounds like a Jesusy sort of thing

I believe them: if anybody knows mediocre worthless software, it's Microsoft.

Honestly, I expect this AI bubble to implode with much more devastation than the dot-com bubble.

And it's not even that AI is useless. Like the internet (during dot-com) it will definitely have good uses.

But (a) those uses will take many years to crystallize and mature and (b) the early capital-intensive movers have a big disadvantage and most of them don't have a feasible path to recoup the money invested into them.

This is why the AI club is licking Trump's boot. They will get the federal government to bail them out by buying overpriced AI products and services and taking over worthless investments "in the interest of national security".

American taxpayers are going to foot this bill and they will not like it when they start seeing the effects.

Agreed. Except for the taxpayer reaction part. American taxpayers who then vote republiQan never seem to give a flying shit what happens to all the money.

As long as it doesn’t go to “those people”, they’re fine with the spending.

The motto of the American voter is "Me First." If they get theirs, the government can burn the money for all they care.

If only that were true. That is at least a rational response that can be worked around.

But COVID told us just how many conservatives will laugh on their deathbed if they've been convinced that what's happening to them 'owns the libs'.

It's a cult and so reality is whatever the cult leaders say it is, even if that reality is harming them directly.

I don't understand your point.

The effects I mean are things like inflation and cuts to Social Security.

If you mean that Republicans will find a way to rationalize blaming the libs - I agree that will happen.

But they won't actually like the inflation and the lower Social Security.

More the latter about rationalization. In the real world such things would affect electability. But in our 24/7 Propaganda Turd Circus it will have no effect.

“I wasted a shit-ton of money and boiled the earth for epic AI profit and all I got was this lousy podcast”

The *highest* population growth rate *in recorded human history* was sometime between 1965 and 1970, when the population was growing at about 2.1% annually. It's down around 1.1% now. 10% hasn't been realistic since we went from 10 humans to 11 humans.

(Note: I go on research benders. This particular rabbit hole (pun intended) became really interesting from a terrifying sci-fi perspective; kind of like a reverse *Children of Men*. As I wrote it up, it started to feel like an apocalyptic xkcd *What If.*)

So, think about the sheer numbers: for a 10% growth rate, we'd need more than *eight hundred million* births per year. There are only 1.9 billion women of childbearing age on Earth, which means that a little under *half* of *all women between 15 and 49* would need to be pregnant *every single year* to make that target (a little more than that to account for women who aren't able to be pregnant for whatever reason, a little less to account for multiple births; let's just say those wash out).

Let's imagine that this happened this year. Right now, we have about 132 million births annually. That would mean that we'd need about seven times the number of maternity wards, fully staffed. Five years after this unprecedented baby boom, we'd need to begin increasing the number of classrooms worldwide by a factor of 7 as well, which would mean that seven times the number of college students would have to go into education this year. We'd need to massively ramp up food production, probably starting sometime in the late 90s. We'd need to massively scale up infrastructure and housing on a Chinese scale. In 2050, our annual birth rate would equal our 2025 population, meaning we'd be adding an additional *everyone who's here right now* every single year.

Interestingly, this would cause a precipitous and sustained spike in our *death* rate, since 0.2% of women worldwide die during childbirth. This means 1.6 million women dying in childbirth every year between 2025 and 2040 (which almost exactly balances out the number of women coming to childbearing age during those years to replace them), or the first couple months of COVID happening perpetually, but for women in childbearing. Every woman would have, on average, seventeen children by age 49, and if you know 33 women under the age of 49, one of them will die in childbirth.

In 2040, we'd have to start ramping up production *again* as the ten percent growth rate would begin forcing (because that's obviously the only way for it to work) half of the 400 million women born in 2025 to start having their own children. Fifteen years later, do it again. Fifteen years later, do it one more time.

But you can probably stop worrying about it by that point. Studies are incredibly mixed on the total carrying capacity of Earth, ranging from 2 billion to 1.024 trillion, but a 10% growth rate would get us to even the most gigantic estimates within 50 years.

At some point, the pendulum begins swinging the other way; whether due to famine, water shortages, pandemics, climate change, or any number of other factors, there will be a huge increase to the global death rate once again. Our population would stabilize at our global carrying capacity; and from that point on, Earth's line can never go up again without the line going down first.

It's so patently ludicrous. Just total nonsense.

Edit 1: About halfway through this research bender, I realized that he was probably talking about the economy, not population. But I was having too much fun to stop. I justified it with the fact that there's really no way to sustain 10% annual economic growth without a 10% annual birth rate. You just have to have those workers, homeowners, economic entities, businesspeople, etc. being born and joining the economy.

But then the OP pointed me toward the actual quote, which *honestly makes it sound like he really was talking about population.*

Edit 2: Also, this wouldn't solve any economic problems! In fact, it would cause more. With half of all childbearing women pregnant every year, plus recovery time, that means *literally every woman would have to be pregnant every other year*.

That would utterly torpedo our economy. We'd be basically losing a quarter of the workforce, all at once, and we'd never get them back.
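For anyone who wants to check the math above, here's a quick sanity check in Python, assuming a 2025 world population of roughly 8.2 billion and the figures quoted in the comment:

```python
import math

population = 8.2e9   # rough 2025 world population
growth = 0.10        # the hypothetical 10% annual growth rate

births_needed = population * growth
print(f"{births_needed / 1e6:.0f} million births per year")              # ~820 million

women_15_49 = 1.9e9
print(f"{births_needed / women_15_49:.0%} of women pregnant each year")  # ~43%

maternal_mortality = 0.002  # ~0.2% of births end in the mother's death
print(f"{births_needed * maternal_mortality / 1e6:.1f}M maternal deaths/year")  # ~1.6

# Years until even the most generous carrying-capacity estimate is hit:
years = math.log(1.024e12 / population) / math.log(1 + growth)
print(f"~{years:.0f} years to 1.024 trillion people")                    # ~51 years
```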

I think they meant the economy, not population, but still, quality comment right here.

People who don't do math are doomed to talk nonsense. And you just used math to showcase the stupidity. Bravo, sir.

One of my pet peeves is all the people concerned about the birth rate.

We are at a time in the history of the planet where there have never existed as many homo sapiens as there are today, and that record will get broken every day for the next 20-50 years.

Of all the times to want a higher birth rate since we have existed as a species, this just ain't the time where it makes any kind of logical sense.

> I think they meant the economy, not population,

About halfway through this research bender, I realized that. But I was having too much fun to stop. I justified it with the fact that there's really no way to sustain 10% annual economic growth without a 10% annual birth rate. You just have to have those workers, homeowners, economic entities, businesspeople, etc. being born and joining the economy.

> but still, quality comment right here.

> People who don't do math are doomed to talk nonsense. And you just used math to showcase the stupidity. Bravo, sir.

You're very kind. Thank you.

> One of my pet peeves is all the people concerned about the birth rate.
>
> We are at a time in the history of the planet where there have never existed as many homo sapiens as there are today, and that record will get broken every day for the next 20-50 years.

That's part and parcel of our remarkably low *death* rate, too. In fact, our death rate is so low that our replacement rate could actually go below 2 and the population would still keep growing for a few years. That's unprecedented in human history.

> Of all the times to want a higher birth rate since we have existed as a species, this just ain't the time where it makes any kind of logical sense.

It definitely isn't our biggest problem as a species. Either way, honestly, we don't need to try to make it bigger or smaller.

I began to wonder where that quote was; it's here:

"The real benchmark is: the world growing at 10 percent," he added. "Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."

See, I wanted to give him the benefit of the doubt and assume that he meant economic growth (though that sort of requires a commensurate birth rate growth, but we can leave that aside for now), but this kind of makes that feel like an unmerited concession.

Of course, with IVF you could probably do it with about a third as many women. Just thinking like a manager here.

Hmm. Maybe. Though then you start pushing that maternal mortality number up; and while, of course they don't actually care about the lives of these people, they'd probably care that they're depleting their "breeding stock" (ew).

I'm still gobsmacked by that number, though. 800 million births per year. That's half of all childbearing women pregnant every year. One thing I didn't think about earlier is that, with recovery time, that means *literally every woman would have to be pregnant every other year*.

That would utterly torpedo our economy. We'd be basically losing a quarter of the workforce, all at once, and we'd never get them back.