Do moguls like Masayoshi Son, Sam Altman, and Larry Ellison truly believe Artificial Super Intelligence is imminent?
As the polycrisis lurches into a brand new year, let us take a few moments to question the motivations of some of its leading actors and their stated belief systems.
Do They Really Believe in ASI?
This quote from Shanaka Anslem Perera’s Substack got me thinking about the (possibly delusional) belief system that’s driving the multi-trillion dollar push for Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI):
Either the gods are being built in the Texas desert, or the greatest financial delusion in human history is unfolding in real time while sophisticated observers debate quarterly earnings.
It’s one thing to try to understand what’s happening on our planet using observation and reason, but the degree of difficulty increases considerably when one realizes that many leading actors are driven by ideologies and even eschatologies that are at best supra-rational and at worst completely insane nonsense.
World’s Dumbest Money Believes
For a specific example, let’s return to Perera and his description of the stated beliefs of SoftBank CEO Masayoshi Son:
In the June 2024 transcript of SoftBank’s Annual General Meeting, buried on page forty-seven between a question about dividend policy and a disclosure about cross-shareholding arrangements, Masayoshi Son stopped being a businessman and became something else entirely.
“SoftBank was founded for what purpose?” he asked the assembled shareholders, most of whom had come expecting guidance on quarterly earnings and capital allocation strategy. “For what purpose was Masa Son born?”
The question was not rhetorical.
“It may sound strange,” he continued, his voice carrying the weight of a man who had spent four decades building toward a single moment of revelation, “but I think I was born to realize ASI. I am super serious about it.”
The room didn’t gasp. The financial press mentioned the comment in passing and moved on to earnings estimates. The sophisticated analysts covering SoftBank stock dismissed it as another grandiose proclamation from a man whose Vision Fund had become synonymous with venture capital excess, a man who had incinerated forty billion dollars on WeWork and whose investment judgment had been publicly questioned by shareholders, regulators, and journalists for years.
Former SoftBank exec Alok Sama put Son’s investment strategy in context for The Next Big Idea Book Club in late 2024:
In the mid-nineties, Masa foresaw the internet revolution. At the peak of the 2000 tech bubble, he owned 8 percent of all internet stocks, briefly making him the richest person in the world. He also had an agreement to buy a third of a small online bookseller called Amazon, but he ran short of cash. Later, he bought a struggling cell phone company five times the size of his SoftBank, based on a vision of connected smartphones, before the iPhone existed. He also made a $20 million bet on a Chinese schoolteacher with “strong and shining eyes” and turned it into the greatest venture investment of all time: a $100 billion stake in Alibaba. His audacious $32 billion acquisition of chip designer Arm Holdings was a bet on a tomorrow of “connected and intelligent things” and is now worth almost $200 billion.
Masa’s unique brand of crazy is a canny competitive strategy. When Masa Son’s capital cannon gets behind a venture, the competition frequently folds, as Uber did in China and Southeast Asia after SoftBank invested in its local rivals. Because, as Masa memorably says, in a battle between a smart man and a crazy man, the crazy man always wins.
Sama also details Son’s ASI beliefs:
Ten years before the launch of ChatGPT, Masa Son would talk to me obsessively about AI and the singularity. At the time, this was a largely theoretical future event in which machine intelligence might surpass human intelligence. While Elon Musk’s Neuralink project sought a Vulcan mind-meld with machines in order to control them, Masa put his faith in companion humanoids with “emotional intelligence.” And while Musk seeks to colonize Mars in preparation for doomsday, Masa remains evangelical about his faith in a benign AI.
And Son is putting his money where his mouth is.
SoftBank Fulfills $40 Billion OpenAI Pledge
In March, Son’s SoftBank promised to invest $40 billion in OpenAI at a $300 billion valuation.
Despite the skepticism of Ed Zitron, SoftBank has closed the deal.
SoftBank has completed its $40 billion investment commitment to OpenAI, sources told CNBC’s David Faber.
The Japanese investment giant sent over a final $22 billion to $22.5 billion last week, according to sources familiar with the matter, who asked not to be named in order to discuss details of the transaction.
SoftBank had previously invested $7.5 billion in the ChatGPT maker and syndicated another $11 billion with co-investors, the Japanese conglomerate confirmed in a press release, with the final aggregate commitment at $41 billion. The investment takes SoftBank’s stake in the company to around 11%.
CNBC reports that SoftBank had to dump other investments to come up with the cash:
Last month, SoftBank liquidated its entire $5.8 billion stake in leading AI beneficiary Nvidia.
A different source familiar with the move to sell the stake told CNBC at the time that the sale, combined with other cash sources, would help fund its OpenAI investment.
SoftBank dumped its entire Nvidia stake for OpenAI. Someone must be a smooth talker.
Sam Altman: Artificial Super Intelligence Messiah
That someone is OpenAI CEO Sam Altman. Here is a recent example of his patter via Politico:
I expect, though, the trajectory of the potential progress of AI to remain extremely steep. We’ve seen just in the two years or three years since ChatGPT has launched, how much more capable the models have become. And I see no sign of that slowing down. I think in another couple of years, it will become very plausible for AI to make, for example, scientific discoveries that humans can’t make on their own. To me, that’ll start to feel like something we could properly call superintelligence.
…
One of the things that I’ve learned consistently is, although we can say the ramp will be very steep, it’s difficult to be very precise that, you know, it’ll happen this month or this year. But I would certainly say by the end of this decade, so, by 2030, if we don’t have models that are extraordinarily capable and do things that we ourselves can’t do, I’d be very surprised.
…
I’ve heard many people describe many different versions of what the relationship between an AI and humanity will be. The one that has always been my favorite is: My co-founder, Ilya Sutskever, once said that he hoped that the way that an AGI would treat humanity, or all AGIs would treat humanity, is like a loving parent. And given the way you asked that question, it came to mind. I think it’s a very beautiful framing. That said, I think when we ask that question at all, we’re sort of anthropomorphizing AGI. And what this will be is a tool that is enormously capable. And even if it has no intentionality, by asking it to do something, there can be side effects, consequences we don’t understand. And so it is very important that we align it to human values. But we get to align this tool to human values and I don’t think it’ll treat humans like ants.
Well, that’s very reassuring.
Really, if I had $40 billion I’d be sorely tempted to light it all on fire, I mean, invest it all with the man Elon Musk has called Scam Altman.
Just kidding.
One SoftBank investment that Ed Zitron was right to be skeptical about involved Sam Altman as well as Oracle CEO Larry Ellison.
Is Stargate Building God in the Texas Desert?
POTUS Trump kicked off 2025 with a press conference (full video) to announce a massive American AI infrastructure project, per CNN:
OpenAI CEO Sam Altman, SoftBank CEO Masayoshi Son and Oracle Chairman Larry Ellison appeared at the White House Tuesday afternoon alongside President Donald Trump to announce the company, which Trump called the “largest AI infrastructure project in history.”
The companies will invest $100 billion in the project to start, with plans to pour up to $500 billion into Stargate in the coming years. The project is expected to create 100,000 US jobs, Trump said.
Stargate will build “the physical and virtual infrastructure to power the next generation of AI,” including data centers around the country, Trump said. Ellison said the group’s first, 1 million-square-foot data project is already under construction in Texas.
…
“I think this will be the most important project of this era,” Altman said on Tuesday. “We wouldn’t be able to do this without you, Mr. President.”
So far, so good.
Or Maybe Not?
By July, a report in The Wall Street Journal was pouring cold water over the deal:
A $500 billion effort unveiled at the White House to supercharge the U.S.’s artificial-intelligence ambitions has struggled to get off the ground and has sharply scaled back its near-term plans.
Six months after Japanese billionaire Masayoshi Son stood shoulder to shoulder with Sam Altman and President Trump to announce the Stargate project, the newly formed company charged with making it happen has yet to complete a single deal for a data center.
Son’s SoftBank and Altman’s OpenAI, which jointly lead Stargate, have been at odds over crucial terms of the partnership, including where to build the sites, according to people familiar with the matter.
While the companies pledged at the January announcement to invest $100 billion “immediately,” the project is now setting the more modest goal of building a small data center by the end of this year, likely in Ohio, the people said.
The same WSJ piece revealed the pop culture inspiration behind Altman’s vision, or at least the branding of it:
Altman has used the Stargate name, shared with a 1994 Kurt Russell film about aliens who teleport to ancient Egypt, on projects that aren’t being financed by the partnership between OpenAI and SoftBank. The trademark to Stargate is held by SoftBank, according to public filings.
For instance, OpenAI refers to a data center in Abilene, Texas, and another it agreed in March to use in Denton, Texas, as part of Stargate even though they are being done without SoftBank, some of the people familiar with the matter said.
Let’s let Ed Zitron explain the sleight of hand behind this bait-and-switch:
I’ve confirmed that SoftBank never, ever had any involvement with the site in Abilene, Texas. It didn’t fund it, it didn’t build it, it didn’t choose the site and, in fact, doesn’t appear to have anything to do with any data center that OpenAI uses. The data center many, many reporters have called “Stargate” has nothing to do with the “Stargate data center project.” Any reports suggesting otherwise are wrong, and I believe that this is a conscious attempt at misleading the public by OpenAI and SoftBank.
…This is an astonishing, and egregious, act of misinformation on the part of Sam Altman and OpenAI. By my count, no fewer than 15 different stories attribute the Abilene, Texas data center to the Stargate project, despite the fact that SoftBank was never and has never been involved. One would forgive anybody who got this wrong, because OpenAI itself engaged in the deliberate deception in its own announcement of the Stargate Project.
…
You can weasel-word all you want about how nobody has directly reported that SoftBank was or was not part of Abilene. This is a deliberate, intentional deception, perpetrated by OpenAI and SoftBank, who deliberately misled both the public and the press as a means of keeping up the appearance that SoftBank was deeply involved in (and financially obligated to) the Abilene site.
Based on reporting that existed at the time but was never drawn together, it appears that Abilene was earmarked by Microsoft for OpenAI’s use as early as July 2024, and never involved SoftBank in any way, shape or form. The “Stargate” Project, as reported, was over six months old when it was announced in January 2025, and there were no additional sites added other than Abilene.
There’s definitely a ‘who’s zooming who’ aspect to at least the Stargate deal, but let’s circle back to the January presidential press conference for some insight into how Sam “Scam” Altman may have hooked Oracle CEO Larry Ellison.
AI Cure for Cancer?
From the transcript of the January 21 Stargate presser:
Sam Altman: I believe that as this technology progresses we will see diseases get cured at an unprecedented rate.
We will be amazed at how quickly we’re curing this cancer and that one and heart disease. And what this will do for the ability to deliver very high quality healthcare, the costs, but really to cure the diseases at a rapid, rapid rate, I think will be among the most important things this technology does.
Larry Ellison: One of the most exciting things we’re working on, again, using the tools that Sam and Masa are providing, is a cancer vaccine.
It’s very interesting. It turns out, I’ll be quick, all of our cancers, cancer tumors, little fragments of those tumors float around in your blood. So you can do early cancer detection. You can do early cancer detection with a blood test. And using AI to look at the blood test, you can find the cancers that are actually seriously threatening the person.
But wait, surely there’s another angle.
Larry Ellison’s Big Plans
Audrey of The Drey File provides some psychological insight into the players here:
Something that’s important to know about Larry Ellison is that he’s obsessed with cancer, like quite literally obsessed with it. In fact, he’s so obsessed with it that some might say that if he actually wanted to cure it, we wouldn’t still be here talking about it today.
Drey gets into a sidebar about a 1970s DIA program (which she calls a CIA program) that had been named Stargate, but then she gets back to talking about Larry Ellison, AI, and cancer.
There is no way that they’re actually looking to cure cancer. I mean, we all know this, right? And if you don’t know this, then grow up. I don’t know what to tell you, because it’s a trillion dollar industry.
Trillion dollar industries are not problems for governments. Plus, if cancer were to go away, then they’d lose something far more valuable than those profits.
They’d lose their number one pitch for getting away with literally anything. The cancer pitch has been used for decades, wrapped in packages with all very different motives.
…
Bottom line is that cancer gets you in the door. Cancer gets you the regulatory exemptions. Cancer gets you access to intimate data that you shouldn’t have any access to. And the cure may never come, but the data infrastructure becomes far more permanent. Okay, so if Stargate LLC isn’t about curing cancer, then what’s it all about?
…
In 2023, Larry Ellison personally invested in a $23 million company called Imagene AI. Not using Oracle’s money; his own money. And Imagene AI was founded by, wait for it, officers of the IDF’s Unit 8200.
…
They’ve been using this, allegedly, in Israel and Gaza. And what they do is they extract this genomic data from liquid biopsies and they analyze blood samples for fragments of DNA.
…
But in order to find these cancer markers, they need to have a sequence of your entire genome. And once your genomic data is digitized, then it can be stored, copied, analyzed, and sold. You know, the usual. But that still doesn’t answer what Larry Ellison is gonna do with this genomic data once Imagene extracts it. Well, remember Oracle Health?
The company that Larry Ellison owns, and that controls 9.5 million patient healthcare records through the Cerner acquisition? That Oracle Health?
And those records include your medical history, your treatments, your diagnoses, all of that. And if you’ve had any genomic testing done in a hospital that uses Oracle systems, which is most major hospitals in the United States, that genomic data is sitting in Oracle’s databases.
So now you have this theoretical structure, right, of Imagene being able to extract this fresh genomic data from your blood samples, Oracle Health storing existing genomic data from millions of patients, and now Stargate’s 10 gigawatts of AI compute infrastructure just ready to process it all.
So maybe Larry Ellison still has his feet on the ground, at least in the sense that he’s putting his hands around our collective throats.
But lest you think he’s not a dreamer, I’ll wrap up with some of Ellison’s comments and claims about Artificial Super Intelligence.
“About 18 months ago, when we began to fully grasp what the people at OpenAI and ChatGPT had achieved, a level of artificial intelligence that could actually advance human thinking, with neural networks that could answer questions that human brains would struggle with, I made a speech in which I asked, ‘Is artificial intelligence the most important discovery in the history of humankind? Maybe. And we’ll soon find out,’” Ellison said at the recent World Governments Summit.
“Today, 18 months later, I think it’s very, very clear: AI is a much bigger deal than the Industrial Revolution, electricity, and everything that’s come before,” Ellison said in a video conversation with former UK Prime Minister Tony Blair.
“We’ll soon have not only artificial intelligence but also, much sooner than expected, artificial general intelligence and then, in the not-too-distant future, artificial super intelligence.”
“We will have incredible reasoning power, the ability to discover things that would elude the human mind, because this next generation of AI is going to reason much faster and discover insights much faster, whether it’s being able to diagnose cancer in early stages or design treatments, custom-design vaccines for those cancers that are customized to your genomics and your specific tumor antigens. So in medicine, we’ll see revolutions in diagnostics and in therapeutics.
“We’ve started a project where we’re gathering satellite imagery from Kenya to California. And we can predict crop yields, so we could actually tell a farmer or an entire country whether they’re going to exceed what they expect from this year’s harvest or they’re going to have a shortfall and need to start preparing for that. We can tell individual farmers that part of their field needs more irrigation, or part of their field needs more fertilizer. So we can improve yields on an individual farm, and also on a much larger scale, where we can improve yields across countries and even across entire regions of the world.
“I can go on and on, but AI will fundamentally change our lives in medicine, agriculture, and robotics across the board.”
And yes, this is the same Larry Ellison who is building a media empire for his nepo-baby son, David, which we’ve covered previously.