This Is Imaginal Disk
"A fool and his money are one big party" - Robert Kiyosaki
Say hello, it's you, the purest you
The next stage, the next phase is here
Instinctive, impatient, impossible
In memory, mirror, and membrane
“True Blue Interlude”, Magdalena Bay
AI, short for “artificial intelligence” in case you’ve been living under a rock, is the hottest story of the last few years. It’s a major technology that seems poised to, at least, drastically change how we approach work and education. AI’s critics think it’s a stochastic parrot with no added value that’s about to crash the economy, while its proponents think AI will enable radical abundance the likes of which we’ve never seen. So, who’s right?
Also, related to the title, I very highly recommend Magdalena Bay’s Imaginal Disk. And these two edits of The Substance using songs from the album because of course.
It’s a nice room, I just didn’t think it would be Chinese
As good a starting point as any is how AI, like, works. Basically, the starting point is that, for a long time, computer scientists have been turning words into vectors: mathematical objects with a position and a direction in a high-dimensional space. By turning the word “big” into a vector, you can say it sits close to “large”, “huge”, “enormous”, “the runtime of Killers of the Flower Moon”, etc, and also close to “small”, “tiny”, “Andrew Tate’s penis”, and so on, but pointing in a different direction. So this means that, with math, you can easily group words by meaning, context, magnitude, and origin.
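To make that concrete, here’s a toy sketch in Python. The three-dimensional vectors below are made up for illustration (real embeddings have thousands of learned dimensions); the standard way to compare word vectors is the cosine of the angle between them:

```python
# Toy word embeddings: hand-picked 3-dimensional vectors.
# Real models learn these values from data.
import math

vectors = {
    "big":   [0.9, 0.8, 0.1],
    "large": [0.85, 0.75, 0.15],
    "small": [-0.8, -0.7, 0.2],
}

def cosine_similarity(a, b):
    # Angle-based similarity: near 1.0 means "pointing the same way",
    # negative means "pointing in opposite directions".
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["big"], vectors["large"]))  # close to 1
print(cosine_similarity(vectors["big"], vectors["small"]))  # negative
```

Same location, same direction: synonyms. Same location, opposite direction: antonyms.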
A Large Language Model, or LLM, is basically a series of filters (called “transformers”) put on a sentence to make sense of it. So for example if you say “Maia called her mom to talk about her blog”, it first puts all the nouns and verbs through a first transformer, then figures out the relations between different types of words on a second one, and then answers whatever questions you ask on a third. This is done by modifying “hidden” vectors that contain information on each word - so, for example, if you have a 20 sentence story and the final line is “the killer is the person with a blog”, the LLM would be able to answer “it was Maia” because it stored that information somewhere along the text. So you could have an LLM with a vector for “Maia” that has qualities like “has a blog”, “26”, “woman”, “very attractive”, “smart”, “likes the 2022 movie TÁR too much”, etc. These words, if they appear again in the story, would also have a distinct value for “Maia”. A bit like how Nikita Khrushchev had a list of every joke he made around Stalin during their Politburo days and wrote down Stalin’s impressions - so if he liked a joke about farmers and didn’t like a joke about Comrade Zinoviev, he could remember it later when either of those topics came up or when he had to say anything. This is known as an “attention mechanism”: it matches words that share relevant context, lets them exchange information, and predicts the next word based on the combined information of all relevant or related words.
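The gist of attention fits in a few lines of Python. This is a minimal, single-head sketch with made-up two-dimensional vectors (real transformers do this with thousands of dimensions and dozens of heads at once): score a query word against every key, turn the scores into weights, and mix the value vectors accordingly.

```python
# A minimal sketch of scaled dot-product attention, in pure Python.
# All vectors here are invented for illustration.
import math

def attention(query, keys, values):
    # 1. Score the query against every key (scaled dot products).
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # 2. Softmax: turn scores into positive weights that sum to 1.
    exps = [math.exp(s - max(scores)) for s in scores]
    weights = [e / sum(exps) for e in exps]
    # 3. Mix the value vectors, weighted by relevance.
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]

# The query matches the second key best, so the second value dominates.
query  = [1.0, 0.0]
keys   = [[0.0, 1.0], [1.0, 0.0], [-1.0, 0.0]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
print(attention(query, keys, values))
```

The “Khrushchev’s joke list” part is step 3: information from the most relevant words gets blended into the current word’s vector.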
The most powerful version of OpenAI’s GPT-3 (so, two-ish models ago) used word vectors with 12,288 dimensions, that is, it stored 12,288 types of information on everything you asked it. It also had 96 “attention heads” per each of its 96 layers, so it had 9,216 attention heads in total. Each layer then feeds into a “hidden layer” with 49,152 “neurons” (basically math functions that sum whatever is funneled into them in specific ways), and then into 12,288 output neurons. So put together, there are 12,288 input values that get thrown around through 49,152 neurons and transformed into usable output by 12,288 more, which gives each feed-forward layer about 1.2 billion parameters, or 116 billion across the layers - the bulk of GPT-3’s 175 billion total parameters. The way this works is that if you ask it what the capital of France is, it would look for data it has on a country that’s close to France (say, Germany), then look for a word related to both “capital” and “Germany” (Berlin), and then apply the same “math” to France, that is, Berlin - Germany + France = … Paris!
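The Berlin - Germany + France trick can be demonstrated with a toy example. The embeddings below are hand-picked so the arithmetic works out; real models learn them from data:

```python
# Word-vector analogy arithmetic with tiny made-up embeddings.
import math

vectors = {
    "Germany": [1.0, 0.0, 0.3],
    "Berlin":  [1.0, 1.0, 0.3],
    "France":  [0.0, 0.0, 0.9],
    "Paris":   [0.0, 1.0, 0.9],
    "Madrid":  [-0.5, 1.0, 0.1],
}

def nearest(target, exclude):
    # Find the known word whose vector points most nearly the same way.
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a))
                      * math.sqrt(sum(x * x for x in b)))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cos(vectors[w], target))

# Berlin - Germany + France = ?
target = [b - g + f for b, g, f in
          zip(vectors["Berlin"], vectors["Germany"], vectors["France"])]
print(nearest(target, exclude={"Berlin", "Germany", "France"}))  # Paris
```

Subtracting “Germany” strips out the Germany-ness and leaves behind a pure “capital of” direction, which you can add to any country.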
Of course, it could also say that the word is Marseille, or Madrid, which makes it important to have enough training data - big datasets that humans break down and feed to the model to help it have pre-answered questions. The deep learning approach behind LLMs was honed in large part on image recognition - Google wanted algorithms that could tell what an image was to improve Google Image Search (which, fun fact, was invented so people could look at pictures of Jennifer Lopez’s Versace dress that one time) - and the way you’d train something like that is by showing it what an image of “boob” was and what an image of “butt” was and then showing it other images and asking it what each was. But this is a very complicated system - GPT-3 needed 300 billion trillion (that is, 300 sextillion, a 3 followed by 23 zeros) calculations to be trained, because it had access to a corpus of 500 billion words (as opposed to the roughly 100 million - that is, 5,000 times fewer - that the average human child has heard by the age of 10). These calculations are very complicated and they determine the, also very complicated, ways AI works. The obvious question here is whether AI actually understands what it’s saying or if it’s just repeating whatever a bunch of deterministic rules spit out, without rhyme or reason (the “stochastic parrot” model), which I have to say is quite literally the Chinese Room thought experiment, which still pisses off philosophers to this day. But I digress.
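What “training” actually means, stripped to the bone, is: make a prediction, measure the error, nudge the parameters, repeat billions of times. Here’s a one-parameter toy version (fitting y = 2x by gradient descent); GPT-3 does the same thing in spirit with 175 billion parameters:

```python
# Toy "training loop": learn the weight w in y = w * x from examples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relationship: y = 2x
w = 0.0              # start knowing nothing
learning_rate = 0.01
steps = 0

for _ in range(1000):
    for x, y in data:
        error = w * x - y              # how wrong is the prediction?
        w -= learning_rate * error * x  # nudge w to be less wrong
        steps += 3                      # multiply, subtract, update
print(w, steps)  # w ends up very close to 2.0 after 9,000 operations
```

Now scale the three operations per nudge up to 175 billion parameters over 500 billion words, and the “300 sextillion calculations” figure stops sounding absurd.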
So, to train an LLM, you need to do a lot of math - billions of trillions of math operations, basically. OpenAI, the owners and creators of ChatGPT, claim that how good their models are goes up “with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude”. What this means is that the more calculations a model does, the better it gets at language tasks - but only if you also increase the amount of training data, which requires increasing the amount of computing power. Here’s where the chips come in: we’ve all seen what a chip looks like, a green chunk of plastic with a bunch of little squares embedded in it. These squares contain “cores”, groups of transistors (little circuits that send signals) that do math. A regular computer, like the laptop I’m typing this on, has a CPU, which has a handful of cores that mostly work sequentially, one operation after another. They can do a lot of things (run videos, Microsoft Excel, Minecraft, power my blog, keep the 5000 tabs on my US vs China post reading list open indefinitely), but they’re not very good at any one thing. AI, on the other hand, uses very specific chips called GPUs, or graphics processing units, which were designed to beef up image generation - and because of how computers work with images, that means they can do a lot of simple math at once. The chips AI uses are very big, because they have thousands of cores, and those cores run in parallel, not sequentially. Chipmakers make two types: training chips (see above), and inference chips (used to actually run the model and spit stuff out); training, as mentioned above, is just much harder to do because you need all the background data. The AI companies put together warehouses with all the computers running their chips (called data centers), and they take a lot of energy to run; and, as anyone who has had a laptop on their lap for more than 30 seconds can tell you, they get very hot very fast, so they’re cooled down with water.
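The reason all that math parallelizes so well: the core operation of an LLM is matrix multiplication, and every cell of the output can be computed independently of every other cell. A sketch:

```python
# Why GPUs help: a matrix multiply breaks into many independent dot
# products. A GPU computes thousands of them at once; a CPU core
# grinds through them one after another.
def matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    # Each output cell depends only on one row of A and one column of B,
    # so all n * m cells could be computed simultaneously.
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```

Python runs this loop sequentially; a GPU would hand each output cell to a different core, which is the entire trick.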
Basically all of the environmental harm from AI comes from its energy use (which, itself, comes from the fact that most countries have very dirty energy grids), and while water use may be a problem in some specific contexts, it’s mostly a misleading concern.
The market for chips is pretty complicated, but there are more or less four steps: designing chips, making the equipment to make the chips, making the chips themselves, and running the data centers. The big player in design is Nvidia, one of the biggest new companies of the 2020s; the equipment comes from ASML; and the fabrication is mostly TSMC (the one in Taiwan whose factories keep being taken out by earthquakes) - each of those three holds about 90% of its respective market. The compute side is less concentrated, with Amazon Web Services being one of the leading players but Microsoft and Google Cloud having roughly equal shares - though that data comes from before Microsoft reached a Shylock-esque deal with OpenAI where they provided a lot of compute in exchange for not a pound, but the entire body mass index of the company’s flesh. The TL;DR here is that Microsoft is building gigantic supercomputers and data centers for OpenAI in exchange for exclusivity, a gigantic share of OpenAI’s profits until Microsoft breaks even, and exclusive rights to integrate OpenAI products into its software (including the Excel plug-in that is somehow worse at basic math than Ken Rogoff). The basic tension is that Microsoft wants OpenAI to make as much money as possible so it can sell them more compute, while OpenAI allegedly (well, until Sam Altman fired everyone) wants to take it more slowly. There’s another problem clause: if OpenAI invents an artificial intelligence that is as smart as humans, Microsoft loses access to OpenAI’s models.
Anyways, as I mentioned above, AI companies admit that they need to put together more and more of these data centers full of better and better chips in order to keep improving their models - most notably because the demand for power from AI has grown pretty quickly, the time it takes to train new models has grown significantly, the cost of AI supercomputers has increased substantially, and more data, more training, and more size are necessary to improve AI accuracy. The growth rate of AI models over the next two years is expected not just to hold but to increase substantially, meaning AI would grow quite strongly in power and capability. AI, at the moment, is at a level of basic-to-intermediate competency on technical questions without reaching true expertise, which puts a (short term, and quite high) ceiling on the immediate growth of AI agents (basically, programs powered by AI that can do tasks on your computer) as a replacement for white collar work.
One of the real questions about AI is how capable it can get; that is, whether AI will continue increasing exponentially in its abilities the way it has, or whether it will hit some sort of wall. This is tangentially related to the problem of consciousness I mentioned above (my position is that AI is at least in principle not conscious, and thus cannot generate true discernment which is crucial for applying AI to the real world - sort of like a virtue ethics equivalent of the halting problem), but I don’t really think it matters very much whether AI is conscious or not. It would be like asking if cars would become a type of animal over time in order to replace horses - it’s not necessary for them to become one even if they could. But some relatively serious thinkers do consider it an important factor to keep in mind, such that they forecast an “intelligence explosion” in the short to medium term that completely disrupts human life by ushering in “artificial general intelligence” or AGI, a computer that is as smart as a person. Because AGI would develop the ability to improve itself, experts believe it would just grow exponentially in its intellectual abilities and would thus dwarf the intelligence of humans in a very short time, perhaps as short as seconds.
The AGI scenario is more or less what the OpenAI business model is built around: in the AI 2027 scenario document (written by a former OpenAI researcher and shopped around to DC policymakers), the whole thing is predicated on OpenAI inventing AGI at more or less the same time as China does (which involves the government not regulating them until after they win the AI race, wink wink) and then managing to bring it under control through some means that’s never very clearly explained. Then there’s some kind of American Soviet put in charge of ChatGPT to make sure it’s not evil. The premise is pretty simple (and also disturbingly similar to some Yudkowsky inspired My Little Pony fanfics): by the end of next year most of the important things will have happened, and OpenAI will have a permanent monopoly on all white collar work, which they can supply at zero marginal cost for a measly 5 bucks a month. Otherwise they can’t pay Microsoft back and the company goes bankrupt. Journalist Matt Levine has described the AI business model as “We will create God and then ask Him for money”. It’s basically this tweet:
♫ The bright shiny bubble / Blissfully floating above ♫
A well-known story tells of a finance professor and a student who come across a $100 bill lying on the ground. As the student stops to pick it up, the professor says, “Don’t bother—if it were really a $100 bill, it wouldn’t be there.”
Malkiel (2003), “The Efficient Market Hypothesis and its Critics”
What even is an asset bubble? Well, what even is an asset? It is, to quote Robert Kiyosaki, something that generates revenue and value. The important thing about assets is how their price is estimated: it comes from the expected cash profits that they produce. These profits, called cash flows, are also adjusted for the opportunity cost - the money you’d make just stuffing your cash in a bank - meaning they’re discounted by expected interest rates. This means, in short, that asset prices are based on expected discounted cash flows over a time horizon. Importantly, the prices of assets are related to each other: for instance, if Coca Cola is expected to gain market share, then the value of Pepsi stock should fall, and so should the price of derivatives1 and other stuff like commodities and bonds.
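The discounted cash flow idea fits in a few lines. A toy example with made-up numbers:

```python
# Price of an asset = sum of expected future cash flows, each discounted
# at rate r (the return you give up by not parking the money elsewhere).
def present_value(cash_flows, r):
    # cash_flows[t] arrives t + 1 years from now
    return sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))

flows = [100, 100, 100, 100, 100]   # $100 a year for five years
print(round(present_value(flows, 0.05), 2))  # 432.95, not 500
print(round(present_value(flows, 0.10), 2))  # higher rates, lower price
```

Note the second line: the same cash flows are worth less when interest rates rise, which is why rate expectations move every asset price at once.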
The main approach economists take to integrated asset markets is called the Efficient Markets Hypothesis. Unlike what the name suggests, the position isn’t that the markets are perfect; it’s that they are efficient at using and reflecting information. The core of contemporary economics is that prices contain some information about the goods and relationships between people they represent that can’t be gathered from those goods and people directly. This also applies to asset values. Imagine Pepsi puts out a statement saying they found out that a Pepsi factory was using crushed up poop to make Pepsi; people would stop drinking Pepsi, which would mean fewer expected profits, which would mean lower expected cash flows, which would mean lower stock prices. Everyone would thus sell Pepsi stock. But the thing is, because everyone could do this, you couldn’t make a profit off this information. The Pepsi stock thing is public information, which means everyone can access it relatively easily, and because financial markets are reasonably competitive, then it wouldn’t be possible to be “early” consistently on these unexpected drops.
This has fairly unexpected consequences: the first is that the field of finance, at least as commonly understood, is completely useless (note: the source linked is from 1932). If the EMH is true, there are no rules useful for investing, beyond the ultra-short term, that hold over time - any rule can just be exploited and preempted, so the margin of action gets shorter and shorter. This is because, unless they’re trafficking in highly restricted information (called “private” information), fund managers and other people can’t expect to outperform the market average, because they can’t expect to be consistently early. If everyone is trading on the same information quickly, whether you make a profit or not is basically a game of chance, which over time averages out. For example, I do know someone in finance who made their employer a good amount of money because they found out super early about the Russian invasion of Ukraine (basically, someone posted satellite photos on an obscure forum for military nerds), but that kind of thing is rare and you can’t really expect it to happen often. This is the “weak” form of the efficient markets hypothesis: over a long enough time horizon, market participants can only make as much as the rest of the market does.
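The “game of chance” point can be simulated. Assuming (purely for illustration) a 7% market return and a hefty dose of random luck per trader:

```python
# If trading on public information is a coin flip around the market,
# any one trader's year can look brilliant or disastrous, but the
# average across many traders is just the market. Made-up numbers.
import random

random.seed(42)
MARKET = 0.07    # assumed average market return
LUCK_SD = 0.15   # assumed spread of individual luck

returns = [MARKET + random.gauss(0, LUCK_SD) for _ in range(10_000)]

best, worst = max(returns), min(returns)
average = sum(returns) / len(returns)
print(f"best: {best:.0%}, worst: {worst:.0%}, average: {average:.1%}")
```

Plenty of individual traders “beat the market” by huge margins in this simulation, all of it pure luck; the average does not.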
The semi-strong form of the Efficient Markets Hypothesis follows from the weak form, and it basically states that all new public information gets translated into prices basically immediately. This is because, as long as the costs of acquiring information or trading in the market are not too high, regular participants in the market have every incentive to pay attention to new information and act on it immediately, which means that in effect new information is nearly immediately reflected in prices. This is, at least according to the evidence, roughly true - for instance, back in the 1950s, an economist figured out which chemical the US military used for a nuclear bomb based on which companies saw increases in their military contracts, which boosted their stock price. The US government confiscated the research, calling it a national security risk. The strong version of the EMH is somewhat similar, and it states that all private information also gets immediately reflected in asset prices, which means, among other things, that insider trading shouldn’t be illegal2. The obvious problem here is that there’s a pretty big information asymmetry (i.e. information that some people have and others don’t) game taking place, where you just don’t know whether other market actors have the information, so it is actually possible to time the market perfectly - which, as we said above, just isn’t possible with public information. The empirical evidence, for whatever it’s worth, is quite favorable to the Weak EMH, is pretty strongly supportive of the Semi-strong EMH (it’s quite easy to test whether new information affects stock prices, and it does), and is not very favorable to the Strong EMH, in large part because it’d be very hard to test and also because it’s obviously ridiculous.
The question that follows is obviously whether asset bubbles are possible under the EMH. Paul Samuelson referred to the tulip bubble of the Netherlands by saying bubbles are associated with “the purely financial dream world of indefinite group self-fulfillment”; Robert Shiller, meanwhile, defines bubbles as the systematic mispricing of an asset for a temporary period. Before getting into what causes bubbles, we should get something out of the way: speculative asset bubbles are bad. NFTs and crypto were, in many ways, a bubble in 2021 and 2022, but a lot of the money was just random petty cash and leftover COVID stimulus. Economy-wide asset bubbles, on the other hand, typically lead to erroneous investment that costs the economy a lot of money. In his recent book The Land Trap, journalist Mike Bird details the extremely negative effects of real estate bubbles on various economies: they siphon enormous amounts of money into land and housing, which exacerbates housing costs; this leads to misallocation of labor away from productive cities, enormous drains on consumer purchasing power, and a massive and extremely politically influential rentier class. Other work, such as (gag) Reinhart and Rogoff’s This Time Is Different, finds that asset bubbles are typically followed by sharp financial downturns and economic contractions - even to the point of causing a bona fide financial crisis.
Economist Hyman Minsky described bubbles as moving through five phases: a displacement, which leads to higher expected profits in a sector; a boom, characterized by a strong upward trajectory, low volatility, and higher investment; euphoria, where the assets’ values increase stratospherically with extremely high trading volume; profit taking, the early exit of sophisticated investors; and then a Minsky moment, when “the music stops” (to quote the movie Margin Call) and the final phase, the panic, starts. The panic is when it all goes to shit. This makes detecting a market bubble sound relatively simple, but it’s not: quite simply, some assets do just increase a lot in price because the underlying fundamentals also increased a lot, and very rapidly.
Why do bubbles happen? This seems like something the EMH does not support; thus, its critics seem vindicated - and the main critics of the Efficient Markets view are behavioral economists. Behavioral economics is based on criticism of the unrealistic assumptions about human behavior that undergird traditional financial economics: perfect information, perfect rationality, and perfect translation of information into actions. This is, by the measure of basic common sense, quite dicey: 5 years ago a bunch of people decided a random defunct company should be more valuable, so they put a lot of money into it, so the stock went up a lot. Traditional behavioral finance research (as seen in George Akerlof and Robert Shiller’s Phishing for Phools and Daniel Kahneman’s Thinking, Fast and Slow; all three of them are Nobel Prize winners) finds that people have all sorts of biases: they overrate their own competence despite making mistakes, tend to factor in irrelevant or pointless information, and tend to make quite baffling decisions based on the information they have. The most important things are that these irrationalities are systemic (that is, they happen across agents) and regular (as in, they can be understood and reliably assessed), and thus can explain empirical regularities like the fact that risk-adjusted returns are much higher and much more variable than what the EMH would predict - because people reliably and consistently mistake short-term changes for long-term trends. In this sense, the behavioral explanation for bubbles is quite simple: people just develop a euphoric and overly optimistic view of the value of the assets, driven by what John Maynard Keynes called “animal spirits”: emotionally convincing narratives outweigh rational and sober-minded calculations of risk and benefit.
There’s an extremely long list of objections that EMH proponents have against behavioral finance. The first is that most anomalies can be explained by business cycle changes and rate expectations, since stock and bond prices tend to shift in similar directions and bond prices react directly to interest rates. There are extremely lengthy discussions of the specific causes of divergences in volatility and trends in asset prices. And there’s also the complicated fact of the “joint hypothesis problem”: the other half of the standard model of finance is the Capital Asset Pricing Model (CAPM), which predicts that the only way to make more money than the market average is by taking on greater risk. This means that companies can sometimes beat the market by having a much higher appetite for risk; that is, when the joint EMH-CAPM model fails, it’s possible that CAPM and not the EMH is what failed - that is, that the markets are properly incorporating risks that asset prices don’t reflect. But the EMH defenders mostly focus on the seemingly paradoxical term of the “rational bubble”. Rational bubble theory operates on two issues: the first is that information about the underlying value of assets tends to be contradictory. Right now, for example, how much AI is raising productivity is fairly unclear; thus, the bubble could be growing off best case scenario evidence even if the evidence for a less optimistic outcome ends up being shown to be stronger (a bit more on this later). The difference between public information (which is available to everyone) and common knowledge (which is known by everyone) can be quite large in this type of case. Other components of rational bubbles are frictions in information or frictions in beliefs; that is, not a limit to how rationally agents act, but a limit to how they can exercise their impeccable rationality.
A different paper points out that the cost of shorting (basically betting on an asset declining in price) also increases exponentially during bubbles, which means there isn’t an equivalent downward pull on prices during periods of exuberance. The other part of a rational bubble is the synchronization problem: everyone can know that something is a bubble, but they might not be able to coordinate on the optimal moment to sell. Very famously, Isaac Newton got involved in one of the oldest financial bubbles (the South Sea Company bubble of 1720) by buying low, selling at a medium price, and then buying again and failing to sell until prices had fallen too far, which cost him a fortune. In fact, I think the synchronization problem can explain the single oldest and stupidest asset bubble: Dutch Tulipmania of the 1630s. If you haven’t heard, between 1634 and 1637 tulip prices in the Netherlands increased enormously, which obviously cannot reflect market conditions; however, flower markets (even contemporary ones) do act like that during trends and fads, so some rare tulip bulbs did have favorable fundamentals. Common tulip bulbs, however, increased in price due to obvious speculation - which can be understood as rational if and only if the tulip sellers knew the bulbs weren’t that valuable, but didn’t know how long the tulip market would stay up and just mistimed their exit, Newton-style. A lot of historical bubbles, contrary to common belief, did have some underlying value, paired with uncertain financial conditions and, especially in the ones related to settlement of distant lands, information frictions. Something worth noting is that, in the end, the GameStop fanatics lost all their money. The market stayed rational longer than they could stay liquid.
She Looked Like Me!
The behavioral bubble story is that predictable and manageable irrationalities fuel extreme market euphoria that devolves into market chaos. The rational bubble theory is based on the notion that individuals act reasonably, but that difficult-to-estimate values and difficult-to-time transactions prevent them from optimizing their trades. I think both can provide a general account of the facts and, in reality, aren’t actually inconsistent. One of the economists the behavioralists rely on most is Charles Kindleberger, author of Manias, Panics, and Crashes3; Kindleberger, as the first word of his title on the subject may suggest, did not believe in rational bubble theory - even though his work is perfectly compatible with it: in an information environment where it is time consuming and intellectually costly to distinguish between true facts and promotional factoids, some consumers can be misled perfectly rationally. The concept of a rational bubble caused by mistimed, uncoordinated exits and uncertain fundamentals is also compatible with this position.
The reason I am bringing Kindleberger up is that one of his central insights was that bubbles usually appear around new assets and new technologies, where there are genuine reasons to disagree on the underlying value proposition. Recently, a lot of people downgraded their chance of AI being a bubble for two big reasons:
That is why the most pro-bubble and most anti-bubble narratives come from the most anti-AI and pro-AI people respectively: if you think the true value of AI really is that high, then why would it even be a bubble; if you think it’s worthless, then why wouldn’t it be. That’s not discounting the role of groupthink and motivated reasoning, of course, but there has to be something deeper about the belief in the underlying value of the technology (at least without sinking into doomerism). Regardless of whether cars are or can become a type of horse, the important thing to note is that the question for the markets isn’t “is the underlying technology valuable”. It is “is the underlying cash flow in the valuation of AI technologies on the financial market feasible or not”. This invites two questions: the first is whether AI really is that big a deal. The second is whether AI can be financially unfeasible while still being a big deal. And the guide to the second question isn’t theory, but history - particularly, the history of the dot com and telecom bubbles of the early 2000s.
It’s obvious now that the internet was a major socially useful technology; at the time, it was seen as a major economically useful technology. In 1998, Paul Krugman infamously wrote “By 2005, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s”. The context for Krugman’s quote wasn’t that he was a grumpy old man refusing to use new technologies. It was this quote, in the same year, from prominent economist Rudiger Dornbusch: “The U.S. economy likely will not see a recession for years to come. We don’t want one, we don’t need one, and, as we have the tools to keep the current expansion going, we won’t have one. This expansion will run forever”. The expectation among economists was that the Great Moderation of the 1990s, coupled with the explosive growth of new technologies like the internet, would keep the economy growing on a trajectory of upwards of 4% a year - that is, a new economic miracle on the scale of the postwar boom. The second half of the nineties saw eye-popping rates of growth in the technology industry, thanks to technological advances that allowed for mass adoption of personal computers, access to the “World Wide Web”, cratering costs of information transfers, and a gigantic rise in business formation and IPOs. Thomas Friedman’s book The Lexus and the Olive Tree predicted social, economic, and technological revolutions, a world of peace, prosperity, and trade (except for the Muslims amirite). In 1990, the stocks traded on the NASDAQ (the technology stock exchange) only made up 11% of the value of the market; by 1999, they made up 80% of the value. In 1999 alone, the NASDAQ grew 86%, shares of tech firm Qualcomm rose 2,619%, and the price-to-earnings ratio of the NASDAQ reached 200, meaning that companies were routinely valued at 200 times their earnings.
The Nasdaq peaked in March 2000 at over 5,000 points; over the following two years, the index lost 77% of its value, and it would not reach its March 2000 high again until 2015. Shareholders in the telecom industry, meanwhile, lost roughly $2 trillion, and the industry shed half a million workers. In fact, Krugman took a victory lap: in the twenty-five years following his prediction, labor productivity growth did in fact consistently slow, and economic growth got slower and slower.
What happened? A 2001 article in the Harvard Business Review says the following:
Yes, the capital markets did a great job of channeling money into the new business sector that the dot-coms represented. But they did a lousy job of selecting which start-ups to support. Dot-coms differed from the manufacturing and services start-ups that venture firms and investment banks were used to working with. Because dot-coms were built on new business models, not on proprietary technologies or products, traditional business plans and financial measures didn’t apply. Yet investors continued to use the old tools, pressuring start-ups for impossible specificity in their strategies and reckless speed in implementing them.
Fundamentally, the investors in “dot com” companies like Pets dot com and Webvan thought that these internet companies had users and had a viable business (selling stuff online), which meant that they were safe investments - even though the companies had no path to financial viability. Typically, the bubble is dated to Alan Greenspan’s famous “irrational exuberance” speech of 1996, where he warned “… the simple notion of price has turned decidedly ambiguous. What is the price of a unit of software or a legal opinion? (…) sustained low inflation implies less uncertainty about the future, and lower risk premiums imply higher prices of stocks and other earning assets. We can see that in the inverse relationship exhibited by price/earnings ratios and the rate of inflation in the past. But how do we know when irrational exuberance has unduly escalated asset values, which then become subject to unexpected and prolonged contractions…?” The question, which Greenspan asked in his famously extremely hard to grasp and overly technical style, is pretty simple: the economy is growing a lot. How do we know if this growth is sustainable? The main problem is that Greenspan and the hawks on the Fed were referring to the real growth of the economy and the real unemployment rate; in this sense, tighter money (as recommended by the Harvard Business Review) was a complete non sequitur. Tighter money wouldn’t have benefited the economy - it would have harmed it by reducing investment and employment, as (successfully) argued by Janet Yellen in the following years, and as detailed in Ben Bernanke’s memoir The Courage To Act.
In fact, something quite obvious is that the stock market did not enter bubble territory until the late 1990s, particularly after 1998. The rate of growth of the NASDAQ in the early and mid 90s was consistent with the expansion of the internet as a sector of the economy. The companies growing in the first half of the bubble cycle were all major and legitimate: Microsoft, Intel, IBM, HP, Dell, Oracle, etc. Garbage like Pets.com, Palm, and the rest of the gang only started getting eye-popping stock surges, alongside the real companies, in the final two years of the 20th century. The dot com bubble was fundamentally a result of rational disagreements on the fundamental value of companies: alongside real firms like Amazon you had duds like Webvan. The fundamental driver of speculative mania was uncertainty about the viability of the business plans of various firms, at the same time as the sector's strong track record over close to a decade allowed for higher risk tolerance. Indeed, the major reason why the scam from Wolf of Wall Street was viable (Jordan Belfort would sell people stock of a blue-chip company like Kodak, and then scam them with fake information about some rinky dink pink sheet stuff4) is that the technological sector was growing so fast regular consumers were hearing about it often and openly. By 2000, however, a Barron's magazine article pointed out something obvious: these internet companies were not viable for much longer. Their extreme reliance on IPOs, venture capital, and stock and bond placements to finance their expenses versus actual tangible revenue left them exposed and raised questions about the sustainability of the sector. The reason why Amazon and a handful of others like PayPal survived the dotcom crash and still exist to this day is that they actually had a functioning business model.
The even less understood bubble is the telecommunications bubble of 1997 to 2000. The "telecom" bubble grew for two reasons: the first was the explosive growth of the internet in the 1990s, which led to a higher demand for physical infrastructure to carry broadband signals. To address this need, the Clinton Administration pushed for the Telecommunications Act of 1996 (the one that includes the controversial Section 230), which deregulated and subsidized the development of new telecom infrastructure. The companies then poured more than 500 billion dollars (1 trillion in today's money) into fiber optic cable, wireless networks, and other forms of long-distance capacity. The expectation was that usage of the internet would grow extremely quickly and thus the high number of companies that popped up to build infrastructure would make a profit. However, they did not count on two factors. The first was the dot com bubble, which set back the growth of the internet as a sector of the economy by 15 years: the companies that would use this infrastructure simply did not materialize, leaving the builders holding the bag. The other problem, the fundamental one, is that telecom infrastructure, like basically all infrastructure, is a natural monopoly: it has extremely high fixed costs (i.e. the cost of putting up the pipes) but very low marginal costs (i.e. the cost of running internet through the pipes), which makes it a sector inherently prone to very little competition and very high profit margins. Without proper regulation and coordination from the FCC, the market became overcrowded: excess capacity squeezed prices so low that expected profits would fall below costs unless demand kept growing at astronomical rates. As mentioned above, it did not.
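The natural-monopoly logic above is just arithmetic on cost curves: with a huge fixed cost and a tiny marginal cost, average cost falls continuously as volume grows, so whoever carries the most traffic can undercut everyone else. A minimal sketch with made-up numbers (the $10B fixed cost and one-cent marginal cost are illustrative, not real telecom figures):

```python
# Natural monopoly sketch: high fixed cost, near-zero marginal cost.
# All numbers are illustrative, not actual telecom economics.

FIXED_COST = 10_000_000_000   # laying the fiber (one-time)
MARGINAL_COST = 0.01          # pushing one more unit of traffic through it

def average_cost(units):
    """Total cost per unit served: the fixed cost spreads over volume."""
    return (FIXED_COST + MARGINAL_COST * units) / units

for units in (1e6, 1e9, 1e12):
    print(f"{units:>15,.0f} units -> avg cost ${average_cost(units):,.2f}")
```

The corollary is the crowding problem: two competing networks each carrying half the traffic face roughly double the average cost of one network carrying all of it, which is why excess capacity crushed prices below cost once demand undershot.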
Feeling DiskInserted?
The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it’s fun to code on the train, too. And if this technology keeps improving, then everyone who tells me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.
Paul Ford, “The A.I. Disruption Is Actually Here, and It’s Not Terrible”
The similarities between the AI bubble and the dotcom and telecom bubbles are quite stark, which I am obviously not the first to mention. First, you have a new technology with an unproven business model but clearly revolutionary potential. Second, you have a related boom in infrastructure investment being egged on by the White House. Third, you have some obvious quantitative similarities: the excessive growth of a single sector of the economy to nearly a third of the stock market (the "hyperscalers" plus Nvidia and Broadcom make up 28% of the stock market and a majority of its gains) at the same time as the S&P 500 reaches its highest price-to-earnings ratio since the dot com bubble, retail trading on the Russell 2000 (the garbage pink sheets from Wolf of Wall Street) is up, and, most hysterically of all, the stock of Cisco, one of the most infamous dot com bubble offenders, has returned to March 2000 levels. The fundamental question for AI is the same fundamental question the dot coms faced: is the business model there? Given that, as exemplified by the AI 2027 document, the business model is heavily betting on AI revolutionizing the economy in the short term, the stakes are high on all ends. If the AI bet pays off, the labor market implodes. If the AI bet doesn't, the capital market does. What's the evidence in favor of AI, then?
An important note to make is that who profits from the AI boom matters greatly. A recent paper by Ricardo Caballero makes a stark point: because AI can decrease labor compensation and increase capital income, an increase in speculation in the AI sector can boost the wealth of investors to the point where valuations can sustain a transition towards a high-capital (read: AGI or whatever) equilibrium, though the small number of beneficiaries also renders them enormously exposed to a reversal and a crash in values. A related topic (to be discussed in an upcoming post, so hype I guess) is what's known as the "backstop option" or, more polemically, as technofeudalism: the idea that Silicon Valley's business model is simply to buy the US government and use that power to force everyone to use AI. The wealth inequality aspects of AI, as well as the wealth concentration impact, are thus extremely important to institutional health. But that's kind of beside the point - if it's just Peter Thiel and Larry Ellison propping up the AI fad, then their investments should collapse as soon as the lack of real underlying value is revealed. Hence, is AI actually valuable?
There’s three sources of data: micro, macro, and anecdotal. The anecdotal evidence is what’s mostly driving the current tide of discourse: Claude Code just came out and people with no coding background have done relatively impressive things with it. Joe Weisenthal of Odd Lots put out an AI tool he “vibe coded” that measures orality, a metric he’s been talking about quite extensively recently. John Ganz of Unpopular Front, a historian by trade with a humanist background (that is, a wordcel) put out another vibecoded project (by his own admission!) about whether news validated different theories of fascism in real time. Neither of them have any coding experience - Weisenthal’s post, was, thus, titled “AI’s Productivity Potential Has Never More Obvious”. We have people who, again, don’t know anything about coding and programming putting out frankly very impressive websites. The main problem with using anecdotal evidence is the obvious: as Doctor House put it best, everybody lies. A 2025 study by METR examines whether AI increased productivity in coding tasks (measures as minutes taken to complete a task) and whether people thought it would. Experts estimated efficiency gains of around 40%; workers estimated 20% to 25%. The real number was -25%: using AI made workers less productive at their jobs.
The micro level evidence, which examines the productivity gains from the adoption of artificial intelligence in specific firms, is relatively moderate but positive. A list of papers available here (highly recommend giving this post a read) finds some large increases but mostly moderate ones, particularly outside of coding-related work, translations, and mammograms. The first, and most obvious, caveat is that the research tends to lag the conversation: AI has changed a lot over time, and has become much more widespread. For example, between 2022 and December 2024, the share of people who use AI rose from 0% to around 14%; between January and April 2025, that share rose from 14% to 43%, and is already a majority - in contrast, this process took 6 years for social media, 12 years for the internet, and 40 years for electricity. However, the evidence is not especially encouraging for AI maximalists. Recent European evidence finds that AI increases output per hour by 4% on average, but without changes to employment and without any business type benefitting homogeneously. A recent, viral post on the Harvard Business Review finds that AI doesn't reduce the total amount of work for human employees, but rather increases it: AI enables expanding tasks (for instance project managers "vibe coding" some work), reducing the time spent on breaks because of the ease of completing tasks, increased multitasking, and organizational gains. In this sense, this matches relatively older (i.e. from 2023 and 2024) studies: the gains in productivity have been mostly driven by automating away "tedious" tasks like editing and drafting, emails, document creation, or retrieving information from data sources. In large part, these gains come not from letting top employees pull away from everyone else, but from letting bottom performers catch up, especially in tasks they have less experience in.
AI has a “jagged frontier”, where it’s very very good at automating or upgrading some tasks, and atrocious at others, and there seems to be very limited forethought capable of predicting which is which ahead of time.
A compounding problem is that AI adoption in business is not showing encouraging signs: after rapidly quadrupling between 2023 and mid 2025, AI usage has flatlined at 12% for the second half of the year, give or take, according to US Census Bureau data. Private and unofficial sources find a similar trend: the number of people using AI at work fell from 46% in June 2025 to 37% in September, while the number of workers using AI stayed stable at 12%, and the number of firms plateaued at 40%. While economic uncertainty plays a role, data from Dayforce finds less usage among rank-and-file workers (27%) than managers (57%) and executives (87%); again, it seems that, so far, the gains in productivity from writing fewer emails and reading fewer reports accrue to people whose jobs involve those tasks to a lower extent. Similarly, a paper from researchers at Carnegie Mellon finds that top-of-the-line models (before the rise of Claude Code, at least) failed at least 60% of proposed tasks corresponding to real-world job responsibilities, even at their best; a Hong Kong University study finds that firms report AI underperforming five times more often than overperforming (45% to 10%), while a McKinsey report does not yet see any major gains from AI adoption. MIT's project on business investment in AI, meanwhile, reported that while around 35 billion dollars were spent by companies on AI last year, only 5% of firms reported any measurable impact on profits and losses, and 95% did not see any positive improvements. The report also cites limited disruption in major sectors, as well as the lack of implementation and training in the workflow, and "misalignment with day-to-day operations" (that is, uselessness).
Additionally, take-up of AI for professional use has been slow, and AI diffusion among businesses has been growing fairly slowly and steadily rather than explosively, which could signal that firms are waiting until they have enough information on the benefits of AI - slowing the uptake of the technology compared to maximalist forecasts. So it seems that instead of improving productivity radically, AI is probably moderately raising the amount of meaningful work done at the expense of low-effort drudgery, with a dark underside of simply not doing anything except generating "workslop" to show off to bosses with no gains. Recent Nobel Laureate Daron Acemoglu told The Atlantic that he's even worried companies will overwork their employees so much, under the notion of AI-driven higher productivity, that they'll decrease productivity in the aggregate, with the same staff getting less done.
On the macro level, let’s start with the obvious. Acemoglu doesn’t think AI will boost growth very much. Looking at what tasks can be replaced by AI as estimated by previous papers, he found that only around 5% could be profitably performed by AI, which would boost productivity by 0.7%. This is particularly notable because it also accounts for a big mismatch between what companies are investing (big ones) and what companies could benefit from automating work (small ones), and the adjustment costs of switching to AI in the short term. However, he does note that an increase in AI capabilities beyond what is expected or an increase in innovation could boost these forecasts.
Starkly, the macro-level evidence shows basically no gains in innovation during the AI era. The most obvious reason to doubt that AI makes people an umptillion times more productive is that we are not seeing more productivity: we are not seeing more apps, more websites, more video games, more plug-ins, or any other explosion in output. Adopting AI does not increase productivity measures that much when coding companies utilize it. Similarly, the markets are just not pointing towards a scenario of rapid acceleration: given the fact that real interest rates have not increased (to the double digit range consistent with short-term AGI) since 2023, this analysis still holds: it is not reasonable to assume AGI in the short term yet. AGI would result in double-digit economic growth right away, and, given the basic structure of microeconomics, if there was a predictable future rapid spike in growth, there would have to be a rapid increase in interest rates, since people would want to spend their money now - because obviously when AGI comes we'll all be living in fully automated luxury. But the real market interest rate has increased moderately at best in the last three years, and mostly as a result of the US increasing its fiscal deficit to extraordinary levels and threatening to invade its closest allies - not precisely something that should matter if Good Guy Skynet is around the corner.
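The interest-rate argument can be made concrete with the standard Ramsey/Euler relationship, under which the real rate tracks expected consumption growth: roughly r = ρ + θ·g, where ρ is time preference and θ is how strongly people prefer smooth consumption. A quick sketch (the ρ and θ values below are common textbook choices, not estimates from the text):

```python
# Ramsey-rule sketch: r = rho + theta * g.
# If markets priced in AGI-level growth, real rates should be far higher.
# rho and theta are illustrative textbook values, not estimates.

def implied_real_rate(growth, rho=0.01, theta=1.5):
    """Real interest rate implied by expected consumption growth."""
    return rho + theta * growth

print(implied_real_rate(0.02))  # ~2% trend growth -> ~4% real rate
print(implied_real_rate(0.20))  # 20% AGI-style growth -> ~31% real rate
```

The exact numbers depend entirely on the assumed parameters, but the point survives any reasonable choice: double-digit expected growth implies double-digit real rates, and observed real rates of a couple percent are nowhere near that.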
Again, how much productivity has increased is extremely controversial: estimates range from 0% to "a lot". The main argument wielded by technology guru Erik Brynjolfsson has to do with the US economy: it grew around 1.8% in 2025 despite only adding 160,000 jobs in all of the year (lower than the monthly average for 2024), which would imply a productivity acceleration from 1.4% a year over the past decade to 2.7% in 2025. Brynjolfsson writes "This decoupling — maintaining high output with significantly lower labour input — is the hallmark of productivity growth", and cites his own recent work, which finds that AI-exposed roles saw a decrease of 16% in hiring since 2023. There are two problems with this argument. The first is that drawing conclusions from macro-level data is frequently not a good idea when there's a lot of other stuff going on: a moderately cooling labor market during a year when the government embarked on absolutely insane economic policy and while massive investment piled onto technology can be a sign of high productivity, sure, but it can also be a sign that the domestic economy is in trouble and the external sector (tech) is doing pretty well. The other problem is that the timing of AI-related job losses is, well, kind of weak: 2022 was also the year after the SPAC bubble collapsed, the year when there was almost a major financial crisis that trashed the tech sector, and the start of a painful global rate hiking cycle that came on the tails of massive overhiring at tech and other big employers during 2020 and 2021. Looking at Brynjolfsson's paper, you can actually see this: all of 2022 shows a downward trend that gets somewhat worse in 2024, when AI started having substantial penetration in the economy. A recent paper finds that the job drought facing young graduates has little to do with technology, and more to do with the overall challenging global labor market.
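The "decoupling" claim is simple growth accounting: labor productivity growth is roughly output growth minus labor input growth. A back-of-the-envelope version using the figures quoted above (the ~160 million employment base is my assumption; closing the gap between this naive number and the 2.7% figure requires hours-worked data and measurement details this sketch ignores):

```python
# Naive growth accounting: productivity growth ~= output growth - labor growth.
# The ~160M employment base is an assumption for illustration only.

output_growth = 0.018             # ~1.8% GDP growth in 2025 (from the text)
jobs_added = 160_000              # jobs added over the whole year (from the text)
employment_base = 160_000_000     # assumed US employment level

labor_growth = jobs_added / employment_base   # ~0.1%
productivity_growth = output_growth - labor_growth
print(f"implied labor productivity growth: {productivity_growth:.1%}")
```

Whether you land on 1.7% or 2.7% depends on hours per worker and which output series you use, which is exactly why macro decompositions like this one are fragile evidence either way.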
So if AI isn’t taking jobs, and isn’t boosting innovation, maybe it can still boost growth through other means? The big debate on this front is between Brynjolfsson and Robert Gordon, to the point they’ve literally made a bet worth money over it. Gordon’s case, which he most famously made in his book The Rise and Fall of American Growth, is that technologies have a pretty defined lifespan where they increase productivity for a while as they’re adopted, and then taper off once they’re exhausted. Most importantly, the more technologies build off past ones, the smaller their impact: going from no TV to TV obviously increases wellbeing less than going from 4K HD TV to 8K HD TV. Particularly notable is Gordon’s case at the end of Rise and Fall (of a Midwest Princess): he considers that artificial intelligence will not boost growth much because it’s a subset of computers and the internet. His position, which is quite old fashioned, is that in the last 30 years we’ve become more and more capable of collecting and analyzing data, but we haven’t managed to increase productivity from it at all. Gordon’s big claim is that the main impact of information technology comes from workplace organization, which is a big deal, but which can only be marginally affected by AI: as seen in the work intensification article from the HBR, the effects on who does what are mostly limited. He told an interviewer last year “It’s not going to be a revolution. It’s not going to blow out human nature”.
The other perspective, which I think is a bit more optimistic, comes from Erik Brynjolfsson. His central idea is what's called the Productivity J-Curve: the economy is structured on top of some fundamental basic technologies, called General Purpose Technologies or, quite coincidentally, GPTs. The shift from one GPT to another starts quite slow, then productivity dips downwards, and then takes off. This isn't a matter of adoption speed, but rather of accounting: as mentioned by Gordon, the core asset is business organization, which is an intangible asset - a collection of organizational practices (think culture X versus culture Y), business models, practical tricks, and human skills. These intangibles only materialize at a certain point, which makes productivity look deceptively low in the meantime - money is being spent on assets that aren't measured. If you account for these assets, Brynjolfsson and his coauthors argue, growth in the 2010s is less modest than it seems. But the obvious problem is that electricity, the telephone, and the computer also had substantial intangible assets attached - the argument is as absurd as saying social media and phones increased subjective wellbeing more than indoor plumbing (related paper). Because GPT-related intangibles increase productivity in two "rounds", AI productivity growth should be about to show up in productivity statistics, because the complementary innovations in skills, training, management, and org charts haven't been fully developed and implemented yet. A recent paper by Brynjolfsson finds evidence that ChatGPT is, well, a GPT, with firms with longer exposure to AI (in the 2017-2021 period, aka prehistory) showing J-Curve-consistent improvements in performance. This doesn't, however, provide much evidence of note about Claude Code and AI agents.
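The J-Curve is at heart a measurement story, and the mechanism is easy to simulate: if firms divert resources into intangible investment (training, reorganization) that national accounts don't count as output, measured productivity dips during the buildout and overshoots once the intangibles pay off. A toy sketch (every number here is invented purely to show the shape, not calibrated to anything):

```python
# Toy Productivity J-Curve: intangible investment is real activity that
# output statistics miss, so measured output dips, then overshoots.
# All numbers are invented to illustrate the shape only.

measured = []
for t in range(10):
    base_output = 100.0
    intangible_spend = 10.0 if t < 4 else 0.0   # buildout phase: unmeasured investment
    payoff = 25.0 if t >= 4 else 0.0            # intangibles start paying off
    measured.append(base_output - intangible_spend + payoff)

print(measured)  # dips to 90 during the buildout, jumps to 125 afterwards
```

Brynjolfsson's empirical claim is essentially that AI is currently somewhere in the dip of this curve; Gordon's counter is that the payoff step never gets as large as the optimists assume.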
The business model
But does this economic revolution add up to a business model? To quote the New Yorker, "in the short run, the stock market is a voting machine, but in the long run it is a weighing machine that weighs the cash flows that companies generate". Well, how much cash can they generate? A recent piece published by the Brookings Institution points to six key factors in determining AI success, and therefore whether the AI valuations are justified: AI investments, data center construction, AI adoption, AI price rates, company competition, and public trust.
AI adoption, as we've already seen, will fundamentally be about the economic value of AI, and it's still an open question. The main and most important fact is that, according to recent data, only 3% of users pay for AI services. This share could increase, of course, as AI adoption rises, but the demand is not there yet. Studies tend to find uneven use of AI based on skill, education, and income; however, when all employees utilize AI, it tends to overwhelmingly benefit less experienced, less skilled, and less talented workers. There is also little evidence of endogenous convergence: workers who don't use AI may be making a rational choice not to because they think the benefits are insufficient or the costs are too high - for instance, there is some evidence of a social penalty for using AI at work. This is also a determining factor in gender dynamics around AI use (really thorough read about it here), as I've written about in the past. An October 2025 EIG survey of American workers finds that older, lower income, and less educated voters are much more anxious about AI than the average person, driven by beliefs as well as employment characteristics and personal preferences. Particularly troubling, thus, is the rise of AI porn as a business plan: OpenAI recently fired a safety expert (for "discrimination against men") for objecting to this plan. The real blackpill isn't that ChatGPT might make money from gooning - it's that it maybe can't make money without it, or at least not enough.
The topic of public trust in AI is pretty important, though. Public opinion is not especially favorable to AI: according to a 2024 Pew survey, only 17% of Americans think the technology will have a positive effect on the country, compared to 35% negative and 33% neutral. This is in contrast to 56% of experts on artificial intelligence having a positive view. 43% of Americans think AI will harm them personally (nearly double the share that thinks it will benefit them), and less than a quarter of adults think AI will improve how people do their jobs, the economy, education, arts and entertainment, or the environment; all of these have at least a 15 point gap with experts, who are three times more likely than regular people to rank the impact of AI on jobs as positive. Two thirds of adults think AI will eliminate jobs, compared to 39% of experts. A December 2025 poll by the centrist Searchlight Institute (remember them?) found that the two most common uses of AI were general information (63% of users) and just for fun (46%), with writing emails, documents, or posts dropping to 30%, summarizing documents to just 23%, education to 12%, and coding to just 7%. The net positive approval of AI is 8%, which is on par with social media but a third of nuclear energy and drones, and significantly lower than solar energy and the internet, which have 65% or higher net approval. Adults tend to rank AI as about as important as the smartphone, which is to say, not very. People, however, are evenly split on whether AI will replace, augment, or ease the work they do. And they are overwhelmingly in favor of regulating AI, with two thirds of voters supporting more regulations and only 15% agreeing with the OpenAI pitch of not regulating AI until superiority over China is established. Most concerns are about job loss, privacy, misinformation, and lack of control and oversight over the technology.
AI child pornography ranks very low as a concern - though this was a month before the extremely high profile child pornography scandal involving the company formerly known as Twitter. Grok, as its in-house AI is known, was seen publicly generating adult content, including of children, as well as "undressing" pictures posted by female users; the affair was serious enough that the French government raided the offices of Twitter and the UK government launched a formal investigation. It truly is all happening on X the Everything App. The ethics of AI, and issues like AI psychosis and suicide, are starting to come up with startling regularity in public opinion - startling, at least, for AI companies.
But as seen from the telecom bubble, AI investment, data center construction, AI prices, and company competition are all endogenous: AI requires huge investment in dedicated physical infrastructure, making it closer to a natural monopoly than social media or other online businesses. If there is excessive spending on those, AI prices will be driven down, which will make it less profitable (again, which is what happened with the telecom sector). Both excessive spending and low prices could be driven by excessive competition. The AI 2027 report actually underhandedly refers to this, by including a need for the OpenAI proxy to take over most compute in the United States to compete properly with the Chinese AI projects.
Thus, the question is infrastructure: are AI companies overbuilding relative to a reasonable level of demand? That's the central question. If the companies are reasonably estimating demand, then they will recoup their investment in data centers and other physical assets. If they're not, then it's a bubble and we'll all go to shit. AI firms have already dialed back their spending on talent acquisition, including Zuckerberg's outrageous nine-digit dollar offers to top talent to… do nothing of note. However, capital spending has only kept increasing: investment by the top tech firms has exceeded 300 billion dollars a year since 2024 and is expected to surpass 500 billion by 2030. Total spending on data centers is expected to surpass 3 trillion dollars by 2028, of which half comes from the largest tech firms and the other half comes from… debt. This spending, according to some estimates, is roughly as high as capital spending by telecom companies during the telecom boom. Derek Thompson estimates that AI needs the companies to invest the equivalent of an entire Apollo program every ten months. The question right now is whether this investment "is propping up the economy" or "is crowding out other activities", without many questions about its sustainability - does the United States need trillions of dollars of new data centers? According to a relevant Morgan Stanley report:
Calling back to the tech boom of the mid-to-late 90s, investors have been asking about the possibility that this investment cycle for data centres could be a bubble. While we agree that it is a lot of financing, very quickly, and in service of a technology that has yet to generate material revenues (GenAI), we believe there are a few important differentiating factors about this situation. For one, there are diverse pools of capital available today, which can distribute the warehousing of credit risk, unlike in the 90s when it was concentrated on corporate balance sheets. Second, the ultra-high-quality credit profile of hyperscalers and their significant cash on hand mean less sensitivity to macro conditions. Lastly, our equity research colleagues find that the ROI of AI should already be positive this year, generating $50bn in revenues, and that this will grow to exceed $1tr/year by 2028.
Fundamentally, the issue is not qualitative, which is what Morgan Stanley tried to use to assuage customers, but quantitative: the required funding is exactly as big as the entire high-yield (speculative) private bond market. There's also a problem to do with the extremely high spending on GPUs, which depreciate very fast (50% a year); the investment in intermediate goods is also supposed to be astronomically high. And these data centers also need an astronomical amount of energy: Morgan Stanley estimates that basically none of the electricity capacity needed to power AI data centers exists yet.
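The depreciation point compounds quickly: at 50% a year, a GPU fleet loses most of its book value within a few years, so the capex has to be re-spent almost continuously just to stand still. A quick sketch using the 50% figure from the text (the $100B fleet size is illustrative):

```python
# GPU depreciation sketch: 50%/year decay means the hardware stock
# halves annually unless replaced. The $100B starting fleet is illustrative.

fleet_value = 100.0   # billions of dollars of GPUs, illustrative
rate = 0.50           # annual depreciation rate, from the text

for year in range(1, 5):
    fleet_value *= (1 - rate)
    print(f"year {year}: ${fleet_value:.1f}B of value remaining")
```

After four years barely 6% of the original value is left, which is why a warehouse of accelerators behaves less like fiber in the ground (decades of useful life) and more like perishable inventory.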
This hits the main problem for artificial intelligence: NIMBYism. The first issue is with data centers themselves. They are extremely unpopular, leading to a gigantic political backlash that is "swallowing American politics" on both the left and the right. Polling shows a large share of the public is either already against data centers being built in their area or is easily persuadable against them, particularly because of the notable lack of tangible benefits, namely jobs. People cite usage of electricity, environmental harm, and usage of water (the latter two of which are fake) as their major concerns - which is important in a political environment dominated by electricity bills and the cost of utilities. A (highly controversial) Bloomberg analysis finds that places with more data center development had higher utility bills; even if the link is not causal in the slightest, to voters it might as well be, particularly for low income voters. But even if you move past the specific hurdles, data centers still need electricity capacity to be built for them, which is a gigantic political problem related to the enormous stasis of the American political system. I don't think American politicians are going to bat for some of the most toxic infrastructure projects you can imagine.
The average American is currently increasingly enthralled by an ideology I've previously described as being composed of zero-sum, low trust, and particularist politics. Zero-sum politics refers to the belief that, for one person to gain, others inevitably have to lose. Low trust refers to low confidence in other people and in broader institutions like the government or big corporations. Particularism is somewhat less important and relates to closed-mindedness. In all three cases, all arrows point to opposing data centers and other big, shiny, new local high-tech developments. In particular, the outsized role of low trust and zero-sum voters in American politics, and the disproportionate presence of those values among working class and younger people (who, surprise surprise, are the most skeptical of AI and the most opposed to data centers, respectively), makes them impossible to ignore. The looming clash between popularism and Abundance has one clear culprit: the American voter.
Conclusion
So the question about AI isn't whether there's an irrational mania; it's whether the market is pricing an endeavor that involves spending mid single digit trillions of dollars to produce low double digit billions in revenue. According to Thompson, the numbers just don't add up. According to Noah Smith, they don't either. The amounts of debt required to finance all this investment are also astronomical and increasingly complex, featuring a number of byzantine instruments and financial arrangements that are driving comparisons to the financial crisis of 2007, which is how you know that everything is going well.
Fundamentally, the question of whether AI is a bubble has to be answered by whether AI will advance technologically enough to overcome “the messiness of the real world” and replace or augment human labor in large quantities. That this is possible is not really clear at the moment.
The 2016 British documentary HyperNormalisation, directed by Adam Curtis, is based on a term (hypernormalization, spelled normally) coined to explain life in the Soviet Union. By the 1970s, it had become clear to everyone that the Soviet economic model had failed: state socialism had produced a stagnant economy incapable of meeting basic consumer demands. However, instead of trying to improve it, Soviet policymakers decided on a different course of action: to simply invent a new system of information where everything was fine. All inhabitants of the USSR were told meaningless information about quotas and targets while their lives got increasingly and noticeably worse. Curtis applies the term to the American financial system as well: starting in the 1970s, corporate value began being formed not via increased production in the real world, but via increased engagement with the fictional world of finance, public relations, and, presently, online virality. The whole thing is kind of nonsense, but what stuck with me is that the financial market is a kind of parallel reality driven by narrative - rational or not. The problem, which Curtis doesn't seem to leave much room for (preferring to discuss in extreme, tedious detail the nuances of the relationship between Assad and Gaddafi), is that eventually the fake reality of finance has to give way to the real reality of fundamentals. The housing bubble was driven by supply-side constraints preventing effective housing demand from being met, a rapid expansion of debt and credit without sufficient oversight, and the proliferation of extremely complex and opaque instruments. Unless the AI bubble generates cash flows, it will suffer the same fate. But, after the housing bubble, people still live in homes. After the AI bubble, people will still use AI.
Derivatives are just assets whose price is based on the price of other assets - for example, a contract to buy or sell a stock at a set price on a future date (known as a future). They're useful because they can be used to protect companies against things like oil prices going up without actually having to buy comparatively much more expensive stocks or commodities, especially if the outcome they're protecting themselves against is very unlikely.
The funny thing is that this kind of assumes government regulators, who are the biggest source of insider trading by far, are perfectly honest and would never be corrupt - which is quite paradoxical given how strong EMH believers are almost always staunch libertarians who deeply distrust the government.
Not giving him credit for the "Kindleberger Spiral" of world trade during the Depression because apparently he just stole it from either Oskar Morgenstern or John Condliffe.
Robert Kiyosaki recommends investing in pink sheet companies in Rich Dad, Poor Dad, in case you wanted an even lower opinion of him than the one you got from learning he thought the idiom was "a fool and his money are one big party".