I can't watch this stuff and find [it] interesting. Whoever creates this stuff has no idea what pain is whatsoever. I am utterly disgusted. If you really want to make creepy stuff, you can go ahead and do it. I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.
Hayao Miyazaki, founder of Studio Ghibli, on AI-generated art
As usual, the internet is discussing AI and automation - this time because ChatGPT created viral images of users as Studio Ghibli cartoons. Instead of going anywhere productive or interesting, the discussion quickly shifted towards art, how "art is now accessible," and the livelihoods of artists. This discourse is, as usual, incredibly tedious, because it's mostly about whether artists (don't) deserve to exist when machines can do their job for them. You can read an interesting take here. But in my opinion, besides the bothersome and usually gendered dynamics, it also doesn't really deal with the underlying assumption of the argument: both the annoying tech dudes and the equally annoying artists seem to agree that ChatGPT and Claude and Deepseek and Poob and Pheebo will replace them. I don't agree - not just on the normative case about the sanctity of labor, but on the positive case. I think they're just plain wrong about what ChatGPT will do to the labor market.
The Macrodat’ Uprising
A pointsman’s back straightened itself upright suddenly against a tramway standard by Mr. Bloom’s window. Couldn’t they invent something automatic so that the wheel itself much handier? Well but that fellow would lose his job then? Well but then another fellow would get a job making the new invention?
James Joyce, “Ulysses”
I have a pretty lengthy post about automation, so go back and read it for a more in-depth look. But the fundamental thing here is that jobs aren't actually a single task - they're a bundle of tasks, performed under supervision. Technology doesn't actually displace jobs; it displaces some, if not all, of the tasks that make up those jobs. So when people say "70% of jobs are at risk of automation," what they actually mean is "70% of jobs contain tasks that could plausibly be automated." Let's look at one example: ATMs. When cash machines were introduced, they displaced the people whose job it was to hand out money at the bank - but total bank employment grew over time, as the job of bank teller shifted towards "customer service" and "financial services salesperson" type professions. Automation, in other words, can displace the tasks of a job without displacing the job itself: even if machines can replace some aspect of a job, the job may well still exist - it just comes to emphasize different skillsets.
This phenomenon, known as skill-biased technical change, is extremely important: new technologies can displace workers by shifting the skill profile of their profession from, for example, information-processing to more "creativity-driven" processes. Since the 1970s, advanced economies have seen an increasing premium on "human capital" (education) over other forms of skill. This was heightened by the introduction of the computer: the spread of computer technology can account for a third to half of the acceleration in demand for educated workers starting in the 70s. This is important because back in the 80s people were concerned that computers would create mass unemployment, especially among knowledge workers - but this was not the case. In fact, the impact of computer technology on the labor market was to strengthen the demand for educated labor and weaken the demand for high-cognitive-load but non-educated labor, by eliminating clerical, processing, and information-collecting positions. The positions that were eliminated weren't the ones that would use computers the most - they were the ones that would use them the least.
In fact, this had significant impacts on the labor market: computer technology allowed companies to drastically reduce "mid-skill" professions like, well, computers (the people who did calculations by hand) and instead hire more highly skilled professionals, who could use the information those professions had gathered directly. The result was that the labor market was split in twain, with the high-human-capital "knowledge track" reaping the rewards and the low-human-capital "physical" and "care" tracks paying the price. The implications of this phenomenon are massive1, particularly for wage inequality, but importantly it means we can't know whether automation will increase or decrease the demand for a certain skill without considering ex ante whether that skill would be enhanced or substituted by automation. In general, computer-driven automation eliminated specialized professions that needed mid-level cognitive skills, and it made it so the few remaining mid-level cognitive professions were mostly occupied by college graduates, who had more legible credentials than their peers.
I think the fundamental comparison for ChatGPT on knowledge occupations is international trade, particularly this excerpt from Paul Krugman, titled "The Accidental Theorist," a classic in trade economics. Krugman proposes the following: an economy that makes only sausages and buns, one of each of which you obviously need to make a hot dog. Assuming that the economy is at full employment, 60 million of its 120 million workers make franks and 60 million make buns. But imagine a technology doubles productivity in the bun field, such that only 30 million workers are needed to make 60 million buns. Wouldn't this result in 30 million unemployed? Well, not necessarily. If skills are portable from buns to links, and if the gains from higher hot dog efficiency are distributed evenly, then it's entirely possible that some of the workers would stay on anyways and some would move to the brat field, raising total output of both sausages and buns and leaving everyone better off. The point being: even if AI makes every technical/knowledge worker more productive, it doesn't mean that there will be fewer of them - it could very well mean that there are more of them, but at lower wages, either due to compression or because less productive industries start demanding their skills. To take another page from Krugman's extensive catalogue: technology is basically irrelevant to the number of jobs (though not to their relative wages or quality); what matters most is aggregate demand. That's why it's so important that the gains from trade (or technology) be shared - if they accrue to people who don't spend, the economy can enter a prolonged slump. In fact, what happened in the US was precisely this: after the trade disruptions of NAFTA and the "China Shock" were internalized, the economy both entered an extraordinarily severe and prolonged recession and saw a drastic plunge in geographic mobility, leaving communities in severe economic distress without any outlet. The losses from trade were geographically concentrated in ways that provoked severe political effects later on.
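To make Krugman's arithmetic concrete, here's a minimal sketch of the toy economy under the full-employment assumption. The 120 million workers and the productivity doubling are his; the reallocation rule - bun output must match sausage output - is just the "one of each per hot dog" constraint:

```python
# Krugman's hot-dog economy: every hot dog needs one bun and one sausage.
LABOR = 120  # million workers, at full employment

def allocate(bun_productivity, sausage_productivity=1.0):
    """Split the workforce so bun output equals sausage output.
    Solves: bun_productivity * b == sausage_productivity * (LABOR - b).
    Returns (bun_workers, sausage_workers, hot_dogs), in millions."""
    bun_workers = LABOR * sausage_productivity / (bun_productivity + sausage_productivity)
    sausage_workers = LABOR - bun_workers
    hot_dogs = bun_productivity * bun_workers
    return bun_workers, sausage_workers, hot_dogs

print(allocate(1.0))  # before: (60.0, 60.0, 60.0) - 60M hot dogs
print(allocate(2.0))  # after doubling bun productivity: (40.0, 80.0, 80.0)
```

Under those (strong) assumptions, nobody ends up unemployed: 20 million workers move from buns to sausages, and total hot dog output rises from 60 to 80 million.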
Moving on, I think that basically everyone gets the case on creative fields precisely backwards: AI-generated content drastically lowers the skill premium on raw talent, but drastically increases the premium on creativity and aesthetic sense. For the film industry, studios and producers being excited about AI is, I think, the source of their undoing: the main value producers add is financing and the networks directors need to assemble both the cast and the crew. I don't know the impact that AI is going to have on the creative process, but my guess is that actors would net out about the same, "physical" crew would lose out somewhat, VFX and the like would win out, and editors, screenwriters, and cinematographers would make bank. Fundamentally, moving the onus of filmmaking (or other arts) from talent and physicality to creativity, originality, and interpretation will have major effects - but I think that, as with computers, they'll be concentrated in the non-AI-exposed sectors.
As for the rest of the economy, Baumol's the word. William Baumol, an economist who passed away in 2017, had the defining idea about technology, productivity, and wages (called "cost disease"2, although it's not necessarily a negative - it implies higher wages and higher productivity for the entire economy): while the most productive sectors see wage growth in line with their advances, less productive sectors see wage growth roughly keep pace too - mainly because workers in the "diseased" sectors can bargain for higher wages by threatening to move to the technically advanced, higher-paying sectors. How well this predicts rising costs in those sectors is somewhat contested, especially where there are clear barriers in terms of qualifications and, once again, geographic barriers that weren't there before.
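A toy version of the mechanism, using Baumol's canonical string-quartet example - all the growth rates here are made-up numbers, purely for illustration:

```python
# Toy Baumol: wages track the most productive sector because workers can
# threaten to move there; unit costs then rise wherever productivity stalls.
productivity_growth = {"manufacturing": 0.04, "string_quartets": 0.00}
wage_growth = max(productivity_growth.values())  # mobility ties wages to the leader

for sector, prod in productivity_growth.items():
    # unit labor cost growth = wage growth minus productivity growth
    print(f"{sector}: unit costs grow {wage_growth - prod:.0%} per year")
# manufacturing: unit costs grow 0% per year
# string_quartets: unit costs grow 4% per year  <- the "disease"
```

The quartet plays the same notes it did in 1850, but its wages (and ticket prices) rise anyway, because its members could always quit and go work in the productive sector.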
Bargain bins
Fundamentally, the core principle that unites the trade-technology comparison is that both rely on comparative advantage. Let's explain with an example. Imagine you're the owner of FC Barcelona in ~2014, and you have Lionel Messi, the greatest "soccer" player in the world, at your disposal (because he wasn't washed back then). But besides being the best footballer, he's also the best lawn-mower on the planet. Camp Nou, your stadium, needs its turf mowed every day: would you ask Messi to mow it for two hours, or would you rather he devote those two hours to training? What a preposterous question - of course Messi, the greatest player, should play instead of mowing; that's far more productive. But if instead of running El Barça you're running a blockchain-based project management SaaS startup, and you replace "Messi" with "your highly trained coders" and "mowing the lawn" with "doing a lot of bullshit busy work like client relations and graphic design," then you understand why ChatGPT isn't going to replace HR or the people making stupid pitch decks or whatever - because it's a waste of time for employees at core functions to do those things with AI.
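Here's a minimal sketch of the opportunity-cost arithmetic behind that intuition; the per-hour values are pure assumptions, picked only to make the point visible:

```python
# Hypothetical per-hour values (in euros). Messi is better at BOTH tasks
# (absolute advantage), but assignment should follow comparative advantage.
MESSI = {"train": 10_000, "mow": 50}
GROUNDSKEEPER = {"train": 100, "mow": 30}

def club_output(messi_task, keeper_task):
    """Total value per hour for a given assignment of tasks."""
    return MESSI[messi_task] + GROUNDSKEEPER[keeper_task]

print(club_output("mow", "train"))   # Messi mows:           50 + 100 = 150
print(club_output("train", "mow"))   # groundskeeper mows: 10_000 + 30 = 10030
```

Even though Messi out-mows the groundskeeper, the club is vastly better off keeping him on the pitch: the true cost of his mowing is the training he forgoes.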
This gets to the basic reason why firms exist at all: to aggregate information on how to make certain products. As the philosopher Immanuel Kant argued, humans have a priori knowledge of certain ways to interpret the world (things like time, but also physical dimensions, and some intuitive logical concepts like, say, "fairness," or mathematics). This helps account for why I'm skeptical of the bull case for AI: a lot of tasks at work require not rote repetition of logical steps but adaptability and, much more importantly, common sense and creativity. The other half of this is what Friedrich Hayek called "the knowledge of the circumstances of time and place" (he was not a good marketer): the idea that there is knowledge that cannot be generalized or turned into "scientific" rules - his specific example being "how much we have to learn in any occupation after we have completed our theoretical training, how big a part of our working life we spend learning particular jobs, and how valuable an asset in all walks of life is knowledge of people, of local conditions, and of special circumstances." Once again, this is information that cannot be synthesized directly, and which emerges only as an aggregate property of economic activity. This information is, in some way or another, transmitted through prices.
Furthermore, this also gets to why economic activity takes place in companies, as groups of people: coordinating through prices has costs - discovering those prices, negotiating contracts - such that it's often best to bring large numbers of tasks in-house. This is why companies exist, and why relying on freelance graphic design or on case-by-case uses of ChatGPT is just not efficient at certain scales of usage. The limits to the size of firms are not just that it's hard to manage all that information centrally, but also that it's hard to manage all those people centrally. The relationship between employer and employee is unique because it's team-based, and as anyone who has ever done a group assignment can tell you, it's really hard to get everyone to pull their weight. The central role of management - not just of owner-managers but of a management "class" of workers - is measuring and tracking effort by each individual worker. Where this can't be done in quantifiable ways, both the importance of the task and the importance of human middle managers rise.
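A toy make-or-buy comparison in the spirit of that argument - every cost figure and functional form here is an assumption of mine, meant only to show the shape of the tradeoff:

```python
# Toy Coase: the market charges transaction costs per task (search,
# contracting); the firm avoids those but pays coordination overhead
# that grows superlinearly with headcount (monitoring gets harder).
def market_cost(tasks, price=100, transaction_cost=40):
    return tasks * (price + transaction_cost)

def in_house_cost(tasks, salary=90, coordination=0.5):
    return tasks * salary + coordination * tasks**2

for n in (10, 100, 400):
    print(n, market_cost(n), round(in_house_cost(n)))
# 10  -> market 1400,  in-house 950    (in-house wins)
# 100 -> market 14000, in-house 14000  (break-even)
# 400 -> market 56000, in-house 116000 (market wins: coordination binds)
```

Both halves of the argument show up: firms in-house tasks to dodge transaction costs, and the coordination term is why firms don't grow without bound - and why the managers who keep that term small are valuable.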
This all means that there's a lot of internal comparative advantage in offloading plenty of personnel management decisions to internal third parties, and thus that there will still be a pretty hefty "PMC" and pretty heavy demand for creative and knowledge work - it's just too contextual to teach to an LLM in the abstract. At its core, I think a lot of the "promise" of ChatGPT for the tech capitalists is the "promise" of not having to employ people who do middle management work like human resources, project management, or client work - without understanding that the key benefit of those roles is specialization of tasks and labor for their actually productive employees. The proliferation of these "middle managers" is just the Baumol's Cost Disease of the technical side of the business becoming much more profitable and important - just like Messi and the lawnmower, your H1B coders shouldn't waste their time on BS.
Conclusion
I already wrote about trade, automation, and the nature of the firm separately and at greater length, but never as a unified explanation for why I'm pretty skeptical that AI will "end" the knowledge class or the HR crones who work from the hotel pool. I think it's all a weird fantasy for the tech capitalists, with very weird political consequences: the gains from the AI economy will not be concentrated in capital but will have to be shared with labor, especially high-skilled labor. If you see the current tech industry project as one of shifting the functional distribution of income in their favor (that is, of ensuring that "capital" gets the biggest chunk of the pie, while labor, especially educated labor, gets as little as possible), then their infatuation with the transformative powers of AI and drones and robotics makes sense: the capitalists want to invent a form of capital that replaces all forms of labor, as is their right as capitalists. But what will happen instead is that AI ends up boosting the "PMC" share of labor even further, while perhaps compressing low-skill wages even more, since the gains are shared with the less advanced, manual-labor economy only insofar as it can credibly threaten exit. This means that the polarization of the job market would get worse, not better, from AI, and that the Silicon Valley tech-lash is going to get even more pronounced.
I also think, as seen in the footnote right below, that this has a pretty significant impact on the gender politics of the AI booster sphere - DOGE is about sex.
One of the major areas of comparison is gender, in ways that are now more politically relevant than ever… mild spoiler for some upcoming writing. Major, major project.
Strengthening the trade/technology comparison is that Baumol's Cost Disease is the same basic idea as the Balassa-Samuelson effect: countries with high productivity in advanced sectors will have more expensive non-advanced products (usually services).
I really liked your usage of the automation argument extended into creative fields, it's convincing in a unique way!
However, I feel like the 'burden,' or concern, of AI replacing or transforming 'art' itself is more imposing than you give it credit for, because art is fundamentally different from other kinds of skilled labor in its production. The concern about AI as a threat to the purity of art as a social and human phenomenon breaks down into two main points. First, AI does impose some practical existential threat on human art. This comes from its model of learning from the collective body of human art itself, and from actual instances of AI art that resemble human levels of creative expression - which is destructive to art. This leads into the second point: that art is a uniquely phenomenological and social thing. The technical means of producing art are not always rigidly separate from its means of creative expression, and are not always reducible to some raw mechanical input that can be automated or made more productive separately. Creative expression in art is a subjective product of human consciousness and experience, and the 'raw talent' that corresponds to those technical means of artistic production arises not as an objective mechanical process, such as drawing lines or producing sequences of precisely intoned notes, but as an intentional means of conscious expression. Human art, at every stage of its production, is an intentional expression of consciousness and human experience, while AI art amounts to its mimicry. In the year 2XXX, I can't imagine that some AI model would not reach a stage of development capable of reproducing a work in exact likeness to one of Miyazaki's, but it would never be 'art,' because it is devoid of that subjective essence. The ultimate point is that the AI automation of certain tasks in the process of artistic production would necessarily appropriate its process of creative expression to some extent, if not entirely, for more efficient commodity production. This is what it means to say that AI art is destructive to artists, and 'an insult to life itself.'
great essay. i'd never considered the tasks vs jobs point. makes perfect sense.