The Footnote is the Story
Why an Ivy League PhD is training your AI for $35 an hour
An Ivy League PhD is training your next AI model for $35 an hour. A philosophy graduate is reviewing AI-generated slop at three in the morning so the chatbot won't surface it to you. Six months ago I wrote an essay about depth becoming the new speed, about protecting the texture of the work when everything else is moving too fast. This week, Karen Hao's reporting for More Perfect Union forced me to add a footnote.
The headline story about AI and jobs has always been humans versus machines. Layoffs, displacement, the shrinking middle. But the quieter story underneath is stranger and harder to look away from.
The same companies citing AI as a reason to lay people off are turning around and hiring those same people back, cheaply and without protection, to train the next model.
An Ivy League PhD on a contract that paid $35 an hour and disappeared overnight. A philosophy graduate reviewing AI-generated slop in the middle of the night. Median earnings under $23,000 a year for the workforce building one of the most heavily capitalized technologies of our time.
I keep coming back to a line I wrote in December, that every dataset has an origin. Every hiring funnel we optimize maps back to someone trying to move their life in a better direction. When I wrote that, I was thinking about the people moving through the funnel. This piece made me think about the people building the funnel itself, and how often they are the same people, just at a different point in the cycle. The Ivy League PhD applying for $35-an-hour contract work is the same candidate our industry was rejecting from full-time roles a month earlier.

The supply chain of AI has people in it, all the way down, and most of them are invisible by design. Silicon Valley has spent a long time building an aesthetic where the model is the magic and the labor is a footnote. The footnote is the story.
What bothers me most is the ideology underneath it. There is a real belief, openly stated by some of the loudest voices in the space, that human input is friction. That the goal is fewer people in the loop. That a leaner team and a thinner middle are just progress. I do not think that is progress. I think that is a choice dressed up as inevitability.
That is not a side effect. That is the design.
I run RightMatch, a hiring company. I sit adjacent to exactly the supply chain Karen is describing, and I am not pretending we have it all figured out. Founders in my position are the ones writing the job posts, designing the funnels, choosing what to automate and what to leave to a human. That puts a real weight on the work we do. The companies building AI on top of a hidden underclass are making a bet that none of us will look closely at how the sausage is made. The founders who win the next decade are not the ones who automate the most. They are the ones who are honest about who their tools touch, and how, and what they owe those people in return.
There is a version of this industry that treats the people doing the work, the people inside the funnel, the people training the model, as inputs to be minimized. There is another version that treats them as the whole point. Those are different companies. They will produce different futures.
Watch Karen's piece if you have twenty minutes. It is worth your attention, and attention is the thing we are all running low on.
If you're building in AI right now, what's the line you won't cross on labor? I'm trying to figure out what "building responsibly" actually looks like from inside a hiring company, and I'd genuinely like to hear how others are thinking about it. Drop a comment.
If this hit, restack it so more founders see it. If you want more like this, subscribe.

