Rabbit Holes 🕳️ #112
From the context economy to AGI-prepping, the post world, eating alone, data → dance, the trap of logical thinking, leisure ≠ non-work, not misdiagnosing risk as uncertainty, and kids x AI
Hello and welcome to the 32 new subscribers who joined us since last week.
Before we dive into this week’s Rabbit Holes—something to think about…
I automated my creative process. AI comes up with better ideas than I have. I automated my writing. All SEO-optimized. Claude now does my social media strategy and posting. I hate being on social media anyway. And ChatGPT does the business admin, strategy, and decision-making. Why bother, right?
So what’s left for me to do? Nothing? Just “orchestrating” these AI “agents,” as some futurists would say?
What’s left to do, left to think, left to create for me?
What’s left to make meaning, to provide purpose?
What’s left to live for?
Instead of just talking about AI as a machine for optimizing productivity (or as a threat of replacement), my latest deep dive explores how we can reframe it: not as something that strips away our humanity, but as something that could actually make our work more alive.
If you haven’t read it yet, you can get access here:
And now, onto this week’s Rabbit Holes:
THIS WEEK ↓
🖼️ Framings: The Context Economy // AGI-Prepping // Post World
📊 Numbers: Eating Alone
🌀 Re-Framings: Data → Dance // Logical → Illogical // Leisure ≠ Non-Work
🧬 Frameworks: Risk vs Uncertainty
🎨 Works: Children x AI // Cosmos // Plastic Cyanotypes
⏳ Reading Time: 10 minutes
🖼️ Framings
Naming and framing it! Giving something we all feel more prominence in a way that promotes deeper reflection.
ℹ️ The Context Economy
A framing about framing (or context), how meta… ;) Once again, this aligns with the everything-everywhere-all-at-once vibe, and thereby with one of my key themes for 2025, Hyperreality, as well as with what I try to do with this newsletter in general: re-framing!
“In our hyper-digital age, it's increasingly the context and framing of information (not the content itself) that drives debates and shapes economic futures. […]
Unlike previous technological revolutions that primarily transformed physical production, AI transforms meaning-making itself, changing how we create and distribute value. The Ghibli AI trend is a perfect microcosm:
The raw content (images) is easily reproducible and has minimal intrinsic value
The context (who shares it, how it's framed, which platforms amplify it) creates the actual economic and social value
The creators of the original style (Studio Ghibli) receive some value through "increased interest" while platforms capture the economic benefits.
Studio Ghibli itself saw a surge in interest but captured almost none of the economic value from the trend
Platform companies (Twitter) monetized the increased engagement
AI developers gained valuable training data from millions of uploaded images
Individual users "spent" their social capital by participating in the trend
This pattern of value distribution - where content creators receive attention but platforms capture revenue - is clearly the dominant economic model of our digital age.
But notice how the context (the viral tweets, the platform hype, the novelty of the AI filter) drove the phenomenon more than the intrinsic value of the images themselves. Once again, context overshadowed content.
But as AI accelerates our ability to (1) generate, (2) manipulate, and (3) distribute information, this weird tension becomes even more pronounced. We're entering an era where the battle between content and context will reshape how all policy is created, communicated, and understood. […]
Power gravitates to those who control context. Ultimately, the real battle isn’t about the raw content - it’s about who defines, manipulates, and monetizes the frame.
» Studio Ghibli AI, Classified Leaks, and the Context Shift by
🤖 AGI-Prepping
In so many different ways, the world is AGI-prepping. And although nobody knows whether AGI (Artificial General Intelligence) will actually arrive, or is even possible, everyone is preparing for it anyway. What’s even more interesting: AGI-prepping is being used to argue for optimizing for “efficiency”.
“The case for imminent AGI more or less reduces down to the notion that creative problem solving can be commoditized via large model based technologies. Such technologies include language models like the GPT family and Claude, the diffusion models that produce art and others. The thesis is that these models will soon be able to solve difficult problems better than humans ever could. […] Under this theory, we should prioritize building AI over solving other problems because AGI (or whatever you want to call it: Amodei doesn’t like that term) will be a superior and independent means for solving those problems, exceeding the problem solving capacity of mere humans. […]
Our account provides a different understanding of large models and problem solving. Specifically, it claims that large models are a social and cultural technology through which human beings can solve problems and coordinate in new and sometimes useful ways. We explain large models as “‘lossy JPEGs’ of the data corpora on which they have been trained,” statistical machines that “sample and generate text and images.” The implication is that they will never be intelligent in the ways that humans, or even bumble-bees are intelligent, but that they may reflect, mediate, compress and remix human intelligence in useful ways. […]
Even so, AGI-prepping is reshaping our politics. Wildly ambitious claims for AGI have not only shaped America’s grand strategy, but are plausibly among the justifying reasons for DOGE. […] And indeed, one of DOGE’s major ambitions, as described in a new article in WIRED, appears to have been to pull as much government information as possible into a large model that could then provide useful information across the totality of government.
The point - which I don’t think is understood nearly widely enough - is that radical institutional revolutions such as DOGE follow naturally from the AGI-prepper framework. If AGI is right around the corner, we don’t need to have a massive federal government apparatus, organizing funding for science via the National Science Foundation and the National Institutes of Health. […] From this perspective, most human based institutions are obsolescing assets that need to be ripped out, and DOGE is only the barest of beginnings.”
» Should AGI-preppers embrace DOGE? by
💬 Post World
We’re moving from article world to post world! A great framing by Ryan Broderick. It still amazes me how much legacy media and institutions in general underestimate the power of social media.
“As Biederman so succinctly put it, at some point between the first Trump administration and the second, “Article World” was defeated by “Post World”.
As he sees it, “Article World” is the universe of American corporate journalism and punditry that, well, basically held up liberal democracy in this country since the invention of the radio. And “Post World” is everything the internet has allowed to flourish since the invention of the smartphone — YouTubers, streamers, influencers, conspiracy theorists, random trolls, bloggers, and, of course, podcasters. And now huge publications and news channels are finally noticing that Article World, with all its money and resources and prestige, has been reduced to competing with random posts that both voters and government officials happen to see online. These features are not just asking, “what happened to American men?” They’re asking, “why can’t we influence American men the way we used to?” […]
Article World is dying, or maybe already dead, and Post World is ascendant. […] We’ve replaced the largely one-way street of mass media with not even just a two-way street of mass media and the internet, like we had in the 2010s, but an infinitely expanding intersection of cars that all think they have the right of way. Think about it for a second. When was the last time you truly felt consensus? Not in the sense that a trend was happening around you — although, was it? — but a new fact or bit of information that felt universally agreed upon? Was it in the last two years? Was it this decade? And the most skilled and seasoned journalists in this country can continue to try to win that back. To use journalism to shift public perception and hold the powerful to account. An admirable and necessary endeavor. But unless the very architecture of the internet changes, it’s likely whatever they write will end up as just another post.”
» When was the last time you felt consensus? by Ryan Broderick
📈 Numbers
A thought-provoking chart that perfectly captures a pivotal shift:
The Loneliness Epidemic In One Graph
You’ve already come this far—why stop now? Join over 120 strategists, creatives & visionaries leveling up their thinking with my curated re-framings in the paid edition (+ you get access to my recent AI Reframed deep dive and dozens more…):
🌀 Re-Framings: Data → Dance // Logical → Illogical // Leisure ≠ Non-Work
🧬 Frameworks: Risk vs. Uncertainty
🎨 Works: Children x AI // Cosmos // Plastic Cyanotypes