Rabbit Holes 🕳️ #108
From vibe working to algorithmic complacency and industrial distraction, from going round in circles to spiraling, scaling wide → scaling deep, ZIRP era → AI era, and spiritual innovations
Hello!
It’s a new month, there’s spring-like weather in Berlin, and a few changes are coming to this newsletter. Why? Because I want to make this newsletter as crisp and as valuable as possible for you:
From now on, each Rabbit Holes issue will have a small paywall towards the middle. As you'll see in today's issue, free subscribers will still receive a lot, but paid subscribers will receive more.
For paid subscribers, there will be one deep dive per month instead of two. However, this one will be in a new, more visual, keynote/report-like format. The idea here is to focus on quality and value over quantity.
So all in all, there will be 3 Rabbit Holes issues + 1 enhanced deep dive per month. Aaand I might occasionally throw in a short, free post, but only when time allows and when I have something insightful to say.
Now, let us get into this week’s Rabbit Holes:
THIS WEEK ↓
🖼️ Framings: Vibe Working // Algorithmic Complacency // Industrial Distraction
🌀 Re-Framings: Circles → Spirals // Scaling Wide → Scaling Deep // ZIRP-Era → AI-Era
🧬 Frameworks: Petal Model Of Regenerative Transition
🎨 Works: Spiritual Innovations // OneCourt // AI x Biomimicry
⏳ Reading Time: 10 minutes
🖼️ Framings
Naming it! Framing it! Giving something we all feel more prominence in a way that promotes deeper reflection.
🤟 Vibe Working
‘AI won’t take your job, but someone vibing with AI will.’ 😅 Look, I’m one of AI’s biggest critics: I think that most of today’s AI use cases are shit, that there is a massive AI hype balloon still floating over our heads, and I’m very concerned about the ongoing de-humanization that AI might accelerate (see next Framing). There’s a different path, though, one in which AI helps us be more human. And there’s a small number of pioneers out there who already use AI in such a way. Azeem Azhar gives it an interesting name and framing:
“If you are anything like me, vibes, half-baked thoughts based on snippets of evidence and tons of experience are constantly fizzing in your brain. They are inklings of ideas, essays to write, research to conduct, advice for a founder, new audiences to reach, new ways of doing things, gut feel, and judgment.
Vibes are the raw material of genius.
Historically, turning these vibes into something tangible—a detailed plan, a memo, a piece of code—has been a slog. That final 20% of clarity often demands 80% of the effort, as we wrestle our intuitions into structured form. But AI is changing that.
As LLMs improve, they’re becoming adept at deciphering our incoherent ramblings. You can throw a jumbled idea at them—“I want something that does this, kind of”—and they’ll figure out your real intention, delivering a workable starting point. It’s reminiscent of a parent interpreting a child’s babbling needs: it might sound fanciful, but it’s already happening. It is vibe working. […]
Quality work isn’t just about vibes. It is also about analysis, detail and precision. […] AI can’t replace the human gut entirely—sometimes I still need silence, pen and paper—but it does free up time for just that. And, of course, a final output can’t be vibed. In my case, it often has to be written — as it is now, typed deep into the evening. […]
Vibe working is more than efficiency—it feels like a fundamental shift in cognitive work. By having AI handle the structuring, refinement and boring bits, we free our minds for what, for now, we excel at: intuition, creativity, and judgment.”
» Introducing the vibe worker by Azeem Azhar
😃 Algorithmic Complacency
This framing ties in with quite a few other pieces I’ve shared recently (e.g. the death of “I don’t know”). Like the YouTuber below, I’m also perceiving a new level of de-agencyfication (is that a word?) in society due to the growing prevalence of recommendation algorithms all over the access layer of the internet. We’re moving from being addicted to being dependent on these algorithmic filters.
“Recommendation algorithms end up putting content in front of our eyes using methods almost nobody really understands (but probably have something to do with maximizing revenues) and, well, I think it’s breaking our brains.
When you have that finely-tuned, algorithmically-tailored firehose of information just coming at you like that, you might feel like you’re having a good time and learning some interesting things, but you’re not necessarily directing your own experience, are you? Is your train of thought really your own when the next swipe might derail it?
Now, I am by no means the first person to ask that question. […] But here’s what I think might be new, or at least under-discussed: I am seeing mounting evidence that an increasing number of people are so used to algorithmically-generated feeds that they no longer care to have a self-directed experience that they are in control of.
The more time I spend interacting with folks online, the more it feels like large swaths of people have forgotten to exercise their own agency. That is what I mean by algorithmic complacency. More and more people don’t seem to know or care how to view the world without a computer algorithm guiding what they see.”
😵‍💫 Industrial Distraction
Maybe you’ve also noticed how certain weekly (at times even daily) news shows have become pointless to watch, because by the time they air (or go online), the news they cover has already changed, flipped, or faded into the background because something even more bizarre has happened?! Well, welcome to the age of industrial distraction.
“Last week, President Donald Trump signed an executive order banning paper straws. No, I didn’t exactly seek out this information. It crept into my feed, and instead of ignoring it, my eyes and mind betrayed me, lingering on yet another piece of manufactured noise disguised as news. […]
Whether it’s debates over minor sources of plastic waste or news about those debates or sensationalised political theatrics packed with ‘alternative facts,’ baseless claims, and outright fiction, so much of today’s information landscape seems to serve one purpose only: to distract us from what truly matters. Of course, somewhere in the mix, there’s still the real real news and stories that actually warrant our attention. But trying to find them is like searching for plastic straws in an ocean full of plastic waste. And that’s exactly the point, isn’t it? […]
In a recent paper published by Cambridge University Press, philosophers of science Cailin O’Connor and David Peter Wallis Freeborn argue that this is a clear example of what they term ‘industrial distraction.’ In a nutshell, it’s various techniques big corporations — and their handmaidens — use to shift public focus and policy in their favour. This usually involves funding and promoting research that, while technically accurate and high-quality, can be misleading. And as O’Connor and Freeborn note, it takes three main forms:
‘At its heart, industrial distraction involves changing how targets understand some causal system in the world. Typically it shifts public understanding towards some distracting potential cause of a public harm, and away from a known industrial cause of the same harm. A second variation uses inaccurate information to introduce distracting mitigants of industrial harms. And a last variant shifts public beliefs about downstream effects of policies to focus on distracting harms they may cause.’ […]
As political scientist Adnan Rasool […] explains:
‘In a ‘rule by distraction’ situation, the survival of the administration depends on people not being able to process the complete information. By creating multiple simultaneous distractions, the administration overloads the attention of its citizens. In essence, then, they are not lying to the people, they are just creating enough alternative explanations that ‘truth’ becomes debatable.’”
» Distraction Is The Whole Point by
🌀 Re-Framings
Three quite useful reframings that I’ve recently stumbled across: