Rabbit Holes 🕳️ #108
From vibe working to algorithmic complacency and industrial distraction, from going round in circles to spiraling, scaling wide → scaling deep, ZIRP-era → AI-era, and spiritual innovations
Hello!
It's a new month, spring-like weather in Berlin, and a few changes are coming to this newsletter. Why? Because I want to make it as crisp and as valuable as possible for you:
From now on, each Rabbit Holes issue will have a small paywall towards the middle. As you'll see in today's issue, free subscribers will still receive a lot, but paid subscribers will receive more.
For paid subscribers, there will be one deep dive per month instead of two. However, this one will be in a new, more visual, keynote/report-like format. The idea here is to focus on quality and value over quantity.
So, all in all, there will be 3 Rabbit Holes issues + 1 enhanced deep dive per month. Aaand I might occasionally throw in a short, free post, but only when time allows and when I have something insightful to say.
Now, let us get into this week's Rabbit Holes:
THIS WEEK ↓
🖼️ Framings: Vibe Working // Algorithmic Complacency // Industrial Distraction
🔄 Re-Framings: Circles → Spirals // Scaling Wide → Scaling Deep // ZIRP-Era → AI-Era
🧬 Frameworks: Petal Model Of Regenerative Transition
🎨 Works: Spiritual Innovations // OneCourt // AI x Biomimicry
⏳ Reading Time: 10 minutes
🖼️ Framings
Naming it, framing it! Giving something we all feel more prominence, in a way that promotes deeper reflection.
🤖 Vibe Working
"AI won't take your job, but someone vibing with AI will." Look, I'm one of AI's fiercest critics: I think that most of today's AI use cases are shit, that there is a massive AI hype balloon still floating over our heads, and I'm very concerned about the ongoing de-humanization that AI might accelerate (see next Framing). There's a different path, though, one in which AI helps us be more human. And there's a small number of pioneers out there who already use AI in such a way. Azeem Azhar gives it an interesting name and framing:
"If you are anything like me, vibes – half-baked thoughts based on snippets of evidence and tons of experience – are constantly fizzing in your brain. They are inklings of ideas, essays to write, research to conduct, advice for a founder, new audiences to reach, new ways of doing things, gut feel, and judgment.
Vibes are the raw material of genius.
Historically, turning these vibes into something tangible – a detailed plan, a memo, a piece of code – has been a slog. That final 20% of clarity often demands 80% of the effort, as we wrestle our intuitions into structured form. But AI is changing that.
As LLMs improve, they're becoming adept at deciphering our incoherent ramblings. You can throw a jumbled idea at them – 'I want something that does this, kind of' – and they'll figure out your real intention, delivering a workable starting point. It's reminiscent of a parent interpreting a child's babbling needs: it might sound fanciful, but it's already happening. It is vibe working. […]
Quality work isn't just about vibes. It is also about analysis, detail and precision. […] AI can't replace the human gut entirely – sometimes I still need silence, pen and paper – but it does free up time for just that. And, of course, a final output can't be vibed. In my case, it often has to be written – as it is now, typed deep into the evening. […]
Vibe working is more than efficiency – it feels like a fundamental shift in cognitive work. By having AI handle the structuring, refinement and boring bits, we free our minds for what, for now, we excel at: intuition, creativity, and judgment."
» Introducing the vibe worker by Azeem Azhar
🔁 Algorithmic Complacency
This framing ties in with quite a few other pieces I've shared recently (e.g. the death of "I don't know"). Like the YouTuber quoted below, I'm also perceiving a new level of de-agencyfication (is that a word?) in society, due to the growing prevalence of recommendation algorithms all over the access layer of the internet. We're moving from being addicted to these algorithmic filters to being dependent on them.
"Recommendation algorithms end up putting content in front of our eyes using methods almost nobody really understands (but probably have something to do with maximizing revenues) and, well, I think it's breaking our brains.
When you have that finely-tuned, algorithmically-tailored firehose of information just coming at you like that, you might feel like you're having a good time and learning some interesting things, but you're not necessarily directing your own experience, are you? Is your train of thought really your own when the next swipe might derail it?
Now, I am by no means the first person to ask that question. […] But here's what I think might be new, or at least under-discussed: I am seeing mounting evidence that an increasing number of people are so used to algorithmically-generated feeds that they no longer care to have a self-directed experience that they are in control of.
The more time I spend interacting with folks online, the more it feels like large swaths of people have forgotten to exercise their own agency. That is what I mean by algorithmic complacency. More and more people don't seem to know or care how to view the world without a computer algorithm guiding what they see."
😵‍💫 Industrial Distraction
Maybe you've also noticed how certain weekly news shows, at times even daily ones, have become pointless to watch, because by the time they air (or go online), the news they report on has already changed, flipped, or faded into the background because something even more bizarre has happened?! Well, welcome to the age of industrial distraction.
"Last week, President Donald Trump signed an executive order banning paper straws. No, I didn't exactly seek out this information. It crept into my feed, and instead of ignoring it, my eyes and mind betrayed me, lingering on yet another piece of manufactured noise disguised as news. […]
Whether it's debates over minor sources of plastic waste, or news about those debates, or sensationalised political theatrics packed with 'alternative facts,' baseless claims, and outright fiction, so much of today's information landscape seems to serve one purpose only: to distract us from what truly matters. Of course, somewhere in the mix, there's still the real real news and stories that actually warrant our attention. But trying to find them is like searching for plastic straws in an ocean full of plastic waste. And that's exactly the point, isn't it? […]
In a recent paper published by Cambridge University Press, philosophers of science Cailin O'Connor and David Peter Wallis Freeborn argue that this is a clear example of what they term 'industrial distraction.' In a nutshell, it's the set of techniques big corporations – and their handmaidens – use to shift public focus and policy in their favour. This usually involves funding and promoting research that, while technically accurate and high-quality, can be misleading. And as O'Connor and Freeborn note, it takes three main forms:
'At its heart, industrial distraction involves changing how targets understand some causal system in the world. Typically it shifts public understanding towards some distracting potential cause of a public harm, and away from a known industrial cause of the same harm. A second variation uses inaccurate information to introduce distracting mitigants of industrial harms. And a last variant shifts public beliefs about downstream effects of policies to focus on distracting harms they may cause.' […]
As political scientist Adnan Rasoo […] explains:
'In a "rule by distraction" situation, the survival of the administration depends on people not being able to process the complete information. By creating multiple simultaneous distractions, the administration overloads the attention of its citizens. In essence, then, they are not lying to the people, they are just creating enough alternative explanations that "truth" becomes debatable.'"
» Distraction Is The Whole Point
🔄 Re-Framings
Three quite useful reframings that I've recently stumbled across: