Rabbit Holes 🕳️ #108
From vibe working to algorithmic complacency and industrial distraction, from going round in circles to spiraling, scaling wide → scaling deep, ZIRP-era → AI-era, and spiritual innovations
Hello!
It's a new month, spring-like weather in Berlin, and a few changes are coming to this newsletter. Why? Because I want to make it as crisp and as valuable as possible for you:
From now on, each Rabbit Holes issue will have a small paywall towards the middle. As you'll see in today's issue, free subscribers will still receive a lot, but paid subscribers will receive more.
For paid subscribers, there will be one deep dive per month instead of two. However, this one will be in a new, more visual, keynote/report-like format. The idea here is to focus on quality and value over quantity.
So all in all, there will be 3 Rabbit Hole issues + 1 enhanced deep dive per month. Aaand I might occasionally throw in a short and free post, but only when time allows and when I have something insightful to say.
Now, let us get into this week's Rabbit Holes:
THIS WEEK ↓
🖼️ Framings: Vibe Working // Algorithmic Complacency // Industrial Distraction
🔄 Re-Framings: Circles → Spirals // Scaling Wide → Scaling Deep // ZIRP-Era → AI-Era
🧬 Frameworks: Petal Model Of Regenerative Transition
🎨 Works: Spiritual Innovations // OneCourt // AI x Biomimicry
⏳ Reading Time: 10 minutes
🖼️ Framings
Framing it! Giving something we all feel more prominence, in a way that promotes deeper reflection.
🤖 Vibe Working
"AI won't take your job, but someone vibing with AI will." Look, I'm one of AI's first critics: I think that most of today's AI use cases are shit, that there is a massive AI hype balloon still floating over our heads, and I'm very concerned about the ongoing de-humanization that AI might accelerate (see next Framing). There's a different path, though, one in which AI helps us be more human. And there's a small number of pioneers out there who already use AI in such a way. Azeem Azhar gives it an interesting name and framing:
"If you are anything like me, vibes, half-baked thoughts based on snippets of evidence and tons of experience are constantly fizzing in your brain. They are inklings of ideas, essays to write, research to conduct, advice for a founder, new audiences to reach, new ways of doing things, gut feel, and judgment.
Vibes are the raw material of genius.
Historically, turning these vibes into something tangible—a detailed plan, a memo, a piece of code—has been a slog. That final 20% of clarity often demands 80% of the effort, as we wrestle our intuitions into structured form. But AI is changing that.
As LLMs improve, they're becoming adept at deciphering our incoherent ramblings. You can throw a jumbled idea at them—"I want something that does this, kind of"—and they'll figure out your real intention, delivering a workable starting point. It's reminiscent of a parent interpreting a child's babbling needs: it might sound fanciful, but it's already happening. It is vibe working. […]
Quality work isn't just about vibes. It is also about analysis, detail and precision. […] AI can't replace the human gut entirely—sometimes I still need silence, pen and paper—but it does free up time for just that. And, of course, a final output can't be vibed. In my case, it often has to be written, as it is now, typed deep into the evening. […]
Vibe working is more than efficiency—it feels like a fundamental shift in cognitive work. By having AI handle the structuring, refinement and boring bits, we free our minds for what, for now, we excel at: intuition, creativity, and judgment."
» Introducing the vibe worker by
😴 Algorithmic Complacency
This framing ties in with quite a few other pieces I've shared recently (e.g. the death of "I don't know"). Like the YouTuber below, I'm also perceiving a new level of de-agencyfication (is that a word?) in society due to the growing prevalence of recommendation algorithms all over the access layer of the internet. We're moving from being addicted to being dependent on these algorithmic filters.
"Recommendation algorithms end up putting content in front of our eyes using methods almost nobody really understands (but probably have something to do with maximizing revenues) and, well, I think it's breaking our brains.
When you have that finely-tuned, algorithmically-tailored firehose of information just coming at you like that, you might feel like you're having a good time and learning some interesting things, but you're not necessarily directing your own experience, are you? Is your train of thought really your own when the next swipe might derail it?
Now, I am by no means the first person to ask that question. […] But here's what I think might be new, or at least under-discussed: I am seeing mounting evidence that an increasing number of people are so used to algorithmically-generated feeds that they no longer care to have a self-directed experience that they are in control of.
The more time I spend interacting with folks online, the more it feels like large swaths of people have forgotten to exercise their own agency. That is what I mean by algorithmic complacency. More and more people don't seem to know or care how to view the world without a computer algorithm guiding what they see."
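To make the "probably have something to do with maximizing revenues" point concrete, here is a deliberately toy sketch (entirely hypothetical, not any real platform's algorithm; all names and numbers are invented for illustration) of a feed ranker whose score is purely predicted engagement times monetization. Notice that nothing in the scoring function represents what the user actually set out to find:

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float  # output of a hypothetical engagement model
    ad_revenue_per_second: float    # hypothetical monetization estimate

def rank_feed(items: list[Item]) -> list[Item]:
    # Score = expected revenue per item. The user's own goals, queries,
    # or stated interests never enter this function.
    return sorted(
        items,
        key=lambda it: it.predicted_watch_seconds * it.ad_revenue_per_second,
        reverse=True,
    )

feed = rank_feed([
    Item("Calm explainer you searched for", 120, 0.01),   # score 1.2
    Item("Outrage clip you never asked for", 300, 0.02),  # score 6.0
])
print([it.title for it in feed])  # the outrage clip ranks first
```

A self-directed experience would amount to the user, not this scoring function, ordering the list.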
😵‍💫 Industrial Distraction
Maybe you've also noticed how certain weekly news shows, at times even daily ones, are becoming pointless to watch: by the time they air (or go online), the news they report on has already changed, flipped, or moved to the background because something even more bizarre happened?! Well, welcome to the age of industrial distraction.
"Last week, President Donald Trump signed an executive order banning paper straws. No, I didn't exactly seek out this information. It crept into my feed, and instead of ignoring it, my eyes and mind betrayed me, lingering on yet another piece of manufactured noise disguised as news. […]
Whether it's debates over minor sources of plastic waste or news about those debates or sensationalised political theatrics packed with "alternative facts," baseless claims, and outright fiction, so much of today's information landscape seems to serve one purpose only: to distract us from what truly matters. Of course, somewhere in the mix, there's still the real real news and stories that actually warrant our attention. But trying to find them is like searching for plastic straws in an ocean full of plastic waste. And that's exactly the point, isn't it? […]
In a recent paper published by Cambridge University Press, philosophers of science Cailin O'Connor and David Peter Wallis Freeborn argue that this is a clear example of what they term "industrial distraction." In a nutshell, it's the various techniques big corporations—and their handmaidens—use to shift public focus and policy in their favour. This usually involves funding and promoting research that, while technically accurate and high-quality, can be misleading. And as O'Connor and Freeborn note, it takes three main forms:
"At its heart, industrial distraction involves changing how targets understand some causal system in the world. Typically it shifts public understanding towards some distracting potential cause of a public harm, and away from a known industrial cause of the same harm. A second variation uses inaccurate information to introduce distracting mitigants of industrial harms. And a last variant shifts public beliefs about downstream effects of policies to focus on distracting harms they may cause." […]
As political scientist Adnan Rasool […] explains:
"In a 'rule by distraction' situation, the survival of the administration depends on people not being able to process the complete information. By creating multiple simultaneous distractions, the administration overloads the attention of its citizens. In essence, then, they are not lying to the people, they are just creating enough alternative explanations that 'truth' becomes debatable.""
» Distraction Is The Whole Point by
🔄 Re-Framings
Three quite useful reframings that I've recently stumbled across: