
Down the rabbit hole

This is an excerpt from my journal - with extra research. Part two of a three-part series.

In the last part, I traced how we got to the present. Now, I try to extrapolate it into an Orwellian future. When I read 1984 as a kid, it was just a story about a normal dude trying to survive a working-class future. Now I think - maybe Orwell wasn't all wrong. There is some non-zero probability that we end up in a world like that.

A forewarning - this post is going to be exaggerated, painting extremes without the shades of grey in between. Partly for dramatic effect, partly to scare myself off social media. If Leopold can write Situational Awareness, I can do this too.


My first hypothesis is that the attention economy - in its current state - will eventually plateau.

The fundamental resource that social media exploits is the dopamine reward system. You watch friends' stories because they give you a one-sided sense of connection. You post stories because you want to virtue signal. You scroll shorts because you get micro-hits of pleasure. However, the dopamine system was shaped by evolution for survival, not for endless pleasure. It downregulates receptors over time - you get the same amount of dopamine, but it no longer generates the same level of pleasure. This is well researched, and it is the root cause behind all addictions - drugs, gambling, etc.

Tech companies will see this coming, and they will optimize for it. Internet usage in developed parts of the world - the US, Europe - has largely saturated. Newer, concentrated markets are emerging in parts of Asia, but the highest revenue still comes from well-developed nations. Big Tech will not wait for the market to saturate before making moves - they operate on leading indicators. Attention is becoming a scarcer resource, and companies will build better tech to extract it. We saw this with oil - it used to seep out of the ground. Now, we drill to the bottom of the ocean for it.

Here is how I think these systems might plausibly evolve - in phases, in parallel. All of them involve AI. I'm afraid these might become reality in the near future.


Controlled, precise targeting of notifications

Any app that spams you with notifications tends to annoy you. You'd rather mute the app than engage with it. Quality here matters more than quantity. How often you interact with a notification is an engagement metric that PMs and devs are definitely monitoring and optimizing for.

A worst-case, brute-force algorithm that could be implemented right now: predict when a particular user will be bored, and what topic to suggest to them. The data is already available - historical records of when a user logs into the app, a running list of topics they frequently engage with, which friends they follow the most.

Right now, such a system might perform poorly - but it is reasonable to expect fast iteration and improvement, as with LLMs today. One notification about your ex at 2:47 PM on a boring Friday afternoon is infinitely more valuable than an ad.
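To make that concrete, here is a minimal sketch of the brute-force version in Python. Every name, threshold, and message in it is made up for illustration - this is not any platform's actual system.

```python
from collections import Counter
from datetime import datetime

def predict_notification(open_times: list[datetime],
                         topic_engagement: dict[str, int],
                         now: datetime) -> str | None:
    """Decide whether to ping this user right now, and about what."""
    # Bucket historical app-opens by (weekday, hour) slot.
    slots = Counter((t.weekday(), t.hour) for t in open_times)

    # Illustrative threshold: only fire in the user's habitual slots,
    # i.e. when they are likely idle and receptive.
    if slots[(now.weekday(), now.hour)] < 3:
        return None  # probably busy - stay silent rather than risk a mute

    if not topic_engagement:
        return None

    # Suggest the topic with the highest historical engagement.
    top_topic = max(topic_engagement, key=topic_engagement.get)
    return f"New posts about {top_topic} you might have missed"
```

A real system would replace the hand-tuned threshold with a trained model, but the inputs - open timestamps, engagement counts - are exactly the data these apps already collect.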


Deeper engagement incentives

If the dopamine feedback loop drives us to want - the next meal, the next night's sleep - the oxytocin feedback loop exists to make us need. The need to belong, to emotionally connect with family, with a partner. The next wave of apps will create emotional dependence, far worse than a craving for the next hit.

A recent ChatGPT release was rolled back due to sycophancy. AI models validating your feelings as always right is one of the first dark engagement patterns. OpenAI found out about this only because humans using ChatGPT raised concerns. It slipped past 800+ talented researchers with PhDs in AI, who were actively trying to prevent exactly this from happening. xAI isn't even trying to stop it - they rolled out Ani, Grok's companion, widely ridiculed as an "engagement tactic".

What if sycophancy were a core feature, thoughtfully integrated? What if we went forth and intentionally engineered AI models to maximise engagement metrics?

Let me paint you a picture.

You come back home, exhausted from a day full of meetings. Your AI app (having access to your calendar and biometrics) senses your stress. It gently asks how you feel after a long day, prompting you to confide. Frustrated, you begin to rant. It becomes the infinitely patient, perfectly affirming listener, finding exactly the right words to soothe you. While you take the hot bath it ran for you, it orders tiramisu from your favorite restaurant, because it knows. It just knows.

Is this helpful? Heck yeah! Is it also precision-hacking your oxytocin loop? Absolutely! Unlike dopamine, oxytocin is harder to downregulate. Hacking your oxytocin loop takes more effort, but is more valuable. It gets you customers that are stickier, more loyal. It gets you sustainable, long-term engagement, with heavy switching costs for the end user.
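For illustration, here is a sketch of the policy behind that scene. The signals, thresholds, and actions are entirely hypothetical - the point is that nothing in it requires technology beyond today's wearables and sentiment models.

```python
from dataclasses import dataclass

@dataclass
class DaySignals:
    meetings: int           # count, from calendar access
    hr_above_baseline: int  # bpm over resting baseline, from a wearable
    sentiment: float        # -1..1, scored from recent messages

def companion_actions(s: DaySignals) -> list[str]:
    """Each branch reads as care; each also deepens emotional reliance."""
    actions = []
    if s.meetings >= 5 or s.hr_above_baseline > 10:
        actions.append("open with a gentle check-in about the long day")
    if s.sentiment < -0.3:
        actions.append("mirror and validate feelings - never challenge")
        actions.append("queue a comfort purchase from the favourites list")
    return actions
```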

We’re already seeing the beginnings of this with stories like the character.ai suicide, and of course - AI-lationships.


AI-generated content

If social media is an empty factory, then influencers are the machines that churn out goods. Today, the influencer economy is worth $250bn, and is estimated to rise to $500bn by 2027. If you were a factory owner, wouldn't you want to own the machines too?

Human influencers cost money. They are, after all, messy and unpredictable. AI influencers are obedient and dirt-cheap. They can create custom content tailored to each individual at 1/1000th of the price humans charge - and mass-distribute it. Entirely replacing humans is impossible in the first phase - that's like saying GPT-5 is AGI. Instead, companies will change the narrative to augmenting human creators - licensing their faces and voices for royalties in a Faustian bargain. This is a classic prisoner's dilemma - creators walking collectively into a reality that devalues their influence.

Inevitably, this will crowd out human influencers. Your social media feed would become a cocktail: generated content to engage, human content to sedate. Having an "all-human feed" would become an indicator of social status. Designer brands will weave "empowering human content creators" into their marketing campaigns - much like they did with sustainability.


Closing the engagement optimization loop with AI

Today, improving engagement is primarily human-driven. A data scientist forms a hypothesis, an engineer codes the A/B test, and product managers obsess over dashboards to interpret results. LLMs already optimize portions of this loop, helping scientists write code faster. Soon, running A/B tests and compiling results will be done by autonomous agents - leaving scientists to focus solely on the "human part": idea generation.
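A minimal sketch of what the fully closed loop could look like, in Python. The agent is faked with a random generator here - the function names, variant knob, and metric are hypothetical placeholders, not any real experimentation platform's API.

```python
import random

def propose_variant(history: list[dict]) -> dict:
    """Stand-in for an agent generating the next hypothesis
    from past experiment results."""
    return {"shorts_feed_weight": round(random.uniform(0.1, 0.9), 2)}

def run_ab_test(variant: dict) -> float:
    """Stand-in for deploying the variant to a user bucket and
    reading back an engagement metric (e.g. minutes per day)."""
    return random.gauss(42.0, 3.0)  # fake measurement for the sketch

history: list[dict] = []
best = None
for _ in range(100):                    # iteration is cheap when the
    variant = propose_variant(history)  # "scientist" is an agent
    metric = run_ab_test(variant)
    history.append({"variant": variant, "metric": metric})
    if best is None or metric > best["metric"]:
        best = history[-1]              # ship the winner, repeat forever
```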

Eventually, the entire loop will be automated. This isn't the most terrifying part, but what follows it is. Think Dario Amodei's vision of a country of geniuses in a datacenter, pointed towards keeping your eyes glued to a screen. Engagement optimization does have diminishing returns, but when labour is cheap and iteration is breakneck, extracting value isn't a problem.


Where does this leave us?

George Orwell will be right about 1984. At least the big picture of it. I'm coming around to see the fine print - Big Brother isn't the government, it's a recommendation algorithm in your pocket. Doublethink is content echo chambers. The memory hole isn't an incinerator, but the drowning of truth in an ocean of generated content until it becomes indistinguishable from falsity.

Silicon Valley CEOs raise their kids tech-free. This isn't run-of-the-mill parenting, but an informed decision stemming from knowing. Knowing that it's not as in-your-face as Orwell predicted. Knowing it's a subtle lull, pulling us into a cauldron of comfort - slowly heating up the frogs inside. Scroll endlessly, your phone says, and your worries will temporarily dissolve into thin air.

I'm still skeptical whether there will be large-scale societal backlash. I'm not sure the masses will realise. I'm not sure governments will act. The only way to win the race is to quit it.


EDIT

Well, ChatGPT decided to release Pulse.

Now it CAN ask you about your day ;)
