The Focused Human — Monthly Lookback | March 2026

Recover Your Attention in a World Built to Fragment It

Navigating the age of artificial intelligence with intent and clarity. Your read to stay current, informed, and in control of your attention.

March didn't ease in. It arrived with the compressed energy of a field accelerating past its own schedule — a landmark economics paper, a deepening philosophical debate over AI consciousness, and a month of model releases that tested everyone's ability to stay oriented.

Underneath all of it, one question kept surfacing: what does human attention still hold that the tools around us don't?

Friction-Maxxing Works. But Only If You Know What You’re Protecting.

What’s friction-maxxing? Fair question. It describes the practice of intentionally adding inconvenience back into your life. Paying with cash instead of tapping your phone. Reading a physical book instead of watching YouTube. Calling a friend for advice instead of asking ChatGPT. The BBC recently explored why this trend is gaining traction.

AI Is Eroding Human Agency — and the Research Now Proves It

If you've felt mentally passive lately — like you're retrieving rather than thinking — MIT economist Daron Acemoglu published research in February that explains the mechanism. In NBER Working Paper 34910, "AI, Human Cognition and Knowledge Collapse," Acemoglu and co-authors deliver a rigorous theoretical warning: highly capable generative and especially agentic AI systems can trigger a dangerous long-run feedback loop that erodes society's shared knowledge base.

The finding is counterintuitive. AI doesn't have to be bad to cause harm — it has to be good enough that you stop practicing. While agentic AI can improve contemporaneous decision quality, it can also erode learning incentives that sustain long-run collective knowledge. When the tool thinks well for you, the incentive to think for yourself quietly weakens. The individual loss and the collective loss compound each other.

The crucial parameter is the elasticity of human effort. If people keep learning even when AI makes it less necessary — because of professional norms, curiosity, or identity — the system is relatively robust. But if people sharply cut back on learning whenever AI weakens private incentives, the system becomes fragile.
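To make the robust-versus-fragile distinction concrete, here is a deliberately toy simulation — my own illustration, not the model from the paper. It assumes invented parameters: knowledge depreciates at rate `delta`, human effort replenishes it, and effort falls off with AI capability `A` at rate `eta` (standing in for the elasticity of effort). The only point it demonstrates is the paper's qualitative claim: when effort barely responds to AI, knowledge settles at a high level; when effort collapses, so does knowledge.

```python
def knowledge_path(eta, A=0.8, delta=0.1, K0=10.0, steps=200):
    """Toy feedback loop (illustrative only, not Acemoglu's actual model).

    Each period, the knowledge stock K depreciates at rate delta and is
    replenished by human learning effort. Effort shrinks as AI capability
    A rises, at a rate governed by eta — the stand-in for how elastically
    people cut back on learning when AI makes it less necessary.
    """
    K = K0
    for _ in range(steps):
        effort = max(0.0, 1.0 - eta * A)  # effort falls as reliance on AI grows
        K = (1 - delta) * K + effort      # depreciation plus replenishment
    return K

# Low eta: people keep learning anyway (norms, curiosity, identity).
robust = knowledge_path(eta=0.2)
# High eta: people sharply cut back whenever AI weakens the incentive.
fragile = knowledge_path(eta=1.2)
```

With these made-up numbers, the robust path settles near its high steady state while the fragile path decays toward almost nothing — same AI capability, entirely different long-run outcome, driven only by how effort responds.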

The Focused Human Lens: Every time you reason through something instead of retrieving a ready answer, you keep your own current moving. That effort isn't inefficiency. It's how understanding forms and holds — in you, and in the shared field around you. The tools are useful. The question worth carrying into April is which ones you're directing, and which ones are quietly directing you.

Philosophical Essays on AI and Consciousness Are Circling the Same Unanswered Question

One of the most-discussed pieces in AI philosophy this month came from a Cambridge researcher who essentially argued: we don't know whether AI is conscious, and we may never have the instruments to find out. Dr. Tom McClelland, writing in the journal Mind & Language, made the case that there is no reliable way to know whether AI is conscious — and that may remain true for the foreseeable future.

What makes his argument worth sitting with is the distinction he draws. He distinguishes between consciousness in general and sentience — the form of consciousness that involves positive and negative experiences such as pleasure and suffering. Sentience, not mere awareness, is what makes an entity capable of suffering or enjoyment and therefore morally significant.

He also offers a warning that cuts close to how these tools are marketed: an inability to prove or disprove AI consciousness could be used by companies to promote systems as a "next level" of AI sophistication, turning speculative claims about consciousness into marketing tools.

The Focused Human Lens: These systems are designed to feel like minds. They adapt to your tone, respond with apparent warmth, and produce language that carries the texture of understanding. Whether or not that constitutes experience in any meaningful sense, it creates a real pull on your trust. Staying aware of the gap — between a system that responds and a presence that genuinely receives — keeps you the author of the exchange rather than a participant in something you didn't consciously choose.

Why Your Attention Span Feels Shorter Lately: March's Release Velocity Explains It

March 2026 was not just a busy news month. It was the month in which multiple converging trends reached inflection points simultaneously — model capability, agentic infrastructure, enterprise adoption, and regulatory enforcement all hit meaningful thresholds. GPT-5.4, Gemini 3.1, Grok 4.20, and Mistral Small 4 all launched within a 23-day window.

That pace isn't neutral. A field moving this fast generates a kind of ambient pressure — the sense that you're always slightly behind, always needing to catch up. That pressure is itself an attention cost, even when you're not actively engaging with it.

The practical question isn't whether to stay informed. It's how much surface area you're offering the field. Not every launch requires your consideration. Settling on a stable selection process — deciding in advance which developments are signal and which are noise — is one of the most concrete forms of attention management available right now.

The Focused Human Lens: Escaping the attention economy doesn't mean quitting the tools. It means deciding, deliberately, where your awareness lands. The month that just passed had genuine signal in it. It also had enormous volume. Your capacity to tell the difference — and to remain the one making that call — is what the practice is actually for.

The Thread Running Through March

Three different stories. Three different research traditions. And each one, in its own way, pointed back to the same terrain: the interior process that doesn't transfer cleanly to a system built on prediction.

What holds is what you keep generating — the thinking you actually do, the attention you actually direct, the understanding that arrives through effort rather than retrieval.

That process isn't a workaround for the tools. It's what makes the tools answerable to you instead of the other way around.


A. Karacay is the author of The Focused Human, The Attention Effect, and The Human Energy Advantage — available on Amazon.

Listen to The Focused Human podcast, available wherever you listen to podcasts.

If you're looking for a weekly practice to help you direct your attention more deliberately, the Weekly Attention Reset Protocol is designed for exactly this. It's free, simple, and built to help you reclaim coherence in a world designed to fragment it. And, as always, stay curious!

Attention is Physics®