The new digital dependency: AI as a second brain
- Johan Steyn

Convenience is rising fast, but so is the risk of losing the mental muscle that creates insight.

Audio summary: https://youtu.be/sQHn_7pjxBc
Follow me on LinkedIn: https://www.linkedin.com/in/johanosteyn/
A new kind of dependency is forming in plain sight. Many people no longer use AI only to write or summarise. They use it to remember. They drop in meeting notes, ask for instant context, request action lists, and rely on AI to “hold” the thread of complex projects. The idea of a “second brain” is not new, but AI has supercharged it. What used to require disciplined note-taking systems now happens through a conversation. You can ask, “What did we decide last week?” or “Remind me what my key arguments were,” and get a tidy answer. The appeal is obvious. But there is also a risk we should name: when memory becomes a service, thinking can start to atrophy.
CONTEXT AND BACKGROUND
Humans have always offloaded memory to tools. We wrote on paper, kept diaries, stored documents in filing cabinets, and later searched emails and cloud drives. Each step improved efficiency and reduced cognitive load. The promise was simple: free the brain for higher-order thinking.
AI changes the nature of offloading. Traditional tools stored information, but they did not interpret it. You still had to read, select, and connect ideas yourself. AI can now handle that interpretive layer: it can summarise, extract themes, create a narrative, and suggest next actions. That is a powerful shift because it moves from storage to sensemaking.
We should be honest about why this feels so good. Modern life overloads attention. People are busy, distracted, and managing multiple responsibilities. An AI second brain offers relief: a place to dump complexity and retrieve clarity. But a tool that consistently provides clarity can also become a crutch, especially if we stop doing the mental work that produces understanding.
INSIGHT AND ANALYSIS
The first risk is shallow comprehension. When AI summarises everything for you, you may stop engaging deeply with the underlying material. You can feel informed without being grounded. Over time, that weakens the ability to spot nuance, challenge assumptions, or detect when something doesn’t add up.
The second risk is weakened recall and learning. Memory is not only a storage function. It is part of how learning happens. When you work to remember and explain something, you strengthen your mental model. If AI handles recall and explanation, you may retain less, not because you are lazy, but because the environment no longer requires effort.
The third risk is over-trust. AI “second brains” are not neutral archives. They can omit, misinterpret, or hallucinate. A transcript can be wrong. A summary can flatten disagreement. A timeline can reorder events. Yet the output looks authoritative, which tempts us to treat it as truth. The more we rely on AI for context, the more dangerous small errors become.
There is also a psychological risk: identity drift. Our memory is part of who we are. It shapes how we tell our story, what we value, and what we believe we have learned. If a system becomes the primary keeper and narrator of our experiences, it can subtly influence our sense of continuity. That may sound abstract, but it matters when people begin outsourcing reflection itself.
IMPLICATIONS
For individuals, the goal is not to reject AI second brains, but to use them in a way that strengthens rather than replaces cognition. Treat AI as a retrieval assistant, not an authority. When it gives a summary, ask for the supporting detail. Periodically write your own synthesis before you consult the machine. Use AI to test your thinking, not to avoid thinking.
For organisations, the key is to separate convenience from competence. AI meeting summaries and knowledge assistants can boost productivity, but teams still need verification habits and clarity on what counts as the official record. Leaders should also recognise a training risk: if juniors learn through AI summaries instead of engaging with primary material, they may develop fast, polished outputs without strong judgement.
For educators and parents, the question is not whether students will use AI to remember. They will. The task is to protect the mental muscles that matter: comprehension, reasoning, and the ability to explain an idea in your own words. Education must shift from memorisation to sensemaking, but it cannot outsource sensemaking entirely.
CLOSING TAKEAWAY
AI as a second brain can be liberating. It can reduce admin, improve organisation, and help us manage complexity. But cognitive independence is not guaranteed. If we outsource too much remembering and interpreting, we risk trading short-term relief for long-term fragility: weaker attention, weaker learning, and higher susceptibility to error and manipulation. The right path is not to treat AI as a brain replacement, but as a tool that supports a healthier relationship with our own thinking. The future belongs to people who use AI to think better, not to think less.
Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net