
Who will become senior if AI does the junior work?

Automating the basics creates a skills gap that will surface years later as leadership and expertise shortages.





One of the strangest side effects of AI is that it is exceptionally good at the work that used to train people. The “junior tasks” in many professions are repetitive, time-consuming, and tightly structured: drafting first versions, producing summaries, compiling research, formatting documents, cleaning data, preparing slide decks, writing basic code, and triaging requests. These are exactly the tasks that large language models and AI copilots can now do quickly. That sounds like pure productivity. But it raises an uncomfortable question: if AI does the junior work, how do juniors become seniors? The issue is not just jobs. It is the pipeline of competence.


CONTEXT AND BACKGROUND

Most careers are built through apprenticeship, whether formal or informal. Early on, you do simpler tasks under supervision. You make mistakes in a relatively safe environment. You learn the standards of the craft. Over time, you internalise patterns and develop judgement. Eventually, you are trusted with higher-stakes decisions.


This system was never perfect, but it worked because work and learning were intertwined. The messy, boring tasks were not just labour. They were training. They created familiarity with detail, context, and the consequences of small errors.


AI breaks that pattern by separating output from understanding. A junior can now produce something that looks senior, without necessarily grasping why it is correct, what trade-offs were made, or what risks are hiding in the gaps. The organisation sees speed, but may be losing depth.


INSIGHT AND ANALYSIS

The first risk is shallow competence. If juniors mostly supervise AI outputs, they may never develop the mental models that come from doing the work end-to-end. In law, it is the discipline of reading cases carefully and noticing the exceptions. In finance, it is understanding how numbers move through assumptions. In software, it is learning how systems break and why. In consulting, it is the craft of structuring a problem and spotting what the client is not saying. AI can accelerate these tasks, but it can also bypass the struggle that produces skill.


The second risk is the illusion of productivity. When AI produces a draft in seconds, it feels like time saved. But someone must verify it. If verification is done poorly, errors slip through. If verification is done properly, it becomes a high-skill activity that juniors may not be ready for. The paradox is that AI can push juniors into responsibilities they have not been trained to carry: quality assurance, judgement calls, and risk management.


The third risk is the missing middle. If organisations reduce entry-level hiring because “AI can do it”, they may discover later that they have too few people who grew into senior roles. The pipeline doesn’t break immediately. It breaks quietly, and then all at once. Five years later, you cannot find enough competent managers, reviewers, team leads, and subject experts. You have plenty of AI output and not enough humans who can tell what is safe, what is misleading, and what is strategically sound.


There is also a cultural risk. Apprenticeship is not only technical learning. It is social learning: how to work with others, how to handle conflict, how to navigate ambiguity, how to accept feedback, and how to build resilience. If early-career work is reduced to “prompting and polishing”, we may create professionals who are fluent in presentation but fragile in practice.


IMPLICATIONS

The response cannot be to ban AI or pretend the old world will return. The smarter response is to redesign development pathways deliberately. If AI removes certain training tasks, organisations must replace them with new forms of practice that build the same underlying capabilities.


That could mean structured “manual reps” where juniors must still do portions of the work without AI, especially in high-stakes domains. It could mean better review rituals: teaching juniors how to verify, how to test assumptions, and how to document decisions. It could mean simulation-based training: creating realistic scenarios where people learn to handle edge cases, not just produce polished outputs.


Leaders also need to rethink how they measure talent. Output volume is no longer proof of competence. The differentiator is judgement: can someone explain why a decision was made, what risks were considered, and what evidence supports it? In an AI workplace, promotion should be tied to reasoning quality, not just speed.


CLOSING TAKEAWAY

AI can be the most powerful training tool humanity has ever built, but only if we use it with intention. If we let it replace the apprenticeship layer without replacing the learning, we will create a skills gap that surfaces later as a shortage of capable seniors. The question is not whether AI will do junior work. It will.


The question is whether organisations will redesign how people become competent, trustworthy professionals in a world where first drafts are cheap. In the end, the scarce resource will not be content. It will be judgement.


Author Bio: Johan Steyn is a prominent AI thought leader, speaker, and author with a deep understanding of artificial intelligence’s impact on business and society. He is passionate about ethical AI development and its role in shaping a better future. Find out more about Johan’s work at https://www.aiforbusiness.net

 
 
 
