The gap between fear at the top and ambiguity at the bottom is not a skills problem. It is a conversation problem.
A colleague of mine, someone with a depth of engineering and leadership experience comparable to mine, said something recently that neither of us followed up on in the moment. We were talking about how people around us are now building working solutions in minutes that used to take hours. We noted it. We moved on. But neither of us said what we were both clearly thinking.
What does that mean for us?
That silence was its own kind of answer.
I have been sitting with that conversation since. What we were circling around was not a technical observation about AI productivity. It was a leadership one. When the execution that used to define engineering competence -- writing solid code, running careful analysis, producing a structured architecture document -- becomes something a motivated junior with a good AI tool can produce before lunch, the question of what experienced leadership actually means becomes uncomfortable in a very specific way.
I want to name that discomfort. Not because naming it solves anything, but because I have noticed that the people who most need to talk about it are the ones least likely to start the conversation.
Two Groups, Two Silences
In the months since that conversation with my colleague, I have paid close attention to how people at different career stages are responding to the same underlying shift. What I see is not chaos. It is something quieter and, in some ways, more damaging.
At the leadership level, there is fear. It is rarely stated directly. It dresses itself up as healthy skepticism about AI, as concerns about quality, as the reasonable position that experience still matters. All of those things are also true. But underneath them, in the conversations leaders are not quite having with each other, is something more personal. A worry that the judgment and pattern recognition that took two decades to build can now be approximated in a conversation with a language model. A worry that the things that made them valuable are precisely the things that got commoditized first.
At the entry level, there is ambiguity. Not laziness. Not entitlement. Genuine uncertainty about what to aim for, what to develop, and what will actually matter. The signals organizations send about what good looks like have not been updated. We still interview engineers on coding proficiency alone. We still assess graduates on whether they can produce a clean deliverable from scratch. We are measuring the old floor and calling it a ceiling, and young professionals know something is off but cannot name what.
Here is what neither group is hearing from the other: anything useful.
Leaders are not telling juniors that the skill map has changed, or what they are actually looking for now. Juniors are not telling leaders that the tools have changed everything about how they approach a problem, or that leaders need to understand what leadership actually looks like when execution is no longer the differentiator. The silence between these two groups is not a technology gap. It is a communication gap, and it is producing real organizational damage.
This article is an attempt to bridge it. Not with a listicle. With an honest look at what the evidence shows is happening on both sides.
The Skill Ladder Is Collapsing from the Top Down
The World Economic Forum's Future of Jobs Report 2025, based on surveys of over 1,000 companies globally, found that analytical thinking is now considered an essential core skill by seven in ten employers -- the single most cited skill across all roles for 2025. Resilience, creative thinking, and technological literacy also rank among the most-cited skills. These are not aspirational leadership attributes. They are hiring filters.
What happened? The skills that historically separated experienced leaders from early-career contributors have migrated down the organizational chart. They are now the baseline.
Eight of them, specifically:
Critical thinking -- the ability to evaluate claims, challenge assumptions, and form an independent view -- used to earn you a seat at the strategy table. Now McKinsey reports that only 35% of employers believe new graduates are adequately prepared for the workforce, and the primary gap cited is in digital skills and adaptability. Critical thinking is no longer rare enough to differentiate.
Analytical thinking and pattern recognition were management-level capabilities a decade ago. Interpreting dashboards, recognizing trends in data, forming a view from incomplete information -- these were specialist skills. Today, 67% of employers prioritize data interpretation even for non-technical roles. Entry-level job descriptions now list it next to "good communication skills."
Problem finding -- identifying what the actual problem is before being told -- was always a management distinction. Individual contributors were expected to solve. Managers were expected to identify. That boundary has collapsed. Employers increasingly prize talent that can ask the right questions and weigh competing data points, treating data as input, not instruction.
Clear, structured communication has moved from executive competency to day-one expectation, especially as AI generates the first draft of nearly everything. The human contribution is now the judgment of whether that draft is right for the specific audience and context, not the ability to produce prose.
Beyond these four, the evidence points to four more that have made the same migration. Data literacy -- reading and critically evaluating AI outputs -- is now expected of roles that have nothing to do with data science. Learning agility, once a high-potential leadership indicator in executive assessment frameworks, is now what organizations mean when they say they want someone "adaptable." Resilience under ambiguity, once a leadership virtue assessed in 360 reviews, is now the second most-cited core skill expected of all workers. And stakeholder awareness -- knowing how to tailor communication to a specific audience -- has moved from executive presence training to graduate job descriptions.
Eight skills. All with roots in leadership development frameworks. All now expected at entry level.
The Right Interview Question Has Changed
We have not updated our hiring practices to reflect this. Let me show you what that failure looks like in a room.
Last year, I was interviewing a candidate with average academic scores. The first round was conducted by a senior person on my team, and his feedback was unambiguous: this is the person we want. I went in for my round with no reason to be skeptical.
Every question I asked that was relevant to the job description came back with an immediate, confident response. I was genuinely impressed. Then I started pushing into territory that should have been difficult for someone with that academic background. Still excellent answers. I went deeper, into advanced distributed systems questions. Same quality of response.
Then I asked about database sharding.
He started answering about image sharpening.
I did not stop him. I kept going, this time deliberately asking questions that had no logical connection to where the conversation had been. A tangent. A non-sequitur framed as a technical follow-up. Every time, he had an answer. Not a pause to say "I am not sure how that connects to what we were discussing." Not a moment of genuine human confusion. Just an answer.
By the end of the conversation, I understood what was happening. He had an AI engine reading responses to him in real time. I declined to make an offer.
What stayed with me was not the candidate. It was the technique I had to develop on the spot to find the truth. Ask a question that is irrelevant to the conversation. A human will catch it. They will pause. They will push back. They will say "that seems disconnected, can you help me understand what you are getting at?" An AI pipeline pulling answers to detected keywords will just answer.
I use that technique in every interview now. An irrelevant question in the middle of a conversation is not a trick. It is a test of whether a person is present.
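If the mechanics are hard to picture, here is a deliberately hypothetical sketch -- not the candidate's actual setup, which I never saw -- of why a keyword-driven answer pipeline fails the presence test that a human passes without thinking:

```python
# Hypothetical sketch: a keyword-matching answer pipeline has no concept of
# conversational relevance, so it produces an answer to anything -- including
# non sequiturs. All names and canned answers here are illustrative.
ANSWER_BANK = {
    "sharding": "Sharding splits a table across nodes by a partition key...",
    "sharpening": "Sharpening boosts high-frequency detail with a convolution kernel...",
}

def keyword_pipeline(question: str) -> str:
    """Always returns an answer; never pauses, never pushes back."""
    for keyword, canned_answer in ANSWER_BANK.items():
        if keyword in question.lower():
            return canned_answer
    return "Great question. Broadly speaking..."  # a fluent answer to anything

def present_human(question: str, current_topic: str) -> str:
    """A person who is actually present checks relevance before answering."""
    if current_topic not in question.lower():
        return "That seems disconnected -- can you help me understand what you're getting at?"
    return "Let me think about that in context..."

print(keyword_pipeline("Walk me through image sharpening"))           # confident non sequitur
print(present_human("Walk me through image sharpening", "sharding"))  # pushes back
```

The pipeline's failure is structural: it optimizes for producing an answer, not for tracking the conversation. The irrelevant question exposes exactly that.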
But the candidate was not the real lesson. The real lesson was about the system that produced him. He was not malicious. He was operating in a world where nobody had told him clearly what the actual measure was. He knew AI could produce correct-sounding answers. He used it to pass the test he thought was being administered. He was optimizing for the wrong signal because we had not replaced the old signal with an honest one.
That is the clarity problem. The right question for an entry-level engineering hire today is not "can you write the code?" It is "can you analyze the problem correctly, frame it precisely, and use AI tools to solve it well?" Those are different cognitive tasks. The first tests execution. The second tests judgment before execution.
The irony is that the tools have made execution dramatically more accessible while making judgment dramatically more valuable. An entry-level engineer who approaches a problem by immediately prompting an AI without first understanding the constraint space will produce output that looks like a solution and functions like a liability. The organizations learning this the hard way are the ones that automated the craft without investing in the judgment that should precede it.
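To make "looks like a solution and functions like a liability" concrete, here is a hedged, entirely hypothetical Python example -- an AI-drafted caching helper that passes a quick demo while ignoring the constraint space, next to what judgment-before-execution produces instead:

```python
# Hypothetical AI-drafted helper: plausible, demo-friendly, and a liability.
def ai_draft_memoize(fn):
    cache = {}                        # unbounded -- a slow memory leak in production
    def wrapper(*args):
        if args not in cache:         # raises TypeError if any arg is unhashable
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

# Judgment-first version: state the constraints (bounded memory, hashable keys,
# thread safety), then reach for the tool that already satisfies them.
from functools import lru_cache

@lru_cache(maxsize=1024)              # bounded and battle-tested in the stdlib
def quote_price(sku: str, qty: int) -> float:
    return qty * 9.99                 # stand-in for an expensive computation

print(quote_price("SKU-1", 3))        # cached after the first call
```

Both versions return correct answers in a demo. Only one was chosen with the constraint space in view.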
The Mercer Global Talent Trends report for 2024 put this plainly: AI is creating high-skill, high-responsibility roles faster than organizations can fill them, with few clear pathways for junior talent to step in. That pathway problem is partly a skills problem. But it is mostly a clarity problem. We have not told entry-level people what judgment looks like in an AI-augmented role. We are expecting them to reverse-engineer it from job descriptions that still ask for framework proficiency.
Some Have Already Cracked It
Before we talk about what leaders should do, it is worth sitting with what is already happening among those who have not yet learned to be afraid.
During my EMBA at IIT Bombay, I had the chance to interact with undergraduate students who are still completing their degrees. Several of them are building AI companies. Some are running three ventures simultaneously, before graduation, and earning more than many senior professionals with a decade of experience. They are not exceptional anomalies. They are early indicators.
What they have figured out -- intuitively, without anyone writing a framework for them -- is exactly what this article is arguing. They are not competing on execution. They are competing on judgment, speed of insight, and the ability to move from problem identification to working solution faster than any organization still arguing about process can match. They have internalized the new skill stack not as a theory but as a daily operating reality.
This is the part of the story that most conversations about AI and work leave out. The shift is not just a threat to existing structures. It is a genuine opening. The barrier to building something real has never been lower. The tools that used to require a funded team, months of development, and specialized expertise can now be orchestrated by a single person with clarity of thought and good judgment about what to build.
The undergrads running companies are not smarter than experienced engineers. They are less encumbered by the mental model of what building is supposed to look like. That is a learnable condition. It is not reserved for the young.
The skill shift is real and undeniable. The question is whether you respond to it as a threat to what you have built or as an invitation to build something new.
The Leadership Reckoning
Here is where I want to be honest, because this part of the conversation rarely happens in professional settings.
There is discomfort at the leadership level that deserves to be named rather than managed. The capabilities that took experienced leaders the longest to build -- synthesizing complex information quickly, seeing the pattern before the data fully resolves, communicating with precision across audiences -- are the capabilities that AI has made more accessible earlier in people's careers. The gap that experience used to fill is narrowing. Not to zero. But noticeably.
What makes this harder to process is that the old signals of leadership readiness are still being used. Someone who produces a thorough, well-argued strategic document unaided is still treated as more capable than someone who produces it with AI assistance in a quarter of the time, even when the quality is identical. The scaffolding of how we recognize leadership potential has not caught up to the reality of what is producing good outcomes.
Gallup's 2024 research found that U.S. employee engagement dropped to 31%, the lowest level in a decade, with 17% of employees actively disengaged. That number reflects, in part, a workforce sensing incoherence between what organizations say they value and what they actually reward. Leaders who are disoriented about their own value proposition are not well-positioned to provide clarity to anyone else.
But here is the other side of that reckoning, and it is the more important one. The things AI cannot do -- hold accountability, set aspiration, build trust, exercise judgment when values conflict, read a room and decide what the moment actually requires -- are not small things. They are the whole game at the leadership level. McKinsey's January 2026 research is clear: generative AI cannot set aspirations, make tough calls, build trust among stakeholders, hold team members accountable, or generate truly new ideas. Those capabilities are not narrowing. They are becoming more valuable as everything adjacent to them gets automated.
The HBR research by Hougaard and Carter frames this as an opportunity: the adoption of AI creates both uncertainty and a chance to refocus on the human skills that matter most -- self-awareness, clear communication, and compassion. The leaders who use this moment to double down on those capabilities will be better positioned than they were before AI arrived, not worse.
What Leadership Actually Means Now
I want to be careful here, because the easy version of this section is a list of buzzwords -- emotional intelligence, ethical AI, systems thinking -- that sound important and commit to nothing. The harder version requires distinguishing between what is foundational and preserved, what is foundational but changed in character, and what is genuinely new.
The foundational layer holds. Ethical judgment, decision-making under pressure, visionary thinking, strategy development -- these do not go away. Organizations still need leaders who can set direction in conditions of genuine uncertainty, hold accountability across complex systems, and make calls when values conflict and time is short. As McKinsey's January 2026 research on building leaders in the AI age reiterates, none of that can be delegated to generative AI. That work remains deeply human.
But several of these foundational skills have changed in character. Ethical judgment now includes a question that had no analogue before: who is accountable when an AI system your organization deployed makes a consequential error at scale? Decision-making under pressure now includes a prior step that did not exist before: should I trust this AI output, override it, or interrogate it further? Getting that wrong in either direction has real costs. Research on human-machine teaming identifies trust calibration -- knowing when to defer to, verify, or challenge an AI system -- as a distinct capability that cannot be derived from traditional decision-making training alone.
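Here is a minimal sketch of what trust calibration looks like when it is made explicit rather than left to instinct. The thresholds and the stakes taxonomy are my illustrative assumptions, not a published framework:

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    recommendation: str
    confidence: float   # model-reported confidence, 0.0 to 1.0
    stakes: str         # "low", "medium", or "high": cost of being wrong

def calibrate(output: AIOutput) -> str:
    """Route each AI output to defer / verify / challenge instead of blanket trust."""
    if output.stakes == "high":
        return "challenge: a human makes the call; the output is one input among several"
    if output.confidence < 0.7:
        return "verify: spot-check against domain knowledge before acting"
    return "defer: accept, but log for periodic audit"

print(calibrate(AIOutput("approve the vendor contract", confidence=0.92, stakes="high")))
```

The point is not the specific thresholds. The point is that the routing decision exists as a deliberate step at all.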
Then there are skills that are genuinely new. Not elevated versions of prior competencies. Competencies that did not appear in any leadership framework before because the conditions that require them did not exist.
AI governance and accountability architecture is one. Traditional ethical leadership asked what your values are and whether your decisions reflect them. AI governance asks who is responsible when a system you deployed, trained on historical data, makes a discriminatory decision at scale that affects people you never met. Hoque, Davenport, and Nelson, writing in MIT Sloan Management Review in 2025, identified this as a non-negotiable leadership competency: the ability to design governance structures, define AI use case approval processes, and assign accountability when the decision-maker is partly non-human.
Sensemaking from AI-generated complexity is another. Research on organizational intelligence and leadership cognition describes the new leadership task: identifying which signals among vast AI-generated data streams are meaningful, translating probabilistic outputs into coherent organizational narratives, and framing AI-generated insights within organizational purpose so that people understand why decisions are being made. This is a different skill from traditional sensemaking because the signal source has changed. AI outputs look authoritative and may be wrong in ways that are not obvious without domain depth.
Human-AI workflow design is perhaps the most underestimated new leadership skill. Leaders now need to make structural decisions that have no prior analogue: in this workflow, which tasks belong to AI, which to humans, which require human oversight of AI output, and which require human sign-off before action? McKinsey describes the shift as moving from command-and-control leadership toward creating the context -- guardrails, decision rights, and new definitions of quality -- within which teams can navigate AI-informed process changes. This is closer to systems engineering than to people management, and most leaders have no practiced intuition for it.
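To show what those guardrails and decision rights can look like as an explicit, reviewable artifact rather than tribal knowledge, here is a minimal sketch. The task names and the four-tier routing are assumptions for illustration:

```python
# Hypothetical decision-rights table: for each task, who acts and what oversight applies.
WORKFLOW_POLICY = {
    "draft_release_notes":     {"actor": "ai",    "oversight": "none"},
    "summarize_incident_logs": {"actor": "ai",    "oversight": "human_review"},
    "triage_customer_refunds": {"actor": "ai",    "oversight": "human_signoff"},
    "negotiate_vendor_terms":  {"actor": "human", "oversight": "none"},
}

def route(task: str) -> str:
    policy = WORKFLOW_POLICY.get(task)
    if policy is None:
        return "escalate: no decision rights defined -- a leadership gap, not a tooling gap"
    if policy["actor"] == "human":
        return "a human executes end to end"
    if policy["oversight"] == "human_signoff":
        return "AI drafts; a named human approves before anything ships"
    if policy["oversight"] == "human_review":
        return "AI acts; a human samples and reviews the output"
    return "AI acts autonomously within agreed guardrails"

print(route("triage_customer_refunds"))  # AI drafts; a named human approves before anything ships
```

Writing the table down forces exactly the conversation this section describes: what counts as quality, who holds the decision rights, and who owns the consequence.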
Finally, narrative leadership -- the ability to tell true stories with genuine stakes in a world drowning in plausible AI-generated text -- is becoming genuinely rare and therefore genuinely valuable. Research on AI-enabled leadership identifies this as critical: LLMs are language models, not knowledge models, capable of producing high linguistic quality with questionable substance. A leader who can articulate why a direction was chosen, what was sacrificed to get there, and why it is right -- in human language that owns a consequence -- cuts through in a way that no generated content can replicate.
What Needs to Change: Colleges and Organizations Both
The skill shift is not anyone's fault. But it is everyone's responsibility to respond to it. Two institutions are most overdue for the conversation.
For colleges and universities, the reckoning is structural. Curricula built around execution -- write the code, run the analysis, produce the document -- are teaching the exact capabilities that AI has commoditized. The measure of a technically educated graduate is no longer whether they can produce a clean output. It is whether they can frame the right problem, evaluate an AI-generated solution critically, and make a judgment call about when the tool is helping and when it is misleading.
Some institutions are beginning to adapt. Stanford, MIT, and a handful of forward-looking engineering schools are integrating AI fluency into core curricula -- not as a separate course but as a lens across disciplines. But they are the exception. Most institutions are still graduating students optimized for a job market that has already changed. As noted earlier, only 35% of employers believe new graduates are adequately prepared for the workforce, and the primary gap is in digital skills and adaptability, not technical knowledge.
The deeper curriculum shift is harder. It is not about adding an AI tools module. It is about redesigning how problems are taught -- moving from "here is a problem, find the solution" to "here is a situation, identify what the actual problem is, then solve it." That is a pedagogy change, not a course addition. It requires faculty who are themselves practicing these skills, not just teaching about them.
For organizations, the changes are both faster to implement and more politically difficult.
Hiring criteria need to be updated now. The coding test as the primary engineering interview format is measuring the wrong thing. What predicts contribution in an AI-augmented engineering environment is problem framing, judgment under ambiguity, and the ability to critically evaluate AI outputs. Those can be assessed. Structured problem-framing exercises, ambiguous scenarios with no clean answer, and the irrelevant-question technique described earlier in this article are all practical starting points.
Onboarding assumptions need to be rebuilt. The traditional model -- spend the first year learning the tools and processes, demonstrate competence, then begin contributing to higher-order decisions -- is broken at both ends. New hires are arriving with AI tools that compress the execution curve dramatically. Organizations that do not channel that capability toward judgment development within the first months will find it used to paper over the judgment gaps instead.
Performance management needs new language. "Good output" when AI can generate output requires a more precise definition. The question is no longer whether the deliverable is technically sound. It is whether the person exercised the judgment to define the right deliverable, used AI tools appropriately to produce it, and retained accountability for the result. That distinction requires managers who have thought explicitly about where judgment lives in their team's work -- and most have not been asked to do that yet.
Leadership development programs need to move the curriculum up the value chain. Developing critical thinking, communication, and analytical skills in leaders is still important -- but it is no longer sufficient differentiation. The new leadership curriculum is: trust calibration with AI systems, AI governance and accountability design, sensemaking from AI-generated complexity, and the human capabilities of narrative, psychological safety architecture, and ecosystem orchestration that no model can replicate.
None of this is as complicated as it sounds. It mostly requires one thing: an honest conversation about what has actually changed, conducted at the level where decisions about curricula and hiring and performance are made.
A New World Is Calling
Let me close where I began. With a colleague, a silence, and a question neither of us said out loud.
That conversation has a different texture now. The question "what does this mean for us?" has an answer, and the answer is more interesting than the fear suggested.
It means the value of experience is real, but it is not automatic. It has to be invested in deliberately, the way craft was once invested in. The judgment that took two decades to build does not disappear when AI can produce a first draft. It becomes more necessary, because now someone has to know whether the first draft is pointing in the right direction.
It means the ceiling for what a motivated early-career person can build has been raised dramatically. The IIT Bombay undergrads running multiple companies are not outliers. They are early movers in a landscape that will reward anyone who figures out what they figured out: that the bottleneck is no longer execution. It is judgment, vision, and the courage to act on an insight before consensus catches up.
It means colleges and organizations have work to do -- not to protect the structures that existed but to build the ones that this moment requires. That work is not a threat to manage. It is a design problem to solve.
And it means the conversation between leaders and the people they lead -- the honest one, not the performance review version -- has never been more important or more overdue. Senior leaders who name what has changed in their own roles, and who invite their teams to do the same, will build something that no AI tool can generate: a shared understanding of what contribution actually looks like in the work they are all doing together.
The shift is real. The discomfort is real. And so is the opportunity.
The question is not whether this new world is arriving. It has arrived. The question is whether you are going to stand at the edge describing it or step into it and start building.
A new world is calling. Have you embarked on the journey?
If this resonates, I would like to hear what that conversation looks like in your organization. Not the official version. The honest one.
References
World Economic Forum. (2025). The Future of Jobs Report 2025. Geneva: WEF. https://reports.weforum.org/docs/WEF_Future_of_Jobs_Report_2025.pdf
Conley, C. (2024, August 2). Why "wisdom work" is the new "knowledge work." Harvard Business Review. https://hbr.org/2024/08/why-wisdom-work-is-the-new-knowledge-work
Hoque, F., Davenport, T. H., & Nelson, E. (2025, April 9). Why AI demands a new breed of leaders. MIT Sloan Management Review. https://sloanreview.mit.edu/article/why-ai-demands-a-new-breed-of-leaders/
Sternfels, B., Brende, B., & Pacthod, D. (2026, January 12). Building leaders in the age of AI. McKinsey & Company. https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/building-leaders-in-the-age-of-ai
Hougaard, R., & Carter, J. (2025, October). How gen AI can create more time for leadership. Harvard Business Review. https://hbr.org/2025/10/how-gen-ai-can-create-more-time-for-leadership
Zaidi, S. Y. A. et al. (2025). Leaders' competencies and skills in the era of artificial intelligence: A scoping review. Applied Sciences, 15(18). https://www.mdpi.com/2076-3417/15/18/10271
Kuzmanov, I. (2025). Organizational intelligence and leadership cognition in AI systems. ResearchGate. https://www.researchgate.net/publication/399107704
Wallraff, B. (2025). Artificial intelligence is transforming organizations, but sustainable impact depends less on technology than on leadership and culture. AI.MAG, 05-2025. https://www.researchgate.net/publication/398772648
International Journal of Science and Research Archive. (2025). Leadership in the AI era: Navigating and shaping the future of organizational guidance. 15(03), 1737-1747. https://doi.org/10.30574/ijsra.2025.15.3.1875
Potential Project / Hougaard, R. (2024). AI and human leadership: Research report. https://www.potentialproject.com/ai-and-human-leadership
Mercer. (2024). Global Talent Trends 2024. https://www.mercer.com/global-talent-trends/
Gallup. (2024). State of the global workplace 2024. https://www.gallup.com/workplace/349484/state-of-the-global-workplace.aspx
Jain, R. (2024). Collaborative causal sensemaking: Closing the complementarity gap in human-AI decision support. arXiv. https://arxiv.org/pdf/2512.07801
Harvard Business Impact. (2025). Rethinking roles in the age of intelligent machines. 2025 Global Leadership Development Study. https://www.harvardbusiness.org/insight/the-fluid-future-of-work-rethinking-roles-in-the-age-of-intelligent-machines/
Heidrick & Struggles / UNLEASH. (2025). Leadership imperatives for thriving in a chaotic world. https://www.unleash.ai/strategy-and-leadership/leadership-imperatives-for-thriving-in-a-chaotic-world/
