Attending Villanova University for my Executive Master of Business Administration (EMBA) was a pivotal experience in my professional development. Systems Thinking, taught by Jamshid Gharajedaghi, was central to the program's design and was the deciding factor when I compared it with others. All the professors were top-notch, including Steve Andriole and Tim Monahan, who played a key role in solidifying my interest in data usage and ultimately led me to pursue a second Master's in Predictive Analytics from Northwestern University (Villanova's program was not yet established at the time).

I studied at Villanova from 2008 to 2009, shortly after Tom Davenport published his book "Competing on Analytics." For those unfamiliar with the book, it examines how companies use data to build high-performing organizations, developing competitive strategies centered on sophisticated data analysis rather than relying on analytics merely for operational support or forming strategies based on intuition.
So, when Tom, Steve, and others began raising the possibility of job disruption and displacement caused by AI, I paid close attention and spent a considerable amount of time reflecting on what I had observed over the prior 12 months.
In his post, "AI and the Entry-Level Problem," Professor Davenport's argument was not that AI will replace experts (although there are questions there as well), but that it may eliminate the path to becoming one. This mirrors many of the conversations I have had at AI-focused events and workshops over the last year. My working theory entering the year was that companies would use AI as a means to accelerate technology development. However, conversations ultimately shifted to cost-cutting. While managers sought to procure AI tools to augment their existing workforce, corporate Chief Financial Officers (CFOs) focused on how much they could reduce current-year costs, which is another way of saying "reduce headcount." This naturally extended to pauses in new hiring, as many managers preferred to retain their current staff rather than hire someone new with whom they were unfamiliar. This led to some uncomfortable conversations while I was at HumanX, as graduating college students participated in the discussion.
This unlocked a new perspective on what we are witnessing. Often, companies seek to reduce costs by laying off more experienced and typically more expensive employees while hiring younger, less costly ones. However, as artificial intelligence rapidly advances, we seem to be witnessing something different: the systematic erosion of entry-level positions across industries. This isn't just another automation story. It's perhaps a fundamental restructuring of how careers begin, how expertise develops, and how organizations build their talent pipelines.
The Diamond Emerges
Traditional corporate hierarchies resembled pyramids, with broad bases of entry-level workers supporting narrower layers of middle management and executives. However, AI may be reshaping this structure into something more akin to a diamond: far fewer entry-level positions, a more substantial middle of specialized experts, and the same small number of senior leaders at the top.
This shift is already underway. LinkedIn data reveals a troubling trend in entry-level hiring, particularly acute in software development, where AI-powered code generation has dramatically reduced demand for junior programmers. However, the impact extends far beyond tech. Executives report that financial services companies are restructuring their entire organizational models as AI capabilities render many traditional entry-level tasks obsolete.
The implications are staggering. If companies aren't hiring inexperienced workers, where will tomorrow's experienced professionals come from?
Consider software development: junior developers typically start by fixing bugs, writing simple functions, and maintaining existing code. These tasks, once essential stepping stones to expertise, are increasingly handled by AI tools that can generate, debug, and optimize code faster than human beginners. But without this foundational experience, how does someone develop the deep understanding necessary to design complex systems or make strategic technical decisions? Without deliberate action, we risk watching an entire "middle" of the talent pipeline collapse.
“But AI can’t do that!”
One common counterargument to this trend is that AI cannot yet perform some specific task. I am being purposely vague because the task in question could be almost anything. But I would ask you to reflect on where AI was 12 to 18 months ago. The early models were entertaining but had noticeable gaps if you wanted to use them for any serious work. Hallucinations and images of people with six fingers on their hands can be amusing fodder for memes, but they are not artifacts you would want to rely on for serious business.
Yet, if we reflect on the progress made over this time, we are seeing fewer hallucinations, and images are becoming so realistic that people now worry about second-order effects if they are used as fuel for misinformation. Some tools are introducing citations that allow users to verify text generated by the models. To assume the technology will remain static is to ignore history.
A more recent objection people are grasping at is that current AI is only a reasoning model and not Artificial General Intelligence (AGI). This misses two critical points.
First, these systems were never AGI to begin with. Large language models process vast datasets, encode patterns into high-dimensional representations, and generate responses by drawing on these learned associations. While sophisticated, this approach differs fundamentally from human-like general intelligence—a fact that researchers have long understood.
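The mechanics described above (learning associations from data, then generating responses by drawing on them) can be caricatured with a deliberately crude toy model. This is a sketch for intuition only, not how transformer-based LLMs actually work; the corpus and function names here are invented for illustration:

```python
import random
from collections import defaultdict

# Tiny "training corpus" (invented for this illustration).
corpus = "the model learns patterns the model predicts words the model generates text"
words = corpus.split()

# "Training": record which word tends to follow which (a bigram table).
# Real LLMs encode far richer patterns in high-dimensional vectors,
# but the principle of learned association is loosely analogous.
follows = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling the learned associations."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no learned continuation; stop
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

However sophisticated the scale, the output is always drawn from patterns in the training data, which is why this approach differs from general intelligence.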
Second, and more importantly, the AGI threshold is largely irrelevant to the current disruption. Today's AI systems are already transforming industries, automating complex workflows, and augmenting human capabilities in ways that seemed impossible just years ago. The question isn't whether they constitute AGI, but how rapidly they're eliminating the practical barriers that once seemed insurmountable.
The Skills That Matter
While I think it is flawed thinking, I can understand the emotions underpinning some of the dismissiveness. Fundamental change arriving at a rapid pace can be scary, and in the face of that anxiety, it is perfectly logical to search for reasons why the change will not occur. Perhaps those of us who believe that AI is going to cause massive disruption to the status quo are wrong, but I personally have seen enough to convince me that this change is real.
So, if we acknowledge the reality of this change, the next question is: “What do we do to make sure people are not left behind?” I don’t think this is necessarily a death sentence for new graduates and career changers. The key is understanding which capabilities remain uniquely human and are likely to stay that way.
Deep Domain Knowledge Becomes Premium: As AI handles routine tasks, the value of subject matter expertise skyrockets. Someone who understands both AI capabilities and the intricacies of supply chain management, healthcare regulations, or customer psychology becomes invaluable. The combination of domain knowledge and AI literacy creates a competitive moat that pure technical skills alone cannot provide.
Critical Thinking and Information Processing: Entry-level workers once spent a significant amount of time gathering and organizing information. Now, they need skills in evaluating, questioning, and improving AI-generated outputs. Can you spot when an AI model's assumptions are flawed? Can you identify missing context in a generated report? Can you ask better questions that lead to more useful AI responses?
Human-AI Collaboration Fluency: This isn't just about knowing how to write prompts. It's about understanding when to trust AI, when to doubt it, and how to combine AI capabilities with human judgment. It's knowing which tasks benefit from AI assistance and which require purely human insight.
The Path Forward
For those entering the workforce, the strategy is clear: become irreplaceable not by competing with AI but by complementing it. Additionally, commit to continuous learning and remain agile in your ability to adapt to a more dynamic environment. Develop deep expertise in a specific domain and master the art of human-AI collaboration. Cultivate critical thinking skills that allow you to improve on AI-generated work rather than just accept it.
The diamond-shaped organization isn't necessarily a dystopia—it could represent a more efficient allocation of human talent, with people focused on higher-value work from the start of their careers. But only if we successfully navigate the transition.
What do you think? Are we witnessing the end of traditional career ladders, or the evolution toward something better? How should educational institutions and employers adapt to prepare workers for this new reality?