Your Engineering Career Ladder Needs a Rewrite — AI Changed the Job

By Stephen Ledwith · April 28, 2026

A while back, I wrote about building a modern engineering career ladder — the competency dimensions, the dual-track progression, the principle that growth should be behavior-based, not time-based. I still stand behind all of it.

But I also have to be honest: if your career ladder was written before 2025 and hasn’t been touched since, it’s already out of date. Not because the fundamentals changed — they didn’t. Because the job changed.

AI tools have shifted what great engineering work looks like at every level. The behaviors that distinguish a senior engineer from a staff engineer, or a mid-level from a senior, are not the same as they were two years ago. If your ladder still doesn’t reflect that, you’re measuring people against a standard that no longer maps to the work.

Here’s how to fix it.


What Actually Changed

The core competency dimensions I outlined before — technical execution, collaboration, impact, leadership, business context, initiative — are still the right ones. The problem is that what “good” looks like within each of those dimensions has shifted, because the tools available have shifted.

A few years ago, technical execution at the senior level meant: writes clean, well-tested code; designs systems thoughtfully; handles complexity without being handed solutions. That’s still true. But it’s no longer sufficient as a definition.

Today, a senior engineer who ignores AI tooling entirely is operating at a meaningful disadvantage — and more importantly, is modeling the wrong behavior for the engineers around them. Conversely, an engineer who leans on AI tools without understanding their output, validating their assumptions, or applying real judgment is producing risk, not value.

The differentiator has shifted. It’s no longer just what you can produce. It’s the quality of judgment you apply to what AI produces — and how you use that output to create impact beyond your own hands.


Updating the Competency Dimensions

You don’t need to throw out your existing ladder. You need to update the behavioral anchors within it. Here’s how I’d approach each core dimension:

Technical Execution

Add explicit language about AI-augmented output at mid-level and above. The question isn’t “did they use AI?” — that’s table stakes. The question is:

  • Do they evaluate AI output critically, or accept it uncritically?
  • Do they know when to trust AI-generated code and when to dig in?
  • Do they use AI to increase the scope and quality of their work, not just the speed?

Level     | Old Anchor                            | Updated Anchor
----------|---------------------------------------|-----------------------------------------------------------
Mid-level | Implements well-scoped features       | Delivers well-scoped features, using AI tooling to raise quality and throughput
Senior    | Designs systems across a team         | Designs systems across a team; evaluates and validates AI-generated solutions with strong judgment
Staff     | Sets technical direction across orgs  | Sets technical direction, including AI tooling strategy and standards for responsible AI use

Collaboration

The collaboration dimension needs to account for the fact that AI is increasingly part of the working environment. Engineers need to be able to:

  • Share context in ways that work for both human and AI collaborators (clear specs, documented decisions, structured context)
  • Contribute to team standards for how AI tools are used, reviewed, and trusted
  • Actively help less-experienced teammates develop sound AI judgment, not just technical skills

Impact

Impact language should reflect the changed relationship between effort and output. An engineer who has mastered AI-augmented workflows can produce significantly more than one who hasn’t — and that should be visible in how impact is defined and evaluated at each level.

Watch out for a common trap: inflating everyone’s output expectations without adjusting the human factors. AI raises the ceiling, but it doesn’t eliminate the need for focus, prioritization, and judgment. Impact at senior levels is still about choosing the right problems — AI just helps execute faster once you do.

Leadership

At staff level and above, leadership now includes a responsibility for AI culture on the team. That means:

  • Modeling good AI practices (using tools effectively and critically)
  • Establishing standards for AI-generated code review and testing
  • Helping the team navigate ambiguity about when AI output is trustworthy versus when it needs more scrutiny

This isn’t a new competency — it’s an extension of the “shapes culture” language that’s already in most senior ladders. But it needs to be explicit, because if you don’t name it, it won’t happen consistently.


The New “Invisible IC” Problem

In the original article, I called out the problem of invisible ICs — engineers doing meaningful work that doesn’t surface in obvious metrics because it’s not tied to features shipped. Mentoring, internal tooling, documentation, code quality.

AI has created a new version of this problem. There are engineers on your team right now who are:

  • Quietly doing exceptional AI-augmented work that looks “too easy” to observers
  • Figuring out how to use AI to solve problems that used to require twice the headcount
  • Building internal AI workflows that your team relies on without formal recognition

If your ladder doesn’t have language that captures this kind of multiplier work, you’ll undervalue the people who are already doing it — and lose them.

The New Invisible Work
“The engineer who figured out how to do in two days what used to take two weeks isn’t showing off — they’re showing you the future. Make sure your ladder rewards that, not just the engineer who logs the most hours.” — Stephen Ledwith

What to Do With Engineers Who Aren’t Adapting

This is the question I get most often from engineering managers right now: what do you do with a senior engineer who’s resistant to AI tools — someone who’s technically excellent by the old standard but isn’t adapting?

My honest answer: be patient, be direct, and be clear about the expectation.

Resistance to AI tools often comes from three places:

  1. Skepticism — they’ve tried the tools and found them unreliable for their specific work
  2. Identity — their self-concept is tied to doing the work themselves, and AI feels like a threat to that
  3. Workflow inertia — they have well-established patterns that work, and the activation energy to change feels high

The first one deserves respect. Engage with it. Ask what specifically they’ve tried and why it didn’t work. Sometimes the skepticism is valid and reveals real limitations in specific tooling for specific domains.

The second and third are leadership challenges, not technical ones. They require honest conversations about what the team expects, what career growth looks like going forward, and what support you’re providing to help people develop new skills.

What you shouldn’t do is quietly lower expectations for experienced engineers who aren’t adapting while raising them implicitly for everyone else. That creates inequity and erodes your ladder’s credibility.


Making the Update Practical

You don’t need to rewrite your entire ladder from scratch. Here’s a practical approach:

  1. Identify the highest-impact dimensions: Technical execution and leadership are where AI fluency shows up most directly. Start there.
  2. Involve your senior engineers: The people already doing this work well are your best source of behavioral anchors. Ask them what good AI judgment looks like at their level.
  3. Update behavioral anchors, not the structure: Keep your levels and tracks. Update the language within them to reflect AI-augmented work.
  4. Communicate the change clearly: Engineers deserve to understand why the ladder is being updated and what it means for their current standing. Don’t surprise people with new expectations at review time.
  5. Make it a living document: This space is moving fast. Plan to revisit the AI-related language at least annually.

Final Thought

A career ladder that doesn’t reflect the current reality of the work isn’t just unhelpful — it sends the wrong signal about what your organization values. Engineers read ladders carefully. If yours still doesn’t mention AI fluency, judgment, or multiplier impact at levels above mid-career, the message it sends is that those things don’t matter.

They do. Update the ladder before your best people figure that out and leave.

“A good career ladder doesn’t tell people where to go. It shows them how far they can grow — and that requires knowing what growth actually looks like today.” — Stephen Ledwith


For more on engineering org design and leadership, read the original Building a Modern Engineering Career Ladder post, or reach out if you’re working through a ladder redesign.