Which Roles AI Is Really Changing — and Which Are Just Getting New Paperwork

WEF projects 92 million roles displaced by 2030, but which jobs are truly transforming? Data from Oxford, McKinsey, GitClear, and IAPP cuts both ways.


Davos ended last week. The keynotes agreed, as they always do, that AI is reshaping work — but offered little on the texture of that reshaping. Which functions are genuinely changing? Which are watching the same work pile up, only now with an AI layer on top?

The data from 2025 and early 2026 is good enough to answer this with some precision. Not perfectly — the landscape moves fast — but well enough to draw a useful distinction between roles AI is automating and roles AI is burdening with new overhead. That distinction matters enormously for how you plan.

For the task-level data on what AI can technically automate, see What AI Actually Automates in 2026: A Data-First Look.

Key Takeaways

  • The WEF projects 92 million roles displaced by 2030 — clerical functions dominate the decline list (WEF Future of Jobs 2025, Jan 2025)
  • Data entry keyers carry a 0.99 probability of computerisation — the highest of any knowledge-work occupation studied (Oxford Martin School, 2013)
  • AI coding tools have made code review harder, not easier — cloned code blocks rose 48% between 2020 and 2024 (GitClear, 2025)
  • Only 1.5% of organizations expect to need no additional AI governance staff within a year — oversight is the new growth function (IAPP, 2025)

Which Roles Face Genuine Automation Risk?

Clerical, repetitive, and rules-based functions are bearing the brunt. The World Economic Forum’s 2025 Future of Jobs report projects administrative assistants and executive secretaries losing 6.1 million positions by 2030 — the second-largest absolute decline of any occupation category studied (WEF, Jan 2025). Accounting and payroll clerks follow at 1.65 million projected losses. Material-recording and stock-keeping clerks: 2.64 million.

The pattern is consistent. Roles built around moving information from one place to another — filling forms, updating spreadsheets, logging transactions — are in structural decline. This isn’t prediction. For many of these functions, the automation is already deployed and running.

The numbers date back further than recent headlines suggest. In 2013, Frey and Osborne at Oxford Martin School assigned data entry keyers a 0.99 probability of computerisation — the highest automation risk of any knowledge-work occupation across 702 roles evaluated (Oxford Martin School, 2013). That analysis used narrower technology assumptions than today. What was a high-probability risk twelve years ago is closer to operational reality now. The U.S. Bureau of Labor Statistics projects a 26.1% decline in data entry keyer employment between 2022 and 2032 — the steepest drop of any administrative occupation (BLS Employment Projections, 2024).

Citation capsule: Oxford Martin School’s 2013 study assigned data entry keyers a 0.99 probability of computerisation — the highest automation risk of 702 occupations examined. The U.S. Bureau of Labor Statistics has since confirmed a projected 26.1% employment decline for this role between 2022 and 2032, the steepest drop in any administrative category. (Oxford Martin School, 2013; BLS, 2024)

WEF Projected Job Losses by 2030 — Selected Clerical Roles (millions)

  • Administrative Assistants & Executive Secretaries: −6.1M
  • Material-Recording & Stock-Keeping Clerks: −2.64M
  • Accounting & Payroll Clerks: −1.65M
  • Data Entry Clerks: −0.5M

Source: World Economic Forum, Future of Jobs Report 2025

What Is — and Isn’t — Happening in Customer Service

Tier-1 customer service — handling FAQs, account lookups, status checks, basic complaints — is the clearest example of automation already deployed at scale. Gartner estimates agentic AI will autonomously resolve 80% of common customer service issues without human intervention by 2029 (Gartner, March 2025). That’s a sharp projection — and it’s directionally consistent with what organizations are deploying right now.

But what’s happening at the human end of that deployment? In a March 2025 survey, 95% of customer service leaders said they planned to retain their human agent workforce rather than reduce it. The work is shifting: from handling the first call to handling the escalation. From resolving issues to auditing whether the AI resolved them correctly.

Stanford and MIT’s call center study found AI tools boosted overall worker productivity by 14% — and for the least-skilled workers, gains reached 35% (Stanford/MIT, via CNBC, 2023). AI compressed the experience gap: agents with two months of tenure performed at the level of agents with six. Productivity went up. Volume handled per agent went up. Headcount held steady.

That’s not automation. That’s augmentation — with a workload attached.

The Developer Productivity Paradox

Code generation tools — GitHub Copilot, Cursor, Gemini Code Assist — are among the most widely adopted AI tools in professional settings. The productivity numbers look good in isolation: faster completion times, more features shipped, fewer context-switch interruptions.

Harvard Business School’s consulting study found that professionals using AI completed 12.2% more tasks, finished 25.1% faster, and produced work judged 40% higher quality (HBS, Dell’Acqua et al., 2023). Those are real gains. They’re also conditional on staying inside AI’s capability zone — the same paper identifies a “jagged frontier” where AI fails unpredictably on tasks it looks capable of handling.

GitClear’s analysis of 211 million changed lines of code between 2020 and 2024 shows the downstream cost. As AI coding tools spread, copy-pasted and cloned code blocks rose from 8.3% to 12.3% of all code output — a 48% increase (GitClear, 2025). Developers are shipping faster. Code reviewers are inheriting more duplicated logic, more inconsistency, and more surface area to check.
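
The 48% figure follows directly from the two clone-rate shares GitClear reports; a quick check of the arithmetic:

```python
# Cloned-code share of all output, per GitClear (2020 vs. 2024).
before, after = 8.3, 12.3

# Relative increase in the clone rate — the "48%" headline figure.
increase = (after - before) / before
print(f"{increase:.1%}")  # → 48.2%
```

Note the distinction: the clone rate rose 4 percentage points in absolute terms, but 48% in relative terms — the latter is what the headline cites.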

Basic code generation is becoming automatable. Code review — specifically the review of AI-generated code — is getting harder.

The roles gaining the most overhead from AI are often the ones positioned directly downstream of the roles it’s automating.

Citation capsule: GitClear’s analysis of 211 million changed lines of code (2020–2024) found that cloned and copy-pasted code blocks rose from 8.3% to 12.3% of all output as AI coding tools spread — a 48% increase. Developers ship faster; reviewers inherit more duplicated logic. Basic code generation becomes easier; code quality assurance becomes harder. (GitClear, 2025)

The New Paperwork: Roles Growing Because of AI

Here’s what doesn’t surface in most automation coverage: the functions expanding because AI deployment requires human oversight.

The International Association of Privacy Professionals’ 2025 AI Governance Profession Report found that only 1.5% of organizations expected to need no additional AI governance staff within the coming year (IAPP, 2025). Near-universal demand for oversight roles. And 23.5% of respondents cited difficulty finding qualified AI governance professionals as a top barrier to their programs.

The pattern we encounter with clients mirrors this consistently. An organization automates a document processing function — 80% of the volume moves to AI. What grows in its place: a review function to catch the 20% the AI misclassifies, an audit trail to satisfy compliance requirements, a governance process to manage model updates, and a prompt management workflow to keep outputs consistent as models change. Four new task categories, in exchange for one old routine.

This is not exceptional. It is the normal shape of responsible AI deployment. The WEF projects 170 million new roles created by 2030 against 92 million displaced — a net gain of 78 million — but the net figure obscures how different the new roles are from what they replace (WEF Future of Jobs 2025, Jan 2025). New demand concentrates in technology and AI oversight functions. Displaced demand concentrates in clerical and routine administrative work. Those two populations don’t share the same skills, geography, or retraining path.

Citation capsule: The IAPP’s 2025 AI Governance Profession Report found only 1.5% of organizations expected to need no additional AI governance staff within a year — signaling near-universal demand for human oversight roles. Meanwhile, 23.5% cited difficulty finding qualified AI professionals as a primary barrier. AI is creating governance jobs faster than it’s filling them. (IAPP, 2025)

What “Changing” Actually Means for a Role

The question that matters isn’t “is this role automatable?” It’s “which tasks within this role are automatable, and what does the remainder look like?”

McKinsey’s November 2025 analysis is explicit: even roles with high technical automation potential still require people “to guide, supervise, and verify” (McKinsey Global Institute, Nov 2025). That language describes a specific kind of work — judgment about what good output looks like, catching failure modes the AI doesn’t flag, explaining AI decisions to people who need to act on them.

For roles where verification is straightforward and errors are low-cost, this oversight work stays light. For roles where errors carry legal, financial, or safety consequences — regulated environments, high-stakes decisions, customer-facing judgment calls — the oversight load is substantial. The difference between a role that’s “changing” and a role that’s “just getting new paperwork” often comes down to whether verification is harder or easier than the task it replaced.

That diagnostic — what does verification look like for this function? — is more useful than any general automation percentage.


Map Your Own Functions Before the Market Maps Them for You

The roles changing fastest are where task-level automation is straightforward and error costs are low. The roles accumulating new overhead are where AI output requires review before anyone can act on it. Both patterns matter for planning — and neither shows up cleanly in headline job loss projections.

The clearest next step is to map the functions in your organization against those two categories. Which tasks are routine enough to automate reliably? Which will generate verification work downstream? Which existing roles become oversight functions if the underlying task is automated?
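
As a thought exercise, that mapping can be sketched as a two-axis classification — how reliably AI handles the task, versus how costly it is to verify the output. The task names, scores, and thresholds below are illustrative assumptions, not figures from any cited study:

```python
# Illustrative sketch: bucket tasks by automation reliability vs. verification cost.
# All task names, scores, and thresholds here are hypothetical examples.

def classify(automation_reliability: float, verification_cost: float) -> str:
    """Bucket a task by how reliably AI handles it (0.0-1.0) and how
    costly human verification of its output is (0.0-1.0)."""
    if automation_reliability >= 0.8 and verification_cost <= 0.3:
        return "automate"                    # routine, cheap to check
    if automation_reliability >= 0.8:
        return "automate + oversight role"   # reliable, but output needs review
    if verification_cost >= 0.7:
        return "keep human-led"              # checking AI costs more than doing it
    return "augment"                         # AI assists; human stays accountable

# Example portfolio: (automation_reliability, verification_cost) per task.
tasks = {
    "invoice data entry":     (0.95, 0.2),
    "tier-1 support FAQ":     (0.85, 0.5),
    "contract clause review": (0.60, 0.9),
    "draft status reports":   (0.50, 0.4),
}

for name, (rel, cost) in tasks.items():
    print(f"{name:24s} -> {classify(rel, cost)}")
```

The point of the sketch is the second bucket: tasks that are reliably automatable but expensive to verify don't eliminate a role — they create the oversight function the article describes.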

If you’d like to run that analysis on a specific function or team, the diagnostic at canihireanai.com is built for exactly that question — and it takes less time than another Davos keynote.


Frequently Asked Questions

Which job functions face the highest documented AI automation risk right now?

Data entry, basic administrative support, and tier-1 customer service carry the highest documented risk. Oxford Martin School assigned data entry keyers a 0.99 probability of computerisation — the highest of 702 occupations examined. The U.S. BLS projects a 26.1% employment decline in this category by 2032. WEF data places administrative assistants second in projected absolute job losses by 2030.

Is AI actually creating new jobs, or just eliminating old ones?

Both — but the populations don’t overlap cleanly. WEF projects 170 million roles created by 2030 against 92 million displaced. New demand concentrates in technology, green economy, and AI oversight. Displaced demand concentrates in clerical and administrative functions. The skill profiles, geographies, and retraining paths between those groups rarely align.

Why do developers still need to do code review if AI can generate code?

Because AI-generated code quality is declining in specific ways. GitClear’s analysis of 211 million changed lines found cloned and copy-pasted code blocks rose 48% as AI coding tools spread (2020–2024). Code generation volume is up. Code coherence and uniqueness are down. Reviewers and architects — who catch duplication and maintain system integrity — are under higher pressure, not lower.

What does “new paperwork” from AI actually look like in practice?

When a function is partially automated, the roles downstream inherit oversight tasks: reviewing AI output before it’s acted on, auditing decisions for compliance, managing prompts as models update, and handling escalations the AI can’t resolve. IAPP data shows 98.5% of organizations expect to add AI governance staff within the year. The oversight function is growing; it just isn’t being hired for yet.


The jobs gaining the most new paperwork from AI are often the ones nobody warned about. Not the data entry clerk. Not the tier-1 support agent. The reviewer sitting downstream of the automated function, now responsible for everything the model gets wrong.

That’s the shape of AI’s near-term impact on work. Not wholesale elimination. A redistribution — with a quality-assurance bill attached.