The emerging job of training AI agents will be accessible to non-technical experts. The only critical skill will be leveraging deep domain knowledge to identify where a model makes a mistake, opening a new career path for most knowledge workers.
Instead of choosing a career based on its perceived "safety" from AI, individuals should pursue their passions to quickly become domain experts. AI tools augment this expertise, increasing the value of experienced professionals who can handle complex, nuanced situations that AI cannot.
To move beyond general knowledge, AI firms are creating a new role: the "AI Trainer." These are not contractors but full-time employees, typically PhDs with deep domain expertise and an interest in computer science, tasked with systematically improving model competence in specific fields like physics or mathematics.
Emerging AI jobs, like agent trainers and operators, demand uniquely human capabilities such as a grasp of psychology and ethics. The need for a "bedside manner" in handling AI-related customer issues highlights that the future of AI work isn't purely technical.
Instead of searching for new "AI" job titles, non-coders should focus on applying AI capabilities to traditional roles like marketing or sales. Companies are prioritizing existing positions but now require AI fluency, such as building custom GPTs or using AI assistants, as a core competency.
As AI tools become operable via plain English, the key skill shifts from technical implementation to effective management. People managers excel at providing context, defining roles, giving feedback, and reporting on performance—all crucial for orchestrating a "team" of AI agents. Their skills will become more valuable than pure AI expertise.
If AI were perfect, it would simply replace tasks. Because it is imperfect and requires nuanced interaction, it creates demand for skilled professionals who can prompt, verify, and creatively apply it. AI's very limitations thus make it a tool that requires, and rewards, human proficiency.
Instead of repeatedly performing tasks, knowledge workers will train AI agents by creating "evals"—data sets that teach the AI how to handle specific workflows. This fundamental shift means the economy will transition from paying for human execution to paying for human training data.
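As a loose illustration of what such an "eval" might look like in practice, it can be as little as a dataset of input/expected-output pairs plus a scoring function. Everything below (the case format, the field names, the stand-in agent) is hypothetical, not a real product's API:

```python
# Hypothetical sketch: an "eval" as a dataset of workflow cases plus a
# scorer that measures how often an agent matches the expert's answer.

EVAL_CASES = [
    {"input": "Refund request, order #1234, item unopened",
     "expected_action": "approve_refund"},
    {"input": "Refund request, order past 90-day window",
     "expected_action": "escalate_to_human"},
]

def score(agent, cases):
    """Return the fraction of cases where the agent's action matches."""
    passed = sum(agent(c["input"]) == c["expected_action"] for c in cases)
    return passed / len(cases)

# A trivial stand-in "agent" so the sketch runs end to end:
def rule_based_agent(text):
    return "escalate_to_human" if "past" in text else "approve_refund"

print(score(rule_based_agent, EVAL_CASES))  # 1.0
```

The point of the sketch is that the dataset, not the agent, is the worker's product: the cases encode the domain judgment ("unopened items get refunds; stale orders get escalated") that the model is then trained and measured against.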
Knowledge work will shift from performing repetitive tasks to teaching AI agents how to do them. Workers will identify agent mistakes and turn them into reinforcement learning (RL) environments, creating a high-leverage, fixed-cost asset similar to software.
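To make "turning a mistake into an RL environment" concrete, here is a minimal sketch: a single expert-corrected decision wrapped in a reset/step loop that rewards the corrected behavior. The interface mimics the common Gym-style convention, and all names (the class, the invoice example, the code "6510") are illustrative assumptions:

```python
# Hypothetical sketch of converting one expert-labeled agent mistake into
# a tiny single-step RL environment: the agent earns reward 1.0 only for
# choosing the action the domain expert marked as correct.

class InvoiceCodingEnv:
    """One-decision episode built from an expert-corrected example."""

    def __init__(self, invoice_text, correct_code):
        self.invoice_text = invoice_text
        self.correct_code = correct_code  # the expert's correction

    def reset(self):
        return self.invoice_text  # observation shown to the agent

    def step(self, action):
        reward = 1.0 if action == self.correct_code else 0.0
        done = True  # single-decision episode
        return None, reward, done, {}

env = InvoiceCodingEnv("Consulting services, Q3", correct_code="6510")
obs = env.reset()
_, reward, done, _ = env.step("6510")
print(reward)  # 1.0
```

Once built, the environment is exactly the fixed-cost asset the summary describes: the expert's correction is captured once, and any number of training runs can be scored against it thereafter.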
AI models have absorbed the internet's general knowledge, so the new bottleneck is correcting complex, domain-specific reasoning. This creates a market for specialists (e.g., physicists, accountants) to provide "post-training" human feedback on subtle errors.
The most valuable AI systems are built by people with deep knowledge in a specific field (like pest control or law), not by engineers. This expertise is crucial for identifying the right problems and, more importantly, for creating effective evaluations to ensure the agent performs correctly.