Formal AI competency frameworks are still emerging. In their place, innovative companies are assessing employee AI skills with concrete, activity-based targets, such as building three custom GPTs for your role or completing specific certifications, and directly linking these achievements to performance reviews.

Related Insights

Don't hire based on today's job description. Proactively run AI impact assessments to project how a role will evolve over the next 12-18 months. This allows you to hire for durable, human-centric skills and plan how to reallocate the 30%+ of their future capacity that will be freed up by AI agents.

Treating AI evaluation like a final exam is a mistake. For critical enterprise systems, evaluations should be embedded at every step of an agent's workflow (e.g., after planning, before action). This is akin to unit testing in classic software development and is essential for building trustworthy, production-ready agents.
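The unit-testing analogy can be made concrete. Below is a minimal sketch of per-step checks in an agent workflow; all names (`check_plan`, `check_action`, `run_agent`) and the allow-list approach are illustrative assumptions, not a specific framework's API.

```python
# Sketch: evaluations embedded at each step of an agent's workflow,
# rather than a single "final exam" at the end. All names here are
# hypothetical stand-ins for a real agent framework.

def check_plan(plan: list[str]) -> None:
    # Evaluate the plan *before* any action runs,
    # like a unit test on an intermediate artifact.
    assert plan, "agent produced an empty plan"
    assert all(step.strip() for step in plan), "plan contains blank steps"

def check_action(action: str, allowed: set[str]) -> None:
    # Gate each individual action against an allow-list before execution.
    assert action in allowed, f"unexpected action: {action}"

def run_agent() -> str:
    plan = ["look up account", "draft reply"]  # stand-in for a planning step
    check_plan(plan)                           # eval after planning
    for action in plan:
        check_action(action, {"look up account", "draft reply"})  # eval before action
        # ... execute the action here ...
    return "done"

print(run_agent())
```

Each check can fail fast in CI or staging, so a regression in planning or action selection surfaces before the agent reaches production.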

When employees are 'too busy' to learn AI, don't just schedule more training. Instead, identify their most time-consuming task and build a specific AI tool (like a custom GPT) to solve it. This proves AI's value by giving them back time, creating the bandwidth and motivation needed for deeper learning.

Don't let performance reviews sit in a folder. Upload your official review and peer feedback into a custom GPT to create a personal improvement coach. You can then reference it when working on new projects, asking it to check for your known blind spots and ensure you're actively addressing the feedback.

To accelerate company-wide skill development, Shopify's CEO mandated that learning and utilizing AI become a formal component of employee performance evaluations. This top-down directive ensured rapid, broad adoption and transformed the company's culture to be 'AI forward,' giving them a competitive edge.

Don't view AI tools as just software; treat them like junior team members. Apply management principles: 'hire' the right model for the job (People), define how it should work through structured prompts (Process), and give it a clear, narrow goal (Purpose). This mental model maximizes their effectiveness.

Recognizing that providing tools is insufficient, LinkedIn is making "AI agency and fluency" a core part of its performance evaluation and calibration process. This formalizes the expectation that employees must actively use AI tools to succeed, moving adoption from voluntary to a career necessity.

To transform a product organization, first provide universal access to AI tools. Second, support teams with training and 'builder days' led by internal champions. Finally, embed AI proficiency into career ladders to create lasting incentives and institutionalize the change.

Traditional hiring assessments that ban modern tools are obsolete. A better approach is to give candidates access to AI tools and ask them to complete a complex task in an hour. This tests their ability to leverage technology for productivity, not their ability to memorize information.

Instead of waiting for external reports, companies should develop their own AI model evaluations. By defining key tasks for specific roles and testing new models against them with standard prompts, businesses can create a relevant, internal benchmark.
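As a rough illustration, an internal benchmark can be as simple as a fixed task list with standard prompts and a uniform scoring rule. Everything below (`TASKS`, `call_model`, the keyword-based scoring) is a hypothetical sketch; a real harness would swap in the team's actual model API and evaluation criteria.

```python
# Sketch of an internal AI benchmark: fixed role-specific tasks with
# standard prompts, scored identically for every new model tested.

TASKS = [
    {"prompt": "Summarize this support ticket: ...", "must_contain": "refund"},
    {"prompt": "Draft a release note for this bugfix: ...", "must_contain": "fixed"},
]

def call_model(model_name: str, prompt: str) -> str:
    # Placeholder: replace with the team's real model API call.
    return f"[{model_name}] fixed the refund flow"

def benchmark(model_name: str) -> float:
    # Score = fraction of tasks whose output contains the expected keyword.
    passed = sum(
        task["must_contain"] in call_model(model_name, task["prompt"])
        for task in TASKS
    )
    return passed / len(TASKS)

print(benchmark("model-a"))
```

Because the tasks and prompts stay constant, scores are comparable across model releases, which is exactly what generic external leaderboards cannot provide.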
