Effective AI policies focus on establishing principles for human conduct rather than just creating technical guardrails. The central question isn't what the tool can do, but how humans should responsibly use it to benefit employees, customers, and the community.

Related Insights

Regardless of an AI's capabilities, the human in the loop is always the final owner of the output. Your responsible AI principles must clearly state that using AI does not remove human agency or accountability for the work's accuracy and quality. This is critical for mitigating legal and reputational risks.

When creating AI governance, differentiate based on risk. High-risk actions, like uploading sensitive company data into a public model, require rigid, enforceable "policies." Lower-risk, judgment-based areas, like when to disclose AI use in an email, are better suited for flexible "guidelines" that allow for autonomy.

The primary focus for leaders should be fostering a culture of safe, ethical, and collaborative AI use. This means providing mandatory training and creating shared learning spaces, like Slack channels for prompt sharing, rather than just focusing on tool procurement.

Organizations must urgently develop policies for AI agents, which take action on a user's behalf. This is not a future problem. Agents are already being integrated into common business tools like ChatGPT, Microsoft Copilot, and Salesforce, creating new risks that existing generative AI policies do not cover.

To effectively leverage AI, treat it as a new team member. Take its suggestions seriously and give it the best opportunity to contribute. However, just like with a human colleague, you must apply a critical filter, question its output, and ultimately remain accountable for the final result.

Don't view AI tools as just software; treat them like junior team members. Apply management principles: "hire" the right model for the job (People), define how it should work through structured prompts (Process), and give it a clear, narrow goal (Purpose). This mental model maximizes their effectiveness.

Pega's CTO warns leaders not to confuse managing AI with managing people. AI is software that is configured, coded, and tested. People require inspiration, development, and leadership. Treating AI like a human team member is a fundamental error that leads to poor management of both technology and people.

A tangible way to implement a "more human" AI strategy is to use automation to free up employee time from repetitive tasks. This saved time should then be deliberately reallocated to high-value, human-centric activities, such as providing personalized customer consultations, that technology cannot replicate.

Employees hesitate to use new AI tools for fear of looking foolish or being fired for misuse. Successful adoption depends less on training courses and more on creating a safe environment with clear guardrails, one that encourages experimentation without penalty.

Adopt a "more intelligent, more human" framework. For every process made more intelligent through AI automation, strategically reinvest the freed-up human capacity into higher-touch, more personalized customer activities. This creates a balanced system that enhances both efficiency and relationships.