The core PM drive to maximize value for the total addressable market (TAM) inherently pushes teams to exclude edge cases and marginalized users; this exclusion is a root cause of bias and irresponsibility in AI systems.
Shift the view of AI from a singular product launch to a continuous process encompassing use case selection, training, deployment, and decommissioning. This broader aperture creates multiple intervention points to embed responsibility and mitigate harm throughout the lifecycle.
When faced with an ethically questionable directive, refusing outright can be career-limiting. A more effective strategy is to research and propose an alternative product that solves the same underlying business problem in a more responsible way, thereby redirecting the conversation.
To manage risks from 'shadow IT' or third-party AI tools, product managers must influence the procurement process. Embed accountability by contractually requiring vendors to answer specific questions about training data, success metrics, update cadence, and decommissioning plans.
Instead of treating a complex AI system like an LLM as a single black box, build it in a componentized way by separating functions like retrieval, analysis, and output. This allows for isolated testing of each part, limiting the surface area for bias and simplifying debugging.
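As an illustrative sketch only (all function names and the toy "refund mentions" analysis are hypothetical, not from the source), a componentized pipeline might keep retrieval, analysis, and output as separate functions with narrow contracts, so each stage can be unit-tested in isolation:

```python
# Hypothetical componentized pipeline: each stage is a plain function
# with a narrow input/output contract, so it can be tested on its own.

def retrieve(query: str, corpus: dict[str, str]) -> list[str]:
    """Retrieval stage: return documents whose text mentions the query term."""
    return [doc for doc in corpus.values() if query.lower() in doc.lower()]

def analyze(docs: list[str]) -> dict[str, int]:
    """Toy analysis stage: count how often each document mentions 'refund'."""
    return {doc: doc.lower().count("refund") for doc in docs}

def render(analysis: dict[str, int]) -> str:
    """Output stage: format the analysis as a short report string."""
    total = sum(analysis.values())
    return f"{len(analysis)} documents matched; {total} refund mentions."

def pipeline(query: str, corpus: dict[str, str]) -> str:
    """Compose the stages; each one remains independently replaceable."""
    return render(analyze(retrieve(query, corpus)))

# Each stage can be exercised alone -- e.g. feed `analyze` a fixed,
# curated document list to probe for skew without running retrieval.
corpus = {
    "a": "Refund policy: refunds are processed in 5 days.",
    "b": "Shipping times vary by region.",
}
print(pipeline("refund", corpus))
```

Because each stage is a separate function, a bias found in the output can be traced to one component by feeding fixed inputs to each stage in turn, rather than debugging the whole system end to end.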
When working in complex organizations like the UN or federal government, don't try to master their internal language. Instead, find and partner with internal experts who can translate your goals into the organization's native operating system to achieve impact.
The Data Nutrition Project discovered that the act of preparing a 'nutrition label' forces data creators to scrutinize their own methods. This anticipatory accountability leads them to make better decisions and improve the dataset's quality, not just document its existing flaws.
