NFL CSO Cathy Lanier frames red teaming not as a "gotcha" exercise to find holes, but as quality assurance for security standards. It tests whether the processes you've implemented are truly effective and being executed correctly, revealing weaknesses in both design and implementation.

Related Insights

The "Shift Left" philosophy was meant to integrate quality expertise earlier in the development process. However, many companies misinterpreted it as simply making developers responsible for QA tasks, rather than embedding QA professionals into design and planning, leading to poor outcomes.

Standard validation isn't enough for mission-critical products. Go beyond lab testing and "triple validate" in the wild. This means simulating extreme conditions: poor connectivity, difficult physical environments (cold, sun glare), and users under stress or who haven't been trained. Focus on breaking the product, not just confirming the happy path.

Pursuing 100% security is an impractical and undesirable goal. Formal methods aim to dramatically raise assurance by closing glaring vulnerabilities, akin to locking doors on a house that's currently wide open. The goal is achieving an appropriate level of security, not an impossible absolute guarantee.

To expose vulnerabilities, run a "murder board" offsite. Task your team with a simple goal: if you were a new, well-funded competitor, how would you kill our company? This exercise forces brutal honesty, counters a culture of overly positive "optics," and reveals weaknesses before the market does.

MedTech companies mistakenly assign product cybersecurity to their IT teams, whose focus is data protection. Product security is about patient safety and should be owned by Quality Assurance, as all documentation must integrate into the Quality Management System (QMS) like other design files.

The most harmful behavior identified during red teaming is, by definition, only a lower bound on what a model is capable of in deployment. Pre-release evaluations therefore systematically underestimate the true worst-case risk of a new AI system.

Penetration testing was often a periodic, "checkbox" exercise for compliance. Terra's continuous AI-powered approach transforms it into a strategic validation tool. It helps CISOs justify security spending and quantify business risk, aligning security efforts with business impact.

The most impactful quality metrics are not internal measures like bug counts but those directly linked to customer and business outcomes. QA professionals increase their influence by framing their findings in terms of business impact, financial exposure, and customer risk.

Vanta's core product isn't just a checklist. It is a system of automated tests that continuously monitors a company's tools (like GitHub) to verify that its security controls are correctly implemented, much like unit tests verify code functionality.
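The unit-test analogy can be made concrete. Below is a minimal, hypothetical sketch (not Vanta's actual implementation, and the setting names are invented): each security control is written as a small check function over configuration data, and a monitoring loop reports any control that fails. In production such checks would query live APIs (e.g. GitHub's REST API) rather than static dictionaries.

```python
# Hypothetical sketch: security controls expressed as automated checks,
# analogous to unit tests over a system's configuration.

def check_branch_protection(repo_settings: dict) -> bool:
    """Control: the default branch must require PR reviews before merging."""
    protection = repo_settings.get("branch_protection", {})
    return bool(protection.get("require_pull_request_reviews", False))

def check_mfa_enforced(org_settings: dict) -> bool:
    """Control: organization members must have two-factor auth enabled."""
    return bool(org_settings.get("two_factor_requirement_enabled", False))

# In continuous monitoring, this data would come from live API calls;
# here it is stubbed so the sketch is self-contained.
results = {
    "branch-protection": check_branch_protection(
        {"branch_protection": {"require_pull_request_reviews": True}}
    ),
    "org-mfa": check_mfa_enforced({"two_factor_requirement_enabled": False}),
}

# Like a failing unit test, a failing control pinpoints what to remediate.
failing = [name for name, passed in results.items() if not passed]
print(failing)
```

Running the sketch flags `org-mfa` as the control needing remediation, while `branch-protection` passes silently, which is exactly how a green/red test suite communicates.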

To understand an AI's hidden plans and vulnerabilities, security teams can simulate a successful escape, leading the model to believe it has broken containment. A model that thinks it has escaped may reveal its full capabilities and held-back exploits, providing a wealth of information for patching security holes.

Treat Red Teaming as Quality Assurance for Processes, Not a Vulnerability Test | RiffOn