Red teaming is widely celebrated for its ability to challenge assumptions and expose vulnerabilities, but it is not immune to failure. In fact, misuse of red teams can be more dangerous than having none at all. When red teams are rigged to confirm preconceived notions or when their findings are ignored, organizations risk false confidence and catastrophic surprises.
A notorious example is the Millennium Challenge 2002 war game, in which the red team, playing the adversary, initially routed US forces; leadership then reset and constrained the exercise to produce a scripted victory, stifling honest critique and gutting its value. Similarly, the 1976 Team B experiment, a competitive analysis of Soviet capabilities, was compromised by the ideological bias of its hand-picked analysts, costing it objectivity and credibility.
Ignoring red team warnings is another common failure. Before 9/11, FAA red teams repeatedly identified serious gaps in airline security, but their findings were dismissed and the vulnerabilities left open. Such disregard discourages dissent and fosters a culture of silence in which employees fear speaking up.
Freelance red teaming, conducted without institutional sanction or coordination, can cause disruption and confusion. However well-intentioned, these independent efforts often lack access to the information they need and fail to feed their findings into decision-making, reducing both their impact and the trust they earn.
To avoid these pitfalls, organizations must ensure red teams have clear mandates, independence, and leadership support. Cultivating a culture that values honest feedback and protects dissenters is essential. Transparency about roles and expectations helps prevent misunderstandings and misuse.
Ultimately, recognizing and addressing the dark sides of red teaming enables organizations to build robust, trustworthy programs that deliver real value and help avoid disaster.