These are not hypotheticals  ·  The Failure Files™ · Source: AI Incident Database
Air Canada
2024 · Documented AI Governance Failure

Its chatbot invented a bereavement-fare policy, and a tribunal ruled the airline had to honor it. No human in the loop. No escalation path.

UnitedHealthcare
2023 · Documented AI Governance Failure

An AI algorithm denied post-acute care claims with an alleged 90% error rate — roughly nine in ten denials were overturned on appeal. No override structure existed.

Zillow
2021 · Documented AI Governance Failure

An automated pricing model forced a write-down of over $500M in home inventory. Leadership approved the AI strategy. Nobody owned the exit decision.

Amazon Rekognition
2018 · Documented AI Governance Failure

Facial recognition falsely matched 28 members of Congress to criminal mugshots in an ACLU test. The system was deployed before governance was built.

Apple Card
2019 · Documented AI Governance Failure

The underwriting algorithm reportedly set women's credit limits up to 20× lower than their husbands' — including spouses with shared finances at the same address. Structural bias, invisible at deployment.

Boeing 737 MAX
2019 · Documented AI Governance Failure

MCAS repeatedly forced the aircraft's nose down without the pilots' knowledge, contributing to two crashes that killed 346 people. No governance mechanism existed to intervene at execution.

COMPAS
2016 · Documented AI Governance Failure

A recidivism algorithm falsely flagged Black defendants as high risk at nearly twice the rate of white defendants. No audit structure caught it for years.

IBM Watson Health
2017 · Documented AI Governance Failure

Watson recommended unsafe cancer treatments that oncologists flagged immediately. No escalation path existed to stop deployment.

Optum Health
2019 · Documented AI Governance Failure

An AI tool systematically underserved Black patients by using cost as a proxy for need. The model ran for years before exposure.

Twitter/X
2020 · Documented AI Governance Failure

The photo-cropping algorithm consistently framed white faces over Black faces. Deployed globally before internal audit surfaced the pattern.

Every one of these institutions had a governance framework. None of them had someone who knew what to ask.

Human Signal Town Hall · Pre-Sale Live Now

The Stark Reality
of AI Governance

Your institution is not failing because of a bad model.
It is failing because no one owns the decision.

This is not a webinar. It is a live strategy briefing on institutional survival, recorded with the audience holding the microphone. It is built for C-suite operators, risk officers, and governance leads who need unfiltered intelligence on autonomous-system failure, not polished talking points.

Bring the question your legal team said you cannot ask in public. This is where it gets answered.

Secure Your Access — $97 · Secure checkout via Stripe · Price rises to $147 on May 1 · 50 seats total
50
Total Seats Available
Intentionally intimate. Capacity is structural, not a marketing tactic.
Date Wednesday, May 14, 2026
Time 2:00 – 3:00 PM Eastern
Format Live Virtual · Zoom · Recorded
Access Live Mic · On-Set Production Experience
Pre-sale closes May 1.
Current: $97 per seat
May 1 onward: $147 per seat
After sell-out: no waitlist

"Most institutions will not fail because of a bad AI model. They will fail because of a broken governance structure around it."

— Dr. Tuboise Floyd · Human Signal Driving Thesis

"Permitted is not the same as admissible. The gap between those two words is where institutional liability lives."

— Human Signal · Trust Gap Framework v3
Authority signals · Verified credentials at the table
 Former CISO · City of Atlanta
 Col. USA Ret. · Army Reserve Command
 AI Senior PM · Microsoft Copilot GTM
 TAIMScore™ Framework Creator · HISPI
 Federal Reserve Bank · Cybersecurity Strategy
 AFCEA International Board · Homeland Security
 Goodpods #1 Indie Tech · Apr 2026

Not a Panel Discussion.
A Recorded Strategy Briefing.

The structure is deliberate. Phil Donahue-style live mic access means your question shapes the record — not a moderator's agenda. Fifty seats means every voice is heard.

01 · Live Mic Access

Your Question. On Tape.

Audience members hold the microphone directly. No PR handlers. No pre-screened submissions. Your real governance question — about the failure your organization won't name publicly — goes straight to practitioners who have reverse-engineered it before.

02 · Unfiltered Intelligence

No Corporate Scripts Permitted.

This briefing is produced by an independent research platform with no vendor relationships and no underwriter influence over editorial content. The analysts at this table are not selling a product adjacent to the question. Independence is not a feature here. It is the product.

03 · On-Set Production

Inside a Live Recording.

You are not in a passive call. You are on set — virtually — watching how a professional episode is produced in real time. Your voice and your challenge become part of the final recorded artifact. This is the record your organization will reference when the failure case arrives.

04 · Failure Anatomy

Real Cases. Real Liability Exposure.

Drawing from the Human Signal Failure Files™ — the documented AI governance failures catalogued above, from Air Canada and UnitedHealthcare to Zillow and others — the panel will dissect how structural governance gaps created institutional liability, and what the escalation path should have looked like.

05 · Frameworks Applied Live

GASP™ and L.E.A.C. in Operation.

The Human Signal diagnostic frameworks — GASP™ (Governance As a Structural Problem) and the L.E.A.C. Protocol™ — will be applied in real time against the questions the audience brings. This is applied governance intelligence, not theoretical compliance documentation.

06 · 50 Seats. Hard Cap.

Structural Scarcity. Not Theater.

A hard cap of fifty seats preserves the analytical depth of the room. Large audiences produce passive listeners. This format requires active participants. When the seats are gone, there is no waitlist. The recording is the residual. Access to the live session is the differentiator.

Practitioners Who Have
Sat in the Room Where It Failed.

Every person at this table has operated inside institutional failure — not as an observer, but as the person responsible for the structure that should have prevented it.

Dr. Tuboise Floyd, PhD
Founder & Chief Sensemaking Officer · Human Signal™
Host · The AI Governance Briefing

Auburn University doctoral researcher in Adult Education and Systems Theory. Over 15 years operating inside large institutions and federal contracting structures. Creator of the GASP™ diagnostic, the L.E.A.C. Protocol™, and the Failure Files™ pedagogical instrument — the only AI governance curriculum grounded in andragogy rather than compliance documentation. Author of the canonical position paper The Pedagogy Problem in AI Governance (SSRN, April 2026). TAIMScore™ Certified Assessor, HISPI, March 2026. Goodpods #1 Indie Tech · Apple Top 100 Global Management & Leadership, April 2026.

Col. Kathy Swacina, USA (Ret.)
CEO/CIO · SherpaWerx  ·  Chair, HISPI Cerebellum AI Think Tank
AFCEA International Board · Homeland Security Subcommittee

Colonel, U.S. Army (Retired), with senior command experience at U.S. Army Reserve Command as Deputy Chief of Staff. Master of Strategic Studies, University of Texas at Austin. Decorated career spanning DCS Information Management (G6), DCS Operations and Knowledge Management (G3), and BRAC Project Office Chief. Currently leading responsible AI deployment strategy at the intersection of defense infrastructure, public safety, and smart city systems. AFCEA International board member, Homeland Security Subcommittee. Founder of the Col. Harding-Swacina STEM Scholarship for women entering technology fields. When she says an AI deployment decision should have been stopped, she has signed the orders that proved it.

Taiye Lambo
Founder & CAIO · Holistic Information Security Practitioner Institute (HISPI)
Creator · TAIMScore™ Framework  ·  First & Former CISO, City of Atlanta

31 years of information security leadership across four continents. Former Director of Cybersecurity Strategy at the Federal Reserve Bank of Atlanta. First and former CISO of the City of Atlanta — a city that experienced one of the most consequential municipal ransomware failures in U.S. history, and rebuilt from it. Founder of HISPI, eFortresses, and CloudeAssurance. Creator of the TAIMScore™ framework — the only practitioner-built, NIST-aligned AI trust scoring instrument currently in institutional deployment. CISSP, CISM, CISA, ISO 27001 Auditor. The only person in this room who has rebuilt an institution's security posture from zero after catastrophic failure.

Cotishea Anderson
AI Senior PM, GTM Strategy · Microsoft
Copilot Enterprise Deployment · Fortune 500 Scale

Senior Product Manager at Microsoft, embedded in the go-to-market strategy for Copilot — the largest enterprise AI rollout in commercial history. She is the person moving autonomous AI systems from pilot approval to production deployment inside Fortune 500 organizations. That means she has seen every governance gap that surfaces between the boardroom pitch and the operational reality. Her role gives her direct visibility into where institutional readiness frameworks break down at execution — not in theory, but at scale, in real time. She will tell you what the corporate version of this story leaves out.

Paul Wilson Jr.
Founder · Paul Wilson Global Solutions
Strategic Advisory · Government & Enterprise AI Risk

Strategic advisor to businesses and government agencies operating in high-stakes AI environments. Paul Wilson Global Solutions specializes in organizations that cannot afford to get the governance structure wrong and are not willing to wait for a failure case to find out they did. His advisory practice is built on the premise that AI governance failure is a structural problem, not a model problem — a thesis he has tested across government and commercial clients for whom the cost of getting it wrong is institutional, not just financial. He has sat in the rooms where these decisions were made. He is here to say what happened inside them.

Dr. Rhonda Farrell
Founder & CEO · Cyber & STEAM Global Innovation Alliance
Global Innovation Strategies  ·  USMC Veteran · ASQ Fellow · IEEE Senior Member

20+ years driving excellence in enterprise transformation, AI, cyber, and innovation. Founded the Cyber & STEAM Global Innovation Alliance — building toward 10,000 partners serving 1,000,000 people globally in STEM, cyber, and innovation. As CEO of Global Innovation Strategies, she specializes in governance, policy integration, and strategic alignment across the DoD, DHS, NSA, and IC ecosystems. She has sat in the rooms where AI decisions that should have been stopped were approved.

Dr. Dawn-Nicole McIlwain
Founder & President · ProcuraFind®  ·  Co-founder · Skilldora Inc.
TEDx Speaker  ·  BCG Veteran

Award-winning CEO, business educator, multi-published author, and global strategist focused on strengthening small business participation in corporate contracting. Founder of ProcuraFind® — a certifying authority for small business contract readiness — she has helped clients secure more than $6M in corporate contracts within two years. Her work sits at the intersection of supplier readiness, economic development, and AI-enabled workforce education.

Michelle Houston
Co-Host · Audience Moderator · Human Signal Town Hall

Michelle Houston controls the mic and the room. As Co-Host and Audience Moderator, she manages the live production flow, directs audience participation, and ensures the microphone reaches the right person at the precise moment — so Dr. Floyd and the panel stay locked on the conversation that matters. Her role is not decorative. In a 50-seat room with unfiltered access to practitioners, the moderator determines whether the session produces institutional intelligence or noise. She produces signal.

If Your AI Strategy Ignores
These Four Constraints, You Are Leaking Value.

The L.E.A.C. Protocol™ is the Human Signal framework that names the four physical and economic forces most AI governance conversations omit. Until you account for them, your governance structure has no structural gravity.

L.E.A.C. Protocol™ · Human Signal
L · Lithography
Semiconductor fabrication constraints determine which AI models can be deployed, at what cost, and on what timeline. Most governance frameworks assume chip availability is a constant. It is a variable. Your strategy should treat it as one.
E · Energy
Large language model inference and training at institutional scale carry energy costs that most C-suite AI roadmaps do not model until a data center contract is already signed. This is a governance failure that surfaces as a financial surprise.
A · Arbitrage
Compute pricing, geographic regulatory differences, and model provider incentive structures create arbitrage conditions that vendor contracts are optimized to exploit. Institutional buyers who do not recognize this are not buying AI capability. They are buying vendor dependency.
C · Cooling
Thermal infrastructure requirements for sustained AI deployment at scale are a hard physical constraint. Institutions that approve AI pilots without modeling cooling capacity are approving deployments that cannot survive their own success.
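The Energy and Cooling constraints above can be made concrete with a back-of-envelope estimate. Every figure below — per-accelerator draw, utilization, PUE (power usage effectiveness, the cooling and overhead multiplier), and electricity rate — is an illustrative assumption, not Human Signal or vendor data; the point is only that the cooling overhead multiplies the entire energy bill, which is why it surprises budgets signed against IT load alone.

```python
# Illustrative back-of-envelope sketch: annual energy and cooling cost of
# sustained AI inference. All parameter defaults are assumptions for
# illustration, not measured or vendor-published figures.

def annual_inference_cost(
    gpus: int,
    watts_per_gpu: float = 700.0,  # assumed draw per accelerator, watts
    utilization: float = 0.6,      # assumed average utilization
    pue: float = 1.4,              # power usage effectiveness (cooling/overhead)
    usd_per_kwh: float = 0.12,     # assumed industrial electricity rate
) -> dict:
    hours = 24 * 365
    # Energy consumed by the accelerators themselves (IT load), in kWh.
    it_kwh = gpus * watts_per_gpu * utilization * hours / 1000
    # PUE multiplies the IT load: cooling and overhead scale with it.
    total_kwh = it_kwh * pue
    return {
        "it_kwh": round(it_kwh),
        "cooling_overhead_kwh": round(total_kwh - it_kwh),
        "annual_usd": round(total_kwh * usd_per_kwh),
    }

# A modest 256-GPU inference cluster under these assumptions:
print(annual_inference_cost(gpus=256))
```

Under these assumed inputs, cooling and overhead add roughly 40% on top of the accelerator energy itself — a line item that never appears in a roadmap that budgets only for compute.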

One Seat. One Briefing.
No Vendor Agenda.

This is priced as a practitioner briefing, not a conference ticket. The analysis in this room is not available through any vendor-sponsored channel. Independence has a cost. It is the only reason the intelligence is uncaptured.

General Admission · May 1 Onward
$147
Per seat · If seats remain after May 1
  • Same access as pre-sale
  • No guarantee of availability at this tier
  • 50-seat hard cap applies regardless of price tier
  • No waitlist offered after sell-out
Available May 1
Secure pre-sale access now at $97 before this tier activates

On the price point: A single hour of institutional AI governance consulting from a practitioner with the credentials assembled at this table runs $500–$2,500. This briefing offers direct microphone access to seven of them simultaneously for $97. That is not a discounted webinar. It is a structural access decision — keeping the room accessible to operators who need the intelligence but do not have a vendor-approved AI budget to fund it. The 50-seat cap is what preserves the depth of the room. When the seats are gone, they are gone.

Your Institution Has Not Failed Yet.
That Is Not the Same as Being Prepared.

The governance structure that failed Air Canada, UnitedHealthcare, and Zillow was in place before the failure. The question this briefing answers is whether yours can intervene at execution — or only explain afterward what went wrong.

Secure Your Access — $97
⚠ 50 seats total · Price increases May 1 · No waitlist after sell-out