Human Signal

Presence Signaling Architecture®

Govern the Machine

The future requires human signal to overcome artificial noise. The machine must not win. Equip yourself with frameworks to navigate institutions disrupted by artificial intelligence.

This is for you if —

You lead AI governance, compliance, or risk for a federal agency or enterprise
You're accountable for AI deployment decisions but lack a structured governance framework
You want unfiltered answers — no vendor pitch, no keynote theater

The Platform

The Human Signal Architecture

Founded by Dr. Tuboise Floyd

Editor in Chief, The AI Governance Record | Host, The AI Governance Briefing | Chief Sensemaking Officer | Founder, Human Signal | TAIMScore™ Certified Assessor

Human Signal is an independent research and media platform dedicated to artificial intelligence governance and institutional risk.

We reverse engineer what happens when organizations treat artificial intelligence as a procurement problem instead of a systems design problem — and we build frameworks operators can actually use.

This is a presence-first architecture. We guide how you build earned trust and restore visibility in systems designed for observation rather than recognition.

Canonical IP · Human Signal

Frameworks

The intellectual architecture of Human Signal. Each framework is a standalone diagnostic or operational tool for institutional operators governing AI.

Analysis

The Trust Gap

Two levels of institutional AI governance failure. Structural absence. Structural insufficiency. Permitted is not the same as admissible.

Read the framework →

Diagnostic

GASP™

Governance As a Structural Problem. Most institutions do not have a governance problem because they lack the right software. They have a governance problem because they never built the right structure.

Read the diagnostic →

Thesis

The Workflow Thesis

Institutions deploying AI fail not because of underperforming models, but because of broken governance structures. The primary risk is never a bad model — it is governance failure.

Read the thesis →

Practice

Noise Discipline

The algorithm is rewriting your source code. Cognitive defense for operators drowning in vendor hype and feed-induced source amnesia. Four interventions to restore your signal.

Read the brief →

Framework

The L.E.A.C.™ Protocol

Four physical constraints every AI strategy must address: Lithography, Energy, Arbitrage, Cooling. If your strategy does not address all four, you are leaking value.

Read the protocol →

Architecture

PSA® · AIaPI™

Presence Signaling Architecture and AI as Presence Interface — frameworks for restoring human visibility in systems designed to observe, not listen.

Read the architecture →

Signal Validation

Hyperprompt™

An emergent lexicon entry from the PSA® runtime environment. The fusion of a scored identity-coded signal and a prompt, resulting in a presence-optimized query.

Read the lexicon →

Live Diagnostic · GASP™

Live Governance Diagnostic

5 questions. Instant GASP™ structural diagnosis. No login required. Find out if your institution has the governance structure to govern AI accountably — right now.

Run the diagnostic →

Open Access · Human Signal Frameworks

Framework documentation is publicly available for research, education, and non-commercial use. PDF copies and source documents are hosted on GitHub. Licensing terms are included in each repository.

github.com/drtfloyd

Free Study Tool

Study the TAIMScore™ Framework. All 72 Controls.

GOVERN · MAP · MEASURE · MANAGE. Every control from the Trusted AI Model — formatted as interactive flashcards. Study before the workshop. Drill the framework. Test your recall.

Launch Flashcards →

GOVERN: 19 controls · MAP: 20 · MEASURE: 18 · MANAGE: 15
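For readers building their own study tooling, the domain counts above can be sanity-checked against the published 72-control total. A minimal Python sketch, assuming only the counts listed here (the dictionary keys mirror the four TAIM domains; no official control identifiers are used):

```python
# Hypothetical sketch: verify the TAIM domain sizes sum to the published
# 72 controls. Counts come from the flashcard deck above; this uses no
# official TAIMScore(TM) control names or identifiers.
DOMAIN_COUNTS = {
    "GOVERN": 19,
    "MAP": 20,
    "MEASURE": 18,
    "MANAGE": 15,
}

def total_controls(counts: dict) -> int:
    """Sum the control counts across all domains."""
    return sum(counts.values())

assert total_controls(DOMAIN_COUNTS) == 72  # matches "All 72 Controls"
```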

TAIMScore™ In Action

See Real Incidents Scored

Human Signal applies the TAIMScore™ framework to real AI failures on the podcast. These Failure Files™ show exactly what you'll practice in the workshop.


Failure File 01 of 12

Accountability & Training

When Your AI Learns to Hate — On Company Time

GOVERN 2.2 MEASURE 2.6 MANAGE 2.1
Microsoft TAY · AIID #6
FF 01 · Breakdown

Microsoft spent $0 on adversarial input controls before releasing TAY in March 2016. Within 16 hours it published racist propaganda. GOVERN 2.2 failure: no accountability structure for what happens when your AI learns from the internet without guardrails. Four TAIM domains failed simultaneously — no testing protocol, no kill-switch SLA, no non-AI fallback, no deactivation authority.

Join the Next Session →
Failure File 05 of 12

Privacy Risk & Socio-Technical Design

OpenAI Scraped the Internet. Your Data Was in It.

MAP 1.6 MEASURE 2.10
OpenAI Class Action · AIID #561
FF 05 · Breakdown

A 157-page class action alleged ChatGPT was trained on private data without consent — including children's data and PII. FTC investigation opened. Every org that deployed ChatGPT in a regulated environment without asking "what data was this model trained on?" inherited this risk on sign-up. MAP 1.6 + MEASURE 2.10: privacy risk existed but was never formally scored. Active exposure under HIPAA, TRAIGA, EU AI Act simultaneously.

Join the Next Session →
Failure File 09 of 12

Bias, Fairness & Contextual Deployment

The Algorithm Said It Was Him. It Wasn't.

MAP 1.2 MEASURE 2.11
Wrongful Arrests · AIID #74 · #896
FF 09 · Breakdown

Three Black men. Three wrongful arrests. Facial recognition never validated for the population it was used to identify. Detroit Police acknowledged a 96% misidentification rate when used in isolation. Detroit settled for $300,000. MAP 1.2: no demographic performance analysis documented. MEASURE 2.11: fairness and bias evaluated after arrests made national news — not before deployment.

Join the Next Session →
Failure File 12 of 12

Feedback Systems & Context-Appropriate AI Use

The Condolence Email That Wrote Itself

MEASURE 3.3
Vanderbilt / ChatGPT
FF 12 · Breakdown

Vanderbilt sent students a post-mass-shooting condolence email. At the bottom: "Paraphrase from OpenAI's ChatGPT." National backlash. Public apology. Permanent reputational damage. MEASURE 3.3 failure: no feedback mechanism existed to flag high-stakes communication contexts where AI output must be reviewed, escalated, or prohibited. The governance layer that asks "must a human own these words?" did not exist.

Join the Next Session →

12 Incidents · 12 TAIM Controls

Every failure is a practice scenario.
See the full set — scored, sourced, and mapped.

The AI Governance Briefing

Failure File of the Month

A forensic autopsy of a real-world AI governance failure. New case every month.


April 2026 · AI Liability

Chatbot Governance & Accountability Structure

When Your AI Invents Policy

GOVERN 1.1 GOVERN 1.7 MANAGE 1.1 MANAGE 4.1
Air Canada · BC Civil Resolution Tribunal · 2024
April 2026 · Breakdown

Air Canada's chatbot told a grieving customer he could apply for a bereavement fare retroactively within 90 days. That policy didn't exist. When he submitted the claim, Air Canada denied it — and argued the chatbot was a separate legal entity not binding on the airline. The BC Civil Resolution Tribunal rejected that defense and awarded the customer $812. The precedent: you own what your AI says. GOVERN 1.1 — no accountability structure for AI policy representations. MANAGE 4.1 — no monitoring to detect hallucinated policy claims before they reached customers.

Read Full Analysis →

Now Broadcasting

The AI Governance Briefing
with Dr. Tuboise Floyd

A Human Signal Production

Rapid-fire episodes on AI governance, institutional risk, and finding your human value when the machine noise gets loud.

Apple Podcasts · Top 100 Global Rankings · Management Category
Goodpods Top 100 Management Indie Podcasts · Goodpods Top 100 Leadership Podcasts
Latest Episode · Guest Interview · 2026

Making Digital Accessibility Work In The AI Era

Dr. Tuboise Floyd sits down with Dr. Michele A. Williams to explore how AI is reshaping — and challenging — digital accessibility. What does meaningful inclusion look like when institutions race to automate? And who gets left behind when the signal isn't designed for everyone?

Watch on YouTube

AI Governance

The Automation Paradox

Finding your human value when AI rewrites the rules. Why "leverage" is replacing judgment — and how to protect your signal.

BAR Method

Break the Cycle

Background, Action, Result as a personal reinvention framework. Run the ultimate self-interview and rewire your next move.

Identity

Who Are You Beneath the Noise?

Five steps to break autopilot, claim solitude, and find purpose through serving others. A framework for genuine growth.

Career

Market Signals, Not Just Skills

How market signals shape career opportunities. Stand out by amplifying your story at the right moment, in the right system.

Never miss an episode

Subscribe wherever you listen.

New episodes drop weekly. Rapid-fire. No noise.

Common Questions

What is TAIMScore™?

TAIMScore™ — the Trusted AI Model Score — is an enterprise AI maturity and risk assessment framework developed by HISPI Project Cerebellum. It gives auditors, executives, and compliance professionals a structured methodology to score, audit, and manage an organization's AI readiness. Human Signal is an authorized affiliate partner promoting the official TAIMScore™ Assessor Workshop — virtual, hands-on, 6 CPEs.

Who is Dr. Tuboise Floyd?

Dr. Tuboise Floyd is the founder of Human Signal and an independent AI governance researcher. He developed the LEAC Protocol and the Noise Discipline Framework — tools for restoring human visibility and institutional signal in automated, high-noise environments. He hosts the Human Signal podcast and leads the quarterly Town Hall for institutional operators navigating AI disruption. Read his full mission.

What is the LEAC Protocol?

The LEAC Protocol is a macro diagnostic tool built from forensic market analysis — not a governance framework. It identifies the physical infrastructure constraints that determine which AI companies survive the infrastructure war and where value erodes.

The market has split in two. While the consumption economy ghosts high-value talent, the investment economy is quietly hardening the physical layer. The four components represent the binding constraints:

  • L — Lithography
    Control of the semiconductor supply chain, particularly photolithography equipment, is critical. Without direct access to silicon manufacturing, you are dependent on the capacity of others. (Signal: ASML)
  • E — Energy
    The electrical grid becomes the limiting factor. AI training and inference require massive power, so securing gigawatt-scale power contracts is essential. (Signal: Crusoe, Leidos)
  • A — Arbitrage
    Retail electricity pricing is unsustainable for large-scale AI operations. Success requires finding arbitrage opportunities — stranded energy, flare gas, off-peak power — to reduce compute costs. (Signal: Lambda, CoreWeave)
  • C — Cooling
    Thermodynamics is the ultimate constraint. High-performance computing generates enormous heat. Without adequate cooling infrastructure, clusters cannot run. This is a fundamental solvency issue. (Signal: Path Robotics, Array Labs, Varda Space Industries, VulcanForms, Hadrian, Shift5)

If your AI strategy does not address all four constraints, you are leaking value. Companies that solve these physical infrastructure challenges will outlast those focused purely on algorithmic improvements.

Read the full L.E.A.C.™ Protocol →

What is the Human Signal podcast?

Human Signal with Dr. Tuboise Floyd is an AI governance podcast for the Builder Class — leaders, auditors, and institutional operators navigating AI-disrupted systems. Dr. Floyd examines the physics of institutional failure, the limits of automation, and what it takes to govern the machine. Available on Spotify and Apple Podcasts.

How can my organization underwrite Human Signal?

Human Signal offers three underwriting tiers — from per-episode Signal Drop Sponsorships ($1,500) to full Seasonal Signal Partnerships ($6,000/quarter) and the Signal Brief Presenting Partner package ($12,000/quarter), which includes named sponsorship of the Quarterly Town Hall and direct introductions to Dr. Floyd's institutional network. See full tier details and inquire.

Human Signal
Town Hall

Wednesday, May 14, 2026  ·  2–3 PM ET  ·  Virtual via Zoom

After this hour, you'll know —

The specific governance failures that expose your organization to fiduciary liability right now

How agentic AI systems create accountability gaps your current policies don't cover

What institutional operators are actually doing — not what vendors are selling

Your specific question, answered on tape, by six practitioners with no scripts

Session Structure · 60 min

  • 0:00 · Intro & framing — the governance reality check
  • 0:10 · Panel discussion — agentic risk, fiduciary duty, accountability debt
  • 0:35 · Audience microphone — your questions, live, no filter
  • 0:55 · Close + what's next

Pre-Sale Pricing

$50 / seat

Increases to $75 on May 1

  • Live virtual seat + mic access
  • On-set recording experience
  • Recording sent to all ticket holders
  • Audience mic is opt-in — never required
Reserve Your Seat — $50 →

Secure checkout via Stripe · Can't attend live? You still get the recording.

Hear the room before you buy a seat

Dr. Floyd and the panel have been building this conversation for months on the podcast. The format is real. The questions are hard. The answers are unscripted.

▶ Watch a Preview

Human Signal

Join the
Visible Human Community

Get the Failure Files™ digest, monthly session reminders, and AI governance framework updates. Take the Visible Human Pledge — and join practitioners, auditors, and governance leaders who refuse to let the machine win.

Take the Visible Human Pledge →

We do not sell your data.