STRUCTURAL GOVERNANCE DIAGNOSTICS
Your vendor passed
the compliance review.
Do you know who is
accountable if it harms someone?
Most automated systems deployed by government agencies go through some kind of review. None of those reviews answer the question that determines what happens after something goes wrong: who is accountable, what happens next, and how the person who was harmed finds out. HSRL is the diagnostic built specifically for that gap.
380
Documented cases studied
260 failures · 120 that held up
100%
Accuracy identifying failures
No exceptions in the dataset
75
Types of systems studied
Benefits, policing, healthcare & more
0.6
Score separating every failure
from every system that held up
97yr
Range of cases: 1926–2024. The same five gaps appear across all of them
WHAT THIS IS
The gap between
having policies and
actual governance
When a government agency deploys an automated system (benefits decisions, predictive policing, child welfare screening, fraud detection), there is usually some kind of review. A vendor checklist. A privacy impact assessment. A compliance sign-off. Sometimes a full audit against NIST or COSO.
Those reviews matter. But none of them answer the question that determines what happens after something goes wrong: Is there a specific named person accountable for this system? Has someone independent actually tested whether it works for your population? Is there a written plan for what happens when it makes a mistake?
HSRL studied 380 documented cases: systems that harmed people and systems that held up under pressure. In every single failure, the same five structural gaps appeared. Not in the vendor's product. In the deploying organization's governance. The same gaps. Every time.
The diagnostic tells you exactly which of those gaps exist in a system before it goes live or before you sign the contract.
What the data shows
Every failure had the same gaps.
Every resilient case filled them.
Across 380 documented cases, 57 diagnostic questions, and 97 years of history, the pattern holds without exception. The score that separates failures from resilient cases, 0.6, was found in the data, not decided by committee.
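A data-derived cutoff of this kind can be sketched in a few lines. The scores below are invented for illustration, and the midpoint rule is an assumption, not HSRL's published method; the point is only that a threshold "found in the data" means a value that falls in the empirical gap between the two labeled groups.

```python
# Hypothetical sketch: derive a separating threshold from two labeled groups
# of governance scores. The scores and midpoint rule are illustrative
# assumptions, not HSRL's actual dataset or scoring formula.

def separating_threshold(failure_scores, resilient_scores):
    """Return a cutoff strictly between the groups, or None if they overlap."""
    hi_fail = max(failure_scores)        # best score among the failures
    lo_resilient = min(resilient_scores) # worst score among the resilient cases
    if hi_fail >= lo_resilient:
        return None                      # groups overlap: no clean separator
    return (hi_fail + lo_resilient) / 2  # midpoint of the empirical gap

failures = [0.10, 0.30, 0.45, 0.55]  # hypothetical failure-case scores
resilient = [0.70, 0.80, 0.90]       # hypothetical resilient-case scores

t = separating_threshold(failures, resilient)
print(t)  # 0.625 — every failure scores below it, every resilient case above
```

If the two groups overlapped at all, no such threshold would exist, which is why a clean separator across all 380 cases is the notable empirical claim here.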
Why problems don't get reported
In 97% of failures, someone
knew and couldn't say so.
Internal warning signals were suppressed in nearly every documented failure. When there is no protected path to report problems, harm continues for an average of 4 additional years after the first warning appears.
How long people wait
5 years from deployment to
the first person paid back.
That's the median. The longest documented case took 25 years. In criminal justice, child welfare, and healthcare, no payment restores what was taken. The only effective intervention is before deployment.
The Five Questions
Before you sign the contract,
five questions need documented answers.
Not in the vendor's documentation. In your organization's governance structure. These are the five structural gaps present in every documented failure. If you can't answer yes to all five with evidence, the system has a governance gap.
C1
Is there a specific named person — not a department or team — accountable for this system?
One individual with written authority to stop the system and answer for what it does. Not a committee. A person with a name.
Missing in 240 of 260 failures
C4
Has your organization written down — before deployment — what happens when this system harms someone?
A document that exists before the first deployment: who is accountable, how a mistake is corrected, and how the affected person is notified.
When missing: 3 extra years of harm after the first incident
C2
Has someone independent of the vendor tested whether it works for your specific population?
Not the vendor's own accuracy studies. An independent party with no financial relationship, testing against your population and use context.
Missing in 249 of 260 failures
C5
Can people inside your organization report problems without it going through the vendor or the supervisors whose metrics depend on the system continuing?
The channel has to bypass both. Without it, someone who sees a problem has no safe path to say so.
Someone knew in 97% of failures — and couldn't say so
C3
Is someone reviewing what this system is doing, with authority to act on what they find?
A named individual, on a defined schedule, with documented authority to escalate or stop the system. Not a dashboard. A person with obligations.
Missing in 241 of 260 failures
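The five conditions above reduce to a simple scoring exercise. The sketch below is a hypothetical illustration: the page describes conditions as present, partial, or absent, but the weights (present = 1.0, partial = 0.5, absent = 0.0) and the simple average are assumptions; only the 0.6 threshold comes from the diagnostic itself.

```python
# Hypothetical sketch of scoring the five conditions (C1-C5).
# Weights and the simple average are illustrative assumptions;
# only the 0.6 threshold is taken from the HSRL page.

WEIGHTS = {"present": 1.0, "partial": 0.5, "absent": 0.0}
THRESHOLD = 0.6

def diagnostic_score(conditions):
    """conditions: dict mapping each condition to 'present'/'partial'/'absent'."""
    return sum(WEIGHTS[v] for v in conditions.values()) / len(conditions)

system = {
    "C1_named_owner":         "present",  # one accountable individual
    "C2_independent_test":    "absent",   # vendor-only accuracy studies
    "C3_active_review":       "partial",  # dashboard exists, no named reviewer
    "C4_written_harm_plan":   "absent",   # no pre-deployment harm plan
    "C5_protected_reporting": "partial",  # channel exists but routes via vendor
}

score = diagnostic_score(system)
print(score, "above threshold" if score > THRESHOLD else "governance gap")
# 0.4 governance gap
```

Under these assumed weights, a system missing even two of the five conditions outright cannot clear 0.6, which matches the page's framing that all five need documented answers.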
0.6
The score separating all 260 failures from all 120 resilient cases.
Found in the data. Not decided.
100%
Classification accuracy.
Zero failures above the threshold.
Zero resilient cases below it.
5yr
Median time from deployment
to the first person paid back.
Maximum observed: 25 years.
What the Timeline Data Shows
How long harm runs
before anyone stops it.
Across 380 documented cases, the same pattern: warning signs appear early, governance gaps keep them from being addressed, and by the time someone outside the organization discovers what's happening, years of harm have already occurred.
Pre-Deployment
Zero
cost to victims
Running the diagnostic before deployment closes the entire harm window for 71% of documented failures. This is the only point where the cost to victims is zero.
First Signal
3yr
median harm after first warning sign
In most documented failures, warning signs appeared at or before deployment. Someone saw them. But with no accountable owner and no protected reporting path, the warnings went nowhere.
Discovery
4yr
median deploy-to-discovery
In 92.5% of documented failures, the problem was discovered by a journalist, an advocate, or a lawsuit — not by the organization running the system. By then, 4 years of harm had already run.
Justice Lag
5yr
median deploy-to-victim-paid
Even after a failure becomes public, people wait years. Median: 5 years from deployment to first person paid back. Maximum: 25 years. In irreversible domains, no payment restores what was taken.
WHO USES HSRL
If your agency buys, oversees, or is affected by automated systems,
this is for you.
🏛
Procurement Officers & Government Buyers
Before you sign the contract: run the diagnostic. The five accountability questions need documented answers before deployment — not remediated after someone is harmed. HSRL tells you exactly which governance gaps exist in the system you're buying, and gives you language to require those gaps be filled as a contract condition.
Pre-contract governance assessment — know what you're buying before you sign
RFP language requiring named accountability and independent validation
The Agency SGL: a live governance accounting system tracking every system, every diagnostic, every condition score, and every re-audit schedule — included when HSRL is embedded in your workflow
Without the Agency SGL: your practitioner holds your records and delivers PDF documents — the Agency SGL is what gives your team live access
⚖
Oversight Professionals, Advocates & Affected People
The diagnostic is free and public. You don't need a law degree, a research institution, or technical expertise to run a structured governance assessment on a system that affects you or your community. The result tells you exactly what accountability structures are missing — in plain language, with documented evidence.
Run the diagnostic on any system and download the finding
Compare against documented cases that most closely match
Generate a structural evidence summary for legal, media, or oversight use
Understand which questions to ask — and which answers aren't acceptable
🔬
Practitioners, Auditors & Safety Engineers
HSRL practitioners are certified to issue verified governance findings — documents that hold up in procurement review, regulatory submission, and legal proceedings. The methodology bridges to aviation, nuclear, and infrastructure safety engineering and connects algorithmic governance to the established science of how complex systems fail.
Practitioner certification and verified finding issuance
380-case dataset for empirical research and comparative analysis
Pre-harm record architecture with immutable audit trail
Revenue infrastructure: diagnostic platform, directory listing, mentorship
How HSRL Fits With Frameworks You Already Use
NIST. EU AI Act. COSO.
They cover what they were built to cover. HSRL covers what they weren't.
These are serious frameworks built by serious people. The gap they all leave open (by design, not by oversight) is the accountability question. HSRL doesn't replace any of them. It covers the space they leave.
What existing frameworks cover well
✓Risk management processes and controls (NIST, ISO 31000)
✓Cybersecurity frameworks and incident response (NIST CSF)
✓Bias documentation and transparency requirements (EU AI Act)
✓Enterprise risk management and internal controls (COSO ERM)
✓Post-incident investigation and root cause analysis (NTSB)
✓Privacy impact assessment and data handling standards
The gap they were not built to fill
—Whether a specific named person is accountable for this system's decisions
—Whether the system has been independently validated for your population — not the vendor's test conditions
—Whether there is a written plan for what happens when the system harms someone
—Whether people inside your organization can report problems without it going through the vendor or the chain of command that depends on the system continuing
—Whether a pre-deployment record exists that cannot be retroactively changed after harm occurs
How HSRL works alongside the frameworks you already use
NIST / ISO 31000: Organizations that follow these frameworks are better positioned for HSRL assessment. Documented resilient cases in the dataset include NIST-compliant organizations. HSRL adds the accountability layer those frameworks leave to organizational discretion.
EU AI Act: The Act requires high-risk AI systems to have human oversight and accountability mechanisms. A GCR-001 pre-deployment record documents that those requirements have been met — and creates an evidence trail that holds up if they're ever questioned.
COSO ERM: Enterprise risk management identifies and categorizes risks. HSRL tests whether the specific accountability structures that prevent those risks from becoming harm are actually in place — not just documented as policies, but operationally real.
NTSB Methodology: NTSB investigates failures after they happen. HSRL applies the same structural analysis before deployment — the same question, asked at the point where the answer can still prevent harm.
Where to Go Next
Everything else
has its own page.
01 · The Diagnostic
The Five-Condition Diagnostic
Run the diagnostic on any automated system. Free, public, anonymous. See which of the five conditions are present, partial, or absent — and which documented failures most closely match.
Run free diagnostic →
02 · After the Diagnostic
The Remediation Pathway
What happens when a diagnostic finds gaps. The four phases: findings, organization builds, re-audit, certification. The GCR record types, the SGL ledger, and real pathway examples for facial recognition and NLP report writing.
See the pathway →
03 · Self-Service Documents
Generate Your Own Documents
Summary reports, full governance assessments, and procurement letters — generated instantly from your diagnostic inputs. Pay per document. Not reviewed by HSRL staff, but structured, evidence-cited, and formatted for institutional use.
See document types →
04 · For Agencies
Agency Integration & the Agency Structural Governance Ledger
Agencies with HSRL embedded in their workflow get the Agency SGL — a governance accounting system that tracks every system assessed, every diagnostic run, every condition score, and every re-audit schedule in one live ledger. Practitioners credential into the Agency SGL to run scheduled diagnostics directly inside it. Agencies without embedded HSRL receive PDF documents from their practitioner — no live system access. The Agency SGL is what changes that.
Learn about the Agency SGL →
05 · Practitioners
Build a Practice on This Framework
Certification, license tiers, revenue structure, chapter leadership. For attorneys, auditors, government technology consultants, veterans advocates, social workers, and safety engineers.
See practitioner program →
06 · About
Built from the Cases, Not from Theory
How the 380-case dataset was built, how the threshold was found, what HSRL is analogous to (building codes, NTSB, credit ratings), and why the nonprofit structure matters for the diagnostic's independence.
About HSRL →
The record should exist
before the harm.
Run the free diagnostic. Request a pre-deployment governance record. Or contact HSRL for practitioner certification and partnership inquiries.
Public diagnostic launching April 2025 · Practitioner access Q3 2026 · Research inquiries welcome