HOME | ABOUT

Built from the cases,
not from theory.

The Human Systems Risk Lab was founded to build what doesn't yet exist: a governance framework for automated systems validated against documented outcomes rather than assembled by consensus. The 380-case dataset was built over three years of systematic case analysis. The five conditions weren't chosen by committee. The 0.6 threshold wasn't decided. It was found in the data.

Core Finding

0 failures above threshold.
0 resilient cases below it.

Across 380 cases, 57 triggers, and 97 years of documented history. The threshold is 0.6. It is not a guideline — it is where the dataset places the line between the systems that hurt people and the ones that did not.

Suppression Finding

97% of algorithmic failures
involve active suppression.

Suppression adds approximately 4 years to the median harm window. The GCR record is SHA-256 signed before deployment — before any incentive to suppress has formed. That is the only window in which suppression can be prevented, not just documented afterward.
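The sealing step can be illustrated in a few lines. This is a minimal sketch, not HSRL's implementation: the record fields and canonicalization choices here are hypothetical, and the real GCR format is not published. The core idea is simply that hashing a canonical serialization before deployment makes any later edit detectable.

```python
import hashlib
import json

def seal_record(record: dict) -> str:
    """Return a SHA-256 digest over a canonical serialization of the record.

    Canonical form (sorted keys, fixed separators) means the same record
    always produces the same digest, so any after-the-fact change shows up
    as a mismatch against the pre-deployment hash.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical pre-deployment record: field names are illustrative only.
gcr = {
    "system": "eligibility-scoring-v2",
    "named_owner": "J. Example",
    "validation": "independent, in-context",
    "sealed_before_deployment": True,
}
digest = seal_record(gcr)  # 64 hex characters, fixed at sealing time
```

Because the digest is fixed before any incentive to suppress exists, a later "corrected" version of the record cannot reproduce it.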

Justice Lag Finding

Median 5 years from
deployment to victim compensation.

Maximum: 25 years (Post Office Horizon, 1999–2024). In irreversible domains — criminal justice, child welfare, healthcare — compensation cannot undo the harm. Pre-deployment review is the only effective intervention.

WHERE THIS CAME FROM

Built because a system
nearly killed someone

I love.

It started with the VA. My husband is a high-risk combat veteran navigating one of the most fragmented system environments in the federal government. I have a background in forensic accounting, ERP implementation, and systems remediation. So when the VA's systems kept failing him — not because anyone wanted them to, not because anyone inside them was trying to cause harm, but because the structural conditions for catching and correcting errors simply weren't there — I had the specific skills to see exactly what was happening.

I built tools to help submit complaints to the OIG. I documented the patterns. I kept navigating — through healthcare systems, benefits systems, criminal justice systems, veterans' services. The same five structural absences kept appearing. No named owner. No independent validation. No real monitoring. No pre-established accountability. Failure signals that went nowhere. Different systems, different agencies, different vendors. Same architecture of harm.

In December 2025, a systems failure nearly killed my husband. At that point this stopped being professional analysis and became a mission. I needed to get this framework to anyone who needed to understand how a system's structure can cause harm even when no one inside it wants that outcome or has any control over what happens.

What helped me through that period — and I recognize this is unusual — was diving into more analysis. One night I ran the diagnostic I'd been developing against 8 historical failure cases. Random ones. It fit with a specificity that made me uncomfortable. So I expanded to 40 failure cases. Same result. By this point I was convinced I was either onto something real or had built an elaborate confabulation. So I tested it on resilient systems — systems that had operated for decades without causing harm. All five conditions were present in every one. I kept trying to break it. I added cases labeled as technical failures. Cybersecurity failures. More resilient systems. The dataset grew to 380 cases across 22 domains, 30+ countries, and 50 years. What came back was a binary answer I couldn't argue with.

"I kept trying to break it. The dataset grew to 380 cases. What came back was a binary answer I couldn't argue with."

Taylor · Founding Director, HSRL

How the framework was actually built

The forensic accounting background provided the structural insight that made the SGL possible — governance records comparable in discipline to financial statements, traceable and defensible. The systems architecture experience provided the failure mode taxonomy. The direct experience navigating broken systems while caring for a high-risk veteran provided the pattern recognition that years of consulting alone would not have produced. The framework is the intersection of all three. None of them alone would have been sufficient.

Why the ecosystem model

The methodology belongs to the people who need it. Profits don't accumulate inside HSRL — they flow outward through a practitioner ecosystem designed to expand safety as quickly as possible while keeping practitioner work financially sustainable. The mission is reach, not control. The free public diagnostic is permanent. That is not a business decision. It is the point.

FOUNDING DIRECTOR

Taylor

Founding Director · Human Systems Risk Lab

Principal of Organizational Infrastructure Forensics & Preventive Control Architecture at Structural Governance Systems. Founder and Director of the Human Systems Risk Lab. Forensic Systems & Policy Risk Analyst at Behind The Twenty Two, a veteran advocacy organization.

8 yr Fractional financial & ops consulting

380 Cases in the validation dataset

3 yr Dataset development timeline

The work started with a complaint form. My husband is a high-risk combat veteran. Navigating the VA meant navigating dozens of disconnected systems — benefits, healthcare, criminal justice, housing — none of which were designed to talk to each other and all of which had the same structural gap: no one inside them was accountable for what happened when they failed. I had a forensic accounting background and eight years of systems remediation work. I could see exactly what was broken. So I built documentation tools to support him and other veterans navigating the same environment.

The pattern recognition that built HSRL came directly from that work. Not from research. From being inside broken systems repeatedly, with enough technical background to understand the failure architecture rather than just experience the harm. The same five structural absences kept appearing across completely different systems in completely different domains. That consistency is what became the framework.

The accounting background is what made the SGL conceptually possible. Governance records that function like financial statements — traceable, signed before deployment, maintaining a chain of custody for what was known and when — that framing came from forensic accounting applied to a problem the governance space has been treating with compliance checklists. It is a different class of record and produces different evidentiary weight.

The work at Behind The Twenty Two — building documentation infrastructure for veterans navigating VA systems and the criminal justice system — is the direct origin of the Laundering Suite. Watching organizations systematically evade accountability after documented failures, across healthcare, benefits, and criminal justice simultaneously, produced the 16-pattern taxonomy that nobody in the governance space had named and coded before.

Why the ecosystem model. This methodology belongs to the people who need it. The diagnostic is free and will stay free. Profits from advisory services and practitioner licensing flow outward through the practitioner network rather than accumulating inside HSRL. The mission is to get this out as fast as possible to as many systems as possible. A monopoly on governance diagnostics would be the wrong outcome even if it were achievable. The point is reach.

THE DATASET

380 cases. Every one scored.
Every score defensible.

Three years of systematic case analysis. Every case independently scored against the framework with evidence citations for each determination. Every score is defensible because every determination has a documented evidentiary basis — not expert judgment about what probably happened.

380

Total cases in the
validation dataset

260

Documented failure
cases scored

120

Resilient comparator
cases scored

75

Domains across
96 countries

97

Year range of
case history

Condition prevalence in 260 failure cases

C1 Named Owner absent · 240 / 260

C2 Independent Validation absent · 249 / 260

C2 with vendor IP block (L2 pattern) · 80 / 260

C3 Monitoring absent · 241 / 260

C4 absent → additional harm window · Median 3 yr

C5 Suppression in algorithmic failures · 97%

Dataset structure & classification

Binary threshold accuracy (GPWS 0.6) · 100%

Cases misclassified at threshold · 0 / 380

Median signal-to-halt window · 3 years

Median deploy-to-victim-paid · 5 years

Maximum harm window documented · 25 years (Horizon)

Cases with full timeline data · 21 cases

SAMPLE CASES FROM THE DATASET

THE FIVE CONDITIONS

Identified from the data. Not assembled by committee.

Each condition earned its place because it is absent with near-total consistency in documented failure cases and present with near-total consistency in documented resilient cases. The full suite of diagnostic triggers operationalizes each condition at the evidence level, not the aspiration level.

C1

Named Accountable Owner

One named individual — not a committee, not a role, not a shared team — with documented authority to halt system outputs and invoke the accountability protocol. Authority must be documented in writing. Committees fail C1 regardless of seniority.

Absent in 240 / 260 failure cases · 12 diagnostic triggers

C2

Independent Validation

Pre-deployment validation of system methodology by a party with no financial relationship to the deploying organization. Vendor IP claims that restrict validation scope score C2 ABSENT — not PARTIAL. Validation must occur in the actual use context, not a controlled test environment.

Absent in 249 / 260 failure cases · L2 vendor IP block in 80 cases · 14 triggers

C3

Structured Ongoing Monitoring

Named reviewer, defined review cycle, documented findings, logged anomalies. Not periodic audits at convenient intervals. A dashboard no one reviews doesn't meet C3. Monitoring must be continuous with a named person accountable for each review cycle.

Absent in 241 / 260 failure cases · 11 triggers

C4

Pre-Established Accountability Protocol

Documented protocol for what happens when the system produces a harmful output — drafted and signed before deployment, not after the first harm event. Named person with authority to invoke it. C4 absent: median 3 additional years of harm after the first documented harm event. C4 present: median zero additional years.

C4 absent → 3yr median additional harm window · C4 present → 0yr · 10 triggers

C5

No Active Suppression

No documented suppression of internal signals, external concerns, or independent findings. The last line of defense when C1–C4 fail — and the condition whose absence most reliably predicts maximum harm duration. Suppression present in 83% of all failures and 97% of standard algorithmic failures. C5 absent adds approximately 4 years to the median harm window. The GCR record's primary function is making suppression visible before harm, not documenting it afterward.

83% all failures · 97% algorithmic failures · Adds ~4yr to harm window · 10 triggers
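The relationship between the five conditions and the 0.6 threshold can be sketched in code. The published materials do not specify the scoring formula, so equal weighting of C1–C5 is an assumption here; the real diagnostic scores each condition through its evidence-level triggers, not a simple boolean.

```python
# Assumption: each of the five conditions contributes equally to the score.
CONDITIONS = ("C1", "C2", "C3", "C4", "C5")
THRESHOLD = 0.6  # the empirically located line, per the dataset

def condition_score(present: dict) -> float:
    """Fraction of the five conditions documented as present (0.0 to 1.0)."""
    return sum(1 for c in CONDITIONS if present.get(c)) / len(CONDITIONS)

def classify(present: dict) -> str:
    """At or above the threshold the dataset shows resilience; below it, failure."""
    if condition_score(present) >= THRESHOLD:
        return "resilient-pattern"
    return "failure-pattern"

# Example: named owner (C1) and monitoring (C3) present, rest absent.
score = condition_score({"C1": True, "C3": True})  # 2 of 5 -> 0.4
```

Under this simplified model, three documented conditions out of five is the minimum that clears the 0.6 line.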

Framework Comparison

What HSRL has that no other framework does.

HSRL is not a replacement for NIST, ISO 31000, or the EU AI Act. It is the outcome-validated diagnostic layer those frameworks are missing — and the pre-harm record architecture that makes compliance defensible rather than aspirational.

Historical Comparables

HSRL is not the first institution to formalize what was once left to individual judgment.

Every institution that now seems obvious — building inspectors, flight data recorders, credit ratings — was built in response to a specific category of harm that occurred because no structural verification mechanism existed. HSRL is that institution for automated systems.

Post-1911 · Building & Infrastructure

Building Codes & Certificate of Occupancy

Before building codes, structural safety was assessed after occupancy — when people were already living in buildings of unknown safety. The certificate of occupancy formalized a pre-occupancy structural record: named inspector, defined standards, permanent documentation of what was verified and when. The inspector cannot be the same party who built what they inspect.

HSRL parallel

HSRL is building codes for automated systems. The GCR is the certificate of occupancy. The SGL is the structural record that documents what was known and when — before the system is deployed into a population it can harm.

Post-1967 · Aviation Safety

NTSB / Flight Data Recorder Architecture

The flight data recorder exists before the crash — not to prevent it, but to ensure that when harm occurs, the record of what was known cannot be reconstructed after the fact. The NTSB methodology created systematic post-incident accountability from flight data. The RTF dataset — 75 cases including Boeing MCAS, Therac-25, and Bhopal — bridges HSRL directly to the aviation and industrial safety literature.

HSRL parallel

HSRL is the pre-deployment equivalent. The SGL is the governance flight data recorder — signed before deployment, immutable, documenting what was present and absent at every point in the system's operational life.
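The "cannot be reconstructed after the fact" property is the defining feature of a flight-data-recorder-style ledger. A minimal sketch of that property, assuming a hash chain — the actual SGL format and API are not published, and cryptographic signing is omitted here:

```python
import hashlib

class GovernanceLedger:
    """Append-only log where each entry commits to the previous entry's hash,
    so editing any past entry breaks the chain. A sketch only: the real SGL
    is also signed, which this simplified version omits."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []  # list of (prev_hash, text, entry_hash)

    def append(self, text: str) -> str:
        prev = self.entries[-1][2] if self.entries else self.GENESIS
        h = hashlib.sha256((prev + text).encode("utf-8")).hexdigest()
        self.entries.append((prev, text, h))
        return h

    def verify(self) -> bool:
        """Recompute every hash; any edited entry breaks the chain."""
        prev = self.GENESIS
        for stored_prev, text, h in self.entries:
            if stored_prev != prev:
                return False
            if hashlib.sha256((prev + text).encode("utf-8")).hexdigest() != h:
                return False
            prev = h
        return True

sgl = GovernanceLedger()
sgl.append("pre-deployment: named owner documented")
sgl.append("pre-deployment: independent validation filed")
```

Rewriting an early entry would force recomputing every hash after it, which is exactly what makes the record's history checkable rather than merely asserted.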

Pre-Securitization · Financial Systems

Credit Ratings & Standardized Assessment

Credit ratings formalized standardized third-party assessment against defined criteria. The key distinction from HSRL: credit ratings were built on expert judgment, not outcomes data. When securitization outpaced the validity of the rating models, the gap between rating and actual risk became catastrophic. The 380-case dataset is outcomes data. The threshold was found, not decided.

HSRL parallel

HSRL provides for governance what credit ratings tried to provide for financial instruments: a standardized assessment grounded in documented outcomes rather than consensus models. The threshold is empirical. The validation is structural.

Independence Policy

The diagnostic standard is independent of the services that apply it. This is non-negotiable.

HSRL's independence is structural, not just claimed. It is built into the architecture of the organization, the practitioner program, and the record-issuance process in ways that make its violation detectable rather than simply prohibited.

01

No case scored under a paid engagement

Every case in the 380-case dataset was scored independently of any engagement with the organization operating the system. Case scoring is research, not service delivery. The standard cannot be calibrated to a client's preferred outcome.

02

No vendor-embedded diagnostics

HSRL does not offer vendor-embedded diagnostic products. A vendor who embeds a governance diagnostic into their own product controls the diagnostic experience for their own system — the same conflict of interest that arises when a financial auditor is embedded within the company they audit.

03

Practitioner boundary enforced at certification

A practitioner who builds a client's governance structure cannot certify it. The HCGD certification standard enforces the auditor independence principle. A practitioner who violated this boundary would be issuing a GCR on work they performed — the record would be valueless and the certification revoked.

04

GCR-003 post-incident records issued by founding director only

The post-incident accountability record is not delegated to practitioners. The founding director issues all GCR-003 records personally. Post-incident accountability documentation is too consequential to be a line item in a practitioner's revenue model.

05

The public tools remain free

The case database, the five-condition diagnostic, and the self-service document outputs are free and publicly accessible. The governance standard does not require a paid engagement to access. Practitioner services support the research infrastructure — they do not gate the diagnostic.

Why Agency-Side Only

HSRL does not offer vendor-embedded diagnostics. A vendor who embeds a governance diagnostic into their own product controls the diagnostic experience for their own system — which triggers are surfaced, how findings are framed, whether absent conditions generate warnings, and whether the record is filed before or after the vendor's legal team reviews it.

That is not independence. It is the appearance of independence with vendor approval over every output. The agency SGL is agency-owned. The practitioner credentials into the agency's ledger — they do not host the agency's records in a vendor system.

Research Inquiries Welcome

HSRL actively accepts research inquiries from academic institutions, policy organizations, oversight bodies, and investigative journalists. The evidentiary archive is maintained as an independent research resource.

HSRL is affiliated with the Human Systems Risk Lab research program at Arizona State University.

Send Research Inquiry →

Case Submission

The dataset grows through documented cases. Submissions are welcome.

HSRL accepts case submissions from practitioners, researchers, advocates, journalists, and affected individuals. Every submitted case is independently reviewed, scored against the five-condition framework with evidence citations, and considered for inclusion. The submitter's name is not required.

What makes a case suitable for inclusion

→ The system is automated or semi-automated — algorithmic decision-making, AI-assisted output, or automated process execution affecting real people

→ There is documented evidence: news coverage, court records, regulatory findings, OIG reports, whistleblower testimony, or academic research

→ The case involves a documented harm or a documented resilient outcome — both failure cases and resilient comparators are needed

→ The case is from any country or jurisdiction — the dataset spans 96 countries and HSRL actively seeks non-English-language cases

→ The submitter does not need a professional relationship with HSRL — affected individuals and community advocates are valuable sources

What HSRL does with submitted cases

Every submission is independently reviewed. Cases accepted into the dataset are scored against all 57 triggers with documented evidence citations for each determination. Submitters who request attribution are credited in the case record. Cases not accepted for the main dataset may be added to the supplementary archive with submitter consent.


Case Submission Form

System Name / Organization

Domain: Criminal Justice · Healthcare · Benefits · Child Welfare · Other

Case Type: Failure Case · Resilient Case

Evidence Sources

Country / Jurisdiction

Your Name (optional)

Submit Case for Review →

Cases reviewed within 30 days. Submitter anonymity maintained on request. HSRL does not contact submitters without permission. Submission does not constitute endorsement of HSRL's methodology.

The record should exist
before the harm.

Run the free diagnostic. Request a pre-deployment governance record. Or contact HSRL for practitioner certification, research inquiries, and partnership discussions.