It begins, as so many medical journeys regrettably do, with an act of faith. A person in crisis — frightened, disoriented, clinging to the thinnest thread of resolve — presents themselves to a system that advertises itself as a sanctuary. One imagines an orderly progression: distress recognized, risk assessed, treatment initiated, follow‑up secured. That is the mythology.
What actually unfolds bears little resemblance to such reassuring narratives. Instead, the patient is ushered through a succession of assessments — often repeated, conducted by staff stretched thin — and then discharged into a statistical void with nothing but vague promises of follow‑up. All of this proceeds with the serene confidence of an institution that knows no one is counting the outcomes. It is, in its quiet way, sinister.
I do not employ that word carelessly. “Sinister” is reserved for matters in which harm is not incidental but structural: the result of machinery designed without regard to the human beings ground within it. One thing becomes abundantly clear when tracing these medical peregrinations: the system is configured to avert its gaze precisely at the moments it should stare hardest.
The Growing Gap Between Demand and Capacity
Recent data show dramatic and sustained growth in demand for mental health services across England. In 2024/25, there were on average 453,930 new referrals to secondary mental health services every month — a 15% increase compared to 2022/23 (CQC, 2025). Yet despite this surge, systemic capacity has not scaled accordingly. Waiting times remain protracted, and bottlenecks continue to accumulate.
According to the most recent Care Quality Commission (CQC) “Community Mental Health Survey 2024,” which collected responses from over 14,000 people, a third (33%) reported waiting three months or more between their assessment and their first treatment appointment, and 14% waited more than six months (CQC, 2025). Meanwhile, two in five (40%) felt the waiting time was too long, and 42% reported their mental health worsened during that wait (CQC, 2025).
These findings reflect a severe mismatch — the system is accepting referrals, but it cannot guarantee timely treatment. And for many, “timely” is no longer meaningful if measured in months.
What the Data Does Not Capture — and Why That Silence Matters
If one draws a schematic of the typical pathway for a person in crisis — referral → assessment → treatment → outcome (improvement, stabilization, deterioration, or death) — a robust system would record every node. But in the current configuration of the national data‑sets, especially the Mental Health Services Data Set (MHSDS) and associated reporting frameworks, outcome data is scant or absent.
Specifically, publicly available data rarely track whether:
each assessment (particularly crisis referrals) resulted in a first treatment contact within a clinically reasonable timeframe;
the person’s condition improved, stayed stable, deteriorated, or resulted in self-harm or suicide during the waiting period;
the assessment was conducted by appropriately qualified personnel (psychiatrist vs nurse vs unqualified staff);
there was continuity of care, repeated contacts, discharge, re-referral, or follow-up;
demographic variables — such as socioeconomic status, region, ethnicity — influence access, delay, or outcomes.
In short: there is no epidemiological “upstream‑to‑outcome” tracking for mental health crisis care. A system so structured effectively guarantees that failures — deterioration, relapse, suicide — may occur without ever being attributed back to the system’s delays or mismanagement. That “data‑void” is not incidental — it is functional. By omitting outcome‑tracking, the system immunises itself against systemic accountability.
The Human Cost — Testimony Speaks Where Quantitative Outcome Data Is Silent
Where gold‑standard quantitative longitudinal outcome data is absent, qualitative testimony still reveals a consistent pattern of suffering and abandonment. In the 2025 survey by Rethink Mental Illness, many respondents described being left in crisis for months or years without meaningful support. The report quotes one individual:
“I received no help at all until it was too late. My psychosis was full‑on, and an attempted suicide was the only thing that got me help.” (Rethink Mental Illness, 2025, p. 7)
In that same survey, 3 in 4 respondents (73%) said they did not receive the right treatment at the right time (Rethink Mental Illness, 2025). A majority (83%) said their mental health had deteriorated while waiting, and approximately one in three (31%) reported they had attempted to take their own life during that wait (Rethink Mental Illness, 2025). Additional harms included increased self-harm behaviours, substance use, job loss, and repeated emergency-service contact (Rethink Mental Illness, 2025).
When such qualitative testimonies are aggregated — repeated across hundreds of respondents — they form a pattern. A consistent motif of abandonment, institutional invisibility, and human cost. That this is experienced across different regions, conditions, and backgrounds suggests systemic failure — not just misfortune or isolated poor service.
Crisis Referrals: Escalation Without Resolution
The pressure on crisis services has surged. According to CQC 2024/25 data, the number of “very urgent” referrals to crisis teams rose sharply — to 60,935 in 2024/25, marking a 77% increase compared with 2023/24 (CQC, 2025). Yet the capacity to respond has not kept pace: many people endure long waits, receive no follow-up, or are discharged after assessment without treatment. The report notes “inconsistencies in commissioning” and “huge variation in care depending on geography” (CQC, 2025).
These are not nominal failures — these are failures at the very moment of acute risk, when prompt intervention might make the difference between life and death.
The Structural Invisibility of Harm — Why “No Data” Means “No Accountability”
When a system fails to measure its outcomes, it removes the possibility of accountability. That is not just bureaucratic oversight — it is structural self‑preservation. Because we do not record:
how many people deteriorated or attempted self‑harm while waiting for treatment,
how many died by suicide following referral-and-wait,
how many had repeated assessments without ever entering true treatment pathways,
which demographics are disproportionately harmed —
the system can survive waves of crisis, budget cuts, rising demand — and still claim “we met demand,” because what it counts is inputs (referrals, contacts, assessments, crisis calls) — not outcomes (recovery, stabilization, harm, death).
That is a disservice to the patients who fall through—and a betrayal of the social contract between public health and public trust.
Toward a Minimum Data Framework — What Real‑World Accountability Would Look Like
If one were to design a system that actually protected patients, rather than protected itself, one would demand the following data be collected and published (anonymised, aggregated, but with sufficient granularity):
Referral‑to‑treatment latency: for every referral or crisis assessment, record the date of first treatment contact; compute median, mean, distribution, disaggregated by risk level, region, demographic.
Longitudinal clinical outcomes: at defined intervals (e.g. 1, 3, 6, 12 months), record clinical status: stable, improved, worsened, self‑harm, suicide attempt, suicide.
Provider credentials data: for every assessment and treatment contact, record the role/qualifications of staff (psychiatrist, nurse, support worker, peer‑support, etc.).
Continuity and care trajectory: for each patient — number of repeated assessments, number of actual treatment interventions, discharges, re‑referrals, drop‑outs, follow‑up rates.
Equity / demographic metadata: age, gender, ethnicity, socioeconomic status, region — to reveal systemic inequalities and postcode‑lotteries.
Transparency and public reporting: annual publication of anonymised, aggregated outcome data — with sufficient detail to detect systemic failures, variation, and inequality.
In research terms: what is needed is a prospective longitudinal registry — analogous to those used in large‑scale chronic‑illness cohorts — but for mental‑health crisis referrals. Only such a registry could reveal the “mortality” of waiting lists, the morbidity of delay, and the human cost hidden within the clerical columns.
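As a sketch, the first metric in the framework above (referral‑to‑treatment latency, aggregated by region) could be computed from anonymised registry records along these lines. All field names and figures here are invented for illustration; they are not drawn from the MHSDS or any real data set:

```python
from datetime import date
from statistics import mean, median

# Hypothetical, minimal referral records: a region, the referral date, and
# the date of first treatment contact (None if treatment never began).
referrals = [
    {"region": "North", "referred": date(2024, 1, 10), "first_treatment": date(2024, 4, 2)},
    {"region": "North", "referred": date(2024, 2, 1),  "first_treatment": date(2024, 2, 20)},
    {"region": "South", "referred": date(2024, 1, 5),  "first_treatment": None},
    {"region": "South", "referred": date(2024, 3, 3),  "first_treatment": date(2024, 9, 15)},
]

def latency_days(rec):
    """Referral-to-treatment latency in days, or None if never treated."""
    if rec["first_treatment"] is None:
        return None
    return (rec["first_treatment"] - rec["referred"]).days

# Aggregate per region. Crucially, untreated referrals are counted rather
# than silently dropped: leaving them out reproduces exactly the
# "invisible numerator" problem the text describes.
by_region = {}
for rec in referrals:
    stats = by_region.setdefault(rec["region"], {"latencies": [], "untreated": 0})
    d = latency_days(rec)
    if d is None:
        stats["untreated"] += 1
    else:
        stats["latencies"].append(d)

for region, s in sorted(by_region.items()):
    if s["latencies"]:
        print(region,
              "median days:", median(s["latencies"]),
              "mean days:", round(mean(s["latencies"]), 1),
              "never treated:", s["untreated"])
```

A real registry would of course disaggregate further (risk level, ethnicity, deprivation index) and report full distributions rather than point estimates, but even this toy version makes visible the two numbers the current reporting regime omits: how long treated people waited, and how many were never treated at all.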
Why the Absence of Data Is Possibly the Strongest Evidence of Institutional Harm
We often regard bad data as a hindrance — something that complicates research. But in this context, “no data” is not an unfortunate oversight. It is likely the mechanism by which the system maintains plausible deniability.
If the system counted suicides that occur after referral‑and‑waiting, it might reveal a high mortality associated with waiting lists.
If it tracked repeated assessments without treatment, it might show that many people never receive care beyond a paper trail.
If it captured outcomes by region, it could expose inequalities and postcode‑lotteries.
If it recorded staff credentials, it would show how many assessments are done by under‑qualified staff — or outside recommended professional standards.
By failing to collect those data, the system ensures that such exposures are impossible.
The result: a healthcare institution that can truthfully claim “we handled X hundred thousand referrals this year,” while a large—and unknown—number of people deteriorated, self‑harmed, or died in limbo.
That is not negligence; that is structural self‑protection.
Conclusion: Silence Is Not Innocence — It’s Evidence
If one accepts that public‑health systems owe patients not only care but accountability, then the absence of outcome data for mental‑health crisis care must be understood as a failure of duty.
We do not have reliable epidemiological data on how many people assessed in crisis go on to receive timely, adequate treatment — nor on how many deteriorate or self‑harm or die while waiting. What we do have — in surveys and qualitative testimonies — is clear evidence that many endure intolerable delay, inadequate or inappropriate care, repeated institutional abandonment.
In research terms: this means the “denominator” (people assessed) is known — but the “numerator” (people treated successfully; people harmed; people lost) is invisible. A ratio that can never be calculated. A failure that can never be quantified.
Yet that invisibility is precisely where the greatest harm occurs. It is a void that swallows stories, strips suffering of official recognition, and renders statistical the fate of individuals.
This is not a benign omission — it is a method of institutional self‑preservation.
Until we insist — politically, socially, ethically — that mental health outcomes be tracked with the same rigor as physical health outcomes, the system will continue to shield itself behind the pretense of “data.” But that very pretense is the most damning data of all: the data that tells us the system does not care to know its failures — and in doing so, ensures they continue.
References
Care Quality Commission. (2025). High demand, long waits, and insufficient support, mean people with mental health issues still not getting the support they need [Press release].
Care Quality Commission. (2025). State of Care 2024/25 — Mental health: Access, demand and complexity.
Rethink Mental Illness. (2025). Right Treatment, Right Time 2025 report.