Public-interest organisational research

Public evidence. Academic benchmarks. Better accountability.

Observed uses AI-assisted analysis of publicly available information, benchmarked against peer-reviewed academic research, to identify patterns in organisational behaviour that may indicate harm to workers, communities or public trust.

Public evidence only
No surveillance, hacking, impersonation or private investigation methods.

Comparison, not accusation
The benchmark generates the standard. The gap is where the work begins.

Human-reviewed outputs
AI assists the research. A human reviewer remains accountable.

Individual cases get heard. Systemic patterns often do not.

Harmful organisational practices can remain hidden behind polished public claims, disconnected complaints and scattered public evidence. Observed addresses that gap by turning public signals into structured, academically grounded comparison.

Public claims can outpace public reality

Organisations may present strong values, impact and wellbeing messages while public signals point to unresolved questions.

Workplace harm is often treated as isolated

Employee experience signals, public cases and governance indicators are often assessed separately when they need to be viewed as a pattern.

Public funding requires public accountability

Where public money, vulnerable communities or social outcomes are involved, stakeholders need clear and source-cited insight.

Research gives the comparison discipline

Observed does not amplify the loudest story. It applies recognised academic and good-practice benchmarks to what is publicly visible.

How the model works

Three components. One disciplined research method.

Every output is built through public signal collection, academic benchmark comparison and structured gap analysis. The goal is not to accuse. It is to compare what is publicly visible against what recognised research says good practice looks like.

1

Public signal collection

AI-assisted collection identifies lawful, attributable public signals from records, media, review platforms, registers, official material and organisation-owned claims.

2

Academic benchmark comparison

Signals are assessed and compared against peer-reviewed research and recognised good-practice frameworks. The benchmark sets the standard.

3

Published gap analysis

Outputs show where public signals align with or diverge from benchmark indicators, with confidence ratings, limitations and right of response.

Founding focus: workplace harm and organisational bullying culture.

Observed begins with workplace harm because the public consequences are serious, the research base is strong, and the gap between public claims and public signals can be significant. The founding benchmark library includes workplace bullying culture, psychosocial risk, psychological safety, organisational culture and governance frameworks.

Quality gates

The tranche process protects the work.

Platform-led research, client-initiated requests and commissioned research all move through the same tranche process. The tranches are quality and methodology gates, not just billing stages.

0

Fit assessment

Checks public-interest relevance, organisational focus, evidence pathway, privacy and whether another pathway is more appropriate.

1

Evidence check

Tests whether public signals exist and whether they are sufficient to justify deeper benchmark analysis.

2

Research development

Builds the source register, timeline and information-gap plan, including OIA, LGOIMA or FOI drafts where relevant.

3

Benchmark analysis

Maps classified signals against academic frameworks, good-practice indicators and accountability questions.

4

Right of response

Provides the named organisation a fair response window before publication where threshold conditions are met.

5

Publication

Publishes and distributes comparative outputs, with confidence ratings, once all gates are passed.
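The tranche sequence above can be read as a series of pass/fail gates, where a failed gate halts the work. A minimal sketch of that logic, in Python; the gate names, `Request` fields and checks here are illustrative assumptions, not Observed's actual implementation:

```python
# Hypothetical sketch of the tranche gating sequence.
# Field names and gate checks are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class Request:
    public_interest: bool            # tranche 0: fit assessment
    organisational_focus: bool       # tranche 0: fit assessment
    public_signals: list = field(default_factory=list)  # tranche 1
    conflicts: bool = False          # conflict register
    response_window_closed: bool = False  # tranche 4: right of response

GATES = [
    ("fit_assessment",    lambda r: r.public_interest and r.organisational_focus),
    ("evidence_check",    lambda r: len(r.public_signals) > 0),
    ("conflict_register", lambda r: not r.conflicts),
    ("right_of_response", lambda r: r.response_window_closed),
]

def passes_all_gates(request: Request) -> bool:
    """A request advances tranche by tranche; any failed gate halts it."""
    return all(check(request) for _, check in GATES)
```

The point of the sketch is the ordering discipline: publication is the last step, reachable only when every earlier gate has returned true.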

Governance by design

Responsible analysis needs hard boundaries.

The model is designed to be fair, transparent and defensible. AI assists collection and comparison. The human remains the publisher.

Public evidence only

Every signal must be publicly available, lawfully accessible and attributable. The public evidence boundary is absolute.

Suppression threshold

Named-organisation findings require signals from at least three independent source types before publication is considered.

Conflict register

Observed maintains a written conflict register and will not publish where a conflict exists within the defined window.

Correction process

Published findings can be corrected, updated or withdrawn where new evidence or factual error requires it.
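The suppression threshold above is a simple countable rule: named-organisation findings need signals from at least three independent source types before publication is even considered. A minimal sketch, assuming signals are tagged with a source-type label (the labels here are assumptions drawn from the collection step, not a fixed taxonomy):

```python
# Sketch of the suppression-threshold rule: count distinct,
# independent source types, not raw signal volume.
MIN_INDEPENDENT_SOURCE_TYPES = 3

def meets_suppression_threshold(signals):
    """signals: iterable of (source_type, detail) pairs."""
    source_types = {source_type for source_type, _ in signals}
    return len(source_types) >= MIN_INDEPENDENT_SOURCE_TYPES
```

Note the design choice: ten signals from a single review platform still count as one source type, so volume alone never clears the threshold.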

Outputs built for accountability, not outrage.

Observed can produce organisation-specific analysis reports, sector trend reports, evidence checks, information-gap plans and publication-ready content. Each output remains comparative, source-cited and benchmarked.

  • Organisation-specific analysis reports comparing public signals against research frameworks.
  • Sector trend reports identifying patterns across sectors, organisation types or issue areas.
  • Publication and distribution packs tied to approved research outputs and publication gates.
  • Client-initiated research where requests pass the same screening, evidence and human-review process.

The language discipline is non-negotiable.

  • Use this: “Public signals align with research indicators of...”
    Never this: “This organisation has a bullying problem.”
  • Use this: “The evidence raises a question about...”
    Never this: “This proves misconduct.”

Request analysis

Have a public-interest concern that deserves structured analysis?

Observed accepts selected requests involving organisational behaviour, workplace harm, public trust, governance weakness, public funding accountability or misleading public claims. Every request is screened before analysis begins.

Observed does not provide legal advice and does not act as a private investigator. All outputs are based on publicly available, lawfully accessible information, academic benchmark comparison and human review.