Methodology

How Observed turns public information into accountable insight.

Observed uses a structured methodology to collect public signals, compare them with academic and recognised good-practice benchmarks, and identify where publicly visible organisational behaviour appears to align with risk indicators or diverge from good practice.

The method in three parts

Public signal collection. Benchmark comparison. Gap analysis.

Observed does not begin with an allegation. It begins with public evidence, applies an independent benchmark, and reports what the comparison can responsibly show.

1

Public signal collection

Observed collects publicly available signals associated with a named organisation from lawful and attributable sources. Signals are extracted and classified by source type and concern category.

The method records what the signal indicates, where it came from, when it was accessed and how it should be treated.

2

Academic benchmark comparison

Classified signals are mapped against peer-reviewed academic frameworks and recognised good-practice standards. The standard being applied exists independently of Observed.

This keeps the work comparative: the benchmark, not the platform, supplies the standard.

3

Gap analysis

The output is a structured account of where public signals appear to align with risk indicators, diverge from recognised good practice, or raise accountability questions.

Every finding is sourced, classified, benchmarked, confidence-rated, bounded by stated limitations and balanced against contradictory evidence.

Public evidence model

What counts as a public signal?

A public signal is a lawful, accessible and attributable piece of public information that may indicate something relevant about organisational conduct, governance, culture, public claims or stakeholder experience.

Signals are classified, not treated as final proof.

Media reporting is treated as reported information. Anonymous workplace reviews are treated as employee-experience signals. Organisation-owned material is treated as public positioning or self-description. Regulator and tribunal material is treated according to its legal or formal status.

Different source types carry different levels of weight. This affects confidence ratings and publication decisions.

Media reporting: Reported public information, not automatically treated as verified fact.
Workplace review platforms: Anonymous employee-experience signals from platforms such as Glassdoor and SEEK.
Government and regulator records: Public records, notices, decisions and formal accountability material.
Public funding records: Procurement, grants, contracts, public reporting and accountability material.
Organisation-owned publications: Websites, annual reports, impact reports, social media and public claims.
Stakeholder material: Public submissions, reports, parliamentary material and official information responses.
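The source hierarchy above can be sketched as a weight map consulted when rating confidence. This is a minimal illustration only: the labels and weights are assumptions for the sketch, not Observed's actual values.

```python
# Illustrative only: source-type labels and weights are assumed for this
# sketch, not Observed's actual values. Formal records carry more weight
# than anonymous signals when confidence is rated.
SOURCE_WEIGHTS = {
    "regulator_record": 1.0,      # formal legal or accountability status
    "public_funding_record": 0.9,  # procurement, grants, public reporting
    "stakeholder_material": 0.7,   # submissions, parliamentary material
    "media_reporting": 0.6,        # reported information, not verified fact
    "organisation_owned": 0.5,     # self-description and public positioning
    "workplace_review": 0.3,       # anonymous employee-experience signal
}

def signal_weight(source_type: str) -> float:
    """Return the assumed evidential weight for a classified source type."""
    return SOURCE_WEIGHTS.get(source_type, 0.0)
```

A weight of 0.0 for an unrecognised source type means an unclassified signal contributes nothing to a confidence rating until it has been classified.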

Benchmark library

The standard comes from research, not from Observed.

Observed compares public signals against a curated benchmark library of academic frameworks and recognised good-practice standards. The benchmark library is versioned, disclosed and updated as the research base develops.

Why benchmarks matter

Benchmarking avoids opinion-led commentary. It asks whether publicly visible signals align with established indicators already identified by research or recognised professional guidance.

This means Observed does not invent the standard being applied. It explains the standard, applies it consistently and states the limits of the comparison.

How benchmark fit is assessed

Not every framework applies to every matter. Benchmark selection depends on the issue, sector, source base and evidence available.

If a benchmark is not suitable, the matter may be narrowed, paused, held for further research or declined.
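The fit decision above can be sketched as a small decision function. The outcome names mirror the text; the selection logic itself is an assumption for illustration.

```python
# Illustrative sketch of benchmark-fit assessment. Outcome names mirror the
# text above; the decision logic is an assumption, not Observed's actual rules.
from enum import Enum

class FitOutcome(Enum):
    PROCEED = "proceed"
    NARROW = "narrow"                        # matter narrowed to fit the benchmark
    HOLD_FOR_RESEARCH = "hold_for_research"  # held pending further research
    DECLINE = "decline"

def assess_benchmark_fit(issue_matches: bool, sector_matches: bool,
                         evidence_sufficient: bool) -> FitOutcome:
    """Decide whether a benchmark suits the issue, sector and evidence base."""
    if issue_matches and sector_matches and evidence_sufficient:
        return FitOutcome.PROCEED
    if issue_matches and not evidence_sufficient:
        return FitOutcome.HOLD_FOR_RESEARCH
    if issue_matches:
        return FitOutcome.NARROW
    return FitOutcome.DECLINE
```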

Founding benchmark examples

Workplace harm is the first benchmark domain.

Observed’s founding focus is workplace harm and organisational culture risk. The initial benchmark library includes workplace harm, psychosocial risk, psychological safety, organisational culture and governance frameworks.

Framework or source | What it helps assess | Domain
Einarsen, Hoel, Zapf and Cooper | Antecedent conditions and indicators associated with workplace bullying cultures. | Workplace harm
Lutgen-Sandvik | Workplace bullying as an organisational process rather than only an individual incident pattern. | Workplace harm
Copenhagen Psychosocial Questionnaire | Psychosocial risk dimensions and workplace conditions associated with harm. | Psychosocial risk
Amy Edmondson | Psychological safety, speaking up, learning behaviour and team climate. | Culture
Edgar Schein | Visible artefacts, stated values and deeper assumptions in organisational culture. | Culture
WorkSafe NZ guidance | Recognised guidance for managing psychosocial risks in a New Zealand context. | Good practice
Institute of Directors New Zealand | Governance responsibility, accountability, board oversight and director good practice. | Governance

AI-assisted analysis

The AI analyst compares. The human reviewer decides what can leave the system.

Observed uses a nine-pass architecture to reduce overstatement, protect privacy, surface limitations and keep findings tied to verifiable sources.

1

Signal collection

Collects public signals by source type and assigns initial classification categories.

2

Privacy scan

Identifies and removes named individuals before analysis continues.

3

Legal risk scan

Flags active proceedings, vulnerable-person risks and republication concerns.

4

Signal classification

Maps signals to dimensions such as culture, governance, leadership and public claims.

5

Benchmark comparison

Compares classified signal clusters against the benchmark library.

6

Gap analysis

Identifies divergence from good practice and generates accountability questions.

7

Confidence and limits

Rates findings against source diversity, source strength and sample limits.

8

Contradiction check

Actively searches for balancing or contradictory public signals.

9

Human review gate

No output leaves the system without human review and sign-off.
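The nine passes above can be sketched as an ordered pipeline. Pass names follow the text; the step bodies here are placeholders, not Observed's implementation, and only the final gate carries illustrative logic.

```python
# Illustrative sketch of the nine-pass architecture. Pass ordering matters:
# privacy and legal scans run before classification, and the human review
# gate is always last. Step bodies are placeholders, not the real system.

def passthrough(state):
    """Placeholder for passes 1-8; each would transform the state in turn."""
    return state

def human_review_gate(state):
    """Pass 9: no output leaves the system without human sign-off."""
    state["publishable"] = bool(state.get("human_signed_off"))
    return state

PASSES = [
    ("signal_collection", passthrough),    # 1
    ("privacy_scan", passthrough),         # 2
    ("legal_risk_scan", passthrough),      # 3
    ("signal_classification", passthrough),# 4
    ("benchmark_comparison", passthrough), # 5
    ("gap_analysis", passthrough),         # 6
    ("confidence_and_limits", passthrough),# 7
    ("contradiction_check", passthrough),  # 8
    ("human_review_gate", human_review_gate),  # 9
]

def run_pipeline(state):
    """Apply every pass in order; the gate decides what may leave."""
    for _name, step in PASSES:
        state = step(state)
    return state
```

Because the gate runs last and sets `publishable` itself, no earlier pass can mark output publishable on its own.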

What the method never does

Observed is built to avoid overstatement.

The methodology exists to make careful comparisons, not to manufacture allegations or turn public signals into stronger claims than the evidence can support.

  • It does not infer motive, intent or knowledge from circumstantial information.
  • It does not publish named individuals as the subject of analysis.
  • It does not generate findings without specific, verifiable and attributable sources.
  • It does not assign confidence ratings higher than the source base allows.
  • It does not omit contradictory public evidence where it exists.
  • It does not bypass the human review gate.

Confidence and limitations

Confidence reflects source strength, not factual certainty.

Observed findings are rated by source diversity, source hierarchy, sample strength, contradiction checks and the limits of what public information can show.

Source diversity

Findings are stronger when signals come from multiple independent source types.

Source hierarchy

Formal records, regulator material and public decisions carry different weight from anonymous signals.

Sample strength

Thin or concentrated source pools limit what can responsibly be said.

Contradictions

Public material that moderates or contradicts the concern must be included where found.
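The four factors above can be combined into a simple rating function. The tiers, field names and thresholds here are assumptions for the sketch, not Observed's actual scoring.

```python
# Illustrative only: rating tiers, field names and thresholds are assumed
# for this sketch. Confidence reflects source strength, not factual certainty.
def rate_confidence(signals: list[dict]) -> str:
    """Rate a finding from its source base: diversity, hierarchy, sample,
    and whether contradictory public material exists."""
    source_types = {s["source_type"] for s in signals}
    diversity = len(source_types)                              # source diversity
    has_formal = any(s.get("formal_record") for s in signals)  # source hierarchy
    sample_ok = len(signals) >= 5                              # sample strength
    contradicted = any(s.get("contradicts") for s in signals)  # contradictions

    if diversity >= 3 and has_formal and sample_ok and not contradicted:
        return "higher"
    if diversity >= 2:
        return "moderate"
    return "low"
```

Note that volume alone never raises the rating: many signals of one type still score "low", because a thin or concentrated source pool limits what can responsibly be said.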

Publication threshold

Named-organisation findings are suppressed unless signals are drawn from at least three independent source types. Multiple anonymous reviews alone do not meet the threshold, regardless of volume.

This threshold protects against single-source patterns, thinly sourced findings, small-sample identification and confidence ratings that exceed what the available evidence can support.

Where the threshold is not met, a matter may be held pending further public evidence, added to a sector monitoring watchlist, used only as background context for sector-level analysis or declined.
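The threshold rule above reduces to a small check: at least three independent source types, and anonymous reviews alone never qualify, regardless of volume. Field and type names are illustrative assumptions.

```python
# Sketch of the publication threshold for named-organisation findings.
# Source-type labels are illustrative assumptions, not Observed's schema.
ANONYMOUS_TYPES = {"workplace_review"}

def meets_publication_threshold(signals: list[dict]) -> bool:
    """True only when signals span at least three independent source types;
    anonymous reviews alone never meet the threshold, regardless of volume."""
    source_types = {s["source_type"] for s in signals}
    if source_types <= ANONYMOUS_TYPES:   # anonymous-only source pool
        return False
    return len(source_types) >= 3
```

Counting distinct source types, rather than individual signals, is what protects against single-source patterns: a hundred reviews from one platform still count as one source type.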

Right of response

Publication is not automatic.

Before publishing named-organisation findings, Observed provides the subject organisation with a defined opportunity to respond where the methodology requires it.

The response is assessed before publication. Where new evidence changes the analysis, the output is updated. Where no response is received, the absence of a response may be recorded.

Publication only proceeds where the response window has closed, relevant material has been considered, legal review has been completed where triggered, proportionality has been assessed and human sign-off is complete.

The organisation is named. The allegation is not made. The academic benchmark does the talking.

Methodology disclosure

Every output should explain how it was produced.

Published outputs include a methodology disclosure that explains what sources were searched, what frameworks were applied, what the analysis can and cannot conclude, how confidence ratings work, and how human review was applied.

Source register

Outputs identify the source types used, when they were accessed and how they were classified.

Framework disclosure

Outputs explain which benchmarks were applied and why they were suitable for the analysis.

Limitations statement

Outputs state what the evidence can support, what it cannot support and where caution is required.

Ready to see how the process works in practice?

The methodology explains how Observed analyses public information. The research process shows how a concern moves through fit assessment, evidence checking, benchmark analysis, right of response and publication review.