Our Methodology
How we turn complex care-home evidence into clear decisions for families
Feel calmer. Decide with confidence.
Choosing care is emotional, urgent, and expensive. This methodology is designed to reduce decision regret for families by turning complex evidence into clear, practical next steps.
- Clear language, not technical noise
- Honest confidence labels for every key claim
- Practical visit checks before commitment
The information exists. Reliable judgement usually does not.
Most families can find ratings, prices, and reviews. The hard part is knowing what still matters today, what conflicts, and what needs checking on a visit. We built this methodology to close that confidence gap.
Raw data is fragmented
Important signals sit across different systems and are rarely interpreted together.
Some data is outdated by nature
Point-in-time records can lag behind day-to-day reality.
Unknowns are often hidden
Many services present missing data as certainty. We do not.
What we evaluate for every home
Each home is assessed across six decision pillars:
Care Quality Trajectory
Regulatory quality history, not just a single snapshot rating.
Operational and Financial Resilience
Signals that indicate stability, continuity, and potential stress.
Clinical and Care-Fit Match
How well care capabilities align with the needs you told us matter.
Cost and Funding Context
Fee positioning against local benchmarks and relevant funding pathways.
Workforce and Family Experience
Patterns from workforce and community sentiment signals.
Location and Daily Living Practicality
Practical context around access, local environment, and day-to-day life.
How the evidence engine works
Every report runs through the same five-stage evidence workflow, with quality controls at each step:
1. Collect
We ingest independent regulatory, operational, financial, and community signals.
2. Reconcile
Records are matched across provider, location, and ownership identifiers.
3. Stress-test
Conflicts, anomalies, and unusual changes are automatically flagged before scoring.
4. Score
Decision pillars are weighted transparently against your priorities and care context.
5. Review
High-impact or low-confidence outputs go through additional QA checks before publication.
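For readers who think in code, the five-stage workflow above can be pictured as a simple pipeline. The Python below is an illustrative sketch only: every function, field name, and rule is hypothetical, chosen to show the shape of the flow rather than any production logic.

```python
def collect(raw):
    """Stage 1: ingest signals, keeping only those with a known source."""
    return [s for s in raw if s.get("source")]

def reconcile(evidence):
    """Stage 2: match records to a home by identifier."""
    by_home = {}
    for s in evidence:
        by_home.setdefault(s["home_id"], []).append(s)
    return by_home

def stress_test(by_home):
    """Stage 3: flag conflicts before scoring (here: two different
    values reported for the same metric)."""
    flags = {}
    for home, signals in by_home.items():
        seen, conflicts = {}, []
        for s in signals:
            if s["metric"] in seen and seen[s["metric"]] != s["value"]:
                conflicts.append(s["metric"])
            seen[s["metric"]] = s["value"]
        flags[home] = conflicts
    return by_home, flags

def score(staged, weights):
    """Stage 4: weight signals transparently against stated priorities."""
    by_home, flags = staged
    scores = {h: sum(weights.get(s["metric"], 0) * s["value"] for s in sigs)
              for h, sigs in by_home.items()}
    return scores, flags

def review(scored):
    """Stage 5: route anything flagged in stress-testing to extra QA."""
    scores, flags = scored
    return {h: {"score": sc, "needs_qa": bool(flags[h])}
            for h, sc in scores.items()}
```

The point of the shape: a conflict caught at stage 3 travels with the record, so a home with contradictory evidence is never published without an extra QA pass at stage 5.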
Our confidence layer: clear about certainty, clear about gaps
Every key claim is labelled by evidence strength, so families can see what is confirmed, what is estimated, and what should be checked during visits.
Confirmed
Direct evidence is available and current enough for confident use.
Estimated
Derived from related signals when direct evidence is limited, with conservative assumptions.
Assumed
A conservative baseline is used where related evidence is unavailable.
Unknown
No reliable signal yet - we turn this into visit questions, not false certainty.
- No hidden assumptions. No artificial certainty.
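As a rough illustration, the four labels above reduce to a small decision rule: how direct is the evidence, and is it still current? The Python below is a hypothetical sketch; the field names, thresholds, and ordering are invented for clarity, not taken from our scoring system.

```python
def confidence_label(evidence):
    """Label one claim by evidence strength. `evidence` is a dict of
    illustrative fields; None means no signal was found at all."""
    if not evidence or not evidence.get("signals"):
        return "unknown"    # no reliable signal: becomes a visit question
    if evidence.get("direct") and \
            evidence.get("age_days", 10**9) <= evidence.get("valid_for_days", 0):
        return "confirmed"  # direct evidence, current enough for confident use
    if evidence.get("related"):
        return "estimated"  # derived from related signals, conservatively
    return "assumed"        # conservative baseline where related evidence is absent
```

Note the ordering: a claim only reaches "confirmed" if the evidence is both direct and inside its validity window, so stale direct evidence can never masquerade as certainty.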
Why two families can see different priorities
Your report is not a generic ranking. The underlying evidence is the same, but weighting adjusts to your care needs, risk profile, location constraints, and budget sensitivity.
- We do not declare a single winner.
- We show where each option is strongest.
- We show what to probe further on visits.
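One way to see why the same evidence can yield different reports: the pillar scores stay fixed, and only the family's weights change. A minimal sketch, with made-up pillar names, scores, and weights:

```python
PILLARS = ["care_quality", "resilience", "care_fit",
           "cost", "experience", "location"]

def weighted_view(pillar_scores, family_weights):
    """Re-weight the same pillar scores for one family's priorities.
    Unlisted pillars default to weight 1; weights are normalised so
    two families' views stay on a comparable scale."""
    total = sum(family_weights.get(p, 1) for p in PILLARS)
    return {
        home: sum(scores[p] * family_weights.get(p, 1) for p in PILLARS) / total
        for home, scores in pillar_scores.items()
    }
```

With a care-fit-heavy weighting, one home can read strongest; with a cost-heavy weighting, another does. Neither is declared "the winner": the spread of strengths itself is the output.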
How we handle changing information
Different signals change at different speeds. We manage freshness by signal class, with defined refresh cycles and validity windows.
Event-driven signals
Updated when new official events are published.
Periodic signals
Refreshed on scheduled cycles.
Continuous sentiment signals
Monitored for trend movement and volatility.
Reference context signals
Updated when official baseline datasets are revised.
- Information can change after publication. We always advise direct confirmation before commitment.
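In practice, the four signal classes above amount to different validity windows. The Python sketch below is purely illustrative: the window lengths are invented for the example and are not our actual refresh policy.

```python
from datetime import date, timedelta

# Illustrative validity windows per signal class (numbers hypothetical).
VALIDITY_DAYS = {
    "event_driven": None,  # valid until a newer official event supersedes it
    "periodic": 90,        # refreshed on a scheduled cycle
    "continuous": 14,      # sentiment monitored for trend movement
    "reference": 365,      # updated when baseline datasets are revised
}

def is_current(signal_class, observed_on, today=None):
    """Return True while a signal is inside its validity window."""
    today = today or date.today()
    window = VALIDITY_DAYS[signal_class]
    if window is None:
        return True  # event-driven: superseded only by a newer event
    return (today - observed_on) <= timedelta(days=window)
```

A month-old periodic signal would still count as current under these example windows, while a month-old sentiment reading would already be due a refresh.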
What this methodology is - and is not
- It is a decision-support framework for families.
- It is not medical, legal, or financial advice.
- It highlights risk signals; it does not predict outcomes with certainty.
- It does not hide uncertainty behind one black-box score.
- It improves decision quality; it does not replace in-person visits and direct provider checks.
Independence is a methodology choice, not a slogan
Our analysis is funded by families. We do not operate a placement-led recommendation model. This keeps our incentives aligned with a single objective: useful, evidence-based decisions for your family, with minimal conflict of interest.
- If the report is not useful, we stand behind it with a clear refund policy.
Methodology FAQ
Where does your evidence come from?
From multiple independent regulatory, operational, financial, and community evidence streams. We verify sources against each other rather than rely on single-source claims.
Why do you describe source categories but not every source by name?
We disclose evidence categories, confidence labels, and methodology controls in full. We do not publish a complete source inventory, both to protect data integrity and to make the system harder to game.
Why do some items show as estimated or unknown?
Because honesty improves decisions. We label uncertainty explicitly and turn it into practical visit checks.
Do you recommend one best home?
No. We show comparative strengths, risks, and fit so your family can choose with confidence.
How current is the information?
Freshness varies by signal class. We apply defined refresh and validation policies to keep outputs reliable and transparent.
Can we verify what the report says?
Yes. Reports include evidence context and confidence labels so families can verify critical points directly before decisions.
Ready to apply this methodology to your own shortlist?
Start with a free assessment and see how evidence-based matching works for your family context.