Methodology

How we build each report

Each review follows the same structured process so reports are comparable across companies.

Sources we triangulate

We pull signals from Trustpilot, Sitejabber, ComplaintsBoard, Reviews.io, the App Store, Google Play, Reddit, ExpatWoman and similar regional forums, news outlets, regulatory filings, and the company's own help/support documentation. We flag platforms where review patterns appear manipulated: sudden bursts of 5-star ratings, templated language, or clusters of reviews that repeatedly name the same support agents.
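The manipulation patterns above can be sketched as simple heuristics. This is purely illustrative; the function names and the fixed thresholds are assumptions for the sketch, not our screening pipeline, which compares against each platform's baseline volume.

```python
from collections import Counter

def five_star_burst_days(reviews, threshold=10):
    """Flag days with an unusual spike of 5-star reviews.

    `reviews` is a list of (day, rating) pairs; `threshold` is a
    hypothetical cutoff standing in for a per-platform baseline.
    """
    per_day = Counter(day for day, rating in reviews if rating == 5)
    return sorted(day for day, n in per_day.items() if n >= threshold)

def templated_clusters(texts, min_dupes=3):
    """Group near-identical review texts after whitespace/case
    normalization; repeated boilerplate suggests templated reviews."""
    norm = Counter(" ".join(t.lower().split()) for t in texts)
    return [t for t, n in norm.items() if n >= min_dupes]
```

In practice these signals only prompt a closer manual look at a platform; a flagged pattern is never treated as proof on its own.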

What we extract

Each report covers ratings & reputation, buyer perspective, seller perspective (for marketplaces), category-specific feedback (where relevant), customer service, trust & safety / scam patterns, community discussion, news & corporate timeline, competitor comparison, and a final verdict.
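The sections above amount to a fixed report schema, which is what makes reports comparable across companies. A minimal sketch, with field names of our choosing (the actual internal structure is an assumption here):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Report:
    """One ReviewLens report; every company gets the same fields."""
    ratings_reputation: str = ""
    buyer_perspective: str = ""
    seller_perspective: Optional[str] = None   # marketplaces only
    category_feedback: Optional[str] = None    # where relevant
    customer_service: str = ""
    trust_safety: str = ""
    community_discussion: str = ""
    news_timeline: str = ""
    competitor_comparison: str = ""
    verdict: str = ""
```

Optional fields stay empty rather than disappearing, so two reports can always be read side by side section for section.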

Verdict labels

Our verdict is a curated synthesis, not a simple average. We assign one of five labels, Excellent, Good, Mixed, Poor, or Avoid, based on the totality of the evidence. A high app-store rating doesn't override well-documented transactional friction on independent review sites, and vice versa.
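A toy rule captures the idea that no single channel can dominate: the final label can sit at most one step above the weakest channel, whatever the average says. This is an illustration of the principle, not our actual scoring logic.

```python
LABELS = ["Avoid", "Poor", "Mixed", "Good", "Excellent"]

def synthesize(channel_labels):
    """Combine per-channel labels so one strong channel can't
    outvote documented problems elsewhere (toy rule)."""
    idx = [LABELS.index(label) for label in channel_labels]
    capped = min(idx) + 1                    # at most one step above the worst
    averaged = round(sum(idx) / len(idx))    # naive average, for contrast
    return LABELS[min(capped, averaged)]
```

So a company rated "Excellent" on the app stores but "Poor" on independent review sites lands at "Mixed" rather than "Good", which is the behavior the paragraph above describes.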

AI-assisted research

Some reports are generated with the assistance of a large language model (Anthropic's Claude) using a templated research prompt that performs structured web search and extraction. AI-assisted reports are clearly labeled in our admin records, and every source cited can be cross-checked.

What we don't do

We don't accept payment to alter, remove, or skew reports. We don't reproduce review text verbatim; we paraphrase to respect the original publishers' copyright. And we don't claim our score is the only signal that matters; it's a starting point.