Parasuraman and Manzey’s 2010 review consolidated the evidence: humans systematically under-verify the output of automated systems, even when they have explicit incentives to check. The more confident the system’s surface presentation, the less verification occurs.
LLMs exhibit high surface confidence by default. They write fluent, complete-sentence prose whether the underlying answer is right or wrong; fluency is decoupled from correctness, so a hallucinated answer reads exactly like a true one. The shape of their output is the shape of authority, and automation bias makes the learner accept what looks authoritative.
Fluera’s AI never presents output in a tone of authority. Ghost Map surfaces uncertainty visually. Socratic responses are framed as prompts, not pronouncements. The product’s explicit stance is to verify before you trust, a deliberate counterweight to the bias the technology’s style invites.
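To make the stance concrete, here is a minimal sketch of what a response-framing layer could look like: attach a visible uncertainty cue to every answer and recast the declarative answer as a prompt for the learner to evaluate. Everything here is an illustrative assumption, not Fluera's actual implementation; the `ModelOutput`, `frame`, and `uncertaintyLabel` names are hypothetical.

```typescript
// Hypothetical sketch; these names are illustrative, not Fluera's real API.

interface ModelOutput {
  text: string;        // raw model answer
  confidence: number;  // 0..1, e.g. derived from token log-probabilities
}

interface FramedResponse {
  prompt: string;           // Socratic framing shown to the learner
  uncertaintyLabel: string; // visible uncertainty cue, never hidden
}

// Map numeric confidence onto an explicit learner-facing label,
// so low confidence is surfaced rather than smoothed over.
function uncertaintyLabel(confidence: number): string {
  if (confidence < 0.5) return "low confidence: verify this yourself";
  if (confidence < 0.8) return "moderate confidence: worth double-checking";
  return "high confidence: still verify before you trust";
}

// Reframe a declarative answer as a prompt: the learner is asked to
// evaluate the claim, not to accept it.
function frame(output: ModelOutput): FramedResponse {
  return {
    prompt: `One possible answer: "${output.text}". What evidence would confirm or refute it?`,
    uncertaintyLabel: uncertaintyLabel(output.confidence),
  };
}

const framed = frame({ text: "The derivative of x^2 is 2x", confidence: 0.93 });
console.log(framed.prompt);
console.log(framed.uncertaintyLabel);
```

Note the design choice in the sketch: even the highest confidence band still carries a verification nudge, because the point is to keep the checking habit intact rather than to train the learner that a label can be trusted outright.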