Book · CC BY 4.0

Trust After Thinking Machines: Silent Authority, Human Responsibility, and the Future of Legitimate Power

Author: Vick, Aaron
Published via Zenodo

Abstract

Trust After Thinking Machines follows a simple observation to its uncomfortable end: once institutions can 'think' at scale, they stop treating intelligence as a scarce human resource and start treating judgment as a renewable utility. Automation arrives quietly—as triage screens, risk scores, routing queues, ranking systems, and policy engines—reshaping who gets served, flagged, priced, or denied. The book argues this is not merely a technical shift but a shift in authority itself. The core crisis is not that automated decisions are sometimes wrong; it is that they become unanswerable. Responsibility diffuses across vendors, models, policies, and committees until no one can say with evidence who decided, why, and who owns the outcome. The book builds a practical theory of accountable authority for the agentic era, proposing three enforceable properties for legitimate machine-mediated judgment: boundedness, meaningful contestability, and identifiable responsibility—translated into infrastructure like decision provenance records, reviewable evidence trails, and disagreement architectures.

Keywords

automated decision systems; algorithmic governance; contestability; decision provenance; accountability; institutional trust; human oversight; auditability; legitimate authority; AI governance


Citation

Vick, A. (2026). Trust After Thinking Machines: Silent Authority, Human Responsibility, and the Future of Legitimate Power. Zenodo. https://doi.org/10.5281/zenodo.18682993
