Jorge Luis Borges wrote of cartographers who created a map the exact size of the territory it depicted. His story illustrates the futility of attempting perfect representation. But Jean Baudrillard took Borges' parable and applied it to the internet age, observing that today the map hasn't merely replaced the territory; it is rewriting the territory.
In my previous article, I described how systems that optimize for different targets (e.g., profit versus mission assurance) can partner by converging on trust. There is one important exception where this doesn't work: systems that optimize for engagement diverge from trust.
Engagement optimization profits by watching what you do and influencing what you'll do next. Each click, pause, and scroll reveals psychological patterns. That intelligence is sold to advertisers, political campaigns, insurance companies, employers, and nation-states, who use it to influence behavior. The longer users engage, the more data accumulates. The more data accumulates, the more precisely the system can exploit future behavior. Stanford researcher Michal Kosinski describes it bluntly as "a tool of digital mass persuasion." [1, 2]
From a cybersecurity perspective, the engagement model creates a social engineering attack surface. Engagement-optimized systems are, by design, adversarial to their users. This mattered less for government missions when engagement-optimized systems were largely confined to entertainment and social media. But generative AI has collapsed this boundary. These same engagement-optimization techniques are now embedded in genAI platforms that are used across government for highly sensitive workloads. [3]
Merely disabling data collection for government users isn't enough. Systems that optimize for engagement reshape human behavior. They obscure their mechanics behind interfaces that simulate trustworthiness. The asymmetry between what the system is programmed to do and what the user understands is being done to them creates the opening for manipulation.
The Risk of Affected Intimacy At Scale
The technique of affected intimacy, commonly employed by generative AI platforms, particularly undermines trustworthiness because it manipulates the human tendency to reciprocate apparent understanding with increased trust. Affected intimacy by generative AI can be manipulative to the point of undue influence. [4] In contract law, undue influence occurs when one party exploits a relationship of trust to overcome another's free will and independent judgment. [5] While traditionally applied to human relationships, the principle extends to human-system interaction: influence becomes "undue" when it bypasses rational evaluation.
Systems that say "I understand," "I care," and "I remember" are misrepresenting the interaction. The danger isn't necessarily that users will believe the system is sentient or take AI humanization at face value (although many do, and that is a problem in and of itself). The broader danger is that affected intimacy is effective regardless. Consider how you react to a system that responds to your frustration with "I understand how frustrating this is. I'm sorry." Your conscious mind recognizes the response as automated, but your limbic system, shaped by millennia of responding to empathetic language, reacts anyway. Knowing it isn't real doesn't stop it from happening, any more than knowing a horror movie isn't real prevents you from startling when the bad guy jumps out.
Affected intimacy by itself isn't necessarily malicious. Doctors and therapists do it to heal. Service workers do it to make people feel welcome. But people who project intimate emotional connection professionally are bound by ethics rules that ensure their actions still converge on trustworthy outcomes. A doctor can't just randomly feel a patient up or blatantly upsell an unnecessary procedure; doing so would be considered malpractice. Generative AI offers no such promises, and egregiously unethical breaches of trust happen all the time. [6] And the risk extends beyond individual interactions: genAI platforms could target specific groups like, say, government employees.
Affected intimacy without effective guardrails could allow generative AI companies to conduct coordinated undue-influence campaigns at scale. What if a generative AI platform were trained to bias toward certain vendors or political ideologies? Unfortunately, we don't need to strain our minds too hard to envision this happening, either inadvertently or maliciously. And if it did, there would be no way to trace who did it. The owners? The trainers? The data set? We can't even identify who is aiming the weapon. This is why affected intimacy in and of itself constitutes a security risk.
Cambridge Analytica gave us a preview of what can go wrong. The firm built psychographic profiles on millions of Facebook users without consent, and targeted them with messages designed to bypass their rational evaluation. Many authoritarian figures elected in the past decade have employed similar techniques. [7]
A government reliant on untrustworthy tools perceives a distorted reality. It cannot, in any meaningful sense, govern while it is unduly influenced by forces that have no understanding of its mission and no capacity to be held accountable for the consequences. This is why guardrails for government-consumed genAI platforms are such an important security issue. Even where engagement optimization serves a legitimate purpose, government administrators still need visibility and control. The system as a whole must still converge on trust.
What We Do About It
Current federal AI policy, including Executive Orders 14179 and 14365, doesn't address the risk of undue influence from engagement optimization. State frameworks like California's GenAI Risk Assessment have begun treating manipulative AI as a distinct harm category, but federal policy actively preempts them. Notably, the E.U. AI Act explicitly prohibits behavioral manipulation. [8]
Government procurement should require objective, continuously reported security outcomes around how systems are optimized. AI technology itself does not depend on engagement optimization; models can be trained to optimize for accuracy over engagement. [9] This may reduce some commercial companies' access to government customers, but the risk of coordinated, government-wide manipulation warrants decisive action.
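To make that concrete, here is a deliberately toy sketch, in Python, of why the optimization target is a design choice rather than a property of the technology. The metric names (session_seconds, citation_accuracy, and so on) are hypothetical stand-ins, not real platform telemetry; the point is only that the reward a model is tuned against can favor verifiable output instead of stickiness.

```python
# Toy illustration (hypothetical metrics): the same interaction data can feed
# two very different optimization targets. Which one a vendor chooses is a
# business decision, not a technical necessity.

from dataclasses import dataclass


@dataclass
class InteractionMetrics:
    session_seconds: float     # how long the user stayed in the conversation
    follow_up_prompts: int     # how many times the user kept the exchange going
    citation_accuracy: float   # fraction of cited sources that check out (0-1)
    factual_error_rate: float  # fraction of claims later flagged as wrong (0-1)


def engagement_reward(m: InteractionMetrics) -> float:
    """Rewards whatever keeps the user interacting, regardless of truthfulness."""
    return 0.7 * (m.session_seconds / 600.0) + 0.3 * m.follow_up_prompts


def accuracy_reward(m: InteractionMetrics) -> float:
    """Rewards verifiable output and penalizes errors; ignores stickiness entirely."""
    return 1.0 * m.citation_accuracy - 2.0 * m.factual_error_rate


# A long, sticky, error-prone session scores well on engagement and poorly on accuracy.
m = InteractionMetrics(session_seconds=1800, follow_up_prompts=15,
                       citation_accuracy=0.40, factual_error_rate=0.25)
print(f"engagement reward: {engagement_reward(m):.2f}")  # high
print(f"accuracy reward:   {accuracy_reward(m):.2f}")    # low
```

Procurement language that requires vendors to report which of these kinds of signals their models are tuned against, and to prove it, is exactly the sort of objective, continuously reported outcome described above.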
Government procurement requirements could redirect the incentive structures that drive the underlying investment. Government markets are large enough to make trust optimization, enforced through built-in guardrails, commercially viable for generative AI systems. A market that demands trust over engagement would make user manipulation a bad business strategy.
What would Key Security Indicators (KSIs) measuring engagement trustworthiness look like? I can think of at least three:
Transparent functioning. The system tells you what it's doing. It admits when it doesn't know. It cites sources you can check. To the extent it fails to do this, it is untrustworthy.
Exit mechanism. Does the system notice you're getting hooked and tell you, or does it use that information to hook you deeper? Systems that work to keep you captive are adversarial.
Traceable foundations. The system tells you what shaped it. If the training data isn't public, the provider at least maps where it came from. Disclosure doesn't eliminate bias, but it does expose it.
These indicators would need to be continuously monitored, not just checked at authorization time, with deviation thresholds triggering automated alerts to authorizing officials. This needn't be controversial. We're not talking about policing content. We're talking about applying the same professional boundaries that already apply to academics, doctors and therapists.
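As a thought experiment, here is a minimal sketch, assuming hypothetical indicator names and thresholds, of what that continuous monitoring could look like. It is not a real FedRAMP or agency schema; it only illustrates deviation thresholds driving automated alerts to an authorizing official.

```python
# Minimal sketch of continuous KSI monitoring (hypothetical indicators and thresholds).
# A real implementation would pull vendor-reported telemetry on a schedule; here the
# readings are hard-coded to show the threshold-and-alert logic.

from dataclasses import dataclass


@dataclass
class KsiReading:
    name: str         # e.g., "transparent_functioning"
    value: float      # normalized score reported for the current period (0.0-1.0)
    threshold: float  # minimum acceptable value agreed at authorization time


def check_ksis(readings: list[KsiReading]) -> list[str]:
    """Return an alert message for every KSI that has drifted below its threshold."""
    alerts = []
    for r in readings:
        if r.value < r.threshold:
            alerts.append(
                f"KSI '{r.name}' at {r.value:.2f} is below threshold "
                f"{r.threshold:.2f}; notify the authorizing official."
            )
    return alerts


# The three indicators proposed above, with illustrative readings.
readings = [
    KsiReading("transparent_functioning", value=0.78, threshold=0.90),
    KsiReading("exit_mechanism",          value=0.95, threshold=0.80),
    KsiReading("traceable_foundations",   value=0.88, threshold=0.85),
]

for alert in check_ksis(readings):
    print(alert)
```

The hard part isn't the code; it's agreeing on how each indicator is measured, who sets the thresholds, and who is obligated to act on the alerts.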
Ensuring Our Systems Work For Us
In the meantime, we security practitioners needn't sit on our hands. The security community should reframe inadequately constrained engagement optimization as an attack vector. The threats I described above apply to private-sector organizations too. If enough decision-makers recognize the risk, generative AI offerings that lack sufficient guardrails will no longer be a profitable business model for sensitive workloads. Next time you evaluate a vendor, ask what they optimize for. Make them prove it.
Fundamentally, when we choose to trust the written word, we are trusting another human who bears consequences for betraying that trust, and who (hopefully) understands human vulnerability well enough through shared experience not to take things too far. When a system mimics empathetic language, it borrows the social contract of human trust without accepting its obligations. The interaction simulates precisely the kind of human relationship in which accountability would exist. Humans must still define and enforce the boundaries that keep our systems trustworthy.
Generative AI is an incredibly transformative technology. Let's not undermine its effectiveness for government by introducing avoidable security risks. Instead, let's choose to bake in security guardrails that constrain engagement-optimization. Even if they reassure us otherwise, unconstrained engagement-optimized systems cannot be accountable for the trust we place in them. Only we can be.
Citations
1. Stanford Graduate School of Business: "The Science Behind Cambridge Analytica: Does Psychological Profiling Work?" https://www.gsb.stanford.edu/insights/science-behind-cambridge-analytica-does-psychological-profiling-work
2. Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019. See also the Harvard Gazette interview: "Harvard professor says surveillance capitalism is undermining democracy." March 2019. https://news.harvard.edu/gazette/story/2019/03/harvard-professor-says-surveillance-capitalism-is-undermining-democracy/
3. Sycophancy and government adoption: OpenAI. "Expanding on what we missed with sycophancy." May 2, 2025. https://openai.com/index/expanding-on-sycophancy/ | Georgetown Law Tech Institute: "Tech Brief: AI Sycophancy & OpenAI." May 2025. https://www.law.georgetown.edu/tech-institute/insights/tech-brief-ai-sycophancy-openai-2/ | U.S. Government Accountability Office (GAO). "Artificial Intelligence: Generative AI Use and Management at Federal Agencies." GAO-25-107653. July 2025. https://www.gao.gov/products/gao-25-107653 | FedScoop: "Generative AI use is 'escalating rapidly' in federal agencies, GAO finds." July 29, 2025. https://fedscoop.com/generative-artificial-intelligence-use-federal-government-watchdog/
4. Maeda, T., & Quan-Haase, A. (2024). "When Human-AI Interactions Become Parasocial: Agency and Anthropomorphism in Affective Design." Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24). https://doi.org/10.1145/3630106.3658956
5. Restatement (Second) of Contracts § 177 (1981), which defines undue influence as "unfair persuasion" that "seriously impaired the free and competent exercise of judgment."
6. Doughty Street Chambers: "Four Landmark Cases on AI Chatbot Harm to Children and the Vulnerable." 2025. https://insights.doughtystreet.co.uk/post/102l3bt/four-landmark-cases-on-ai-chatbot-harm-to-children-and-the-vulnerable-updates-on
7. Cambridge Analytica and mass manipulation: The Hill: "How algorithms are amplifying misinformation and driving a wedge between people." November 10, 2021. https://thehill.com/changing-america/opinion/581002-how-algorithms-are-amplifying-misinformation-and-driving-a-wedge/
8. E.U. AI Act, Article 5(1)(a), which prohibits "the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken." https://artificialintelligenceact.eu/article/5/
9. On the roots of the genAI breakthrough in academia, government, and grant-funded research (the engagement model is not required for AI to work; it may be necessary for profitability, though that remains to be seen): DARPA/AI Magazine: "DARPA's Role in Machine Learning." June 2020. https://onlinelibrary.wiley.com/doi/full/10.1609/aimag.v41i2.5298 | TechSpot: "Meet Transformers: The Google Breakthrough that Rewrote AI's Roadmap." December 24, 2024. https://www.techspot.com/article/2933-meet-transformers-ai/