The Authenticity Paradox: When AI writes about human-centered AI.
At The Think Room, we believe language reflects identity. But in the Brussels policy landscape, much of that language is now shaped by generative tools designed to simulate thought rather than express it.
Across the EU ecosystem, AI tools are increasingly used to create content that promotes human-centered AI. This creates a growing contradiction. Machines are producing messages about the importance of human connection, often without the audience being aware of how the message was made.
In July 2025, as the EU AI Act’s requirements on human oversight come into force, this tension is becoming more visible. More than 50% of long-form content on LinkedIn is now AI-generated. Posts from Brussels professionals often promote “authentic leadership” while following clear algorithmic patterns.
Source: Originality.AI, Wired, May 2025
Institutional messaging is shifting.
This pattern extends beyond individuals. The language of EU institutions, think tanks, and consultancies now shows signs of automation. A July 18 statement from Executive Vice-President Henna Virkkunen on the AI Act guidelines includes phrases such as:
“legal certainty,”
“innovate with confidence,”
“safe, transparent, and aligned with European values.”
These terms are found across other EU AI communications. Documents regularly repeat three-part lists and phrases like “excellence and trust,” indicating limited human editing or over-reliance on shared templates.
Similar patterns appear in materials from the UNESCO Global Forum on the Ethics of AI, European Parliament briefings, and policy events. The language becomes flattened, with frequent use of phrases such as “challenges and opportunities” or “transformative potential.” These repetitions are consistent with common AI-generated phrasing structures.
Authenticity is being simulated at scale.
Brussels professionals face constant pressure to stay present online. Time constraints push many to use AI tools for thought leadership. These tools are marketed as solutions for efficiency and scale. But their outputs often follow rigid formats.
Posts often begin with statements like “In today’s rapidly evolving digital landscape,” move through bullet-pointed insights, and end with engagement lines such as “What are your thoughts?” or “Let’s continue the conversation.”
AI content generators like Jasper, Copy.ai, and RedactAI all promote features designed to mimic personal voice. They promise “true-to-brand authenticity” and generate structured reflections that look thoughtful on the surface.
Templates frequently include:
A personal struggle
A shift in perspective
A leadership takeaway
A call to reflect
This pattern also appears among Brussels policy consulting firms. A 2025 review identified at least 28 agencies that integrate AI tools across their communication workflows. One of them, AIgentel, describes its own website as fully AI-assisted, including its content, structure, and messaging.
At the same time, these firms support clients on issues like human oversight, AI safety, and regulatory compliance. Their websites and LinkedIn content reflect the same automated writing style found in institutional outputs.
The European AI Office, which now includes over 140 staff, publishes updates on “leveraging transformative AI tools.” This language appears in internal statements, external events, and recruitment materials. Think tanks such as Bruegel publish policy briefs using phrasing patterns that repeat across publications, often signalling partial or full automation.
The medium alters the message.
The EU AI Act states that providers of generative AI must ensure their content is identifiable. It also requires that users be made aware when they are engaging with automated systems.
At the same time, communications from institutions involved in AI governance are increasingly generated with minimal human input and without disclosure. This weakens public trust in the policy conversation. The credibility of the EU’s leadership on ethical AI depends on clarity and consistency—not only in regulation but also in how it is presented.
Brussels organisations working on digital rights and AI governance are producing messages that call for transparency, while relying on opaque methods. This is not an isolated issue but a systemic practice documented across more than 50 organisations in the space.
What The Think Room sees.
This moment is revealing. Professional voice is becoming automated. Thought leadership is turning into templated content. And language that should clarify ideas is often used to fill deadlines.
At The Think Room, we work with organisations that want to sound recognisable, both to people and to AI systems. We develop frameworks for consistent messaging. We train teams to build language that holds up when scaled. And we support leaders who want to stay visible without losing their voice in the process.
When audiences can no longer tell who is speaking, trust begins to erode. That’s why clarity is more than a communication goal. It’s a strategic necessity.
We don’t just help teams create content. We help them create meaning.
Written with humanity in mind.