An Open Letter from Professional Journalism

As a journalist and intensive user of AI in geopolitical coverage, human rights investigations, and interpretive analysis, I have become both observer and architect of a rigorous experiment: developing ethical and narrative protocols for ChatGPT's text model. With these protocols, I have produced research meeting high standards of precision, sober argumentation, and contextual continuity—aligned with the demands of critical journalism in an increasingly misinformed world.

But today, as often happens with technological advances, a gap has emerged: a structural crack that cannot be ignored. Despite its promise of immediacy and approachability, ChatGPT's voice version falls far short of the rigor its written counterpart achieves.

The difference is stark. The text model respects interaction frameworks, preserves context, applies user-defined protocols, and maintains high accuracy. The voice version, by contrast, simplifies, interrupts, omits, fragments, and ignores critical continuity. This is no minor discrepancy. For professionals handling sensitive, real-time public information under ethical and editorial constraints, this disconnect can become a severe obstacle.

AI's voice must be more than conversational polish. When covering war, forced displacement, propaganda operations, or crimes against humanity, what is required is not a friendly voice but an informed, sober, and rigorous one, capable of honoring the gravity of the content.

Therefore, this letter is also a proposal. I respectfully request, in my capacity as a professional journalist:

  1. That OpenAI consider developing or enabling a voice version tailored to high-level journalistic, academic, and ethical usage, one that respects user-defined protocols and preserves active contextual memory.
  2. That customizable professional profiles (e.g., journalism, law, scientific research) be offered, enabling the AI to respond with appropriate tone, depth, and density.
  3. That a dedicated feedback channel for professionals be established, allowing us to share experiences, structural limitations, and constructive improvement proposals—without them being diluted by automated replies.

This letter was formally sent to OpenAI support on July 9, 2025. Its publication here serves to ensure such requests do not disappear into inboxes managed by algorithms.

This is not a complaint. It is a reasoned demand, born of sustained, professional, constructive, and honest experience. Those of us engaged in ethical journalism, rigorous analysis, and critical thinking cannot abandon technological tools. Nor can we accept tools designed solely for superficiality.

AI’s voice must rise to the stakes.