Hasten Slowly: AI Is Reshaping How Communicators Learn Judgment
Use Came First, Standards Are Following
AI entered communication work through habit before it entered through policy. By the time many leadership teams began discussing standards, communicators were already drafting with ChatGPT, summarizing with Claude, and using AI for monitoring, analysis, and message development inside ordinary workflows. LexisNexis reported in March 2026 that generative AI had moved into routine use across more than 20 industries, with 53% of professionals saying they used it without formal approval. In PR specifically, Muck Rack’s 2026 research found 76% of professionals already using generative AI in their workflow and 82% saying it improves the quality of their work.
The adoption question has been answered by practice. The governance question is still open. That gap matters because communication has always been a judgment profession operating beneath the surface of a production function. Faster output does not change that; it raises the stakes for judgment.
Judgment Comes Before the Language Does
People outside the function see the output first: the statement, the memo, the media response, the campaign language. The thinking that makes those outputs viable stays largely out of view.
A strong communicator decides what an organization should say, when it should say it, how much candor the moment can carry, and what reaction the language will create once it reaches employees, journalists, regulators, or investors. That work depends on timing, editorial restraint, institutional memory, and a feel for consequence. Those qualities develop through revision, disagreement, and repeated exposure to pressure.
Junior communicators learn by writing weak drafts, receiving difficult edits, hearing experienced people explain why one sentence calms a situation while another inflames it, and watching what happens after language enters a live environment. Communication builds judgment by putting language into contact with consequence.
This is where the AI story becomes more serious than an efficiency story. A profession still has to decide how its people will develop that judgment once more of the early work is automated.
The Apprenticeship Problem Needs Center Stage
Harvard Business Review put the issue plainly in February 2026. As AI automates formative work, fewer people encounter the situations that once served as training grounds for judgment. Junior employees miss chances to develop it when AI absorbs the messy, repetitive tasks through which that judgment used to form.
In communication, that loss cuts close to the bone. For years, communicators learned through informal apprenticeship. They drafted background notes, revised statements, sat in review meetings, and watched better editors weigh risk, sequence, and tone. A junior communicator who once produced several uneven drafts and received detailed correction may now produce one cleaner AI-assisted draft and receive far less feedback on the reasoning beneath it. The visible output improves. The learning loop gets smaller.
A profession renews itself by passing on standards. In communication, those standards move through review culture, mentorship, and repeated contact with what language does in the real world. When the early layers of work compress too quickly, teams preserve output while weakening the bench of future editors, counselors, and leaders. That is where the real cost starts to show.
Governance Exists to Protect the Craft
Governance in communication deserves a broader definition than policy compliance. It decides whether AI use remains reviewable, teachable, and answerable to human judgment.
Ragan’s reporting from its 2026 AI Horizons event captured the distinction that matters most. Governance can operate as prohibition, which pushes tool use into the shadows, or as guardrails, which make experimentation visible and accountable. Communication teams need the second model. Responsible use depends on both flexibility and standards, and prohibition reliably produces neither.
The teams handling this well are treating AI as part of an editorial system rather than a replacement for one. They define which work benefits from acceleration and which requires closer human review. They train people to question outputs, explain revisions, and defend choices in plain language. They preserve enough friction in the process that younger practitioners still learn why one version carries authority and another creates risk.
Revision teaches restraint, editorial disagreement sharpens standards, and exposure to pressure builds the kind of judgment that holds up when consequences are real. A function built on public consequence keeps its authority by protecting those forms of learning, even as the tools improve.
Hasten Slowly, Build Judgment Intentionally
The Latin phrase festina lente, "hasten slowly," fits this moment better than most technology slogans do.
AI will keep improving. Communication teams will keep using it because the operational gains are real and the pressure to produce is constant. The harder responsibility lies in what happens alongside that adoption. Leaders now have to decide how judgment will be built in the years ahead, who will learn it, and what structures will carry it forward.
The strongest organizations will move quickly where speed genuinely helps and invest patiently where judgment still has to be formed. Communication has always asked for more than fluent language. It has asked for people who can read a situation, carry institutional context, and choose words that hold up when the pressure rises.
Speed has become the commodity. The organizations that recognize that early and build deliberately around judgment will hold their authority long after the tools have equalized everything else.