Artificial intelligence tools are presenting new avenues for government agencies to enhance public communications and service delivery.
The ethical considerations surrounding AI-generated content, particularly regarding transparency and accountability, are critical for maintaining public trust.
Effective integration of AI in public sector communications requires a strong emphasis on human oversight and robust internal policies.
Generative AI, particularly tools that can produce written content, social media posts, and visual materials, offers clear benefits like speed, consistency, and efficiency. However, with these advantages come critical ethical considerations, especially when it comes to transparency, accountability, and public trust. As such, the ethical use of AI-generated content is not just a technical or operational issue—it’s a matter of public responsibility.
Public information officers and communications teams are often the first line of contact between agencies and the communities they serve. Their work shapes how residents understand local policies, access resources, and engage with their government. The use of AI-generated content can streamline this work, but it must be approached thoughtfully.
The Public Relations Society of America (PRSA), a leading voice in communication ethics, emphasizes that “the value of public relations lies in the trust that people place in the information they receive.” This principle is especially critical in government contexts, where misinformation—or even the perception of manipulation—can erode public trust and civic engagement.
Transparency – One of the central tenets of ethical communication is transparency. If AI-generated content is being used in public communications, should the public know? The answer isn’t always straightforward.
The PRSA Code of Ethics urges communicators to “reveal the sponsors for causes and interests represented” and to “be honest and accurate in all communications.” While this doesn’t require an AI disclosure in every tweet or flyer, it does suggest agencies should have clear internal guidelines and be ready to explain their use of AI if asked.
Transparency also means being honest internally. Teams using AI should document how content is produced, reviewed, and approved. Disclosing AI use may not be legally required yet, but ethical governance often means going beyond the minimum.
Accountability and oversight – When AI-generated content is used, a human must still take responsibility for it. Agencies should treat AI as a support tool, not a decision-maker. Every piece of AI-generated content must be reviewed by someone with appropriate expertise to ensure it is accurate, relevant, culturally sensitive, and compliant with public policy. This is especially important given AI's limitations. Generative tools can invent information (so-called “hallucinations”), default to biased language, or miss nuances in local context. Without oversight, even a well-meaning use of AI can lead to miscommunication, confusion, or offense. PRSA reinforces this in its advocacy for “preserving the integrity of the process of communication” and holding professionals accountable for their work.
Bias and equity – AI models are trained on large datasets that reflect the content of the internet—biases and all. If not carefully managed, AI-generated messaging can unintentionally reflect or reinforce stereotypes, omit diverse voices, or fail to resonate with marginalized communities. Government communicators must ensure that all messages, whether generated by humans or AI, align with principles of equity and inclusion. This includes using community review processes, plain language standards, and cultural competency checks to evaluate whether messaging reflects and serves the full diversity of the community.
Public trust and perception – Public agencies must operate under a higher standard of trust than private companies. Any suspicion that an agency uses AI to manipulate, mislead, or “cut corners” in public communication could damage reputations and reduce community engagement. Even when AI is used ethically and effectively, perception matters. Communicators should proactively educate stakeholders—elected officials, community leaders, and the public—on how AI tools are being used, what safeguards are in place, and how content is reviewed and approved by trusted professionals.
To navigate these challenges, government agencies should consider the following best practices:
- Establish clear internal policies documenting how AI-generated content is produced, reviewed, and approved.
- Require human review of every piece of AI-generated content before publication, drawing on appropriate subject-matter expertise.
- Apply equity and inclusion safeguards, including plain language standards, cultural competency checks, and community review processes.
- Be prepared to explain the agency’s use of AI if asked, and proactively educate elected officials, community leaders, and the public about the safeguards in place.
When used responsibly, AI can help government communicators scale their efforts, deliver timely information, and better serve their communities. But it must be used with care, intention, and a commitment to public trust.
As PRSA puts it, “Ethical practice is the most important obligation of a public relations professional.”
In the next phase of public sector innovation, ethics must lead technology. Agencies that center integrity, equity, and transparency in their AI use will not only avoid missteps but also set the standard for responsible governance in the digital age.
For more information on the ethics behind using AI in public communications, contact Gina DePinto at gdepinto@raftelis.com.