Balancing innovation and integrity: the ethics of using AI in local government communications

As artificial intelligence (AI) tools become more integrated into everyday workflows, government agencies are exploring ways to harness their potential to streamline communications, enhance public engagement, and improve service delivery.

At a Glance

Artificial intelligence tools present new avenues for government agencies to enhance public communications and service delivery

The ethical considerations surrounding AI-generated content, particularly regarding transparency and accountability, are critical for maintaining public trust

Effective integration of AI in public sector communications requires a strong emphasis on human oversight and robust internal policies

Generative AI, particularly tools that can produce written content, social media posts, and visual materials, offers clear benefits like speed, consistency, and efficiency. However, with these advantages come critical ethical considerations, especially when it comes to transparency, accountability, and public trust. As such, the ethical use of AI-generated content is not just a technical or operational issue—it’s a matter of public responsibility.

The role of government communicators

Public information officers and communications teams are often the first line of contact between agencies and the communities they serve. Their work shapes how residents understand local policies, access resources, and engage with their government. The use of AI-generated content can streamline this work, but it must be approached thoughtfully.

The Public Relations Society of America (PRSA), a leading voice in communication ethics, emphasizes that “the value of public relations lies in the trust that people place in the information they receive.” This principle is especially critical in government contexts, where misinformation—or even the perception of manipulation—can erode public trust and civic engagement.

Key ethical considerations

Transparency – One of the central tenets of ethical communication is transparency. If AI-generated content is being used in public communications, should the public know? The answer isn’t always straightforward.

The PRSA Code of Ethics urges communicators to “reveal the sponsors for causes and interests represented” and to “be honest and accurate in all communications.” While this doesn’t require an AI disclosure in every tweet or flyer, it does suggest agencies should have clear internal guidelines and be ready to explain their use of AI if asked.

Transparency also means being honest internally. Teams using AI should document how content is produced, reviewed, and approved. Disclosing AI use may not be legally required yet, but ethical governance often means going beyond the minimum.

Accountability and oversight – When AI-generated content is used, a human must still take responsibility for it. Agencies should treat AI as a support tool, not a decision-maker. Every piece of AI-generated content must be reviewed with appropriate expertise to ensure it is accurate, relevant, culturally sensitive, and compliant with public policy. This is especially important given AI’s limitations. Generative tools can invent plausible-sounding but false information (commonly called “hallucinations”), default to biased language, or miss nuances in local context. Without oversight, even a well-meaning use of AI can lead to miscommunication, confusion, or offense. PRSA reinforces this in its advocacy for “preserving the integrity of the process of communication” and holding professionals accountable for their work.

Bias and equity – AI models are trained on large datasets that reflect the content of the internet—biases and all. If not carefully managed, AI-generated messaging can unintentionally reflect or reinforce stereotypes, omit diverse voices, or fail to resonate with marginalized communities. Government communicators must ensure that all messages, whether generated by humans or AI, align with principles of equity and inclusion. This includes using community review processes, plain language standards, and cultural competency checks to evaluate whether messaging reflects and serves the full diversity of the community.

Public trust and perception – Public agencies must operate under a higher standard of trust than private companies. Any suspicion that an agency uses AI to manipulate, mislead, or “cut corners” in public communication could damage reputations and reduce community engagement. Even when AI is used ethically and effectively, perception matters. Communicators should proactively educate stakeholders—elected officials, community leaders, and the public—on how AI tools are being used, what safeguards are in place, and how content is reviewed and approved by trusted professionals.


Best practices for ethical AI use in public communication

To navigate these challenges, government agencies should consider the following best practices:

  1. Develop an AI use policy
    Establish internal guidelines that define when, how, and for what purposes AI tools may be used. Clarify expectations for human oversight, disclosure, and content review. Involve legal counsel, IT, and DEI professionals in policy development.
  2. Ensure human review and contextualization
    Never publish AI-generated content without human editing. Review for factual accuracy, tone, cultural relevance, and accessibility. Ensure content aligns with agency values, legal requirements, and the needs of diverse communities.
  3. Disclose when appropriate
    Transparency doesn’t mean labeling every post, but agencies should be ready to explain AI use if asked. In contexts like public reports, major initiatives, or contentious topics, consider including a brief note about the role of automation in content development.
  4. Prioritize accessibility and inclusion
    Use plain language and accessible formats. Evaluate content for bias and representation. Consider audience demographics and needs in content design, especially when communicating with underrepresented or historically underserved communities.
  5. Train staff and build capacity
    Ensure communication staff are trained in AI tools’ capabilities and limitations. Provide guidance on ethical use, editing techniques, and when not to use AI. Foster a culture of innovation where staff feel empowered to question and improve AI-generated content.

When used responsibly, AI can help government communicators scale their efforts, deliver timely information, and better serve their communities. But it must be used with care, intention, and a commitment to public trust.

As PRSA puts it, “Ethical practice is the most important obligation of a public relations professional.”

In the next phase of public sector innovation, ethics must lead technology. Agencies that center integrity, equity, and transparency in their AI use will not only avoid missteps but also set the standard for responsible governance in the digital age.

For more information on the ethics behind using AI in public communications, contact Gina DePinto at gdepinto@raftelis.com.
