Use of Generative AI by Local Governments
Editor's note: On August 8, 2023, Washington Technology Solutions (WaTech), the state's consolidated technology services agency, adopted Interim Guidelines for Purposeful and Responsible Use of Generative Artificial Intelligence (AI) in Washington State Government.
Generative artificial intelligence (AI) is a type of AI that uses algorithms and large data sets, including (among other sources) information available on the internet, to produce text, images, computer code, or other content — referred to here as output — in response to queries, known as prompts. It can be used to draft communications, conduct research, summarize content, generate software code, and produce art, among many other applications.
Local government employees are some of the 100 million individuals beginning to embrace the use of generative AI tools, including ChatGPT, Google Bard, and Bing Chat, primarily for their efficiency, power, and potential to improve public services. Additionally, generative AI technology is increasingly incorporated into existing software products used regularly by local governments.
This blog addresses legal and ethical issues that may arise for local government attorneys when public agency staff use generative AI in the course of their daily work.
The Legal Issues
When local government staff use generative AI, it raises a host of legal concerns.
Confidential information may not be protected
Generative AI learns from data introduced through prompts. In addition to regulatory limits on dissemination or use of certain data types, local government employees are prohibited from disclosing confidential information under state law and many local ethics codes. See RCW 42.23.070(4).
Caution should be exercised in introducing confidential, personally identifiable information, trade secrets, federally regulated data, and data subject to data-sharing agreements into prompts or databases read by these tools. For example, users should refrain from prompting generative AI with personally identifying information, such as names paired with social security numbers, and should maintain proper security around databases searchable by generative AI tools that contain sensitive data.
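As a simple illustration of the kind of guardrail an agency might build around this practice, the sketch below (a hypothetical example, not drawn from any agency's actual policy or tooling) screens a draft prompt for U.S. Social Security number patterns before the prompt is sent to an external tool:

```python
import re

# Hypothetical pre-submission check: flag prompts that appear to contain
# a U.S. Social Security number in the common 123-45-6789 format. A real
# deployment would screen for many more identifiers (names, addresses,
# account numbers, regulated data) and would log and report blocked prompts.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def is_prompt_safe(prompt: str) -> bool:
    """Return False if the prompt appears to contain an SSN."""
    return SSN_PATTERN.search(prompt) is None

# Example usage
assert is_prompt_safe("Summarize the council meeting minutes from July")
assert not is_prompt_safe("Look up benefits for Jane Doe, SSN 123-45-6789")
```

A pattern check like this is only a first line of defense; it cannot catch every form of sensitive data, which is why the guidance above also emphasizes training users and securing the databases these tools can read.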
New cybersecurity challenges
Use of generative AI may introduce cyber vulnerabilities by potentially providing content to assist threat actors in engineering sophisticated data breaches. For example, it can potentially create highly effective phishing emails, new types of malware that can surpass current cybersecurity protections, or exploitable vulnerabilities through generated computer code that are difficult to detect and timely address.
Output may be inaccurate
Generative AI produces output that may contain inaccurate information, including wholly false results known as "hallucinations," making it unreliable without some level of oversight. Local government users should carefully review output to avoid disseminating misleading or defamatory information.
Use of copyrighted or trademark materials
Generative AI may draw on copyrighted material or reproduce trademarked materials and protected likenesses, potentially infringing on intellectual property rights and permissions or violating contractual terms. Some tools do not attribute sources, thus making detection of copyrighted content difficult.
Generative AI outputs are unlikely to be considered subject to copyright protection, but this is an evolving area and local governments would be well-served to track developments.
Sources may include discriminatory or offensive content
Generative AI produces results based on its data sources, which may include discriminatory, biased, and illegal content. Users may violate federal, state, and local laws and workplace policies when discriminatory or biased outputs have been incorporated into written products or used for decision-making. The lack of attribution in generative AI outputs can make it more difficult to ascertain whether biased information has been included in the sourcing.
It could be a public record
Prompts and outputs are public records whose retention value depends on their use and content, and users should be prepared to properly retain and produce these records. For example, prompts reflecting a research topic may indicate a policy decision or a deliberative process on behalf of a government body. Likewise, outputs used by a local government to create a job description or a press release may reflect government actions. Both may have retention value under state and local retention schedules.
Collective bargaining applications
Future collective bargaining negotiations may include demands to limit generative AI usage to avoid workforce reductions or modifications in working conditions.
Contractual and regulatory issues with tool use
Agencies interested in regulating use of generative AI may need to negotiate vendor contractual terms to align with their preferred usage goals, privacy protections, and security parameters, and to reflect fair cost for work completed using these tools.
Is it embedded in technology — or not?
As commonly used software programs incorporate generative AI capabilities, local governments will have limited ability to assess impact and consistency with policy. For example, it can be difficult to determine whether embedded tools store prompts and sensitive information, potentially in conflict with an agency's data security, privacy, and retention policies.
Transparency of use
Members of the public may expect local governments to disclose the use of generative AI in decision-making. Nondisclosure may raise deceptive practices or transparency concerns.
The Ethical Issues
What ethical issues does use of generative AI by local government lawyers raise? The Rules of Professional Conduct (RPC) for attorneys provide guidance.
User competency may be required
RPC 1.1, comment 9 requires that lawyers “keep abreast of . . . benefits and risks associated with relevant technology.” Local government lawyers should consider whether the RPCs require an understanding of generative AI.
Transparency in communication
RPC 1.4 requires that lawyers communicate about "the means by which the client's objectives are to be accomplished." Local government lawyers should consider whether the RPCs require disclosure to clients when generative AI is used in legal work.
Confidentiality of information
RPC 1.6 requires that lawyers prevent the unauthorized disclosure of information. Local government lawyers should consider whether the RPCs allow any confidential client information to be included in prompts.
Responsibilities regarding non-lawyer assistants
RPC 5.3, comment 3 requires that lawyers make "reasonable efforts to ensure that the services are provided in a manner that is compatible with the lawyer's professional obligations." Local government lawyers should consider whether the RPCs require oversight of use of generative AI in the same manner as other outside services.
Prohibition against discriminatory acts
RPC 8.4(g) prohibits lawyers from committing discriminatory acts. Local government lawyers should determine how to address the ethical implications of the use of potentially biased or discriminatory output.
Concerns Regarding Public Use of AI
Now that we’ve considered local government use of generative AI, what is the legal impact of public use of this tool to investigate or influence public agencies?
Powerful public search and disclosure impacts
Generative AI is a powerful tool that will allow members of the public to search local government data and information, potentially revealing discrepancies in agency productions by identifying records that were not located or produced in response to public records disclosure requests. Additionally, sophisticated users may be able to reverse-engineer prompts used by local governments in responding to records requests, potentially revealing attorney work product.
Questionable immunity against inaccurate information
Licensing terms for generative AI tools currently require users to indemnify the companies that provide them. However, it is unclear whether generative AI will be provided immunity under Section 230 of the Communications Decency Act of 1996. Local governments may need to determine the impact of public reliance on inaccurate outputs if generative AI providers are not held responsible.
Increased opportunities for public advocacy
Due to the ease of creating written documents, local jurisdictions may receive increased volumes of claims, pro-se lawsuits, appeals, and written comments.
Additionally, the public can use generative AI to create new legislative proposals or new citizen initiatives and to recommend strategies for their successful introduction.
Legal Resources and Guidance
As discussed above, many laws govern aspects of generative AI inputs and outputs and several federal agencies have produced related AI guidance and standards. However, at this time no local, state, or federal laws directly regulate the use of generative AI. Congressional hearings and other regulatory efforts are underway at all levels and should be tracked for future developments. For example, the State of Maine recently imposed a six-month moratorium on use of generative AI within all state agencies.
Several municipal governments have created generative AI policies to guide staff usage of these tools and limit the risks discussed above. Below are two examples.
- Boston: Interim Guidelines for Using Generative AI (2023)
- Seattle: Internal Policy Memo - 2301 (2023)
Local governments should consider creating policies for staff that address the legal and ethical use of generative AI and suggest guardrails and appropriate avenues for use of such tools. Local government attorneys should also continue to watch for changes in the regulatory landscape.
MRSC is a private nonprofit organization serving local governments in Washington State. Eligible government agencies in Washington State may use our free, one-on-one Ask MRSC service to get answers to legal, policy, or financial questions.