As generative AI takes off, many are trying to quickly understand what this new era of technology means for government. In this three-part blog series, USDR takes a look at the current landscape and the journey ahead.
By: Tina Walha, Chief Partnerships Officer at USDR
By now, we’ve all seen the headlines: Generative AI (gen AI) is on the precipice of redefining the work of government at all levels, from the recent White House Executive Order (EO) on the development and use of AI, to work happening in cities and states across the country. And there’s a near-universal acknowledgment that no one knows exactly how governments can, and should, best leverage these models to improve government operations and service delivery. Government leaders need to embrace the fact that their staff are likely already experimenting with AI, and wrestle with the bigger questions of whether, how, and when they should leverage its capabilities to best serve the public. At U.S. Digital Response (USDR), we know AI is a tool like any other technology. It won’t magically fix the large, complex problems of government, but it can be used ethically and transparently to help make progress. Join us as we explore the opportunity gen AI represents and how governments can get started on their journey.
The sophistication and simplicity of tools like ChatGPT, Bard, Claude, and other large language models (LLMs) mean that this technology, like the internet in the 1990s, is both accessible and transformative. We’re entering a new era, but it doesn’t change the need to start with problems, not solutions, through a human-centered approach. USDR is a demand-driven organization, one that responds to the needs of governments rather than advancing unsolicited ideas, so we’ll lift up what we’re seeing and hearing directly from governments and meet them where they are.
Generative AI is a type of artificial intelligence that produces written, visual, or audio content. Machine learning, another type of AI, is related: both learn from training data, but there’s an important distinction. Machine learning analyzes data to find patterns and make predictions, while gen AI creates new data, information, or content based on the data it’s been trained on.
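For readers who want to see the distinction concretely, here is a deliberately tiny, illustrative sketch (not a real ML system): the first half fits a simple pattern and outputs a single prediction, the way a classic machine learning model would; the second half trains a toy Markov chain on a sentence and then produces *new* text, the way a generative model does. The data and corpus are invented for illustration.

```python
import random

# Machine learning (toy illustration): learn a pattern from data, then predict.
# Here we recover the slope of y = 2x from example pairs and predict a new value.
data = [(1, 2), (2, 4), (3, 6)]
slope = sum(y / x for x, y in data) / len(data)  # learned pattern: y = 2x
prediction = slope * 4                           # predicts 8.0 for x = 4

# Generative AI (toy illustration): a tiny Markov chain "trained" on text,
# which then generates new sequences rather than a single prediction.
corpus = "the city serves the public and the public trusts the city".split()
transitions = {}
for current, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(current, []).append(nxt)

random.seed(0)  # make the sketch reproducible
word = "the"
generated = [word]
for _ in range(5):
    word = random.choice(transitions.get(word, corpus))
    generated.append(word)

print(prediction)           # the predictive model's single output
print(" ".join(generated))  # newly generated text assembled from learned transitions
```

The point is the shape of the output: the predictive model returns one answer drawn from its learned pattern, while the generative model produces content that didn’t appear verbatim in its training data. Real LLMs do something far more sophisticated, but the predict-versus-generate distinction is the same.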
All levels of government are preparing for the influx of AI tools and services by thinking about policy, governance, and transparency.
Let’s start with the White House’s EO on AI, released on October 30. The EO is wide-reaching and touches on a number of policy objectives for the Federal government, including ensuring AI’s safety and security, promoting responsible innovation and competition, advancing equity and civil rights, supporting American workers, protecting consumer interests, safeguarding privacy and civil liberties, and promoting global cooperation on AI governance. We applaud the emphasis on getting AI talent into the Federal government. Through our work helping governments bring on digital talent, we know that this applicant pool can be hard to recruit and retain, and given the high demand across sectors, AI expertise will no doubt be even harder to attract. We’re energized to see how the Feds have prioritized this important capacity-building element.
Similarly, we’re seeing states emphasize governance, regulation, and talent in their gen AI policies. Pennsylvania’s EO focuses on establishing the governance model the Commonwealth will use, while Virginia’s EO focuses on the standards and protocols needed to act as guardrails for adoption and implementation of gen AI in state government. In the 2023 legislative session, at least 25 states, Puerto Rico, and the District of Columbia introduced artificial intelligence bills, and 15 states and Puerto Rico adopted resolutions or enacted legislation. While only a handful of states have named AI leads, we anticipate that number will increase quickly.
At the local level, cities like Boston and Seattle have taken steps to develop policies for employees on when and how to leverage LLMs in their work. As with states, these first movers are by no means the only governments thinking about how to apply gen AI to their work, but they’ve taken important steps towards shaping a point of view and guidance for staff. Boston’s guidance emphasizes the dos and don’ts involved with different use cases, building staff capacity in a lightweight way. Governments should acknowledge that staff are experimenting with these tools on their own, which in and of itself is a good thing, and would benefit from guidance on how to leverage them and cite their use.
USDR recently spent time at two important government gatherings and heard how states and cities are hoping to move from policy to experimentation. At the annual NASCIO conference in Minneapolis, state IT leaders framed the discussion of governance, ethics, workforce implications, and education in practical terms: what’s possible now, not in 5–10 years. At the annual CityLab convening, hosted by Bloomberg Philanthropies and the Aspen Institute, data from a recent survey of mayors and city leaders shed light on where cities are today.
Across state and local governments, we’ve heard consistent themes from leaders.
We’ll share USDR’s point of view on how governments can dream big with gen AI, and how USDR is excited to help you experiment. In the meantime, we encourage states to get engaged with NASCIO’s Generative AI Working Group and its work convening agencies on gen AI, and local governments to sign up for CityAIConnect.
Ready to really get in the weeds with gen AI and government? USDR is hiring a Generative AI Technologist-in-Residence who will set the overall direction for, and lead the delivery of, how USDR helps governments make the best use of gen AI to improve operations and service delivery.