Amid Government Action on AI in Europe and the U.S., Strategy and Innovation Must Remain Top Priorities

By Billy Biggs
Updated February 6, 2024

Overcoming bitter disagreement on how strict AI regulations should be, the European Union reached a landmark deal on its ambitious AI Act in early December 2023. Especially heated were the debates around the use of AI by security forces and around the rules for foundation models underpinning generative AI applications like ChatGPT. While those arguments will undoubtedly continue, the impending law ostensibly aims to balance the risks AI poses with the innovation it promises, a move that also signals Europe’s “determination not to let U.S. companies dominate the AI ecosystem.” 

U.S. companies may yet prevail in that arena, regardless of which jurisdiction’s major legislation comes first. Google’s chief legal officer hit the nail on the head when he noted that “the race should be for the best AI regulations, not the first AI regulations.”

While the U.S. Congress has not passed a law, the comprehensive executive order on AI from October 2023 has gotten the regulatory ball rolling in America: helpful in establishing a risk framework, but problematic in issuing mandates that put the cart before the horse. The situation for both the private and public sectors is comparable to the advent of cloud computing more than 15 years ago. There’s a lot of talk and product development, but nobody yet has a clear, full understanding of the value of the AI tools out there and their many and varied use cases. Want proof? A recent GAO report highlighted that most agency AI inventories are “not fully comprehensive and accurate.” With the technology evolving so rapidly, how could agencies be ready? As a result, it’s imperative that government actors and partners keep important considerations in mind and advocate for the following actions.

Get funding in place for AI initiatives and mandates

The U.S. executive order (EO) is largely directed at government agencies themselves, addressing the private sector only insofar as its AI affects critical technologies, infrastructure, and civil rights, or as it interfaces with the agencies themselves. Even in this restricted and well-intentioned form, parts of the EO require agencies to act and implement detailed changes without the funding to do so. Prioritizing safe and secure AI is vital, and the American public expects it, but it is unrealistic to set timelines for agencies to achieve milestones without the required resources.

For example, the EO directs the National Science Foundation (NSF) to set up the National AI Research Resource (NAIRR) within 90 days to, among other tasks, “provide high-power computing and datasets to AI researchers across the U.S., especially those at less resourced institutions.” Bipartisan bills that would formally authorize NAIRR remain in limbo, as does recommended funding from a Congressional task force — as much as a $2.6 billion allocation over six years.  

Additionally, the EO mandates that the Department of Energy leverage its “computing capabilities and AI testbeds to build foundation models that support new applications in science and energy, and for national security” within 180 days, and, within 270 days, “implement a plan for building out its AI testbeds and model evaluation tools, with a focus on guarding against risks posed by AI-enabled development of nuclear, chemical, and biological weapons or threats to energy infrastructure.” The DOE is also expected to establish a pilot program aimed at training 500 AI scientists by 2025. All of this and more, with no new funding.

Measure the current environment to inform better AI strategy

For government players, having a smart strategy should be a prerequisite for specific regulation that achieves beneficial ends, and developing a strategy first requires understanding what you’ve got and how you’re using it. Agencies need to discover and measure exactly what’s going on in their own environments. If you’re trying to create an AI strategy, you’ll be wildly off the mark unless you understand the extent, functions and usage of your own systems, their pitfalls and the opportunities they present.

We also know that shadow AI affects both the private sector and the public sector. Even though the previous administration paved the way with its EO on AI, which specifically required agencies to self-report the existence of AI tools and use cases, it’s improbable that government CIOs are aware of every single AI tool being used in their organizations. One of the helpful things about the current EO is its aim to put appropriate guardrails in place, rather than outright banning various forms of AI. A good Digital Adoption Platform (DAP) can offer visibility into the shadow AI tools and capabilities that employees are engaging with, and can warn them not to input classified data or other sensitive information. With safeguards in place, teams can test and evaluate AI in a controlled environment with oversight. That approach helps us get collectively smarter about AI strategy, rather than legislating away useful functionality and innovation.
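To illustrate what those guardrails can look like in practice, here is a minimal sketch of the kind of check a DAP or network proxy could apply, assuming a hypothetical hook that sees outbound requests; the domain list, sensitivity patterns, and function names below are illustrative, not an actual product API.

```python
# Minimal sketch of a shadow-AI guardrail, assuming a hypothetical proxy or
# browser-extension hook that can see outbound requests. The domain list,
# sensitivity patterns, and function names are illustrative, not a DAP product API.
import re

KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}  # illustrative
SENSITIVE_PATTERNS = [
    re.compile(r"\b(SECRET|TOP SECRET|CUI|FOUO)\b", re.IGNORECASE),  # classification markings
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                            # SSN-like pattern
]

def inspect_request(domain: str, payload: str) -> str:
    """Log AI tool usage for the agency's inventory and warn on risky content."""
    if domain not in KNOWN_AI_DOMAINS:
        return "allow"
    if any(p.search(payload) for p in SENSITIVE_PATTERNS):
        return "warn: possible classified or sensitive data in an AI prompt"
    return "allow and log"  # visibility without blocking useful experimentation

if __name__ == "__main__":
    print(inspect_request("chat.openai.com", "Summarize this FOUO memo for me..."))
    print(inspect_request("claude.ai", "Draft a public press release about our hiring event."))
```

The point is that usage gets logged for the agency’s inventory and risky prompts trigger a warning, rather than every AI tool being blocked outright.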

Use best practices and pilot programs from industry

Indeed, we need more time with government and industry working together to figure out what makes sense for agencies and for their commercial stakeholders. Government leaders can proactively lean into industry to learn and understand best practices, which are evolving fast in the private sector, instead of being reactive. Government acquisition personnel were behind the times during the cloud revolution, and industry partners spent years educating them before procurement teams fully understood what they were buying in terms of “the Cloud.” Good strategy means not making that mistake again.

Implementing pilot programs and prototypes is a great way to spin up new categories of technology like AI. For example, if an agency has 80,000 users, start with a test program of 500 users for six months in a specific division that has a use case and will get value out of a particular AI system. If it makes sense to then scale, government partners can offer the resources and budget perspective to enable that. This “try it before you fully buy it” approach resonates with many government customers and prospects. Risk is low and kinks can be worked out. Success also hinges on the transparency of these pilots. Effective communication on what is working and what is not should be at the forefront of every agency’s AI strategy.

Learn and innovate with suitable generative AI use cases

Digging more deeply into pilot program approaches for AI, agencies and their partners will find compelling use cases when they consider both the improvements legacy systems need and mission-driven value. Using AI for the sake of using AI is pointless, and AI is not like any other tool. Leaders and teams must identify real needs that serve the government mission and the public interest.

For example, one federal systems integrator has a custom generative AI tool that helps convert COBOL, a 60-year-old programming language, to Java, an effort aimed at the technology modernization challenge that has plagued the government for decades. There are countless examples of legacy technology that serves a very specific purpose but cannot be updated or migrated to new technology on the fly. In the COBOL example, it probably would have taken hundreds of developers to make the changes manually; generative AI can do the work faster and more effectively while keeping safeguards in place.
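To make this concrete, here is a minimal sketch of what LLM-assisted code translation can look like, assuming access to an LLM chat-completion API via OpenAI’s Python client; the integrator’s actual tool is proprietary, and the model name, prompt, and COBOL snippet below are illustrative only.

```python
# Minimal sketch of LLM-assisted COBOL-to-Java translation, assuming access to
# the OpenAI Python client (openai>=1.0). The integrator's actual tool is
# proprietary; the model name, prompt, and COBOL snippet are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COBOL_SNIPPET = """
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADDTWO.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 NUM-A PIC 9(4) VALUE 1200.
       01 NUM-B PIC 9(4) VALUE 0034.
       01 TOTAL PIC 9(5).
       PROCEDURE DIVISION.
           ADD NUM-A TO NUM-B GIVING TOTAL.
           DISPLAY "TOTAL: " TOTAL.
           STOP RUN.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Translate COBOL to idiomatic Java. Preserve behavior exactly "
                    "and note any assumptions as comments for human review."},
        {"role": "user", "content": COBOL_SNIPPET},
    ],
)

# Safeguard: generated Java goes to code review and regression testing, never straight to production.
print(response.choices[0].message.content)
```

The closing comment is the important part: translated code still passes through human review and regression testing before it replaces anything in production.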

In another instance, the State Department has a declassification project underway right now. Generative AI can help identify which documents, among decades’ worth of records, meet the requirements for declassification review and move them into that review faster. This spares staff tedious work and lets them direct their talents toward diplomacy and American needs abroad, while saving money and time. So it’s all about the use case. What are you trying to solve? And does AI give you a competitive advantage, eliminate redundancies, or present a mission-driven reason for solving it, versus humans doing the work?
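As a sketch of how such triage might be framed, the snippet below applies simple rules before any generative model weighs in; the document record, exemption keywords, and function names are hypothetical, while the 25-year threshold reflects the automatic declassification timeline in Executive Order 13526. In practice, a generative model would handle the harder judgment calls on document content, with humans reviewing anything it flags.

```python
# Minimal rule-based triage sketch for a declassification backlog. The record
# format, exemption keywords, and routing labels are hypothetical; the 25-year
# threshold mirrors the automatic declassification timeline in E.O. 13526.
from dataclasses import dataclass
from datetime import date
from typing import Optional

REVIEW_AGE_YEARS = 25
EXEMPTION_KEYWORDS = {"human source", "war plans", "cryptology"}  # illustrative

@dataclass
class CableRecord:
    doc_id: str
    classified_on: date
    summary: str

def triage(doc: CableRecord, today: Optional[date] = None) -> str:
    """Route a record: hold, queue for declassification review, or send to a human analyst."""
    today = today or date.today()
    age_years = (today - doc.classified_on).days / 365.25
    if age_years < REVIEW_AGE_YEARS:
        return "hold"  # not yet at the automatic declassification threshold
    if any(k in doc.summary.lower() for k in EXEMPTION_KEYWORDS):
        return "human review"  # possible exemption, keep analysts in the loop
    return "queue for declassification review"

if __name__ == "__main__":
    doc = CableRecord("1987-STATE-001234", date(1987, 5, 1), "Routine trade delegation schedule.")
    print(triage(doc))  # -> "queue for declassification review"
```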

Ensure restrictions on dual use don’t stifle innovation

One of the trickiest and most impactful parts of the EO and the EU’s AI Act concerns dual-use foundation models: AI models trained on broad data, containing at least tens of billions of parameters, that pose (or could be modified to pose) serious risks to security, public health, or safety.

Models like GPT-4, LLaMA-2, and DALL-E, along with a range of generalized and domain-specific machine learning models, are commercially available and pose risks because they could make it easier for non-experts to design chemical, biological, radiological, and nuclear weapons, or to mount nearly any form of cyberattack. Companies developing such models must now “inform the federal government of their activities with detailed reports on results and performance (within 90 days of the order’s issuance)” so that designated departments can analyze the national security risks and the vulnerabilities they create for critical infrastructure.

The other side of the coin is that the process may end up being unnecessarily cumbersome and slow down innovation, including life-saving innovation. Will every commercial software company that further develops foundation models for new uses be hamstrung by government requirements? It isn’t yet clear how nuanced the specific requirements will be, or whether they can even be met. Will software companies need to shoulder the expense and time of creating a separate government offering, or can they use the same code base and innovative features? Overall, will the pace of innovation slow for Fortune 100 private enterprises and others that typically move much faster than government? What this roadmap looks like remains unclear.

It’s difficult to legislate things we don’t truly understand yet. With the EU’s move, we’re unlikely to turn back the tide of government action, but as with every revolutionary technology, we need to balance concern with creativity to build a better future. We’ll likely see two buckets emerge: (1) AI applications deemed less risky to national security and less of a concern from a privacy and civil liberties perspective, and (2) an expanding foundation model category, including LLMs, that will have to go through some kind of AI certification comparable to a FedRAMP process. My hope is that real innovation in both the private and public sectors can move forward in this environment.

Billy Biggs
With 25+ years of experience in software and technology, Billy leads the Public Sector team at WalkMe. He’s passionate about simplifying user experiences and increasing enterprise productivity by leveraging the power of insights, engagement, guidance, and automation capabilities. He’s responsible for all go-to-market functions, revenue generation, client success, and field operations for WalkMe's Federal, State, Local, and Education customers. Billy holds an MBA, a BS in Information Systems Management, and certifications in Project Management (PMP) and Revenue Leadership (CRO and Enterprise GTM).