Will states lead the way on AI regulation?
Last year was a busy time for lawmakers and lobbyists concerned about AI — most notably in California, where Gavin Newsom signed 18 new AI laws while also vetoing high-profile AI legislation.
And 2025 could see just as much activity, especially on the state level, according to Mark Weatherford. Weatherford has, in his words, seen the “sausage making of policy and legislation” at both the state and federal levels; he’s served as Chief Information Security Officer for the states of California and Colorado, as well as Deputy Under Secretary for Cybersecurity under President Barack Obama.
Weatherford said that in recent years, he has held different job titles, but his role usually boils down to figuring out “how do we raise the level of conversation around security and around privacy so that we can help influence how policy is made.”
Last fall, he joined synthetic data company Gretel as its vice president of policy and standards. So I was excited to talk to him about what he thinks comes next in AI regulation and why he thinks states are likely to lead the way.
This interview has been edited for length and clarity.
That goal of raising the level of conversation will probably resonate with many folks in the tech industry, who have maybe watched congressional hearings about social media or related topics in the past and clutched their heads, seeing what some elected officials know and don’t know. How optimistic are you that lawmakers can get the context they need in order to make informed decisions around regulation?
Well, I’m very confident they can get there. What I’m less confident about is the timeline to get there. You know, AI is changing daily. It’s mind-blowing to me that issues we were talking about just a month ago have already evolved into something else. So I am confident that the government will get there, but they need people to help guide them, staff them, educate them.
Earlier this week, the task force on artificial intelligence that the US House of Representatives started about a year ago released its report. It took them a year to do this, and it’s a 230-page report; I’m wading through it right now. [Weatherford and I first spoke in December.]
[When it comes to] the sausage making of policy and legislation, you’ve got two very partisan organizations trying to come together and create something that makes everybody happy, which means everything gets watered down just a little bit. It just takes a long time, and now, as we move into a new administration, everything’s up in the air on how much attention certain things are going to get.
It sounds like your viewpoint is that we may see more regulatory action on the state level in 2025 than on the federal level. Is that right?
I absolutely believe that. I mean, in California, I think Governor [Gavin] Newsom, just within the last couple of months, signed 12 pieces of legislation that had something to do with AI. [Again, it’s 18 by TechCrunch’s count.] He vetoed the big bill on AI, which was going to really require AI companies to invest a lot more in testing and really slow things down.
In fact, I gave a talk in Sacramento yesterday at the California Cybersecurity Education Summit, and I talked a little bit about the legislation that’s happening across the entire US, all of the states, and something like over 400 different pieces of legislation at the state level have been introduced just in the past 12 months. So there’s a lot going on there.
And I think one of the big concerns (it’s a big concern in technology in general and in cybersecurity, but we’re seeing it on the artificial intelligence side right now) is that there’s a harmonization requirement. Harmonization is the word that [the Department of Homeland Security] and Harry Coker at the [Biden] White House have been using to [refer to] the question: How do we harmonize all of these rules and regulations around these different things so that we don’t have this [situation] of everybody doing their own thing? That drives companies crazy, because then they have to figure out how to comply with all these different laws and regulations in different states.
I do think there’s going to be a lot more activity on the state side, and hopefully we can harmonize these a little bit so there’s not this very diverse set of regulations that companies have to comply with.
I hadn’t heard that term, but that was going to be my next question: I imagine most people would agree that harmonization is a good goal, but are there mechanisms by which that’s happening? What incentive do the states have to actually make sure their laws and regulations are in line with each other?
Honestly, there’s not a lot of incentive to harmonize regulations, except that I can see the same kind of language popping up in different states, which, to me, indicates that they’re all looking at what each other’s doing.
But from a purely, like, “let’s take a strategic plan approach to this amongst all the states” standpoint, that’s not going to happen; I don’t have any high hopes for it happening.
Do you think other states might follow California’s lead in terms of the general approach?
A lot of people don’t like to hear this, but California does kind of push the envelope [in tech legislation] in a way that helps other people come along, because they do all the heavy lifting; they do a lot of the work and the research that goes into some of that legislation.
The 12 bills that Governor Newsom just signed were across the map, everything from pornography to using data to train websites to all different kinds of things. They have been pretty comprehensive about leaning forward there.
Although my understanding is that they passed more targeted, specific measures, and the bigger regulation that got most of the attention is the one Governor Newsom ultimately vetoed.
I could see both sides of it. There’s the privacy component that was driving the bill initially, but then you have to consider the cost of doing these things, and the requirements it levies on artificial intelligence companies trying to be innovative. So there’s a balance there.
I would fully expect [in 2025] that California is going to pass something a little bit more strict than what they did [in 2024].
And your sense is that on the federal level, there’s certainly interest, like the House report that you mentioned, but it’s not necessarily going to be as big a priority or that we’re going to see major legislation [in 2025]?
Well, I don’t know. It depends on how much emphasis the [new] Congress brings in. I think we’re going to see. I mean, you read what I read, and what I read is that there’s going to be an emphasis on less regulation. But technology in many respects, certainly around privacy and cybersecurity, is kind of a bipartisan issue; it’s good for everybody.
I’m not a huge fan of regulation; there’s a lot of duplication and a lot of wasted resources that happen with so much different legislation. But at the same time, when the safety and security of society is at stake, as it is with AI, there’s definitely a place for more regulation.
You mentioned it being a bipartisan issue. My sense is that when there is a split, it’s not always predictable — it isn’t just all the Republican votes versus all the Democratic votes.
That’s a great point. Geography matters, whether we like to admit it or not, and that’s why places like California are really being forward-leaning in some of their legislation compared to some other states.
Obviously, this is an area that Gretel works in, but it seems like you believe, or the company believes, that as there’s more regulation, it pushes the industry in the direction of more synthetic data.
Maybe. One of the reasons I’m here is, I believe synthetic data is the future of AI. Without data, there’s no AI, and quality of data is becoming more of an issue as the pool of data gets used up or shrinks. There’s going to be more and more of a need for high-quality synthetic data that ensures privacy and eliminates bias and takes care of all of those kinds of nontechnical, soft issues. We believe that synthetic data is the answer to that. In fact, I’m 100% convinced of it.
I would love to hear more about what brought you around to that point of view. I think there are other folks who recognize the problems you’re talking about but think of synthetic data as potentially amplifying whatever biases or problems were in the original data, as opposed to solving the problem.
Sure, that’s the technical part of the conversation. Our customers feel like we have solved that, and there is this concept of the flywheel of data generation: if you generate bad data, it gets worse and worse and worse, but building controls into this flywheel validates that the data is not getting worse, that it’s staying the same or getting better each time the flywheel comes around. That’s the problem Gretel has solved.
Many Trump-aligned figures in Silicon Valley have been warning about AI “censorship” — the various weights and guardrails that companies put around the content created by generative AI. Do you think that’s likely to be regulated? Should it be?
Regarding concerns about AI censorship, the government has a number of administrative levers it can pull, and when there is a perceived risk to society, it’s almost certain it will take action.
However, finding that sweet spot between reasonable content moderation and restrictive censorship will be a challenge. The incoming administration has been pretty clear that “less regulation is better” will be the modus operandi, so whether through formal legislation or executive order, or less formal means such as [National Institute of Standards and Technology] guidelines and frameworks or joint statements via interagency coordination, we should expect some guidance.
I want to get back to this question of what good AI regulation might look like. There’s this big spread in how people talk about AI: it’s either going to save the world or destroy the world; it’s the most amazing technology or it’s wildly overhyped. There are so many divergent opinions about the technology’s potential and its risks. How can a single piece, or even multiple pieces, of AI regulation encompass that?
I think we have to be very careful about managing the sprawl of AI. We have already seen some of the really negative aspects with deepfakes; it’s concerning to see kids in high school and even younger generating deepfakes that are getting them in trouble with the law. So I think there’s a place for legislation that controls how people can use artificial intelligence in ways that don’t violate what may be an existing law; we create a new law that reinforces current law but takes the AI component into account.
I think we, those of us who have been in the technology space, all have to remember that a lot of this stuff we just consider second nature. When I talk to my family members and some of my friends who are not in technology, they literally don’t have a clue what I’m talking about most of the time. We don’t want people to feel like big government is over-regulating, but it’s important to talk about these things in language that non-technologists can understand.
But on the other hand, you can probably tell just from talking to me, I am giddy about the future of AI. I see so much goodness coming. I do think we’re going to have a couple of bumpy years as people get more in tune with it and understand it better, and legislation is going to have a place there, to both let people understand what AI means to them and put some guardrails up around AI.