
Jon Potter, Partner at The RXN Group – Interview Series


Jon Potter is a Partner and leads the State-Level AI Practice at RXN Group. He is an experienced lawyer, lobbyist, and communicator who has founded and led two industry associations and a consumer organization, and has consulted for many industries and organizations on legislative, communications, and issue advocacy challenges. Jon founded the Digital Media Association, Fan Freedom, and the Application Developers Alliance, was Executive Vice President of the global communications firm Burson-Marsteller, and practiced law for several years with the firm of Weil, Gotshal & Manges.

As both a client and consultant, Jon has overseen federal, multistate, and international advocacy campaigns and engaged lobbyists, communications firms, and law firms on three continents. Jon has testified several times before Congress and state legislatures, has spoken at dozens of conferences throughout the U.S. and internationally, and has been interviewed on national and local radio and television news programs, including CNN, the Today Show, and 60 Minutes.

Can you provide an overview of the key trends in AI legislation across states in 2024?

2024 has been an extraordinary year for state-level AI legislation, marked by several trends.

The first trend is volume. 445 AI bills were introduced across 40 states, and we expect this will continue in 2025.

A second trend is a consistent dichotomy—bills about government use of AI were generally optimistic, while bills about AI generally and private sector use of AI were skeptical and fearful. Additionally, several states passed bills creating AI “task forces,” which are now meeting.

What are the main concerns driving state legislators to introduce AI bills, and how do these concerns vary from state to state?

Many legislators want government agencies to improve with AI – to deliver better services more efficiently.

Among skeptics, topics of concern include fraudulent and abusive “deepfakes” related to elections, creative arts, and bullying; algorithmic discrimination; fear of AI-influenced “life critical” decisions and decision processes; personal privacy and personal data use; and job displacement. Some concerns can be addressed in very specific legislation, such as Tennessee’s ELVIS Act and California’s political deepfakes prohibition. Other concerns, such as risks of algorithmic discrimination and job displacement, are amorphous and so the legislative proposals are broad, non-specific, and of great concern.

Some lawmakers believe that today’s social media and digital privacy challenges could have been mitigated by prophylactic legislation, so they are rushing to pass laws to solve AI problems they fear will develop. Of course, it’s very hard to define clear compliance guidance before actual problems emerge.

How can states craft AI legislation that encourages innovation while also addressing potential risks?

Legislation can do both when it regulates specific use cases and risks but not foundational multipurpose technology. A good example of this is the federal laws that govern uses of health, financial, and student education data, but do not regulate computers, servers, or cloud computing. By not regulating multipurpose tools such as data storage and data processing technologies (including AI), the laws address real risks and define clear compliance rules.

It’s important that legislators hear from a wide range of stakeholders before passing new laws. Headlines suggest that AI is dominated by giant companies investing billions of dollars to build extraordinarily powerful and risky models. But there are thousands of small and local companies using AI to build solutions for recycling, workplace bias, small-business lending, and cybersecurity. Legal Aid organizations and local nonprofits are using AI to help underserved communities. Lawmakers must be confident that AI-skeptical legislation does not shut down small, local, and public-benefit AI activity.

From your experience, what are the most significant impacts that recent AI bills have had on businesses? Are there specific industries that have been more affected than others?

We don’t yet know the impacts of recent AI bills because very few recent bills have become law and the new laws are not yet making a difference. The broadest law, in Colorado, does not take effect until 2026 and most stakeholders, including the Governor and sponsor, anticipate significant amendments before the effective date. The ELVIS Act in Tennessee and the California deepfakes laws should reduce fraudulent and criminal activity, and hopefully won’t inhibit parody or other protected speech.

With so many states stepping up to legislate AI, how do you see the relationship between state-level AI regulations and potential federal action evolving?

This is a moving target with many variables. There are already several areas of law where the federal and state governments co-exist, and AI is associated with many of them. For example, there are federal and state workplace and financial services discrimination laws that are effective regardless of whether AI is used by alleged bad actors. One question for legislators is why new legislation or regulations are needed solely because AI is used in an activity.

What are the common pitfalls or challenges that state legislators face when drafting AI-related bills? How can these be avoided?

It’s all about education – taking the time to consult many stakeholders and understand how AI works in the real world. Legislation based on fear of the unknown will never be balanced, and will always inhibit innovation and AI for good. Other countries will fill the innovation void if the United States cedes our leadership due to fear.

Can you share examples of successful advocacy that influenced AI legislation in favor of innovation?

In Colorado, the Rocky Mountain AI Interest Group and AI Salon rallied developers and startups to engage with legislators for the first time. Without lobbyists or insider consultants, these groups tapped into a wellspring of smart, unhappy, and motivated founders who expressed their displeasure in crisp, effective testimony and in the media – and were heard.

Similarly in California, founders of small AI-forward companies testified passionately in the legislature and connected with the media to express urgent concern and disappointment about well-intended but terribly overbroad legislation. Like their counterparts in Colorado, these founders were non-traditional, highly motivated, and very effective.

How do you identify and engage stakeholders effectively in state-level AI policy battles?

Partly by talking to people like you, and spreading the word that legislation has or will soon be introduced and may impact someone’s livelihood, business, or opportunity. Legislators only know what they know, and what they learn by talking to lobbyists and companies they are aware of. It’s important to engage in the process when bills are drafted and amended, because if you’re not at the table then you’re probably on the menu. Every company and organization that is building or using AI should participate before the laws and regulations are written, because after they are written it is frequently too late.

What are the most effective ways to communicate the complexities of AI to state legislators?

Advocacy exists in many forms. Whether it’s meeting with legislators in-person or by video, sending a letter or email, or speaking with the media – each of these is a way to make your voice heard. What’s most important is clarity and simplicity, and telling your own story which is what you know best. Different state legislatures have different rules, processes, and norms, but almost all legislators are eager to learn and want to hear from constituents.

Thank you for the great interview. Readers who wish to learn more should visit RXN Group.
