RSA Conference 2024: Regulations and AI Set to Clash


by Elliot Volkman

May 22, 2024

This year at RSA Conference 2024, there were nearly endless presentations and panels tied to the latest threats, techniques, and risks aimed at organizations. However, there was one topic that threaded it all together: AI.

While not every session at the conference uttered the word, and AI has become a catchall phrase that primarily includes machine learning (ML), there is no ignoring the weight of this technology on our future.

During the 33rd annual event, around 41,000 attendees, 650 speakers across 425 sessions, and 600 exhibitors converged in San Francisco for a week, and one could safely assume that the word AI was uttered in that bubble more than anywhere else at the time. Of those 425 sessions, 136 were specifically focused on addressing challenges associated with AI, and among those, regulation is seen as having the most impact in the short term.

For some, AI and machine learning pose the risk of taking over jobs; for others, they enable security teams to move faster and analyze data in seconds. However, if the walls of RSAC could talk, the conversation would be all about risk and risk reduction.

Around one in five vendors spread across the three expo halls proudly exclaimed their new AI or ML offerings, whether fighting it, managing it, or using it, and all of it is about to be hit by a tsunami of regulation.

Dozens of state bills, international laws, new frameworks, and the recently passed EU AI Act are bearing down on the world of cybersecurity. American federal regulation, however, is likely still some years away, though several bills are in the works.

Untangling Incoming AI Regulation

There is a lot to unpack regarding the bills and legislation in motion that are designed to restrict, narrow down, and generally put guardrails around the new wave of AI-like technology. At RSAC, it became clear that security and GRC teams are growing concerned about the amount of overlap this may cause as bills become law.

For example, a company operating in California must comply with the CCPA, but if that same company works in the EU, it also needs to align with GDPR. While there is overlap and some controls can be applied to support both, the incoming situation will look more like a rat’s nest than just one or two regulations with overlap.

Speaking of the CCPA, Act 15 already covers provisions on the use of automated decision-making tools. California regulators also recently released draft rules on these provisions, governing consumer notice, access, and opt-out rights with respect to automated decision-making technology, which will likely overlap with the use of AI.

Beyond California, over 40 state AI bills have been introduced since last year, and both Connecticut and Texas have already adopted statutes. Most recently, Tennessee became the first state to protect artists against AI with the Ensuring Likeness Voice and Image Security (ELVIS) Act. While the ELVIS Act focuses on consumer and artist protections, the Connecticut and Texas statutes are meant to ensure AI systems do not create unlawful discrimination.

On the federal level, there are several acts in flight such as the SAFE Innovation AI Framework, REAL Political Advertisements Act, Stop Spying Bosses Act, No FAKES Act, and the AI Research, Innovation, and Accountability Act.

EU AI Act Will Create Waves and Influence Future Regulation

The EU AI Act has officially been signed into law and is seen as the first international guidance that will shape the future of AI regulation. According to many at RSAC, there is currently a privacy and regulation vacuum associated with AI, which means the EU AI Act will fill a wide gap.

A common theme at RSAC this year was that although the act is AI-specific, it really builds upon the foundation GDPR has provided. In that regard, compliance should feel familiar to companies today.

What is drastically different, though, are the related repercussions. While GDPR fines can reach €20 million or 4% of a company's annual global revenue, the EU AI Act's penalties can reach 7% of a company's revenue. To put this into context, however, Anu Talus, Chair of the EDPB, said this pales in comparison to competition policy fines, which sit at 10%.
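To make the relative ceilings concrete, here is a minimal sketch comparing the maximum exposure under each regime for a fictional company. The €1 billion revenue figure is an assumption for illustration; the percentages and the €20 million GDPR floor are as stated above.

```python
def fine_ceiling(annual_revenue_eur: float, flat_cap_eur: float, pct: float) -> float:
    """Return the higher of a flat cap or a percentage of global annual revenue,
    mirroring the GDPR-style 'whichever is higher' structure."""
    return max(flat_cap_eur, annual_revenue_eur * pct)

revenue = 1_000_000_000  # fictional €1B in global annual revenue

gdpr_cap = fine_ceiling(revenue, 20_000_000, 0.04)  # GDPR: €20M or 4%
ai_act_cap = revenue * 0.07                         # EU AI Act: up to 7%
competition_cap = revenue * 0.10                    # competition policy: 10%

print(f"GDPR ceiling:        €{gdpr_cap:,.0f}")         # €40,000,000
print(f"EU AI Act ceiling:   €{ai_act_cap:,.0f}")       # €70,000,000
print(f"Competition ceiling: €{competition_cap:,.0f}")  # €100,000,000
```

At this revenue level, the AI Act's 7% tier nearly doubles the GDPR exposure, which is the gap Talus was putting into context.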

While there are clear guidelines for fines (aka the stick), the future guidance on implementation could still offer rewards or incentives for compliance. It’s also worth noting that Talus said that while the fines of up to 7% of revenue are designed to be a deterrent, they are mapped to the most egregious rule breaking, such as companies that use AI for social scoring.

If you are familiar with the show Black Mirror, the episode "Nosedive," featuring Bryce Dallas Howard, paints a picture of what social scoring could look like.

And while the repercussions of non-compliance are locked in place, implementation of the EU AI Act is still forthcoming. In anticipation of further guidance, Emma Redmond, OpenAI’s Associate General Counsel, Head of Privacy and Data Protection, recommends zooming out and looking at the big picture for now. At OpenAI, they are taking a phased approach to ensure there are no missteps.

Redmond continued by saying that because OpenAI originated as a research lab, its layered approach as an AI platform requires it to deploy responsibly. She recommends a Privacy by Design philosophy: building privacy and compliance in from the start rather than bolting them on later. In the language of buzzwords, this aligns with the concept of shift-left compliance.

Frameworks: Build vs. Adopt

While the EU AI Act will influence future regulation globally, right now organizations are weighing the options between building and adopting frameworks. According to J. Trevor Hughes, the President and CEO of IAPP, there are now dozens if not hundreds of frameworks being developed around AI governance.

However, Hughes suggests that you don’t have to build something new when you’ve already got operational programs that work. For example, GRC and cybersecurity teams already have impact assessments and can expand on this with fairness and bias impact assessments in relation to AI.

He also recommends engaging third parties that specialize in auditing and similar external reviews, and pointed to Uber as an example of a company heading in this direction.

Ruby Zefo, Uber’s Chief Privacy Officer and AGC of Privacy and Cybersecurity, built on this point, noting that Uber invested in a full third-party civil rights and AI review last year.

Continuous GRC Scales to Untangle Overlapping Regulation

Outside of the sessions, Drata has also prepared for the incoming wave of AI-related risks and regulations poised to impact the world of GRC. While AI itself was a common discussion point at our booth, passersby were curious about how they could balance using AI, defending against AI threats, and, of course, untangling incoming regulation.

To scale alongside AI and ML, we’ve heard many CISOs exclaim that the only way to balance overlapping regulation is through automated or continuous GRC.

As an aside, our team crafted an experience built into our space on the expo floor to represent how chaotic evidence collection can be for this use case (watch our recap video above).

While we found a way to make manual evidence collection fun, our visitors made it clear that you need the proper process and technique to collect as much evidence as possible. Even then, it was impossible to collect it all without technology, and manual collection certainly isn’t scalable.

Stay tuned to our blog as we continue highlighting the different aspects and conversations we’ve held during RSA Conference 2024.
