The future of AI governance, with SB 1047 architect Sen. Scott Wiener | TechCrunch Disrupt 2024

03 Nov 2024

EU and US Approaches to AI Regulation

  • The regulation of artificial intelligence (AI) is a complex and evolving field, with different approaches being taken in various parts of the world, including the European Union (EU) and the United States (16s).
  • The EU has recently passed the EU AI Act, which establishes a risk-based pyramid for regulating AI technologies, defining categories of unacceptable and high risk and setting requirements for high-risk systems (1m34s).
  • The EU is also developing a code of practice for general-purpose AI systems under the AI Act (1m54s).
  • In contrast to the EU, the US is often characterized as the "wild west" of AI regulation, but this is an oversimplification, as there are many existing regulations that apply to AI technologies (2m1s).
  • US agencies and departments, such as the Federal Trade Commission (FTC), have been working to articulate and clarify the application of existing regulations to AI technologies (2m21s).
  • The FTC has made it clear that AI technologies must comply with consumer protection laws and anti-discrimination laws, and that companies will be held liable for non-compliance (2m37s).
  • At the state level, there are also many regulations and laws related to AI, with dozens of states having laws on the books, including 17 new AI-related laws passed in California in the past couple of months (3m7s).

The Impact and Reception of AI Laws

  • The impact and reception of these laws are still being assessed, but they are seen as a necessary step to mitigate the risks associated with AI technologies (3m40s).
  • The panel discussion highlights the need for ongoing evaluation and refinement of AI regulations to keep pace with the rapidly evolving technology (3m27s).
  • California has a history of deepfake legislation, having passed the first deepfakes law five or six years ago and updated it this year; however, one of this year's bills has been put on hold after a court ruled that Elon Musk's posting of deepfakes was protected by the First Amendment (4m30s).

AI-Specific Regulation and Existing Laws

  • A bill passed this year criminalizes deepfake pornography in a narrowly targeted way, addressing AI-generated pornographic images that harm many young women (5m4s).
  • There is little AI-specific regulation; most applicable laws are general ones covering areas such as consumer protection or employment, and it will take time for courts to work out how they apply to AI (5m24s).
  • Distinguishing AI-specific regulation from general disinformation laws is a useful way to frame the landscape (6m3s).
  • Existing liability law already allows individuals to sue the developer of a model that enables harm, even if the model is not a large language model, and the developer may be found liable (6m41s).

President Biden's Executive Order on AI

  • The federal level has seen meaningful executive action, including President Biden's executive order on artificial intelligence, which aims to mitigate risks and harness the potential of AI (7m33s).
  • The executive order focuses on three areas: spurring innovation, using authorities to mitigate risks, and ensuring accountability, with a focus on socially beneficial use cases like drug discovery and development, carbon capture and storage, and individualized education (8m1s).

The US AI Safety Institute

  • The US AI Safety Institute focuses on advancing the science of AI safety by understanding the capabilities and risks of AI models and working to mitigate those risks, with the goal of enabling innovation through safety and trust (10m11s).
  • The Institute's work is divided into three primary pillars: testing and evaluation, issuing guidance, and building the ecosystem of safety, with the aim of enabling a virtuous cycle of safety, trust, adoption, and innovation (10m37s).
  • The testing and evaluation pillar involves working with developers of advanced models to conduct pre- and post-deployment evaluations, focusing on domains related to public safety and national security, with partnerships announced with OpenAI and Anthropic (10m44s).
  • The guidance pillar involves issuing voluntary guidance on testing, evaluation, and best practices for content authentication, synthetic content detection, and other areas, with the goal of broad adoption across industry, civil society, and academia (11m18s).
  • The ecosystem pillar involves building a network of safety experts and leveraging the expertise of companies and the national security establishment to advance safety and inform industry and academia (11m58s).

The Broader Effort to Mitigate AI Risks

  • The Institute's work is part of a broader effort to mitigate the risks and harness the potential of AI, with over 100 actions completed since last year to address the issue (9m37s).
  • The Institute's partnerships with industry are voluntary, with the goal of working together to understand the capabilities and risks of AI models and develop effective safeguards (9m19s).
  • A network of AI safety institutes is being launched globally to build on higher-level commitments and advance safety while enabling innovation and avoiding a patchwork of regulations (12m31s).
  • The AI Safety Institute was created by President Biden's executive order, issued a year ago, and there are concerns that it may not survive if the executive order is repealed (13m10s).
  • An open letter signed by major tech companies, including Google, Meta, OpenAI, and Anthropic, urged Congress to make the AI Safety Institute permanent (13m21s).

The Role of NIST and Bipartisan Support for AI Safety

  • The US AI Safety Institute's work is core to the mission of the National Institute of Standards and Technology (NIST), which has long done testing and evaluation of biometrics and issued voluntary guidance (14m11s).
  • Safety is a bipartisan issue: legislation sponsored by representatives and senators on both sides of the aisle has passed out of committee in both the House and Senate (14m35s).
  • There is a shared interest in mitigating the potential downsides of technology and ensuring public safety and national security (14m58s).

SB 1047: A Case Study in AI Legislation

  • The challenge of getting bills enacted is illustrated by SB 1047, a comprehensive AI bill that was ultimately vetoed despite being well crafted and despite significant changes made in response to feedback from industry, academics, and others (15m59s).
  • SB 1047 was developed with supporters, critics, and various stakeholders, including GitHub and Anthropic, leading to significant changes and improvements (16m8s).
  • The bill aimed to ensure that the largest labs perform safety testing on their AI models and retain the ability to shut them down if necessary, to protect whistleblowers, and to create a public cloud for accessible computing (16m33s).
  • The bill took on a life of its own, with opponents portraying it as "mega regulation" and supporters embracing its specifics, which contributed to a healthy conversation on AI safety (17m4s).
  • The bill passed with strong majorities in both houses of the legislature, with an overwhelming majority of Democrats and some Republicans voting in favor, before being vetoed by the governor (17m55s).
  • The importance of addressing AI safety and risks was highlighted, as the technology has the potential to make the world a better place but also poses risks if not managed properly (18m21s).
  • The premise of SB 1047 is to foster innovation while trying to understand and minimize risks associated with AI, learning from past mistakes in data privacy and social media (18m52s).
  • Despite the veto, efforts to tackle AI safety issues will continue, with the governor's working group and potential follow-up bills in the future (19m24s).

Balancing Innovation and Risk in AI

  • Meetings are being held with opponents and critics of the bill to address concerns and work towards a constructive solution (20m11s).
  • The goal is to find a balance between promoting innovation and addressing the risks associated with AI, rather than simply letting the technology develop without regulation (19m0s).
  • The process of creating the bill was transparent from the beginning, with a preliminary outline published and promoted five months before the bill's introduction, allowing input from various stakeholders, including investors, startups, and large tech companies (20m39s).
  • The initial version of the bill was not expected to pass as is, and the engagement process helped set the stage for future efforts, potentially bringing more people together (21m5s).
  • Large labs have acknowledged the risks associated with AI and have expressed a desire to test for them, with some companies, such as OpenAI and Anthropic, making their models available to the federal government (21m37s).

Industry Response and the EU AI Act

  • The EU's AI Act has been passed and is in effect, but its reception has been mixed despite industry input, highlighting the challenges of creating regulations that satisfy all parties (22m28s).
  • Over 100 companies have signed the EU AI Pact, committing to develop governance frameworks and identify high-risk systems, signaling a desire to be perceived as trustworthy (23m32s).
  • SB 1047, by contrast, was notable for its deliberately light-touch approach (24m18s).
  • The experience with SB 1047 and the EU's AI Act suggests that finding a regulatory solution that satisfies all parties may be difficult, but it is not impossible, and efforts to create effective regulations will continue (23m23s).

Challenges in Regulating Frontier AI Models

  • Lawmakers face challenges in regulating AI, particularly frontier AI models: they cannot regulate every single AI model, since many systems are benign and low-risk, even while acknowledging that small models can also cause harm (24m34s).
  • There is a tension between being proactive in addressing potential risks and being overly reactive, as well as a lack of scientific evidence to support proactive measures (25m20s).

Perspectives from the Tech Sector

  • Companies prefer a clear and consistent regulatory system at the federal level, rather than a patchwork system where every state has different regulations (25m40s).
  • Representing San Francisco, a hub for the tech sector, has provided insight into the diversity of thought within the tech community, with varying opinions on AI regulation (26m9s).
  • The tech community is divided on AI safety, with some individuals being labeled as "doomers" for supporting regulation, while others are more libertarian, but most people are open to the idea of regulation and are willing to debate its merits (27m1s).

Federal vs. State Regulation of AI

  • There is a debate about whether AI regulation should be handled at the federal or state level; some argue for federal action, but Congress has not enacted major tech regulation since the 1990s (27m51s).
  • California has taken the initiative to enact its own tech regulations, including a bill that aimed to regulate AI, but faced criticism and challenges in the process (28m8s).
  • The US Congress has not yet enacted a data privacy law, and the country still lacks a federal net neutrality law, which helps explain why states like California have stepped in (28m10s).

The Need for Expertise in AI Governance

  • The conversation around AI governance is important, and it's complicated due to differing opinions on regulation, with some people being against any and all regulation at any level (28m32s).
  • The government response to AI needs to be iterative, and having expertise and knowledge inside all levels of government is crucial to respond to the evolving technology (28m48s).
  • The executive order calls for an AI talent surge to bring people from across the spectrum into government service, and the AI Safety Institute is building a team of computer scientists and researchers to inform the federal response (28m56s).
  • Comprehensive privacy legislation is needed, a need that AI has made more acute; new challenges and opportunities will keep arising, requiring a brain trust in government to draw upon (29m34s).

Voluntary Commitments vs. Compulsory Elements

  • Some argue that compulsory elements, rather than voluntary commitments, would be better for ensuring safety, but it's a topic of debate (30m7s).
  • The chances of the US adopting something like the EU AI Act are uncertain, but efforts are being made in Congress, and the AI Safety Institute is achieving a lot through executive authority (30m48s).
  • The work being done at the AI Safety Institute, building on voluntary commitments made by companies, is enabling the advancement of safety by understanding risks and capabilities and mitigating those risks (31m10s).
