The Pros and Cons of Governor Newsom’s Veto of SB 1047 for Software Developers
I've been thinking about California Governor Gavin Newsom’s veto of Senate Bill 1047 (SB 1047), the proposed regulation aimed at safeguarding against the misuse of artificial intelligence. I was surprised by the veto, given the bill's popularity in the legislature, but I probably shouldn't have been. Newsom has long positioned himself as a champion of innovation and economic growth, especially in California’s tech sector, and he has historically favored policies that foster competitiveness over early-stage regulatory interventions.
Opinions about the impact of the veto abound. What I've tried to do in this post is boil them down and provide some context on what the veto might mean for software developers.
The Pros: Why Newsom’s Veto Benefits Developers
Preserves Innovation and Flexibility: To state the obvious, AI is still rapidly evolving, and SB 1047 would have introduced regulations that might have restricted the pace of innovation. The veto means that AI development can continue without adherence to potentially cumbersome bureaucratic guidelines. As a developer, you want the flexibility to experiment, iterate, and build without having to comply with rigid regulatory frameworks, especially while the technology is still in its early stages. Newsom’s decision allows developers to continue working in an environment that encourages exploration and creativity, without the constraints of audits and impact assessments for every new tool or application.
Avoids Burdensome Compliance Requirements: SB 1047 would have introduced mandatory audits, bias assessments, and regular impact reports for AI systems. Although these measures are intended to ensure ethical AI deployment, they also present a heavy administrative burden, particularly for smaller companies and startups. Meeting these compliance requirements could slow down the development process and drain resources, especially for teams with limited bandwidth. The veto alleviates this pressure, allowing developers to focus on building products and scaling their operations without dedicating time and resources to regulatory compliance.
Maintains California’s Competitive Edge: California is a global leader in AI development, and Newsom’s veto helps maintain its competitive edge by avoiding early, overly restrictive regulations that could push developers and companies to more lenient states or countries. For software developers, this means that the state will remain an attractive hub for innovation and talent. With fewer regulatory hurdles, developers in California can continue to be at the forefront of AI research, attracting investment and top talent from around the world.
Encourages Industry Self-Regulation: Newsom’s veto can be seen as a nod to the idea of industry self-regulation. Many developers and tech companies are already working to establish internal guidelines for ethical AI development, addressing issues such as bias, transparency, and accountability on their own terms. Without top-down regulation, developers have the opportunity to take the lead in defining what responsible AI looks like, potentially shaping industry standards that are both ethical and flexible enough to accommodate innovation.
The Cons: What Developers Lose with the Veto
Missed Opportunity for Ethical Frameworks: Industry self-regulation may be appealing in principle, but it lacks the enforceability of legislation. SB 1047 was intended to create a standardized ethical framework for AI development, ensuring that developers across the board would be held to a common set of principles regarding fairness, transparency, and accountability. Without these legal requirements, developers may face increased pressure from their employers to prioritize speed and profit over safety and ethics. This could lead to a race to the bottom, in which developers feel compelled to cut corners on AI safety and fairness in order to compete.
Unregulated AI Can Lead to Harm: The risks associated with unregulated AI development are not hypothetical. Bias in AI systems has led to real-world harm, from biased hiring algorithms to discriminatory loan approval systems. SB 1047 sought to address these issues by requiring developers to assess and mitigate bias in AI models. Without these legal safeguards, developers may find themselves unintentionally building systems that perpetuate inequality or violate privacy. For developers who care about the societal impact of their work, this veto represents a missed opportunity to put guardrails in place to prevent such harm.
Uncertainty in Future AI Regulation: By vetoing SB 1047, Newsom has delayed the establishment of a regulatory framework, but it is likely that AI regulation will come eventually. This leaves developers in a state of uncertainty, not knowing when or what form future regulation might take. SB 1047, though not perfect, provided clear guidelines on what was expected of developers. Now, they must continue to operate in a regulatory gray area, where the future of AI oversight is unclear. This uncertainty can make it difficult for developers to plan long-term projects or strategies, as they may need to adjust their work to comply with future regulations.
Potential for an Unequal Playing Field: The lack of standardized regulations can create an unequal playing field for developers. Larger companies with vast resources may choose to implement their own rigorous AI ethics protocols, building public trust in their systems. Meanwhile, smaller developers and startups may lack the resources to independently conduct bias audits and impact assessments, potentially leading to less responsible AI systems. Some developers may thus find themselves at a disadvantage, particularly in an industry where public perception of ethical AI is becoming increasingly important.
Conclusion: Developers Must Be Proactive
Newsom’s veto of SB 1047 presents both opportunities and challenges for developers. On one hand, the decision allows for continued innovation without the immediate burden of regulatory compliance, offering developers the freedom to experiment and iterate quickly. On the other hand, it leaves the AI landscape largely unregulated, opening the door to ethical risks and leaving developers without clear guidelines for responsible AI development.
For developers, the key takeaway from this veto is the need for proactive involvement in shaping the future of AI regulation. Although Newsom’s decision gives the industry more time to grow and innovate, it also places the responsibility on developers to self-regulate and ensure that their work is ethical and aligned with societal values. Developers should see this as an opportunity to lead the conversation on responsible AI, helping to craft future regulations that balance innovation with safety and fairness.
Ultimately, the veto is a reminder that AI is still in its early stages, and its regulation will require thoughtful, collaborative efforts between developers, policymakers, and the public. For now, the absence of SB 1047 gives developers the freedom to innovate—but with that freedom comes the responsibility to ensure that AI is built for the good of all.
Posted by John K. Waters on October 9, 2024