25-0033
Proponent: Alexander Oldham
Initiative Received: 12/01/2025
12/03/2025
I support the principle of accountability in emerging technologies, including artificial intelligence. I do not, however, support regulatory frameworks that substitute centralized government control for innovation, civic responsibility, and constitutional restraint. After reviewing Initiative 25-0033, I am deeply concerned that this measure crosses that line.
The Act does not merely regulate AI use after demonstrable harm. It instead establishes a system of prior restraint, compelled corporate restructuring, expansion pre-approval, and punitive revenue-based enforcement that fundamentally alters the relationship between government and technological innovation. The creation of a powerful AI Accountability Commission with broad enforcement, investigative, approval, and financial penalty authority effectively positions the state as the chief architect of AI development rather than its regulator.
This model assumes that innovation is inherently dangerous and must be licensed in advance rather than evaluated after harm occurs. History teaches us that this approach does not create safety; it creates stagnation and the quiet migration of innovation to more permissive jurisdictions. California’s own economic success has long rested on the opposite principle: that innovation flourishes when individuals are free to take risks and are held accountable only when those risks cause measurable harm.
While the Act rightly acknowledges concerns over AI concentration, civil liberties, and public safety, its solution substitutes centralized moral arbitration for competition, transparency, and civic choice. The Commission is empowered to decide what constitutes “benefit,” who may expand, and which innovations may proceed. That authority is neither market-based nor democratically iterative; it is technocratic and permanent.
Of equal concern is the Act’s punitive structure. Revenue-based penalties reaching up to 25% of global income, combined with executive criminal exposure and mandatory divestiture provisions, introduce extreme liability without clearly defined causal thresholds. This does not encourage responsible innovation. It encourages risk-avoidance, offshoring, and secrecy.
More broadly, this measure reflects a troubling philosophical shift: a growing belief that government must replace citizen responsibility rather than reinforce it. Independent thinking, informed consumer decision-making, and public technical literacy are being displaced by paternalistic enforcement structures. No commission can substitute for an engaged and educated electorate. Safety cannot be engineered solely through bureaucracy.
I do not reject regulation. I reject the wholesale substitution of governance for accountability, and of permission-based innovation for consequence-based oversight. If this Act is to proceed responsibly, I suggest it must:
1. Limit government intervention to post-harm accountability, not pre-development licensing;
2. Tie penalties to demonstrated, proportionate damage, not speculative risk;
3. Preserve open market competition as the primary safeguard against misuse; and
4. Reinforce civic responsibility and transparency, rather than replacing them with administrative control.
Unchecked technology is dangerous. So is unchecked government. A free society must be vigilant against both. For these reasons, I urge substantial reconsideration, narrowing, and constitutional review of Initiative 25-0033 before it is advanced further.
12/31/2025
Attached Public Comment re CA Public Benefit AI Accountability Act (No. 25-0033) on behalf of OpenAI.