Why AI Needs Real Regulation - Roy Austin on the RegulatingAI Podcast with Sanjay Puri

 


AI is advancing at breakneck speed, transforming every aspect of our lives - from how we work to how we learn, communicate and govern. On the RegulatingAI Podcast hosted by Sanjay Puri, Roy Austin, a leading voice in civil rights, law and technology, lays out why this rapid innovation is outpacing regulation, why that creates risks for society, and why real oversight is urgently needed.

Austin, a professor at Howard University School of Law and former deputy assistant to President Obama, has spent decades confronting systemic inequities. In conversation with Puri, he warns that without effective regulation, AI development risks harming civil rights, democratic processes and the workforce, all while a small number of tech elites accumulate unprecedented power and influence.


The Pace of AI vs Regulation
“We have a technology moving faster than any we’ve seen in human history,” Austin emphasises. Unlike past innovations, AI evolves daily, scaling globally in ways laws cannot keep up with. Child safety, employment and civil rights are all at risk, yet governance remains slow and fragmented. According to Austin, the current system is reactive - regulators chase after technology rather than guiding it responsibly.

The Illusion of National Regulation
Big tech companies often call for “national regulation,” but Austin argues this is largely performative. Federal rules are slow to arrive, and in their absence, states try to fill the void. Tech companies exploit this gap strategically, lobbying against state-level laws while claiming to want national frameworks. “They are playing people for fools,” Austin says, pointing to a pattern of regulatory capture in which companies shape the rules to favor themselves.

Civil Rights and Bias in AI
Drawing on the 2014 Obama administration report on big data that he co-authored, Austin explains, “Garbage in, garbage out.” AI systems trained on biased or incomplete data replicate existing inequities. The infamous example of Amazon’s AI hiring tool - found to be biased against women and minorities - illustrates the danger. Now, with AI systems like ChatGPT reaching hundreds of millions of users globally, the potential for harm scales exponentially.

U.S. vs EU Approaches
Austin compares the risk-based EU AI Act, which emphasises transparency and human rights, with the fragmented, sector-specific approach in the U.S. He stresses that regulation does not stifle innovation; instead, it builds the trust, accountability and safety needed for AI to serve society rather than corporate profit.

Why Self-Regulation Fails
Ethics boards and oversight committees are inadequate on their own. Austin notes that such boards issue only a handful of decisions a year, while companies make millions of AI-driven decisions every day. Without internal compliance teams and independent external oversight, there is no real accountability. For now, companies prioritize growth and wealth accumulation over public safety and civil rights.

Societal and Economic Impact
From job displacement to concentrated wealth, AI’s societal impact is profound. Austin warns that tech companies must take moral responsibility for employment disruption and civil rights harms. AI affects democracy, privacy and even child safety. Without regulation, individuals and communities are left vulnerable.

Austin’s Recommendations

Federal regulation complementing state laws, not preempting them.
Internal company infrastructure for responsible AI.
Independent external oversight agencies.
Education to help humans evaluate AI outputs critically.
Inclusive decision-making centered on civil rights.

Conclusion
Roy Austin’s discussion on Sanjay Puri’s RegulatingAI Podcast makes one thing clear - AI will not regulate itself. Without thoughtful oversight, ethical frameworks and diverse leadership, the technology could widen inequities, harm individuals and concentrate power in the hands of a few. Responsible AI isn’t just a technical issue - it’s a societal imperative, and the time to act is now.

Source - Upasana Das, Knowledge Networks

Disclaimer – The details expressed in this post are from the organisations responsible for circulating this post for publication and the views are of the spokesperson. This website doesn’t endorse the details published here. Readers are urged to use their own discretion while making a decision about using this information in any way. There has been no monetary benefit to the Publisher/Editor/Website Owner for publishing this post and the Website Owner takes no responsibility for the impacts of using this information in any way.
