HKS MEMOIR (33): SIMULATION OF A U.S. SENATE HEARING ON "OVERSIGHT OF AI: OPEN INNOVATION"



By: Olanrewaju-Smart Wasiu

The familiar atmosphere of a legislative hearing returned as I took my seat as the U.S. Federal Trade Commission's representative appearing before the Senate Subcommittee on Privacy, Technology, and the Law to present the FTC's stance on the regulation of artificial intelligence and open innovation. Professor Latanya Sweeney's DPI 640 simulation took me back to my days as Chief of Staff in Nigeria's House of Representatives, though this time in a different role and context.



The preparation was thorough – we had studied the U.S. Senate procedural guide, analyzed video clips of past hearings, and examined the recent AI Insight Forums convened by Majority Leader Chuck Schumer. Now I faced Chairman Kevin Wren, with Caroline Kracunas acting as the second Senator.


My position paper advocated strongly for open innovation in AI development, a stance that put me in direct debate with OpenAI's representative. Drawing from the FTC's statutory mandate under Section 5 of the FTC Act (15 U.S.C. § 45), which empowers the Commission to prevent unfair methods of competition and deceptive practices, I presented our proposed "Tiered Transparency Framework".



One of the most intense moments came during my exchange with OpenAI's representative. While they expressed support for AI regulation in principle, the core tension emerged around the disclosure of their proprietary algorithms and language models powering ChatGPT. Chairman Wren's probing question cut to the heart of the matter: How could the FTC balance its pro-market competition mandate with national security concerns about exposing AI algorithms?


I responded, "Mr. Chairman, the FTC's framework deliberately mirrors successful regulatory approaches in other sensitive industries. Just as open-source software companies and pharmaceutical companies protect their intellectual property while providing necessary safety disclosures, our proposed framework includes robust protections for truly sensitive proprietary information. America will also need the Senate to advance the proposed framework into an omnibus Act on AI governance, beyond the existing legislation guiding the adoption of AI by U.S. government agencies and the Executive Order signed by President Biden."


The OpenAI representative argued that their reluctance to open their language models wasn't about monopolistic behavior but rather about responsible AI development and national security. I countered by citing Section 6(f) of the FTC Act (15 U.S.C. § 46(f)), which already provides mechanisms for protecting confidential business information while ensuring necessary oversight.


The room buzzed with energy as my classmates representing tech giants like Google, Anthropic, and the Allen AI Institute played their roles perfectly. When OpenAI pressed on potential security risks, I emphasized how our tiered access approach would differentiate between public disclosures, regulatory oversight, and protected proprietary information. This wasn't about exposing sensitive algorithms to potential adversaries but rather about ensuring sufficient transparency for meaningful competition and consumer protection.


Takeaway: Structured dialogue between government and industry in shaping the future of AI regulation demonstrates how careful policy design can address the seemingly opposing concerns of innovation, competition, security, and the public good.
