
Colorado Senator to Establish Safety Policies for AI Use

by ccadm



In a significant step toward safer AI use, Senator John Hickenlooper of Colorado has launched a plan to create AI auditing policies. With a focus on establishing clear standards for AI system checks, Hickenlooper has reached out to industry experts and key players for insight. The effort aims to guide the crafting of legislation that ensures AI technologies are both effective and safe.

Crafting standards in Colorado

Hickenlooper’s strategy involves a collaborative effort led by the Department of Commerce. The goal is to develop voluntary standards that would serve as a guide for independent audits of AI technologies. In a February speech, Hickenlooper stressed the importance of external oversight for the AI industry, citing the need to avoid self-regulation given the potential risks involved. 

“If we’re not careful and don’t steer AI that way, it could actually end up displacing huge numbers of workers without taking into consideration what they will do next. That’s one instance of thousands of decisions being made today that are going to have consequences for generations,” said Senator Hickenlooper.

By involving qualified third parties in auditing AI systems, the initiative seeks to ensure compliance with federal laws and safeguard against unforeseen harms. 

Questionnaire details

The senator’s office has circulated a questionnaire covering key aspects of AI system auditing, including the frequency and scope of audits, transparency requirements, compliance, and the overall auditing ecosystem.

The questionnaire, obtained by Nextgov/FCW, is designed to gather detailed information that will help define effective and practical audit standards. One focal point is how these standards could be adapted to different stages of software development, addressing the distinct challenges posed by upstream and downstream development processes.

AI auditing is the practice of having a qualified third party evaluate an AI system’s inputs and outputs to ensure high-quality, secure outcomes. Hickenlooper’s office confirmed the questionnaire’s authenticity and stated that the responses will inform future legislative initiatives as well as the regulatory framework he unveiled in early February.

One particularly thorough question concerns the scope of audits: how prospective audit criteria should be adapted for developers who operate at different points in the software architecture, chiefly in upstream versus downstream development.

The questionnaire also asks whether additional requirements, such as access to AI model training data, direct evaluations of system outputs, internal verification testing, and qualitative assessments, should be incorporated into AI auditing standards.

Through this proactive approach, Hickenlooper is laying the groundwork for a regulatory framework that promises to make AI technologies safer and more reliable. By prioritizing transparency, compliance, and the well-being of society, this initiative is a critical step forward in the responsible development and deployment of AI.


