
Government Amends AI Advisory, Eases Regulations for Industry Players

by ccadm



In a significant development aimed at fostering innovation in the artificial intelligence (AI) sector, the government has revised its advisory on releasing GenAI- and AI-based tools and features into the market. The move comes as a relief to industry players, as companies will no longer be required to seek explicit government consent before launching their products.

Industry applauds revisions

The amended advisory, issued on 15 March, notably removes the requirement for companies to comply within a strict 15-day timeframe. The change has been welcomed by industry experts who had voiced concerns that the initial regulations could hinder the pace of innovation.

Rohit Kumar, founding partner at The Quantum Hub, a public policy consulting firm, commended the government’s responsiveness to industry feedback. He emphasized that the earlier advisory could have significantly impeded speed to market and stifled the innovation ecosystem. Kumar also pointed out that the earlier requirement to submit an action-taken report had signaled that the advisory was not merely suggestive but carried the weight of a directive; its removal softens that stance.

Key revisions and continuity in requirements

Under the revised advisory, platforms and intermediaries equipped with AI and GenAI capabilities, such as Google and OpenAI, are still required to obtain government approval before offering services that enable the creation of deepfakes. Additionally, under-trial services must continue to be labeled as ‘under testing’, and platforms must secure explicit consent from users, informing them about the potential errors inherent in the technology.

The directive extends to all platforms and intermediaries utilizing large language models (LLMs) and foundation models. Moreover, services must not generate content that compromises the integrity of the electoral process or violates Indian law, underscoring concerns about misinformation and deepfakes influencing election outcomes.

Emphasis on procedural safeguards

While acknowledging the positive stride with the advisory revision, some executives stress the importance of procedural safeguards in policymaking. They advocate for a consultative approach to prevent knee-jerk reactions to incidents and ensure the formulation of well-considered regulations.

Executives, speaking on the condition of anonymity, highlighted the need for intermediaries to exercise caution during high-risk periods such as elections. They supported the government’s initiative urging intermediaries to be vigilant before releasing untested models and to label outputs appropriately.

The original advisory was prompted by various controversies, including criticism of Google’s AI platform Gemini over answers it generated about Prime Minister Modi. Instances of ‘hallucinations’ by GenAI models, exemplified by Ola’s beta GenAI platform Krutrim, have also been observed, prompting regulatory intervention.
