
This Week in AI: How Kamala Harris might regulate AI



Hiya, folks, welcome to TechCrunch’s regular AI newsletter.

Last Sunday, President Joe Biden announced that he no longer plans to seek reelection, instead offering his “full endorsement” of VP Kamala Harris to become the Democratic Party’s nominee; in the days following, Harris secured support from the Democratic delegate majority.

Harris has been outspoken on tech and AI policy; should she win the presidency, what would that mean for U.S. AI regulation?

My colleague Anthony Ha penned a few words on this over the weekend. Harris and President Biden have previously said they “reject the false choice that suggests we can either protect the public or advance innovation.” In 2023, the administration secured voluntary safety commitments from leading AI companies, and Biden later issued an executive order calling for new standards around the development of AI. Harris called the voluntary commitments “an initial step toward a safer AI future with more to come,” because “in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the well-being of their customers, the safety of our communities, and the stability of our democracies.”

I also spoke with AI policy experts to get their views. For the most part, they said they’d expect policy continuity under a Harris administration, rather than the dismantling of current AI policy and the broad deregulation that Donald Trump’s camp has championed.

Lee Tiedrich, an AI consultant at the Global Partnership on Artificial Intelligence, told TechCrunch that Biden’s endorsement of Harris could “increase the chances of maintaining continuity” in U.S. AI policy. “[This is] framed by the 2023 AI executive order and also marked by multilateralism through the United Nations, the G7, the OECD and other organizations,” she said. “The executive order and related actions also call for more government oversight of AI, including through increased enforcement, greater agency AI rules and policies, a focus on safety and certain mandatory testing and disclosures for some large AI systems.”

Sarah Kreps, a professor of government at Cornell with a special interest in AI, noted that there’s a perception within certain segments of the tech industry that the Biden administration leaned too aggressively into regulation and that the AI executive order was “micromanagement overkill.” She doesn’t anticipate that Harris would roll back any of the AI safety protocols instituted under Biden, but she does wonder whether a Harris administration might take a less top-down regulatory approach to placate critics.

Krystal Kauffman, a research fellow at the Distributed AI Research Institute, agrees with Kreps and Tiedrich that Harris will most likely continue Biden’s work to address the risks associated with AI use and seek to increase transparency around AI. However, she hopes that, should Harris clinch the presidential election, she’ll cast a wider stakeholder net in formulating policy — a net that captures the data workers whose plight (poor pay, poor working conditions and mental health challenges) often goes unacknowledged.

“Harris must include the voices of data workers who help program AI in these important conversations going forward,” Kauffman said. “We cannot continue to see closed-door meetings with tech CEOs as a means to work out policy. This will absolutely take us down the wrong path if it continues.”

News

Meta releases new models: Meta this week released Llama 3.1 405B, a text-generating and -analyzing model containing 405 billion parameters. Its largest “open” model yet, Llama 3.1 405B is making its way into various Meta platforms and apps, including the Meta AI experience across Facebook, Instagram and Messenger.

Adobe refreshes Firefly: Adobe released new Firefly tools for Photoshop and Illustrator on Tuesday, offering graphic designers more ways to use the company’s in-house AI models.

Facial recognition at school: An English school has been formally reprimanded by the U.K.’s data protection regulator after it used facial-recognition technology without getting specific opt-in consent from students for processing their facial scans.

Cohere raises half a billion: Cohere, a generative AI startup co-founded by ex-Google researchers, has raised $500 million in new cash from investors, including Cisco and AMD. Unlike many of its generative AI startup rivals, Cohere customizes AI models for big enterprises — a key factor in its success.

CIA AI director interview: As part of TechCrunch’s ongoing Women in AI series, yours truly interviewed Lakshmi Raman, the director of AI at the CIA. We talked about her path to director as well as the CIA’s use of AI, and the balance that needs to be struck between embracing new tech and deploying it responsibly.

Research paper of the week

Ever heard of the transformer? It’s the AI model architecture of choice for complex reasoning tasks, powering models like OpenAI’s GPT-4o, Anthropic’s Claude and many others. But, as powerful as transformers are, they have their flaws; most notably, the self-attention at their core compares every token with every other token, so compute and memory costs grow quadratically with sequence length. And so researchers are investigating possible alternatives.

One of the more promising candidates is the state space model (SSM), which combines qualities of several older types of AI models, such as recurrent neural networks and convolutional neural networks, to create a more computationally efficient architecture capable of ingesting long sequences of data (think novels and movies). And one of the strongest incarnations of SSMs yet, Mamba-2, was detailed in a paper this month by research scientists Tri Dao (a professor at Princeton) and Albert Gu (a professor at Carnegie Mellon).

Like its predecessor Mamba, Mamba-2 can handle larger chunks of input data than transformer-based equivalents while remaining competitive, performance-wise, with transformer-based models on certain language-generation tasks. Dao and Gu imply that, should SSMs continue to improve, they’ll someday run on commodity hardware — and deliver more powerful generative AI applications than are possible with today’s transformers.
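
To make the efficiency argument concrete, below is a minimal sketch of the linear recurrence at the heart of an SSM layer, written in plain NumPy with toy dimensions and random, untrained weights. (Real Mamba-style layers make these parameters input-dependent and use a hardware-friendly parallel scan, both of which this sketch omits.)

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Run a bare-bones state space model over a sequence.

    A fixed-size hidden state h is updated once per input step, so
    compute and memory grow linearly with sequence length -- unlike
    self-attention, which compares every pair of positions.
    """
    h = np.zeros(A.shape[0])       # hidden state; size is independent of sequence length
    ys = []
    for x in xs:                   # single pass over the sequence
        h = A @ h + B @ x          # fold the new input into the state
        ys.append(C @ h)           # read an output off the state
    return np.stack(ys)

# Toy usage: a 1,000-step sequence of 16-dimensional inputs.
rng = np.random.default_rng(0)
d_in, d_state, seq_len = 16, 32, 1_000
A = rng.normal(scale=0.1, size=(d_state, d_state))   # state transition
B = rng.normal(scale=0.1, size=(d_state, d_in))      # input projection
C = rng.normal(scale=0.1, size=(d_in, d_state))      # output projection
ys = ssm_scan(A, B, C, rng.normal(size=(seq_len, d_in)))
print(ys.shape)  # (1000, 16)
```

The key property: the hidden state h stays the same size no matter how long the sequence gets, which is why SSMs can ingest very long inputs without the quadratic blowup of self-attention.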

Model of the week

In another recent architecture-related development, a team of researchers developed a new type of generative AI model they claim can match — or beat — both the strongest transformers and Mamba in terms of efficiency.

Called test-time training models (TTT models), the architecture can reason over millions of tokens, according to the researchers, potentially scaling up to billions of tokens in future, refined designs. (In generative AI, “tokens” are the bite-sized pieces of raw text and other data that models process.) Because TTT models can take in many more tokens than conventional models without overly straining hardware resources, the researchers believe they’re fit to power “next-gen” generative AI apps.
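
As a rough illustration of the core idea (a toy sketch under my own simplifying assumptions, not the paper’s exact architecture): in a TTT layer, the hidden state is itself a tiny inner model, here a linear map W, that takes one gradient step per token on a simple self-supervised loss. The “state” thus keeps learning as the sequence streams past, while its memory footprint stays constant.

```python
import numpy as np

def ttt_layer(xs, lr=0.1):
    """Toy test-time-training layer.

    The hidden "state" is the weight matrix W of an inner linear model,
    updated by one SGD step per token on a self-supervised loss (here,
    simply reconstructing the token; the paper uses learned projections).
    """
    d = xs.shape[1]
    W = np.zeros((d, d))               # hidden state = weights of the inner model
    outs = []
    for x in xs:
        err = W @ x - x                # reconstruction error on this token
        W -= lr * np.outer(err, x)     # one SGD step on 0.5 * ||W x - x||^2
        outs.append(W @ x)             # emit the inner model's updated output
    return np.stack(outs)

tokens = np.random.default_rng(1).normal(size=(64, 8))  # 64 toy 8-dim tokens
print(ttt_layer(tokens).shape)  # (64, 8)
```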

For a deeper dive into TTT models, check out our recent feature.

Grab bag

Stability AI, the generative AI startup that investors, including Napster co-founder Sean Parker, recently swooped in to save from financial ruin, has caused quite a bit of controversy over its restrictive new product terms of use and licensing policies.

Until recently, organizations making less than $1 million a year in revenue had to sign up for a “creator” license to use Stability AI’s newest open image model, Stable Diffusion 3, commercially; the license capped the total number of images they could generate at 6,000 per month. The bigger issue for many customers, though, was Stability’s restrictive fine-tuning terms, which gave (or at least appeared to give) Stability AI the right to extract fees from, and exert control over, any model trained on images generated by Stable Diffusion 3.

Stability AI’s heavy-handed approach led CivitAI, one of the largest hosts of image-generating models, to impose a temporary ban on models based on, or trained on images from, Stable Diffusion 3 while it sought legal counsel on the new license.

“The concern is that from our current understanding, this license grants Stability AI too much power over the use of not only any models fine-tuned on Stable Diffusion 3, but on any other models that include Stable Diffusion 3 images in their data sets,” CivitAI wrote in a post on its blog.

In response to the blowback, Stability AI said early this month that it would adjust the licensing terms for Stable Diffusion 3 to allow more liberal commercial use. “As long as you don’t use it for activities that are illegal, or clearly violate our license or acceptable use policy, Stability AI will never ask you to delete resulting images, fine-tunes or other derived products — even if you never pay Stability AI,” Stability clarified in a blog post.

The saga highlights the legal pitfalls that continue to plague generative AI — and, relatedly, the extent to which “open” remains subject to interpretation. Call me a pessimist, but the growing number of controversially restrictive licenses suggests to me that the AI industry won’t reach consensus — or inch toward clarity — anytime soon.




