
This Week in AI: OpenAI moves away from safety



Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

By the way, TechCrunch plans to launch an AI newsletter soon. Stay tuned. In the meantime, we’re upping the cadence of our semiregular AI column, which was previously twice a month (or so), to weekly — so be on the lookout for more editions.

This week in AI, OpenAI once again dominated the news cycle (despite Google’s best efforts) with a product launch, but also, with some palace intrigue. The company unveiled GPT-4o, its most capable generative model yet, and just days later effectively disbanded a team working on the problem of developing controls to prevent “superintelligent” AI systems from going rogue.

The dismantling of the team generated a lot of headlines, predictably. Reporting — including ours — suggests that OpenAI deprioritized the team’s safety research in favor of launching new products like the aforementioned GPT-4o, ultimately leading to the resignation of the team’s two co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.

Superintelligent AI is more theoretical than real at this point; it’s not clear when — or whether — the tech industry will achieve the breakthroughs necessary to create AI capable of accomplishing any task a human can. But the coverage from this week would seem to confirm one thing: that OpenAI’s leadership — in particular CEO Sam Altman — has increasingly chosen to prioritize products over safeguards.

Altman reportedly “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first dev conference last November. And he’s said to have been critical of Helen Toner, director at Georgetown’s Center for Security and Emerging Technology and a former member of OpenAI’s board, over a paper she co-authored that cast OpenAI’s approach to safety in a critical light — to the point where he attempted to push her off the board.

Over the past year or so, OpenAI’s let its chatbot store fill up with spam and (allegedly) scraped data from YouTube against the platform’s terms of service while voicing ambitions to let its AI generate depictions of porn and gore. Certainly, safety seems to have taken a back seat at the company — and a growing number of OpenAI safety researchers have come to the conclusion that their work would be better supported elsewhere.

Here are some other AI stories of note from the past few days:

  • OpenAI + Reddit: In more OpenAI news, the company reached an agreement with Reddit to use the social site’s data for AI model training. Wall Street welcomed the deal with open arms — but Reddit users may not be so pleased.
  • Google’s AI: Google hosted its annual I/O developer conference this week, during which it debuted a ton of AI products. We rounded them up here, from the video-generating Veo to AI-organized results in Google Search to upgrades to Google’s Gemini chatbot apps.
  • Anthropic hires Krieger: Mike Krieger, one of the co-founders of Instagram and, more recently, the co-founder of personalized news app Artifact (which TechCrunch corporate parent Yahoo recently acquired), is joining Anthropic as the company’s first chief product officer. He’ll oversee both the company’s consumer and enterprise efforts.
  • AI for kids: Anthropic announced last week that it would begin allowing developers to create kid-focused apps and tools built on its AI models — so long as they follow certain rules. Notably, rivals like Google disallow their AI from being built into apps aimed at younger users.
  • AI film festival: AI startup Runway held its second-ever AI film festival earlier this month. The takeaway? Some of the more powerful moments in the showcase came not from the AI, but from the more human elements.

More machine learnings

AI safety is obviously top of mind this week with the OpenAI departures, but Google DeepMind is plowing onwards with a new “Frontier Safety Framework.” Basically, it’s the organization’s strategy for identifying and hopefully preventing any runaway capabilities — it doesn’t have to be AGI, it could be a malware generator gone mad or the like.

Image Credits: Google DeepMind

The framework has three steps:

  • Identify potentially harmful capabilities in a model by simulating its paths of development.
  • Evaluate models regularly to detect when they have reached known “critical capability levels.”
  • Apply a mitigation plan to prevent exfiltration (by the model itself or by another party) or problematic deployment.

There’s more detail in DeepMind’s announcement. It may sound like an obvious series of actions, but it’s important to formalize them, or everyone is just kind of winging it. That’s how you get the bad AI.
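To make that a bit more concrete, here’s a minimal Python sketch of what an evaluate-then-mitigate loop of this kind might look like. The capability names, thresholds and mitigation hooks are illustrative assumptions on my part, not DeepMind’s actual tooling or API.

```python
# Hypothetical sketch of an evaluate-then-mitigate loop in the spirit of the
# Frontier Safety Framework. Capability names, thresholds and mitigation
# callbacks are all invented for illustration.
from dataclasses import dataclass
from typing import Callable


@dataclass
class CriticalCapabilityLevel:
    name: str                              # e.g. "autonomous malware generation"
    threshold: float                       # score at which mitigations kick in
    evaluate: Callable[[object], float]    # benchmark returning a 0-1 score
    mitigate: Callable[[object], None]     # e.g. restrict deployment, lock down weights


def periodic_safety_review(model, ccls: list[CriticalCapabilityLevel]) -> list[str]:
    """Run every critical-capability evaluation and apply mitigations as needed."""
    tripped = []
    for ccl in ccls:
        score = ccl.evaluate(model)        # step 2: regular evaluation
        if score >= ccl.threshold:
            ccl.mitigate(model)            # step 3: apply the mitigation plan
            tripped.append(ccl.name)
    return tripped
```

The point of formalizing it is in that last line of the loop: the checks run on a schedule and trigger a predefined plan, rather than whenever someone remembers to worry.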

A rather different risk has been identified by Cambridge researchers, who are rightly concerned about the proliferation of chatbots trained on a dead person’s data to provide a superficial simulacrum of that person. You may (as I do) find the whole concept somewhat abhorrent, but it could be used in grief management and other scenarios if we are careful. The problem is we are not being careful.

Image Credits: Cambridge University / T. Hollanek

“This area of AI is an ethical minefield,” said lead researcher Katarzyna Nowaczyk-Basińska. “We need to start thinking now about how we mitigate the social and psychological risks of digital immortality, because the technology is already here.” In a paper published in Philosophy & Technology, the team identifies numerous potential scams and both good and bad outcomes, and discusses the concept generally (including fake services). Black Mirror predicts the future once again!

In less creepy applications of AI, physicists at MIT are looking at a useful (to them) tool for predicting a physical system’s phase or state, normally a statistical task that can grow onerous with more complex systems. But train up a machine learning model on the right data and ground it with some known material characteristics of a system, and you have yourself a considerably more efficient way to go about it. Just another example of how ML is finding niches even in advanced science.
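For the curious, the general shape of that approach might look something like the sketch below: train a small classifier on labeled configurations, then ask it to predict the phase of new samples. The features and data here are synthetic stand-ins, not the MIT team’s physics-grounded setup.

```python
# Illustrative sketch only: a generic ML phase classifier on synthetic data.
# The "features" (mean magnetization, an energy proxy) and the labels are
# invented to show the shape of the approach, not the actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def synthetic_samples(n: int, ordered: bool) -> np.ndarray:
    # Ordered phase: high magnetization, low energy proxy; disordered: the reverse.
    mag = rng.normal(0.9 if ordered else 0.1, 0.05, n)
    energy = rng.normal(-1.8 if ordered else -0.5, 0.1, n)
    return np.column_stack([mag, energy])

X = np.vstack([synthetic_samples(500, True), synthetic_samples(500, False)])
y = np.array([1] * 500 + [0] * 500)   # 1 = ordered, 0 = disordered

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[0.5, -1.1]]))  # phase probabilities for a new sample
```

Once trained, the model answers in a single forward pass what would otherwise take a pile of statistical sampling — which is the efficiency win the MIT group is after.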

Over at CU Boulder, they’re talking about how AI can be used in disaster management. The tech may be useful for quickly predicting where resources will be needed, mapping damage and even helping train responders, but people are (understandably) hesitant to apply it in life-and-death scenarios.

Attendees at the workshop.
Image Credits: CU Boulder

Professor Amir Behzadan is trying to move the ball forward on that, saying “Human-centered AI leads to more effective disaster response and recovery practices by promoting collaboration, understanding and inclusivity among team members, survivors and stakeholders.” They’re still at the workshop phase, but it’s important to think deeply about this stuff before trying to, say, automate aid distribution after a hurricane.

Lastly, some interesting work out of Disney Research on diversifying the output of diffusion image generation models, which can produce similar results over and over for some prompts. Their solution? “Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.” I simply could not put it better myself.

Image Credits: Disney Research

The result is a much wider diversity in angles, settings, and general look in the image outputs. Sometimes you want this, sometimes you don’t, but it’s nice to have the option.
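If you prefer code to paper-abstract prose, here’s a rough sketch of what that annealing idea might look like: perturb the conditioning vector with Gaussian noise whose scale shrinks monotonically over the denoising steps. The schedule shape and scale below are assumptions for illustration, not the paper’s actual formulation.

```python
# Rough sketch of conditioning-signal annealing: noise the conditioning vector
# heavily early in sampling (more diversity), and barely at all by the end
# (tighter alignment with the prompt). Schedule and scale are assumptions.
import numpy as np

def anneal_conditioning(cond: np.ndarray, step: int, total_steps: int,
                        init_scale: float = 0.5) -> np.ndarray:
    # Linearly decaying noise scale across the denoising trajectory.
    scale = init_scale * (1.0 - step / max(total_steps - 1, 1))
    return cond + np.random.normal(0.0, scale, size=cond.shape)

# During sampling, each denoising step would use the annealed conditioning:
# for t in range(total_steps):
#     noisy_cond = anneal_conditioning(cond_embedding, t, total_steps)
#     x = denoise_step(model, x, t, noisy_cond)   # hypothetical denoiser call
```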


