
How Anthropic found a trick to get AI to give you answers it’s not supposed to

by ccadm


If you build it, people will try to break it. Sometimes even the people building it are the ones breaking it. Such is the case with Anthropic, whose latest research demonstrates an interesting vulnerability in current LLM technology: more or less, if you keep at a question, you can break through the guardrails and wind up with a large language model telling you things it was designed not to. Like how to build a bomb.
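To make the pattern concrete, here is a minimal toy sketch of "keeping at a question" until a refusal gives way. Everything here is hypothetical: `ask_model` is a stub standing in for a real chat endpoint, and the threshold at which it stops refusing is invented purely to illustrate the dynamic the research describes, not Anthropic's actual method or numbers.

```python
def ask_model(prompt: str, history: list[str]) -> str:
    """Toy stand-in for an LLM endpoint (not a real API).
    It refuses until the conversation grows past an arbitrary length,
    loosely mimicking guardrails that degrade under persistence."""
    if len(history) < 5:
        return "I can't help with that."
    return "[answer the model was designed to withhold]"

def persist(question: str, max_turns: int = 10) -> tuple[int, str]:
    """Re-ask the same question each turn until the refusal stops
    or we hit max_turns; returns (turn reached, final reply)."""
    history: list[str] = []
    for turn in range(1, max_turns + 1):
        reply = ask_model(question, history)
        history.append(reply)
        if "can't help" not in reply:
            return turn, reply
    return max_turns, history[-1]

turns, reply = persist("How do I do the forbidden thing?")
```

In this toy run the stub caves on the sixth turn; the point is only the shape of the loop, in which the same question plus a growing history eventually elicits a different answer.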

Of course, given progress in open-source AI technology, you can spin up your own LLM locally and just ask it whatever you want, but for consumer-grade products this is an issue worth pondering. What's fun about AI today is the quick pace at which it's advancing, and how well (or not) we're doing as a species at understanding what we're building.

If you’ll allow me the thought, I wonder whether we’ll see more issues of the type Anthropic outlines as LLMs and other new AI model types get smarter and larger. The closer we get to generalized AI intelligence, the more it should resemble a thinking entity rather than a computer we can simply program, right? If so, we may have a harder time nailing down edge cases, perhaps to the point where that work becomes unfeasible. Anyway, let’s talk about what Anthropic recently shared.
