
Watch it and weep (or smile): Synthesia’s AI video avatars now feature emotions

by ccadm


Generative AI has captured the public imagination with a leap into creating elaborate, plausibly real text and imagery out of verbal prompts. But the catch — and there is often a catch — is that the results are often far from perfect when you look a little closer.

People point out strange fingers, floor tiles that slip away, and math problems that are exactly that: problems, because sometimes they don't add up.

Now, Synthesia — one of the ambitious AI startups working in video, specifically custom avatars designed for business users to create promotional, training and other enterprise video content — is releasing an update that it hopes will help it leapfrog over some of the challenges in its particular field. Its latest version features avatars — built based on actual humans captured in its studio — which provide more emotion, better lip tracking and what it says are more expressive, natural human movements when they are fed text to generate videos.

The release is coming on the heels of some impressive progress for the company to date. Unlike other generative AI players like OpenAI, which has built a two-pronged strategy — raising huge public awareness with consumer tools like ChatGPT while also building out a B2B offering, with its APIs used by independent developers as well as giant enterprises — Synthesia is leaning into the approach that some other prominent AI startups are taking.

Similar to how Perplexity is focused on really nailing generative AI search, Synthesia is focused on really nailing how to build the most humanlike generative video avatars possible. More specifically, it is looking to do this only for the business market and use cases like training and marketing.

That focus has helped Synthesia stand out in what has become a very crowded AI market, one that runs the risk of getting commoditized as hype settles down into longer-term concerns like ARR, unit economics and the operational costs attached to AI implementations.

Synthesia describes its new Expressive Avatars, the version being released today, as a first of their kind: “The world’s first avatars fully generated with AI.” Built on large, pre-trained models, Synthesia says its breakthrough has been in how they are combined to achieve multimodal distributions that more closely mimic how actual humans speak.

These are generated on the fly, Synthesia says, which is meant to be closer to the experience we go through when we speak or react in life, and stands in contrast to how a lot of AI video tools based around avatars work today: typically these are actually many pieces of video that get quickly stitched together to create facial responses that line up, more or less, with the scripts that are fed into them. The aim is to appear less robotic, and more lifelike.

Previous version:

New version:

As you can see in the two examples here, one from Synthesia's older version and the other being released today, there is still a ways to go in development, something CEO Victor Riparbelli himself admits.

"Of course it's not 100% there yet, but it will be very, very soon, by the end of the year. It'll be so mind-blowing," he told TechCrunch. "I think you can also see that the AI part of this is very subtle. With humans there's so much information in the tiniest details, the tiniest movements of our facial muscles. I think we could never sit down and describe, 'yes, you smile like this when you're happy, but that is fake, right?' That is such a complex thing to ever describe for humans, but it can be [captured in] deep learning networks. They're actually able to figure out the pattern and then replicate it in a predictable way." The next thing it's working on, he added, is hands.

“Hands are like, super hard,” he added.

The focus on B2B also helps Synthesia anchor its messaging and product more on “safe” AI usage. That is essential especially with the huge concern today over deepfakes and using AI for malicious purposes like misinformation and fraud. Even so, Synthesia hasn’t managed to avoid controversy on that front altogether. As we’ve pointed out before, Synthesia’s tech has previously been misused to produce propaganda in Venezuela and false news reports promoted by pro-China social media accounts.

The company today noted that it has taken further steps to try to lock down that usage. Last month, it updated its policies, it said, “to restrict the type of content people can make, investing in the early detection of bad faith actors, increasing the teams that work on AI safety, and experimenting with content credentials technologies such as C2PA.”

Despite those challenges, the company has continued to grow.

Synthesia was last valued at $1 billion when it raised $90 million. Notably, that fundraise was almost a year ago, in June 2023.

Riparbelli (pictured above, right, with other co-founders Steffen Tjerrild, Professor Lourdes Agapito, Professor Matthias Niessner) said in an interview earlier this month that there are currently no plans to raise more, although that doesn’t really answer the question of whether Synthesia is getting proactively approached. (Note: we are very excited to have the actual human Riparbelli speaking at an event of ours in London in May, where I’m definitely going to ask about this again. Please come if you’re in town.)

What we do know for sure is that AI costs a lot of money to build and run, and Synthesia has been building and running a lot.

Prior to the launch of today's version, some 200,000 people had created more than 18 million video presentations across some 130 languages using Synthesia's 225 legacy avatars, the company said. (It does not break out how many users are on its paid tiers, but there are a lot of big-name customers, including Zoom, the BBC, DuPont and more, and enterprises do pay.) The startup's hope, of course, is that with the new version getting pushed out today, those numbers will go up even more.
