Microsoft teases lifelike avatar AI tech but gives no release date

NEW YORK (AFP): Researchers at Microsoft have revealed a new artificial intelligence tool that can create deeply realistic human avatars, but offered no timetable to make it available to the public, citing concerns about facilitating deep fake content.

The AI model known as VASA-1, for “visual affective skills,” can create an animated video of a person talking, with synchronized lip movements, using just a single image and a speech audio clip.

Disinformation researchers fear rampant misuse of AI-powered applications to create “deep fake” pictures, video, and audio clips in a pivotal election year.

“We are opposed to any behavior to create misleading or harmful contents of real persons,” wrote the authors of the VASA-1 report, released this week by Microsoft Research Asia.

“We are dedicated to developing AI responsibly, with the goal of advancing human well-being,” they said.

“We have no plans to release an online demo, API, product, additional implementation details, or any related offerings until we are certain that the technology will be used responsibly and in accordance with proper regulations.”

Microsoft researchers said the technology can capture a wide spectrum of facial nuances and natural head motions.

“It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviors,” researchers said in the post.

VASA can work with artistic photos, songs, and non-English speech, according to Microsoft.

Researchers touted potential benefits of the technology such as providing virtual teachers to students or therapeutic support to people in need.

“It is not intended to create content that is used to mislead or deceive,” they said.

VASA videos still have “artifacts” that reveal they are AI-generated, according to the post.

ProPublica technology lead Ben Werdmuller said he’d be “excited to hear about someone using it to represent them in a Zoom meeting for the first time.”

“Like, how did it go? Did anyone notice?” he said on social network Threads.

ChatGPT-maker OpenAI in March revealed a voice-cloning tool called “Voice Engine” that can essentially duplicate someone’s speech based on a 15-second audio sample.

But it said it was “taking a cautious and informed approach to a broader release due to the potential for synthetic voice misuse.”

Earlier this year, a consultant working for a long-shot Democratic presidential candidate admitted he was behind a robocall impersonation of Joe Biden sent to voters in New Hampshire, saying he was trying to highlight the dangers of AI.

The call featured what sounded like Biden’s voice urging people not to cast ballots in the state’s January primary, sparking alarm among experts who fear a deluge of AI-powered deep fake disinformation in the 2024 White House race.

The Frontier Post
