Animation and AI at Adobe

Animation is a superpowered medium. It can crank up conversion rates by 20%, and shorter animated videos are watched and shared at even higher rates. And no matter the use — entertainment, advertisement, or education — animation sticks with people: viewers of animated videos show 15% greater recall than viewers of live-action video.

To our animation teams, though, this isn’t a surprise. We’ve long been passionate about the unique powers of animation and film in general and how software can push them forward. Recently, the Academy of Motion Picture Arts and Sciences bestowed Scientific and Engineering Awards on Adobe Photoshop and Adobe After Effects. Both have revolutionized the creative possibilities in film — including animated hits like Pixar’s “Coco.”

Pushing the bounds of animation and creativity

And we aren’t done yet, as the industry — and Adobe in particular — continues to push the boundaries of this cutting-edge medium. That latest push: integrating artificial intelligence (AI) via Adobe Sensei — our own AI and machine-learning technology — within these creative tools to streamline and expand creative capabilities in animation. David Simons — part of the Academy Award-winning team and now an Adobe fellow — is spearheading this effort.

“As in so many other fields, incorporating AI into animation has the potential to accelerate workflows that are currently tedious and slow,” he says. “Then, it gives animators and designers a greater ability to explore and expand the possibilities of their creative space.”

Wil Li, principal scientist at Adobe and research lead for Character Animator, adds, “When you talk about the use of data and machine-learning techniques in the animation industry, you see so much potential to push creative boundaries, drive greater engagement, and reduce some of the current inefficiencies. Adobe is at the center of that conversation.”

As part of that conversation, Adobe is applying AI to some of the biggest challenges for animators. These AI solutions facilitate better content and enhanced creativity while freeing creatives up to do even more groundbreaking work at a faster pace. Here are four examples of how Adobe is combining AI and animation.

Lip Sync in Character Animator

The Creative Cloud desktop application Character Animator combines live motion capture with a multitrack recording system, controlling layered puppets drawn in Photoshop or Illustrator.

“By automating much of the animation process, we allow people who couldn’t afford to do it before, or didn’t have the skills to do it, to be able to create their own animation and do it quickly,” David says.

In 2015, the Character Animation team released a preview version of its Lip Sync feature, which automatically — and instantly — syncs an animated character’s mouth movements with voiceover audio. Two years later came the first Adobe Sensei-branded version of Lip Sync.

“Walking and talking are two of the most difficult, tedious things to do in animation,” says David. “Lip sync is one of a handful of Adobe Sensei features that apply to animation right now. It allows us to use the phonemes — the sounds that we make with our mouths — and map those to the movements of mouth shapes in our animation software. In fact, it can be done so quickly that it has enabled live character animation to happen, which was not possible before.”
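To make the idea concrete, here is a minimal sketch of the kind of phoneme-to-mouth-shape mapping David describes. It is not Adobe's implementation; the phoneme symbols, timings, and viseme names are assumptions chosen purely for illustration.

```python
# Minimal sketch of phoneme-to-viseme mapping (illustrative only; not Adobe's
# implementation). Phoneme symbols, timings, and viseme names are assumed.

# A small phoneme -> mouth-shape (viseme) lookup table.
PHONEME_TO_VISEME = {
    "AA": "open",   "AE": "open",   "AH": "open",
    "EE": "smile",  "IY": "smile",
    "OO": "round",  "UW": "round",  "OW": "round",
    "M": "closed",  "B": "closed",  "P": "closed",
    "F": "f-v",     "V": "f-v",
    "L": "tongue",  "TH": "tongue",
}
NEUTRAL = "neutral"  # fallback shape for silence or unknown phonemes


def visemes_from_phonemes(phonemes):
    """Turn timed phonemes [(symbol, start_sec, end_sec), ...] into a timed
    sequence of mouth shapes, merging back-to-back identical shapes."""
    track = []
    for symbol, start, end in phonemes:
        shape = PHONEME_TO_VISEME.get(symbol, NEUTRAL)
        if track and track[-1][0] == shape and abs(track[-1][2] - start) < 1e-6:
            # Extend the previous segment instead of adding a duplicate shape.
            track[-1] = (shape, track[-1][1], end)
        else:
            track.append((shape, start, end))
    return track


if __name__ == "__main__":
    # "hello" roughly as timed phonemes (timings made up for the example).
    spoken = [("HH", 0.00, 0.05), ("EE", 0.05, 0.15),
              ("L", 0.15, 0.25), ("OW", 0.25, 0.40)]
    for shape, start, end in visemes_from_phonemes(spoken):
        print(f"{start:.2f}-{end:.2f}s  mouth: {shape}")
```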

“It opens up possibilities for new users, people who may not have been able to create animation before,” Wil says. “It could be someone who is a great illustrator but has never animated characters. Or it could be somebody who doesn’t know how to draw and just wants to perform as an animated character.”

That, he adds, is one of the most interesting things about integrating AI in animation.

“When it comes to machine learning in animation, it’s not just about precision,” Wil says. “We have to ask ourselves questions like, ‘What is funny and unique?’ and ‘What is cool and emotive?’”

These questions are pushing Wil’s and David’s teams to reimagine the boundaries of live character performance in animation.

“We’re comparing the individual sound of a Homer Simpson or a Rick from ‘Rick and Morty’ and how that is attached to the needs of the character, which is often exaggerated and imperfect in different ways,” David says. “What does it mean to interpret an accent, a different dialect, or another language altogether? Then we’re figuring out how that works in Lip Sync.”

Characterizer and Character Trigger in Character Animator

Like animating speech, animating facial expressions tends to be tedious and time-consuming. Typically, designers and animators have built character puppets piece by piece, indicating all points of articulation. Now, Character Animator applies Adobe Sensei in the new Characterizer feature.

Leveraging this technology, animators can generate a stylized Character Animator puppet in seconds, based on the user’s face and any piece of reference art. Once the puppet has been created, it can be animated in real time using a webcam and microphone.

“We’re using machine learning to, with increasing accuracy, identify things like faces, hair, necks, clothing, and backgrounds as we generate those puppets,” David says. “These capabilities are also expanding beyond facial expressions into overall body movements and postures to solve the challenge of full-body animation.”
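As a rough illustration of what happens with those per-pixel labels, here is a minimal sketch of splitting a captured frame into separate puppet layers. The class names and label-map format are assumptions, and it uses only NumPy, not any Adobe API.

```python
import numpy as np

# Illustrative sketch only: split a captured frame into layered puppet parts
# using a per-pixel label map, as a face-segmentation model might produce.
# The class ids/names are assumptions, not Characterizer's actual output.

CLASSES = {0: "background", 1: "face", 2: "hair", 3: "neck", 4: "clothing"}


def split_into_layers(frame_rgb, label_map):
    """frame_rgb: (H, W, 3) uint8 image; label_map: (H, W) int class ids.
    Returns one RGBA layer per class, transparent outside that class."""
    layers = {}
    for class_id, name in CLASSES.items():
        mask = (label_map == class_id)
        layer = np.zeros((*label_map.shape, 4), dtype=np.uint8)
        layer[..., :3] = frame_rgb                    # copy color everywhere...
        layer[..., 3] = mask.astype(np.uint8) * 255   # ...but only show this class
        layers[name] = layer
    return layers


if __name__ == "__main__":
    # Tiny synthetic frame: left half labeled "face", right half "background".
    frame = np.full((4, 8, 3), 200, dtype=np.uint8)
    labels = np.zeros((4, 8), dtype=int)
    labels[:, :4] = 1
    puppet_layers = split_into_layers(frame, labels)
    for name, layer in puppet_layers.items():
        visible = int((layer[..., 3] > 0).sum())
        print(f"{name}: {visible} visible pixels")
```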

Another Character Animator feature, Character Trigger, builds on Characterizer to enable animation in real time. Designed for live, interactive animation experiences, Character Trigger was recently added to the Character Animator app and was initially promoted on the Twitch livestreaming video platform, where streamers were invited to join Adobe users in a private beta.

Character Trigger enables audience members to participate in the action. In exchange for a payment in “bits” (virtual currency), viewers can “trigger” a change of costume, a spontaneous dance, or any other type of movement for the animated character. Streamers can then build on and refine their puppets and the triggers they offer on their own, right in the Twitch interface. The result is a standout animated streaming experience that engages audience members, ultimately boosting revenue.
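Under the hood, a trigger table of this kind boils down to mapping an incoming event, such as a cheer of a certain size, to a named puppet action. The sketch below is purely illustrative; the trigger names, bit costs, and event shape are assumptions, not Character Animator's or Twitch's actual interfaces.

```python
from dataclasses import dataclass

# Illustrative sketch of a trigger table: map audience cheers to puppet
# actions. Trigger names, bit costs, and the event format are assumptions.


@dataclass
class Trigger:
    name: str          # puppet action to fire, e.g. a costume swap or pose
    cost_in_bits: int  # minimum cheer size that pays for this trigger


TRIGGERS = [
    Trigger("costume_change", 500),
    Trigger("spontaneous_dance", 200),
    Trigger("wave_hello", 50),
]


def resolve_trigger(bits_cheered):
    """Return the most expensive trigger the cheer can pay for, or None."""
    affordable = [t for t in TRIGGERS if t.cost_in_bits <= bits_cheered]
    return max(affordable, key=lambda t: t.cost_in_bits, default=None)


if __name__ == "__main__":
    for cheer in (25, 75, 999):
        trigger = resolve_trigger(cheer)
        action = trigger.name if trigger else "no trigger"
        print(f"{cheer} bits -> {action}")
```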

What’s next for AI and animation at Adobe

A recent Adobe Sneak — dubbed Project #MovingStills — lets people turn any photograph into a “live” image. Animated in 3D, a still photo becomes a simulated, and highly realistic, moving experience.

Using AI, #MovingStills estimates and recreates the 3D geometry and a depth map of each scene. From there, it can build out a camera path for realistic motion video. This is ideal for generating new pixels — for example, when part of a scene is out of sight, or when a more realistic perspective is needed as the camera “moves” back and forth within the scene.
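As a rough sketch of the underlying idea, not the Project #MovingStills pipeline itself, a depth map lets you shift each pixel by a disparity proportional to inverse depth and re-render the photo from a slightly moved camera. The focal length, baseline, and tiny synthetic scene below are assumptions.

```python
import numpy as np

# Illustrative sketch of depth-based parallax (not the #MovingStills pipeline):
# shift every pixel horizontally by a disparity proportional to 1/depth, so
# near objects move more than far ones as a virtual camera slides sideways.


def parallax_frame(image, depth, baseline, focal=100.0):
    """image: (H, W, 3) uint8; depth: (H, W) positive floats (scene depth).
    Returns the frame as seen from a camera shifted sideways by `baseline`."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)           # keep the nearest surface per pixel
    disparity = focal * baseline / depth     # near pixels get larger shifts
    for y in range(h):
        for x in range(w):
            nx = x + int(round(disparity[y, x]))
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:
                zbuf[y, nx] = depth[y, x]
                out[y, nx] = image[y, x]
    return out                               # unfilled pixels stay black (holes)


if __name__ == "__main__":
    # Tiny synthetic scene: a near bright square (depth 2) on a far backdrop (depth 10).
    img = np.full((6, 12, 3), 60, dtype=np.uint8)
    dep = np.full((6, 12), 10.0)
    img[2:4, 3:6] = 255
    dep[2:4, 3:6] = 2.0
    shifted = parallax_frame(img, dep, baseline=0.05)
    print("near disparity:", 100 * 0.05 / 2.0, "px;",
          "far disparity:", 100 * 0.05 / 10.0, "px")
    print("view changed:", bool((shifted != img).any()))
```

The black holes the warp leaves behind are exactly the spots where those “new pixels” have to be generated.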

Adobe continues to push the bounds of AI in its creative tools, uncovering new use cases and pointing toward what’s next. The future direction of this work may depend on the users themselves, though, David says. From his vantage point, things are changing rapidly, and the pace will only increase as more people learn the technology and think about the best ways to use it. And that, no doubt, means the best is yet to come.

Visit the Adobe Blog for more on how artificial intelligence and machine learning are driving the creative process.