The Television Academy’s Second Annual AI Summit Pushed Calm — and Concern

Remember that time in your life when being forced to consume broccoli felt like a death sentence? Your parents would tell you that it was good for you, that it would make you stronger and healthier, but the scent, texture, and taste told a different story. Eventually, you grow up and come to find that your parents were in fact just trying to do right by you, that broccoli can provide some value to your diet, and maybe you can even come to enjoy it once you look past some of its less appealing features.

This, essentially, was the message on display at the Television Academy’s second annual AI Summit, hosted Saturday, March 15 at the Saban Media Center in North Hollywood. Academy members represented the children in this analogy, AI entrepreneurs, technicians, and legal experts were the parents, and the tech itself was the broccoli gently being airplaned into our mouths. The only problem was that as much as they worked to convince the audience that consuming AI now would be better for us in the long run, it was hard to walk away with anything but the taste of battery acid burning our tongues.


One of the first indicators that this summit might be more interested in pushing a message than fostering a conversation was the fact that, despite commencing the day’s panels at 8:30 on a Saturday morning, no coffee or water was allowed into the theater where we’d be situated for the next four hours. The audience was left in a drowsy state, perfect for force-feeding “facts” that may have caused uproar or distress from more caffeinated individuals.

Opening the event, senior marketing manager at Adobe, member of the TV Academy’s AI Task Force, and chair of the AI Summit, Kylie Pena said, “Our industry is embracing AI tools, experimenting boldly, and proceeding with a healthy amount of caution. This shift from uncertainty to cautious curiosity mirrors the vast changes we’ve experienced this year. The landscape has rapidly evolved, shaped by innovation, breakthroughs, and importantly, significant questions about ethics, creativity, and our responsibilities to creative and technical professionals across all of the TV Academy’s peer groups, many of which are represented here on stage today.”

But were they actually? After Pena’s introduction, Eric Shamlin, CEO of Secret Level — one of the three companies responsible for the controversial AI-produced Coca-Cola commercial last year — was brought on stage to deliver the keynote. Shamlin is an Academy Governor, specifically working within the Emerging Media Programming peer group, and also serves as a chair on the AI Task Force. There are 31 peer groups across the entire Television Academy, including Casting Directors, Writers, Costume Design & Supervision, Stunts, and Lighting, Camera, & Technical Arts — many of whom might have something to say about the use of AI in film and television — but not one of these groups was represented in the morning’s conversations.

“To bridge both the creative and business worlds, we have an obligation not just to react to AI but to actively guide how it’s implemented,” said Shamlin. “That means collaborating with the guilds, studios, policymakers, and technologists.”

And yet the only ones chosen to speak on the subject were those who proffered an enthusiastic appreciation of the technology’s so-called “magic” — a term used so frequently by various speakers that one started to wonder if they had all hired the same PR firm. Clearly, this event was geared toward highlighting where AI is today, how it’s used, and where it’s headed, but any summit on the subject should not ignore the voices that oppose its use. Doing so only served to remind this writer that the insistence on its implementation is not a product of its usefulness, but of our usefulness to it. Shamlin doesn’t believe this is something to question, however, instead pushing the notion of opportunity.

Eric Shamlin, CEO of Secret Level and chair of the Television Academy’s AI Task Force, gives the keynote address at the second annual AI Summit. (Photo: Invision for the Television Academy)

“The iPhone democratized capture, YouTube democratized distribution, social media democratized marketing, and now AI is democratizing production value,” he told the crowd at Saban Media Center. “For decades, the one thing studios controlled was access to high-end production, budgets that determined talent, crew, location, and scale. AI is breaking down that final gate.”

Sounds nice, right? What’s not considered in this statement are the thousands of jobs evaporating as a result of breaking down this gate. Is it ethical to let a technology such as this not only control what we consume but also erase an entire economic infrastructure along the way? Some might say this is just how evolution and advancement work, and good luck trying to stop it. This was roughly how it was pitched in the first panel, “The Real Talk on Embracing Change.” Moderated by AI and media tech investor Seth Hallen, the panel featured Ed Ulbrich, chief content officer and EVP of Production at Metaphysic.ai, and longtime media executive and filmmaker Barbara Ford Grant. Ulbrich was the standout, able to offer thoughtful insights on AI’s growth in entertainment, particularly as it applies to the visual effects industry.

“I’ve seen a lot of disruption,” Ulbrich said. “I got to move to L.A. in the early 90s to start a company with James Cameron, Stan Winston, and Scott Ross called Digital Domain. So that was an incredible journey, 20 years of that. We didn’t call it AI, but machine learning, computer vision — that is not new. This is something we’ve been doing for almost 20 years.” 

Ulbrich also pointed out that we’ve been scanning actors for 25 years to fill out crowd scenes, but only in the last few years has there been any pushback on it. Again, there is a bit of selective presentation in this pronouncement, as Ulbrich didn’t acknowledge that the pushback comes from the new intent of modern machine learning, which — to a certain extent — is to replace actors by training AI on their past work. Even so, he helped the audience adjust to the idea that AI can be a tool with many benefits. For instance, Robert Zemeckis’ 2024 drama “Here” was able to de-age stars Tom Hanks and Robin Wright in-camera as they were shooting, a process that, years ago, might’ve taken weeks and millions more in the budget.

“I look at it this way: People are like, ‘You’re eliminating jobs.’ Well, there’s a whole bunch of people who got to make that movie that wouldn’t have otherwise,” said Ulbrich. “It wouldn’t have been made at the price.”

And if Hanks and Wright were OK with AI training on their earlier work, it’s their prerogative, no? The question becomes: Now that AI has that information on both actors, would their talent even be needed on a set in the future? Will their likeness be available to all who wish to use it? What does this practice say about the value of the artist to the art?

Moderator Seth Hallen, right, and panelists Ed Ulbrich, left, and Barbara Ford Grant, center, discuss “The Real Talk on Embracing Change.” (Photo: Invision for the Television Academy)

Following “Real Talk,” the AI Summit attempted to answer these questions by bringing out a slew of legal experts, including Loyola Law School professor Julie Shapiro, Protege Media’s Dave Davis, Frankfurt Kurnit Klein & Selz’s Andrew Folks, and Covington & Burling’s Robyn Polashuk. Moderated by Holly Leff-Pressman, chief client engagement officer for Screen Engine/ASI, the panel acted as a “Legal Ask Me Anything,” with attendees encouraged to text in questions for the group to answer. However, one of the first comments from the panelists laid out the fact that there are still many unknowables that have allowed this technology to evade legal boundaries, or at least attempt to do so.

“The laws have not been prepared to address these technologies,” said Folks. “It’s, ‘Let’s go through privacy, copyright, all of these areas that are grappling with how to take on AI,’ so yeah, it’s an interesting feeling.” 

Folks went on to share some of the regulatory action that has been put in place, such as the EU and Colorado AI Acts, as well as legislation in Virginia and Texas that “address AI from a comprehensive point-of-view.” He also discussed how existing law, such as copyright, can be “tweaked” to include AI, a concept that’s being litigated right now. OpenAI and Google have opposed this, recently arguing in an official U.S. government proposal that fair use protections be applied to all copyrighted material used for AI purposes. Immediately, a slew of over 400 A-list Hollywood power players condemned such an action.

Considering AI was such a hot-button issue for the WGA and SAG-AFTRA during the 2023 strikes, these two guilds have more protections than most when it comes to both utilizing the technology and preventing their materials from being used for training models, said Shapiro. However, many are still bound by the idea of “good faith” in negotiations, which doesn’t necessarily exist without proper regulation. The notion of regulation, particularly in Hollywood and Washington, D.C., tends to make many balk.

“I believe the Copyright Office — because it’s federal law — has been very cautious in terms of making a statement whether or not they need to rewrite certain sections or build in certain sections of the Copyright Act for this very purpose,” Shapiro said. “It took them a long time to do it with music.”

Davis believes that ultimately, the fairest way to handle the use of copyrighted material is some kind of “licensing marketplace for AI training” that he believes companies in Silicon Valley are already preparing for.

Entertainment and tech attorney Ghen Laraya Long, Esq., left, moderates “Evaluating AI Tools,” joined by USC’s Eric Weaver and the Motion Picture Association’s Ben Sheffner at the second annual AI Summit. (Photo: Invision for the Television Academy)

Other horror stories related to the AI industry were shared in the next panel, “Evaluating AI Tools,” which featured a chat between Motion Picture Association SVP and associate general counsel Ben Sheffner and Eric Weaver, head of Virtual & Adaptive Production at USC’s Entertainment Technology Center. One of the main topics of discussion was a real-world anecdote that speaks to the caution companies and individuals must take in handling this new technology: moderator Ghen Laraya Long brought up a colleague, an engineer at a major media company, who was fired after an AI tool he downloaded released malware that resulted in a massive breach of data.

Sheffner explained that studios currently adhere to two considerations when it comes to AI: Can they be sued for copyright infringement? and If they generate a product using AI, do they own the copyright over that material?

“That means that they can license and exploit it around the world,” said Sheffner, “make millions of dollars for the studio, and for all the people who worked on it, but what the U.S. Copyright Office has said, and I know the previous panel touched on this, is that simply entering prompts into an AI system that produces an output is not copyrightable material.”

Even if someone were to create hundreds of prompts before finally landing on their final product, the U.S. Copyright Office would still not deem it eligible for licensing, which currently serves as a massive protection for artists. It might also just be another hurdle the studios and AI companies have to jump over should OpenAI and Google’s proposals gain traction in the halls of Congress or, easier yet, the White House. But politics aside, both Sheffner and Weaver counseled caution in downloading any AI tool, urging attendees to make sure such tools come from reputable sources.

In terms of applying these tools, Jason Zada, founder and chief creative officer of Secret Level, presented two examples of work he and his company have done over the last year to show just how far the tech has advanced in that time. The first was the company’s homage to the 1995 “Holidays Are Coming” Coca-Cola commercial, which sparked controversy for its unsettling depiction of human beings. This is one of the big problems with current AI models and why many companies are seeking out more data for training. However, in breaking down his workflow, Zada did make a good point: using the tech to build a vision for a project early on can put many filmmakers ahead of the curve.

Jason Zada, founder and chief creative officer of Secret Level, presents during “Real World Workflows, Lightning Round Style” at the second annual AI Summit. (Photo: Invision for the Television Academy)

“We’ve been doing generative AI for about the past two years, and we have this saying that pre-production is the new post-production,” said Zada. “What I mean by that is like every single time we get into pre-production on any project, we’re starting to see basically what it’s going to look like very, very, very early on.”

The second project Zada presented was a recently released music video for the Wu-Tang Clan that mixes martial arts and Blaxploitation films from the 1970s. Was the imagery slightly better than the Coke ad? Sure. Did it still lack the humanity and creative nuance in a way that unequivocally betrayed the use of AI? Absolutely.

“ChikaBOOM,” an animated project boasting the voice talents of Yara Shahidi, Daveed Diggs, and Natasha Lyonne, was also discussed by one of the show’s coordinators, Alyssa Katalyna. Applied to animation, AI tools are a little better hidden and do seem to be a more natural extension of traditional CGI, itself an advancement of 2D animation. Even so, it’s difficult not to consider where the use of this AI might lead. Right now it’s focused on speeding up the process of designing and refining, but what happens when it moves past this stage? Is AI just a tool or do companies want it to replace human artistry, thereby helping their financial bottom lines?

Ultimately, the Television Academy’s second annual AI Summit, though informative and well-curated, was too preoccupied with promoting the growth of this technology to offer space for those who question certain elements of its necessity. Putting up such blinders in pursuit of continued application and advancement certainly won’t help convince people to use it; it only serves to garner further distrust.
