Visionaries. Scripted by Silicon: AI’s Role in TV and Cinema

Philip Grossman: Welcome to TKT Visionaries, our monthly show where we bring industry leaders together to talk about the leading trends affecting our industry.

I have wonderful guests today, and they happen not only to be industry experts but also dear friends. So, of course, we have:

Morgan Prygrocki. She’s a Senior Strategic Development Manager at Adobe, and I’ve known Morgan for many years. She’s the reason that I shoot RED, as a matter of fact.

Jeff Greenberg. He’s the owner of Greenberg Consulting and is a post-consulting strategic advisor, editor, colorist, author, educator, and all-around good guy. We had some events at NAB where I first got to meet Jeff. Thank you for being here.

Michael Kammes. He’s the Senior Director of Innovation at Shift Media.

Philip: Morgan, in the wild and wacky world of AI, are you seeing customers come to you saying, “These are the tools we want, and you guys are developing them,” or is it more like a giant experiment where you guys are learning about what can be done and providing those to customers and seeing what they use?

Morgan: The general feedback that we’re hearing from a lot of our user base is really just around how their creatives can work more efficiently and how they can eliminate some of the less creative parts of the process, things like versioning and some of the more repetitive, redundant tasks that go into this kind of work.

Whether that’s pre-production, post-production, or even physical production, we aim to streamline this in a way that doesn’t take up as much time. I think that, based on the feedback that we’re hearing from our users, we’re looking at opportunities to help eliminate some of those finer elements of the process, maybe by resizing a video for delivery to TikTok and Instagram.

But we also have a lot of really talented engineers who love to tinker. Ultimately, we’re in a fortunate position where some of our research teams are given a long leash to figure out what areas of innovation can exist within this space. I think it is a healthy mixture of both. But ultimately, nothing really makes it to production without going through the paces of getting user feedback.

We love to test out new concepts in beta. A little-known fact is that a lot of our Creative Cloud apps also have the ability to download a beta parallel to the release build of any application. And that’s really the sandbox where a lot of these new features are going to first emerge.

Philip: Fantastic! And Jeff, what are you seeing with your customers and the clients you work with? Are they chomping at the bit to say, “I need this in AI,” or is it more of a wait-and-see? Or is it, “Holy crap, this is scaring the heck out of me”?

Jeff: It’s all of the above. Across that whole gamut, there’s a discovery process, which can be painful, for every new tool that shows up on the frontier. I have to credit Morgan, and I’ve got to credit the beta of the new Photoshop and the Firefly integration. I have to tell you, it is spectacular, and people, and I’m talking about video people, are losing their minds, Philip. Because in Photoshop, they’re instantly extending edges, fixing things faster, and really getting their jobs done more efficiently.

There’s absolutely a fear process as well because they’re scared of something coming along like Canva or the likes, and suddenly cutting out a lot of the work that in a lot of cases they didn’t really enjoy to begin with, which is kind of always strange to me. I always think of it as an assistant and that’s what we really focus on. It’s more of an artificial assistant than it is a replacement.

Morgan: 100 percent. When you think about it, we’re not in the business of replacing creatives; that would actually run counter to what we are trying to build here. But exactly to your point, we’re really just trying to create a creative assistant that allows you to arrive at a final concept faster.

Philip: Michael, obviously, I was going to give you credit. You told me somebody else said this first, but it was a great statement: “AI is not going to replace you, but somebody using AI is going to replace you.” In the work and the customers and the things that you’re doing, are you finding people gravitating to using AI to help them do things to get ahead, or are you seeing people worried that they’re going to lose their jobs because of AI?

Michael: What’s very interesting is that the voices of the folks who are concerned about AI taking their jobs tend to be a little bit louder than the number of people who are actually using these tools. I think folks using technology as it comes out have always catapulted themselves to the head of the line. So, this isn’t anything different than that. However, I can’t remember any other point in time where I’ve seen every facet of the creative industry simultaneously biting its nails. I mean, we have voice-over artists concerned that voice cloning is going to take their jobs, editors concerned that tools like AutoPod that automatically do edits are going to take theirs, and artists concerned that generative AI is going to take the jobs of the people doing the artwork. So, it’s just really interesting to see the wide swath of folks who are concerned, but at the same time, everyone’s asking for it. At NAB this year, the single biggest question at our booth was, “Are you doing anything with AI?”

Philip: It’s actually interesting that you brought up people being worried about losing jobs, and Jeff and I had this conversation and I wanted to get your thoughts, Michael. One of the things about our industry is that a lot of it is about a type of apprenticeship. You’re an assistant editor before you become an editor. You first need to learn what’s a good shot and your job is the scout work, it’s to go out and to cull the shots. Well, if AI is replacing that assistant editor’s role, how do you become an editor? Or is that a concern in the industry that we’re losing this sort of apprenticeship – whether it’s official or unofficial – in getting into those more advanced roles?

Morgan: I wouldn’t say that we are in the process of phasing out assistant editors. I think if anything, it’s going to help the assistant editor get to where they need to be faster. And I think there’s a big difference between a creative assistant, which is what we now lovingly call a lot of the AI tools, and a first AE. There’s still a pretty large delta there, and I think we would be in deep trouble if we were all of a sudden trying to cut first assistant editors out of this process.

Jeff: I’m going to go with a wider swath of the market. It depends on where we are in the market. When I think of LA, the real craft editors are always going to have assistants. They’re always going to need assistants; they want to do storytelling first, and the people coming in need to be technical to support that storytelling. But when I look outside of it, especially in corporate America, everybody has to do everything. And in doing everything, if I can take the stuff that I don’t like, the repetitive tasks, and give it to a combination of ChatGPT and other pieces to get the work done, I can do the fun parts of the work. It gets me to the fun faster. That’s kind of how I look at all of it.

Michael: Jeff, I’ve got to agree with you. As much as I love the industry we work in, the deifying of editors and assistant editors as being the top of the food chain has been dropping like a rock over the past 20 years. Since the days when we’ve gone from mag to digital, the days of mentoring and coaching have dropped systematically throughout the post industry.

And now, learning about editing and saying, “I want to work in movies,” isn’t the end-all, be-all. It’s, “I want to work for MrBeast, I want to work for this YouTuber, I want to work for this influencer,” where the roles of a traditional editor, motion graphics artist, and VFX artist are all blended and blurred. And I no longer think that becoming a post-production professional in the TV and film industry is as important as it once was. So, the mentorship is going to continue to drop precipitously.

Philip: But when you see that lack of mentorship, do you find that maybe the skill sets of the people are being lost because of these tools, or are people gaining more from the tools and those types of roles?

Michael: The base tools are still needed; you still need to know how to do multi-cam and how to tell a story. But in the film and TV industry here in the States, you’re segmented: you are in that role, that is the one thing you do, and then you communicate with whoever is in front of you and whoever is behind you in the chain. That’s kind of how it is, and I don’t think folks like being pigeonholed like that any longer. That’s why some of these other jobs are much more appealing, plus the fact that you don’t have to be geo-locked like you are for some of the higher-end stuff in film and television in LA and New York.

Jeff: But mentorship as a whole has eroded continuously in all the creative fields. And it’s painful to watch because I think that there is a certain lack of feedback loop for a creative that gets lost. In that way, there is a lack of somebody saying, “Oh yeah, I made that mistake, you need to go out and make bigger mistakes,” and that ripple on the pond. We get these wonderful and amazing digital tools that give us a lot of power. You should know how the buttons and switches work, but you still don’t necessarily have the ten thousand hours of craft.

Philip: Morgan, I’ll ask you this question. Regarding Jeff’s point, I always say you learn from other people’s mistakes because you can’t live long enough to make them all yourself. Are you finding that the learning curve with the AI tools is making people accelerate faster, or are they just repeating what already exists?

Morgan: No, I would say they are learning things faster. Well, I can speak to this from the Adobe perspective, but because of the way a lot of our AI tools work, they almost give you a little bit of a trail of breadcrumbs in whichever workspace you’re in, so you can see exactly what the AI did to your image or your audio track. It’s actually helping new users learn our platform faster. Take, for instance, something like Auto Color, which uses AI to get a first pass of color down on whatever video track you’re working with.

As a new user, you might not know the right combination of exposure changes relative to tint or warmth. However, being able to see exactly what the AI did to your image to achieve the result you’re currently seeing is, in itself, helpful in understanding how to use it. These tools can significantly shorten the learning curve. When I was learning digital editing years ago on Final Cut, the only option for self-teaching was buying a book and going through different modules and chapters. But for many creatives, getting hands-on with the tools is a more effective way of learning.

I think the concept of AI tools teaching visually is interesting when it comes to learning something new. Nowadays, I find that I learn better by watching YouTube videos rather than digging through documentation.

Philip: I’m the same way. I tend to go down the YouTube rabbit hole and try to dissect things because I’m a visual learner, although I also read lots of tech manuals. I always use this story: when the synthesizer first came out, it enabled someone who couldn’t afford to hire a hundred-piece orchestra to write a symphony and try it out because they had the necessary skills. Then, when they gained recognition, they could reach a point in their career where they could hire a hundred-piece orchestra to perform the music, which is fantastic.

But what about the person who shouldn’t attempt to orchestrate a hundred-piece orchestra? Michael, do you see AI adding a lot of, for lack of a better term, “static” into the industry? Will there be an increase in content quantity without necessarily ensuring the quality, as it becomes a crutch for those who may lack the skill set?

Michael: That’s an interesting point. We faced similar concerns when Pro Tools introduced a free version, Pro Tools 3.4, about 20 years ago. People were worried that it would lead to subpar music and sound post-production because it became accessible to everyone. However, I don’t think that happened. Instead, it allowed more people to try it out and decide if it was something they wanted to pursue professionally, resulting in better content being produced.

On the other hand, when we talk about people using AI to generate more content, there will be an overwhelming amount of content created that we won’t be able to consume manually. This will require AI to curate the content generated by AI itself, so we can enjoy a filtered selection. It’s almost comical to think about how there will be content that many of us will never consume because it’s either generated by AI or filtered out by AI.

Philip: Jeff, have you encountered individuals who express concerns that AI will be our downfall? The “Doom and Gloom” scenario. Do you come across more people who believe we need to slow down and view AI as a potential threat to humanity, or is it mostly hype? Is the general consensus that AI will not destroy us but rather empower a new era for creatives, where we can accomplish so much more?

Jeff: I have a lot of thoughts on this, but let’s address the most serious aspect. It’s true that an F-16 was piloted entirely by AI. When you combine that with the advanced and somewhat frightening drone technology available, it raises a scary question, Philip. Would it be easy to say, “Go kill all the people over there using your advanced AI technology”? I believe we could have done it even without AI technology; it would just require fewer individuals to press an off button.

As for whether Sam Altman of OpenAI walks around with someone carrying an off switch in case of emergencies, I don’t know, but I hope so. There is definitely a valid concern if we give AI control over critical functions. Initially, we might start by seeking advice from AI, then gradually entrust it with tasks like handling lights, plumbing, and eventually, perhaps even driving, once it has proven to be better at it than we are.

Regarding the concept of a “Gilded Age,” I recently read a fascinating discussion about the TV show The Orville, which shares similarities with Star Trek. The discussion highlighted the idea that we cannot interfere with a society by giving them replicators, free food, and generators because they may not be socially evolved enough to handle it responsibly, leading to hoarding and the potential destruction of society. AI has both positive and negative aspects, and it could be phenomenal if we believe that human beings should work less.

Philip: Interestingly, I was listening to a podcast the other day discussing how we often use the term AI, but it’s more accurately referred to as algorithmic imitation. We’re discovering new ways of implementing algorithms. For example, at Caltech, they discovered a potential antibiotic that can combat antibiotic-resistant bacteria. While they mentioned using AI, they actually developed new algorithms using the techniques we’re learning through data analysis. Morgan, in your research at Adobe, are you seeing an increase in the development of these tools? Are there still many technological challenges to overcome?

Morgan: Absolutely. I attended an AI event two weeks ago, and one of the panelists articulated it perfectly, stating that what used to represent a year’s worth of research now amounts to just a day’s worth of research due to the rapid advancements in AI technology. Adobe is a large company with tens of thousands of employees, and often it’s a challenge for us to innovate at the pace demanded by our creative users.

The speed at which we’re developing AI technology is astonishing, to the point where it can be overwhelming. There’s a high demand, and it seems like every day we come across a new podcast or news story discussing untapped opportunities for AI, whether it’s finding a cure for cancer or revolutionizing the travel industry. I even listened to a podcast last week that predicted AI would eliminate the need for travel agents. Now you can simply input your budget, desired destination, and preferences, and an entire itinerary is generated for you.

So I believe we have only scratched the surface when it comes to the potential use cases of AI in the creative field. This technology is going to have a profound impact on the types of content we consume, particularly in big-budget movies. Currently, many big-budget movies have the resources to invest in pre-visualization (pre viz) and pitch viz, which involve generating animatics or storyboard concepts to help bring a project to life.

However, there is a glass ceiling in the industry when it comes to the types of creatives who can actually get their movies made. It often depends on who you know in Hollywood, and if you don’t have the financial backing to create fancy pre viz and pitch viz sequences, it can be extremely challenging to even get your idea considered by someone with the power to fund your project. By providing more of these tools to creative individuals with great ideas, we can significantly change the landscape and increase the chances of their concepts being showcased on screen.

This shift will have a transformative effect on the projects that receive funding. I don’t mean to suggest that I don’t want to see more Marvel movies, but I believe we will also witness the emergence of new franchises from fresh voices that have previously struggled to be heard. The same goes for international content. Nvidia, for example, is working on groundbreaking technology that could render dubbing obsolete. Soon, we may be able to reconstruct people’s mouth movements to the extent that you won’t even know which country a production originated from.

Philip: I have nostalgic memories of watching those Godzilla films when I was a child. Michael, as this technology is created by humans, there is undoubtedly an inherent bias that can be present in these models. Based on your experience and observations, have you encountered issues related to bias? What are your thoughts on human influence in this context?

Michael: It does feel like there’s a Gold Rush mentality at the moment, with everyone rushing to mine gold by pouring chemicals and digging into the ground. In the context of AI, there are models built by companies like OpenAI for ChatGPT, as well as models trained on work scraped from various websites and domains. However, much of this data has been cleansed, and the act of cleansing automatically limits the diversity within the generated content. This is currently an issue, as most commercially available models have undergone cleansing.

When it comes to using LLMs for writing or AI assistance in writing, the content generated often feels like an afterthought, driven by the desire for monetization and the need for content to be ratified. This leads us to the next stage, where we may encounter privatized datasets and models that are “dirty” and have not undergone a curation process. These models can be used in their natural state, which can have both positive and negative implications. One of the future steps we need to take is gaining access to models that haven’t been cleansed.

Philip: Jeff, regarding the issue of cleansed models and content scraping, what are your customers’ thoughts on the legal aspects of this? The Library of Congress, for instance, has stated that it will not copyright anything that’s AI-generated. Is there concern within the creative community that we may lose ownership of our content because computers are generating a significant portion of it?

Jeff: I don’t believe the loss of ownership of AI-generated content will become a significant factor, especially with companies like Disney actively working to prevent it. There’s a certain hypocrisy when it comes to owning AI-generated material because that material is trained on copyrighted information belonging to others. Interestingly, in Japan, they have stated that web scraping for training purposes is not considered copyright infringement. However, we face the standard problem of technology outpacing the legal and political spectrum, which can create challenges around AI.

As Michael mentioned, the source and cleansing of training data are crucial considerations. There have been instances where Chatbots created by Meta, Microsoft, and other groups have exhibited racist behavior or engaged in inappropriate actions due to being trained on unfiltered and unclean material from the internet. It’s an interesting problem, and having a legal expert’s perspective would be valuable.

Philip: Absolutely. I recall a conversation where someone mentioned using ChatGPT to invest $100. My immediate thought was that the model was trained on internet data that is rife with thousands of get-rich-quick schemes. The model cannot discern what is real and what is fake, so it will indeed be fascinating to see the outcomes and results in such cases.

Morgan, I think Adobe took an interesting tack in training its models only on material within the Adobe domain, things people had already contributed to Adobe’s stock library. Has your customers’ reaction to that been positive, indifferent, or has it been lacking?

Morgan: It’s been super positive. In fact, in all of my discussions, I haven’t received a single negative piece of feedback from users. I think we’re all anxiously awaiting the conversations within our government that are either going to prevent or at least widely restrict how this AI-generated content is used. Granted, we have one of the largest stock libraries in the world, so we have that going for us. But we’ve trained our AI models on hundreds of millions of high-quality stock assets that we have a license to use, and we’ve also planted a flag with the statement that we intend to compensate creators as these new images are created.

We want to make sure that if it’s pulling from some of our users’ content, we figure out a way to track and compensate for that. That’s going to be an evolving discussion over time. But I think one of the biggest areas of opportunity that a lot of our larger enterprise clients, especially those in M&E, are looking at is the chance to train their own models on their own libraries of content.

You’ll notice that if you try to input specific requests into the Firefly beta or use Generative Fill in the Photoshop beta, such as “show me ancient Egypt in the theme of Star Wars,” the results might be unexpected or unusual. This is intentional and by design. However, for entities like Disney, working on multiple prequels and expanding The Mandalorian on Disney+, they may want more refined results based on their previous work. Enabling customers to train their own models to accelerate the creation of concept images is something we aim to provide as well.

We prioritize ethical sourcing of content and exercising caution. We have committed to implementing measures that instantly inform users when an image has been altered. This becomes especially important as we navigate the realm of deep fakes. We want to ensure that our creatives have no concerns about the authenticity of the footage they encounter. We are working on implementing stock caps to address this issue.

Philip: It’s interesting that you mentioned deep fakes. Jeff, in the legal field, experts are often brought into courtrooms to determine if an image has been altered, providing a stamp of approval. Have you come across any image experts who can discern whether AI-generated or AI-manipulated content is fake? Is it still possible for experts to detect alterations, or has AI advanced to the point where it becomes difficult to distinguish between real and fake imagery?

Morgan: As Jeff mentioned, our team is more involved in the first line of defense: determining if content has been generated or manipulated using AI. It is interesting to note that with large language models like ChatGPT, there are tools like Chat Zero that are causing disruption around language in academia. When you input the Bible, for example, Chat Zero flags it as AI-generated content. It seems to be easier to determine if an image has been manipulated than to identify whether an image has been generated by AI. However, it’s important to note that forensic analysis of imagery is a delicate field where conclusions are often hedged with statements like “I believe, based on this,” rather than definitive proof of manipulation.

Philip: I recall the early days when ChatGPT was being used by students to write their college papers. Then OpenAI introduced a reverse version that could detect if the content was written by ChatGPT. This led to an arms race where someone developed a tool to modify the ChatGPT-generated content enough to evade detection by the reverse tool.

Michael, being closer to the writers’ strike on the West Coast, how is the industry perceiving the rise of AI, especially in the writing domain? Are they fearful, or do they see it as a tool that can assist them?

Michael: There is certainly a mix of fear and concern within the industry, especially as it relates to contract renewals and the three different guilds. Writers are afraid that AI may be used to replace them. While AI can be a useful tool for overcoming writer’s block and kickstarting the creative process, there are concerns that it could be misused for nefarious purposes. The Writers Guild and other supporting guilds are rightfully cautious, recognizing that bad actors may exploit AI just as much as good actors benefit from it. So, there’s a healthy amount of skepticism and fear alongside those who see potential benefits in specific creative realms.

Philip: Absolutely. I actually came across an article recently where a lawyer used ChatGPT to write a legal brief, and it included made-up cases. The judge discovered that those cases weren’t real and the lawyer faced significant consequences.

Michael: Morgan, I’m aware of Adobe’s content authenticity initiative. I’m curious about its implementation. At what stage in the media chain does it enter the process? If it’s integrated into Photoshop, that’s fantastic, but I’m wondering at what point does the content authenticity initiative come into play? It would be great to understand its earliest point of implementation in the chain.

Morgan: That’s an excellent question, and I think it would be best answered by our content authenticity team. I wouldn’t want to provide incorrect information. However, I’ll find out more about it because it’s a fantastic question, and I wish I had the answer right now.

Philip: Morgan, it’s a really interesting point. When things are generated, let’s say I go into Photoshop and create something from scratch, it’s just made up of an amalgamation of things. Is there a lot of concern in the industry about people passing off AI-generated work as their own? What impact will that have on our industry?

Morgan: On one hand, AI is a tool that allows you to bring your concept to life and put it onto a surface using basic text. It’s kind of a “Chicken and Egg” type conversation: the AI is simply calculating what you’re typing into the prompt. Who is the real artist? Is it the AI or the creative individual? I would argue that it’s the creative individual, but that’s just my opinion.

Philip: It’s like everybody has become a director with a prompt. You’re just telling people what to do, so effectively, everybody has become a director in this world.

Jeff: I think there’s definitely an interesting parallel with sampled music, where we take samples of music and create something new. The question of ownership and the limited variations in our tonal scale arises. I’m sure we’ve all heard great versions on YouTube where every song sounds the same, progressing through similar music using the same set of chords. Is this realistically different? And of course, I would argue in court that at least it’s your own unique creation, even though it originated from someone else’s work, for which they receive all the credit and money. I think that’s always going to be the dilemma of building our creativity on the foundations of others, and it may become even harder to identify with the advent of AI.

Morgan: I want to ask the audience for their thoughts on something I find fascinating. A couple of weeks back, Grimes, a music artist formerly associated with Elon Musk for those who may not know, made a statement saying that anyone can use her likeness for AI-generated music, but she wants to split the proceeds 50/50. Many people in the music industry are upset about this because it could potentially set a precedent for monetizing AI music in the future.

What are your thoughts on this? Perhaps Grimes saw an opportunity to make money, and I have to admit, I’ve listened to some AI-generated music, and it’s pretty good. I would still enjoy it. There’s an argument to be made that all art inspires art, and there are plenty of musicians who blatantly rip off others’ work. So what do we think about AI-generated music? Can some sort of agreement be reached to make everyone happy? Or should we completely eliminate it?

Philip: Let me expand on that question a bit. If we keep generating music on top of already generated music, will we create something new, or will we end up with a lot of music that sounds very similar? Will the algorithm eventually narrow down, like a weather model? It effectively models input and output, and when you feed the output back into the model, it tends to go in a specific direction. Do we see creativity fading away or becoming constrained to a specific thread, as we don’t have individuals like Einstein or Picasso who think completely differently and try something unique?

Morgan: I have to admit, some of the AI-generated songs I’ve heard actually sound better than the work of the actual artists. I know that may sound terrible, but some of the AI-generated Drake songs, for example, sound better than his recent releases. However, I think the impact of AI music will depend on how prevalent it becomes. The more people use it, the more diluted it will become. So, I’m on the fence about it.

Philip: Michael, you’re in Hollywood, so do you think we’ll see an explosion of new content that’s truly original and different? Or will there be an explosion of content that all has a similar feel to it?

Michael: As my friend Katie Henson mentioned, it’s currently an aesthetic. The output generated by many of the tools, whether they’re audio or visual, still follows a particular aesthetic. You mentioned earlier that many Midjourney-generated images had a shallow depth of field, a “cinematic” look. Like anything, we’ll eventually grow tired of that aesthetic and move on to something else. Right now, for artists like Grimes, it’s a good opportunity to grab attention. There’s a saying, “the pioneers get the arrows, the settlers get the land.” The first people to explore AI-generated content will make a name for themselves and be seen as trailblazers in their respective industries. However, I believe that at some point, people will crave more unique and original concepts instead of remixes.

Philip: It’s similar to the trend we saw with DVDs and CDs being popular, and now people are going back to vinyl or even embracing film photography instead of digital. Will we reach a point where things are completely created without AI, without the need for an AI tagline to attract people to that particular form of art?

Jeff: I don’t think it swings in that extreme manner, Philip. Firstly, we’ve seen the same general movie concepts being remade over and over again, like “Seven Samurai” becoming “The Magnificent Seven,” or the many riffs on “Casablanca.” Sometimes these remakes are shot-for-shot with different actors, like Pamela Anderson instead of Humphrey Bogart, resulting in terrible films. Sometimes what makes the original work is the product of its specific time period, or the nuanced performances by live humans. What I find fascinating is that there’s a generation of people who prefer the sound of MP3s because CDs are too sharp, yet vinyl offers a different experience. They are seeking something that reminds them of the comfort they felt in their teens and early twenties. So, artistic appeal will always swing, but ultimately, good storytelling is what matters most. And eventually, we will start looking for new and unique endings to stories.

Philip: We’re coming up on the top of the hour, so I’m going to ask each of our guests one final question, and that is: what is your prediction for AI over the next two years? What do you see as the big thing or where do you think things are going to go? Ladies first.

Morgan: That’s a good question. I’ll start with the bad and end with the good. I think the first thing we’re all anxiously awaiting is regulation on the limits of AI. To be fair, that is probably a necessary first step just to make sure that whatever we’re building is for the greater good. Everyone is going into this with the best intentions, but there have been some very public statements by former heads of large organizations expressing concern about the dangers that could arise if AI falls into the wrong hands. I completely respect that. I think we are going to see some regulation on that front. However, we’re never going to fully eliminate the creators.

Regarding the writers’ strike, I fully understand and support their cause, but let’s be real, have you ever read a screenplay written by AI? They’re terrible. All of them. I think the upside of what we’re looking at over the next couple of years is the ability to produce content faster. When I survey a lot of the users I work with, whether they’re making videos for social media or they’re a massive M&E brand producing content for a streaming service, everyone’s content demands have increased exponentially over the last three to four years. Sometimes even by 200 to 400 percent. So there’s no shortage of demand here, and I believe AI tools can help us deliver by enabling us to produce more content and reach wider audiences.

But I’m hopeful that we might be able to shift the paradigm of who actually gets to make content because Hollywood is a little bit of a small pond when it comes to opportunities. I’m really hopeful that we will see more female voices and more international voices getting big platforms to tell their stories. To Jeff’s point, there are only six story types that exist out there if you read any screenwriting book. I want new stories. I’m done with the remakes. We can create new franchises, and I believe it will help us get there faster.

Philip: Michael, what are your predictions for the next 24 months?

Michael: I think the regulation will happen much sooner, within the next election cycle, especially as we delve into voice cloning, deepfakes, and face swaps. Once these technologies become part of the political zeitgeist, we will see massive regulation. I hope that the arms race will then lead to a focus on local models versus cloud models. Currently, we need to pay for numerous cloud services, and I mean no disrespect to Adobe, as I pay for their service each month, but we are reaching a point where we have to pay for countless cloud services. If I have to pay for a cloud service for one model but it lacks what another model offers, we’re back to the streaming wars. I am eagerly looking forward to the day when we can take these large models, distill smaller models that can run locally, and then spawn other agents. This way, we can create on-premises tools without relying on someone else with a more powerful computer serving those models to everyone else.

Philip: And Jeff, what are your predictions for the next 18 to 24 months? What do you see coming down the pipeline?

Jeff: I’m going to start with a darker tone before I get to the positive side. We’re going to witness a flood of people entering the market with tools too quickly and in destructive ways, disregarding copyright and the ethical aspects of how the models acquire materials and create content. In fact, as Michael mentioned, the drive to get to market quickly will lead to AI tools being used for nefarious purposes. I’m concerned about the day when the first AI malware emerges, capable of wreaking havoc on operating systems, cloud services, and more, without human control. The pace of development feels like it’s accelerating at an alarming rate month after month. I hope, though I doubt it will happen, that all the groups releasing AI tools in the market will be required to disclose how their models were trained and the open-source tools they used, and even register the code of their closed-source models with entities like Congress. This would enable scrutiny for any malicious intent. Ultimately, my desire is for these tools to bring joy to people’s lives, whether they are creators or parents spending time with their children. Mine have just returned home, and fortunately, we haven’t been visited by any harmful AI yet.

Philip: I agree with you, Jeff. As Michael pointed out, we are still in the Gold Rush phase of AI, and over the next 18 months, we will see a lot of hype surrounding the capabilities of these tools. I recently read an article where the head of OpenAI mentioned that everyone keeps talking about waiting for GPT-5, but they haven’t even started training it yet. In fact, they are beginning to put the brakes on a bit. As an industry, we should truly appreciate the potential of these tools.

To me, these tools are fantastic. They serve as sharpening tools, conforming tools, and time-savers for me and other creatives. I believe we will be the big winners. The generative capabilities of AI are remarkable and will add value. However, I’m not sure if within the next 18 to 24 months, we will see the first completely AI-generated film, and even if we do, I’m uncertain about its quality. The Blair Witch Project was a fantastic film and the only one of its kind that achieved success. There was another first-person POV type movie made because they could do it, but it didn’t fare well. So, I hope that as a society, we can provide guidance to those who create these tools and to those who use them, in order to shape what we want to see.

With that, I want to express my gratitude to all of my guests, Morgan, Michael, Jeff, and everyone who tuned in today. Join us next month on the 28th of June for the next episode of TKT Visionaries, where we will focus on SMPTE 2110 and the current state of IP video. Until then, thank you very much, and we’ll see you next month.

Upcoming Event

TKT1957 will conduct a Visionaries Online Roundtable, “The 2110 Evolution Transforming Media Production,” on June 28, 2023 (4:00 P.M. EST).
