AI is creating significant turbulence for the creative industries. While today's AI can be instructed to perform a diverse range of creative tasks, from writing to image creation, our research reveals a complex landscape of opportunities and challenges. Industry executives and creators acknowledge AI's potential to generate new forms of creativity and make some specialisms more accessible, but these hopes are countered by serious concerns about copyright, job displacement, the flattening of creativity across collective outputs and low-quality work adding unnecessary clutter to an already saturated content market. Our research also identifies a notable implementation gap between senior management's desire ‘to have AI’ and teams struggling to integrate products into creative workflows. This highlights the need for role-specific training and clearer implementation and adoption strategies. The article concludes that while AI has the potential to support positive transformation across the creative sector, we should also invest every effort in the key skills the creative industries have become renowned for.
As we look to the future, human creativity is vital in an increasingly complex world, with skills like curiosity, creative thinking, resilience and adaptability growing in importance across all sectors.
The Summer of 1956
In 1955, four men from four influential American organisations – John McCarthy from Dartmouth College, Marvin Lee Minsky from Harvard University, Nathaniel Rochester from IBM and Claude Shannon from Bell Telephone Laboratories (now known as Nokia Bell Labs) – shared a proposal with the Rockefeller Foundation requesting financial support, a total of $13,500 (!), for a study they called The Dartmouth Summer Research Project on Artificial Intelligence.[1] The funding would cover “a two month, 10 man study of artificial intelligence” over the summer of 1956.
At the time of the proposal, McCarthy was an Assistant Professor of Mathematics who had worked on questions relating to the mathematical nature of the thought process, the relation of a brain model to its environment and the use of languages by machines. Minsky was a Harvard Junior Fellow in Mathematics and Neurology who had already built a machine for simulating learning by nerve nets after completing a PhD on the topic. McCarthy and Minsky would later be described as both computer scientists and cognitive scientists.[2]
The other men interested in pursuing this endeavour were Nathaniel Rochester and Claude Shannon.
Rochester was a Manager of Information Research at IBM. After graduating with a Bachelor of Science from MIT, he worked on the development of radar and then computer machinery. He went on to co-design the IBM Type 701, a large-scale automatic computer, and to develop a programming language that could be used to code instructions instead of coding in machine language.[3] He was also very interested in how to get machines to carry out tasks that previously only humans were thought to be able to do. It’s a poignant discovery to make: that one of the original architects behind this technology wanted to pursue an interest that would result in human redundancy. In the World Economic Forum’s Future of Jobs Report 2025, it was reported that by 2030, 40% of employers anticipate reducing their workforce in response to AI being able to automate tasks.[4]
Lastly, Shannon was a mathematician with undergraduate degrees in electrical engineering and mathematics. His master’s thesis laid the groundwork for modern digital circuit design, and by the time the 1955 proposal was submitted, he had developed a statistical theory of information.[5] Shannon’s work would later go on to influence and advance machine learning, cryptography and the theory of Turing machines.
The 7 Aspects of AI
Within this proposal, the four men list seven “aspects of the artificial intelligence problem”.[6] Two of the seven aspects relate to most headlines circulating about AI today: one is “How Can a Computer be Programmed to Use a Language” and the other “Randomness and Creativity”. McCarthy, the Assistant Professor of Mathematics, was interested in language, working through the speculation that “a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture.” Rochester was interested in originality, invention or discovery, and randomness. The man who was very interested in how to get machines to carry out tasks that previously only humans were thought to be able to do decided to tackle creative machines.
Four men, armed with the disciplines of mathematics, engineering and neurology, set to work on leading the computation of human language, creativity, learning, self-improvement and abstraction. And they did so with not a linguist, artist, writer, sociologist, psychologist or anthropologist among them.[7]
Those AI architects of the 50s laid the scaffolding for the future we are now building today. Seventy years (and four new men and their teams) later, AI has survived the “winter” of the 1970s – 1990s and, as we detailed in our article The Race to Own the Future, since the early 2000s technology and digitisation have come to define and redefine the creative industries.[8]
Today, we can engage a chatbot in conversation and get it to write poems, essays, scripts and stories, come up with an idea, improve an idea, create images, videos… The list of tasks this new generation of chatbots can take on is certainly diverse. So too is their impact on the creative industries. Our research finds that while the creative industries feel positive about AI’s potential to bring new and different kinds of creativity, make some specialisms more accessible and provide more opportunities for people we haven’t heard from before (or not enough of up till now), issues relating to copyright, job displacement, lack of transparency on how models are trained and the rise of low-quality outputs are very real concerns.
The Age Of Wonderpanic
So often we hear impacts being discussed through numbers and statistics. It can be difficult to connect with these numbers and grasp the impact contained within them, especially when they represent a population that you don’t belong to. Part of this is because numbers are an exercise in counting and not an exercise in documenting experience. We wanted to change this, and so we conducted in-depth interviews with experienced professionals engaged in creative work across advertising, art, branding, content production, experience design, filmmaking and producing. We also hosted three workshops: one attended by film and TV executives, and the other two by independent creators across film and TV. Overall, we’ve heard from over 100 people working in the creative industries about their views on AI.
Amongst TV and film executives, common words and phrases used to describe AI are “potential”, “new opportunities”, “exponential”, “unregulated” and “wonderpanic.” The word “potential” did double duty, referring both to the positive opportunities AI could bring in the future and to concerns and tension about what could possibly go wrong.
Further conversation about the challenges and opportunities they see being ushered in by AI helped to contextualise their summarised thoughts. One of the most pressing concerns was the lack of global regulation and clear standards around data ethics and intellectual property. It was common knowledge amongst the group that AI systems are routinely trained on copyrighted material without consent or compensation. This left creators feeling exploited and created anxiety about who owns AI-generated content and data, and whether this could erode profitability for independent and smaller enterprises. One attendee remarked that, “AI by California is becoming AI for the world.”
Execs were also concerned that AI would create job losses particularly for entry-level talent and domain experts across music and design. There was further concern that the use of AI products without specialist creative intervention could flood the market with low-quality work and ultimately damage trust in the creative industries. Lastly, a number of attendees voiced their concern about AI’s negative impact on the environment and how “it’s not front and center for businesses and government.”
Amongst this group, there was broad consensus that collaboration between the creative and tech sectors was crucial. The creative sector recognises and acknowledges that AI is important to its future, and the future of the UK. In order for the creative industries to continue to contribute so generously to the UK economy (£124.6bn in 2022) and be recognised globally for their high-value outputs, collaboration is essential. The discussion also highlighted the real need for skills and knowledge support, clarity on copyright to avoid downstream issues when a production is released, and transparency and explainability around how models are trained and what biases they are likely to exhibit. Solutions to these challenges, and efforts to mitigate risks and minimise harmful impacts, are far more likely to arrive faster and be more holistically considered through a working partnership.
The Possibility of ‘More’
In our event with young filmmakers, we found awareness of AI to be very high but knowledge of how AI can be integrated into their work and workflows to be low. Major issues relating to AI adoption – impact on jobs, copyright concerns and AI’s ability to make decisions that bias against women and people of colour – were vocalised the most. At the time of these workshops, these issues dominated media coverage of AI risks, and the tone of that coverage was largely negative.
Many creators had used GenAI tools in a low-risk, experimental way, but gains in the form of winning new or more work, or measured uplifts in productivity, efficiency and creativity, couldn’t be stated with absolute certainty. After a presentation on the ethics of AI, attendees acknowledged that the environmental impacts of AI were unknown to them. Many also expressed relief and surprise that so much work was being undertaken by various groups into how to build and deploy AI responsibly.
However, the most interesting insight from this creator event was the shift that took place after workshop attendees were given a demo of Charismatic. The demo seemed to unlock what was possible and how a tool could be developed in such a way to create a new and different kind of storytelling. The mood seemed to shift towards positivity and the possibility of ‘more’ when they encountered a platform that allowed them to redirect their skill set in a new way that could lead to more work and more revenue. This runs counter to other tools that are presented as being able to replace human creativity and result in the possibility of ‘less’ – fewer projects and less money.
Many of these same themes appeared in the interviews too: uncertainty about how to include new AI tools in workflows and customise their purpose for different job roles, copyright concerns, and the impact on jobs.
The Struggle With ‘How’
Speaking about her experience of working with organisations struggling to transition to this new AI world, Katie Hillier, Digital Anthropologist and Design Strategy Director at Future Perfect X, said that often the teams she works with are pursuing AI because of directives that come from the top of the business. The problem then comes in trying to integrate it into the business, because ‘getting’ AI isn’t like upgrading to the latest version of Photoshop.
“Teams are often told by senior leaders inside of an organisation, often the C-Suite, that they need to have AI, and then they’re given the task of figuring out how to integrate it to drive efficiencies. What’s difficult about that is that teams hear that and they think, Oh my God I’m going to lose my job. Then there’s the question of what do I do and what does it – the AI – do and where is the line between those two things. It’s very unclear and there are not a lot of people who are going to tell you exactly where it is because it’s very different depending on what the role is.” Not only are people fearful of new technology taking their jobs, but many are reluctant to embrace change – regardless of the form it takes. Amanda Bluglass, an Independent Filmmaker and ex-Journalist, recalls the reluctance that people had to change, not just to the new technology that was transforming broadcast news from analogue to digital, but to any changes that were being “cascaded down” from the “top floor, where the management was.”
Jim de Zoete shared his experience of an industry struggling to work through the changes AI is creating. Jim, an Independent Executive Creative Director and Filmmaker, is currently consulting with creative agencies helping them adapt and become fit for the future. A big part of his role is “managing teams; helping people work together and making sure you've got the right people in the right roles, and getting those dynamics right.” When asked whether the role of technology comes up in those discussions about getting the dynamics right, Jim responded with “Yes, it comes up a lot.” And it comes up a lot with a particular group of people. “It's interesting. It comes up a lot at a senior level I would say. Often the people running the agency are aware of the power of AI and some of the tools that are out there. They're aware that it's going to transform their agencies in the future. There's encouragement for people to try the tools but I would say in agencies, it still feels quite theoretical. They know that change is coming, but they can't quite figure out how it's gonna work.”
Anne Rogers, Founder and Managing Director of Culture A – an art consultancy and experience design firm in Amsterdam, finds her clients are also curious about tools like ChatGPT and Midjourney but again, they struggle with understanding the how. “Usually we have to bring up the idea and say something like, By the way, if you're struggling with this idea, or you want to do something different we could do AI-generated art. And then they reply with something like, Oh! How does that happen?”
Make It Stick By Making It Specific
These gaps in knowledge and expectations emerged again and again in conversations. Senior management have been told that AI tools will drive efficiencies, boost productivity and make their employees more creative. But knowledge of ‘how’ to make this happen is lacking.
AI products, despite how they’ve been marketed, are not magic. They need to be implemented and, crucially, adopted. AI products are more likely to be successfully adopted if they are integrated into workflows and if the use of a particular AI product makes sense to individuals and teams relative to what they do. A kind of choice paralysis seems to be taking place in organisations: because these tools have been presented as being able to do so much, there’s a struggle to figure out how to direct them. Working out which tools to use, how to direct them and who is best placed to benefit from them is a creative act in and of itself. There is also a power dynamic emerging, with senior leaders desiring AI without understanding the complexity of its adoption, and employees wanting to be seen to be ‘doing’ AI so that they’re perceived as ‘innovative’, while also aware that ‘doing’ AI really successfully might see them made redundant. The result feels like a new version of innovation theatre rather than an actual answer to the question, “How do we successfully adapt to this new change?” [9]
Research shows that time and money invested in AI training to bridge skill gaps and support the integration of products into workflows is time and money well spent. In research conducted for their book Prediction Machines, authors Ajay Agrawal, Joshua Gans and Avi Goldfarb found that AI training significantly boosted productivity across various skill levels.[10] Our research suggests that the more role-specific this AI training is, the more likely AI products are to achieve their desired effect.
All interviewees had experience of using AI products and spoke about what the ‘how’ looks like for them. This included using specialised tools like speech-to-text transcription, as well as a range of generative AI tools: to generate ideas, to interrogate ideas and use the responses as further stimulus, to create original artworks[11], to reduce the barriers to getting words down on a page and, in the case of Benjamin Field, Executive Producer at Deep Fusion Films, to discover a new form of interviewing.
The Virtually Parkinson audio podcast was created as an experiment in interaction between AI and human beings. The podcast puts Sir Michael Parkinson back in the interview chair using AI. It has full backing from Sir Michael's family and estate and was made using transcripts from previous interviews. When asked how it is different from other interviews, Ben said, “Actually, it ends up being like a therapy session because there are no distractions. There's no micro expressions from an interviewer. You know you're not led in any way, shape or form. You go into your own head.”
Tool, Creative Partner, Personal Assistant, Team
Interviewees most often described the AI products they were using as a “tool”, but the phrases “creative partner” and “personal assistant” were also used. Amanda was clear to point out, however, that the AI tools of today cannot be compared to the software or digital tools of the past.
An ex-journalist at the BBC, Amanda reflected on the digital transformation of news production and broadcast. “All the new software we were introduced to had interfaces that replicated the old machines. The techniques were the same, but they were digital. In the last 20 years, everything’s accelerated,” she said. “Everything’s become faster and more portable, and individual specialisms have become more accessible. What would have been a crew of six or seven people has now become, if you’re good at it, the work of one person.” In this sense, AI becomes not just a “personal assistant” but “a team” or “specialism augmenter.”
Oliver Veysey, a Film Producer, Screenwriter and Content Director working in digital experience design and strategy, welcomes GenAI as a tool to augment humans in the creative industries, but is sceptical of the signals from big tech firms that using their products will give everyone the ability to become a writer, artist or filmmaker, or that a product can easily replace a creative specialist. For Oliver, the suggestion that a product can easily replace human creativity overlooks what’s really at work, at multiple levels, when a story, ad campaign or piece of social content resonates with an audience. One example of this is when Adobe enraged photographers last year with an ad campaign marketing Photoshop’s new AI features. The ad shows a perfume bottle against a plain background and then an AI-generated background with orchids and oranges. The caption says, “Skip the photoshoot with Generate Background. Add or replace a background that matches the lighting, shadows, and perspective of your subject in fewer steps, right from the Contextual Task Bar.” [12]
Photographer, director, and artist Clayton Cubitt shared the advert on X and said “So glad as a photographer I’ve given Adobe tens of thousands of dollars only to have it pivot to selling ‘skip the photo shoot.’” Graphic designer Brian Winkeler also responded negatively, “‘Skip the photoshoot.’ What will Adobe be telling customers to skip next? And when will their consumer message fully devolve down to ‘Skip all experts?’” [13]
“AI arrives at a really interesting inflection point,” Oliver said. “Audience data shows us that people are bored of the same old stories. They want new stories from new perspectives, told by people who have developed a distinct voice. In the UK, a lot of work is being done to make space for those voices – to make it possible for storytellers from different backgrounds to succeed – but things change very slowly. Then along comes AI, and part of me is wildly optimistic about how it can be a tool to accelerate change and break down barriers. Yet all the work I’m seeing from GenAI right now feels a little bit the same. In fact none of it makes me feel anything, why is that? What about the distinct voices? Personally, I don’t believe everyone is a film director and I think that should be OK. I don’t think we should hold up film directors as gods among mortals, but nor do I think everyone can become one by learning to prompt. GenAI alone isn’t going to democratize storytelling. What I do hope is that the directors and storytellers who master GenAI as a tool find a completely new creative language with which to move us.”
The Flattening Effect
This flattening effect was recently uncovered in new research investigating whether AI could increase creativity. Oliver Hauser from the University of Exeter and Anil Doshi from the University College London School of Management recruited nearly 300 people and asked them to write an eight-sentence story. Around a third of the participants had to come up with ideas without support, while the rest were given starter ideas generated by GPT-4. The participants who were given ideas were further divided into two subgroups: participants in one group got a single AI-generated idea, while participants in the other got to choose from up to five. The stories were then judged by 600 evaluators.
People who received AI help were judged by evaluators to have written stories that were more novel and more useful. The group who got to choose from up to five starter ideas saw the biggest boost in novelty and usefulness scores, and the individuals who saw the biggest gains in their creativity scores were those who were inherently less creative and received help from AI. However, when the researchers looked at the stories across the two writing groups who received AI-generated starter ideas, they found that AI had made the groups less creative as a whole. So while AI can make an individual more creative, there is a decline in creativity across the collective as a result of AI. To paraphrase Oliver Veysey: then along comes AI, and everything feels a little bit the same. How is that a good thing?
But Is It Good?
It’s worth questioning whether this study proves that AI is helping an individual, especially a less creative one, become more creative, or whether it is simply helping them produce a more creative output. Can we all really just become more creative by using these tools and start to move into specialist creative domains like writing or filmmaking? New research from Boston University and the Boston Consulting Group suggests perhaps not. The research studied nontechnical workers solving data science problems using ChatGPT and concluded that “GenAI can be used to help workers reskill to meet the greater technical demands of the labor market but that the work of nontechnical workers using GenAI is not interchangeable with that of data scientists.” [14]
The insight and sentiment carried within this finding echoed through so many of the conversations we had with creatives.
Anne, founder and MD of Culture A, spoke about how generative AI products helped to amplify her creativity by “sparking a lot more of her imagination” but no matter what, the output still feels “superhuman”. Ben spoke of AI in very much the same way. “AI is a great tool to get you somewhere,” he said. But, he was quick to add that while these tools will allow people to learn new skills, can they really do that new skill well? This quality control function was raised by Jim, Oliver, Katie and Amanda too. Jim spoke about how he would never give AI the responsibility of “making a call” on whether to pursue an idea or not, or whether an idea was good or not. “I wouldn’t trust it. I don’t think it has human-level information or understanding… like being able to read people in a room.” Katie spoke to this “human-level of information” too saying that “the way that humans can tap into creativity is very complex.”
Amanda spoke to this too saying that while AI can be powerful and effective at doing some things that humans can do, “AI can’t sense-check what is right and wrong in a moment. It has no taste. I think craft is quite an important word when we are thinking about AI. The learning processes that humans undertake in order to get good at their craft. There is something about the human imprint of that care and attention that seems to be overlooked at the moment in this new replicant world.”
And indeed, that is true. AI doesn’t know what is right and wrong – it still often gets the facts incorrect. Knowing what ‘good’ looks like is well outside its area of expertise. So often the conversations we had with creatives would end like this: talking about the incredible capability of AI products but their complete inability to know what good looks like, and the resulting impact this will have on creative work.
Uncertainty and Unreasonable Optimism
Overall, our research identified a creative community who are in flux and whose confidence has been shaken and tested. Undeniably there is uncertainty and, as a result, more questions than answers. There is also fear and concern which, for some, could be said to be bordering on existential. But there is also experimentation and curiosity, and a desire to figure out how to make AI work in the sector without producing outcomes that will weaken the industry overall. There was also hope that AI could bring about new and different forms of creativity, make some specialisms more accessible and provide more opportunities for people we haven’t heard from before (or not enough of up till now).
“I think I worry about its impact on the value we place on human content creators. I worry that we will become more disconnected, and more distanced from what good looks like. We’re already being flooded with homogenized crap…but do people care?” said Oliver. “Of course, in answering that question there will be opportunities, because some people will care very much.”
We should all care. Why? Because it’s not just the mathematicians and engineers who help us make sense of the world; creatives do this too. As the world becomes more volatile and ever changing, we’ll need their resilience, flexibility and agility to help us adapt. As problems become more ambiguous and complex, we’ll need their creative thinking and curiosity to help us see different perspectives and uncover new insights. And as the world becomes a more uncertain place, we’ll need to adopt their approach to lifelong learning to help us reposition our place within it. Coincidentally, all of these skills associated with creative professions are on the rise. While skills related to AI, big data, networks and cybersecurity are some of the fastest-growing skills required in workplaces today, so too are a host of complementary skills. The Future of Jobs Report 2025 lists creative thinking, resilience, flexibility and agility, along with curiosity and lifelong learning, as skills rising in importance. Organisations worldwide, now and in the future, need creative brains and specialisms.
But we should all care for a different reason too. We should never forget how often we turn to this community. We turn to the creatives among us to make sense from chaos, transform a mess into beauty, create experiences that help us understand who we are, tell our stories, tell other people’s stories, tell stories we couldn’t have even imagined, sweep us away, take us deeper, connect us, help us identify what we’re feeling and make us feel absolutely everything all at once.
[1] http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
[2] Computer science isn’t just about coding. It’s an interdisciplinary field of study that intersects with the theoretical underpinning of computers – that is, their creation and how they are used in the world. Computer science is therefore about the design and application of computer software and hardware, along with theories relating to algorithms, information and automation. Cognitive science is about studying how the mind works. Whereas neuroscience is the study of the nervous system with a primary focus on the brain, cognitive science is interested in exploring how perception, reasoning, memory, attention, language, imagery, motor control and problem-solving are represented and processed in the mind. Cognitive scientists are also interested in discovering more about how humans acquire and develop these capacities and how they are then physically or otherwise implemented. The field of artificial intelligence is about combining these studies. Using what we know about the human mind and brain, can we build a machine that reproduces human intelligence? This is made clear in the original proposal which states, “(t)he study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
[3] https://www.ibm.com/docs/en/zos-basic-skills?topic=zos-assembler-language
[4] https://www.weforum.org/publications/the-future-of-jobs-report-2025/digest/
[5] https://www.cs.virginia.edu/~evans/greatworks/shannon38.pdf, https://www.quantamagazine.org/how-claude-shannons-information-theory-invented-the-future-20201222/
[6] The seven aspects of AI listed in the paper are as follows: (1) automatic computers, (2) how can a computer be programmed to use language, (3) neuron nets, (4) theory of the size of a calculation, (5) self-improvement, (6) abstractions and (7) randomness and creativity.
[7] Oh the confidence!
[8] Technology and digitisation hasn’t just shaped and reshaped the media and creative industries, it’s come to define and redefine them. In our previous article, The Race to Own the Future, we chart a timeline of disruption from the early 2000s till today. https://charismaticai.substack.com/p/the-race-to-own-the-future
[9] https://hbr.org/2019/10/why-companies-do-innovation-theater-instead-of-actual-innovation
[10] https://hbr.org/2024/11/research-how-gen-ai-is-already-impacting-the-labor-market, https://www.predictionmachines.ai/
[11] See for example Culture A’s selection of Meditative Landscape prints, https://www.culture-a.com/shop
[12] https://petapixel.com/2024/05/03/adobe-throws-photographers-under-the-bus-again-skip-the-photoshoot/
[13] https://www.videomaker.com/news/adobe-enrages-photographers-with-its-latest-ad-campaign/
[14] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4944588