Good Morning, and happy Sunday!
My name is Grace Reynolds, and this is my newsletter where I share writing updates and other things about life as an author by night and a stay-at-home parent by day. This week’s post will be a little different than usual but echoes the sentiments many of my peers have shared over the last year.
We’re going to talk about generative AI.
This post is not directed at users who are already using generative AI services like ChatGPT, MidJourney, or Bing AI.
This is for the authors, artists, readers, and any consumer of the creative arts who wants to support human artists and writers without engaging with generative AI services.
But Grace, we already went through this earlier this year!
Yes, we did. But since then, my Instagram feed has been inundated with Pixar characters. Bookstagrammers were sharing likenesses of themselves as cartoon characters using Bing AI. The pictures were, admittedly, cute. For a moment, it didn’t register with me that these were images created with generative AI because I knew that cartoon generators existed out there with presets that required the user to select specific features. (Does anyone remember the South Park character creator from the early 2000s? No? Just me?)
At face value, Bing AI presents a low risk for users. They are not prompted to upload photos of themselves for the likeness to be created. Instead, they are asked to describe what they would like the picture to look like.
Sound familiar? (Looking at you, MidJourney)
I shared a post where I questioned how this generative AI differed from other platforms. The post was getting traction too quickly for my comfort, so I decided to delete it, take a step back, and collect my thoughts. Was I crossing a boundary with readers jumping on this trend? Is it my duty as an author to share a concern that many of my peers and I have for AI’s growing interference with human creativity?
As much as I love Instagram, I don’t think it’s the right place to discuss philosophy, whether in a comment section or a slide deck on someone’s feed.
So let’s talk. Let’s discuss meaningful solutions for a future with AI as a writing and reading community.
Before we get into the thick of it, let’s establish a baseline to work from.
Fundamental truths we must agree on:
AI is here.
AI is not going away.
What is Generative AI?
“Generative artificial intelligence is artificial intelligence capable of generating text, images, or other media, using generative models. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.” (Wikipedia)
What Creative Realms Does AI Exist In?
All of them.
Understanding Tangible Loss Created By Generative AI for Artists
I read the same argument presented by my peers time and time again: AI steals from real artists. While I agree with this, it is worth looking at some data points to quantify the monetary loss already incurred by artists and writers.
Do we have access to that information, though?
There must be some way to come up with the data points, considering OpenAI has already attracted an investment of $10 billion from Microsoft based on its potential to generate revenue in the future. They’ve calculated the return on their investment, so I refuse to believe that projected losses for artists and their partners in creative spheres are incalculable.
This is where I am asking the community to weigh in and contribute to the conversation.
While I continue to scour the internet for data, I’m inserting my own form here in the hopes that we can start to understand the loss already incurred at the micro-level and predict future trends. I’ve only shared this form with a few folks but have been given permission to share their testimonials. For their privacy, I have omitted their names from this text.
“As soon as ChatGPT was made widely available at the beginning of summer 2023, my contract copywriting opportunities dried up. Unless I was willing to work with AI to write copy (I am not), there were fewer jobs available to me where I usually secure gigs (UpWork). I have noticed more opportunities returning now with jobs specifically looking for human writers, which gives me hope. I still struggle to secure the kind of contract work I could get before, though, and feel as though my days of being a contract copywriter are numbered.”
“As an illustrator and book cover designer, I have had a couple of recent clients/authors who decided to use AI-generated images instead of paying me for commissioned pieces. These included book covers as well as illustrations for the interior of the books. It's difficult to quantify revenue loss, but I can state confidently that I've had a drop in the number of clients this year while also noticing an increase in AI-created art being used by my previous clients.”
We need to aggregate these testimonials and quantify our collective losses together.
If you, or someone you know, have been affected by the use of generative AI in your field, please consider filling out this survey.
Setting Standards
I have observed in the indie community a lack of standardization across the board. While we can’t expect every individual or organization to look or act in the same manner, perhaps we can come to an agreement on a generally accepted code of conduct with respect to AI.
What would that look like? Who would enforce those standards? I have quite a few questions on this matter, like:
Do leaders in our respective communities have a responsibility to advocate and demand standards for our partners in the arts to abide by?
Do we insist on contracts with publishers that include clauses that address the use of AI for cover art or marketing services?
Should cover art or interior artwork created with AI be disclosed before and after publication?
Do we create pathways to direct the traffic of AI submissions from publishers to other platforms as a means to sequester them into their own spaces?
Does the very notion of that legitimize the use of generative AI for creative purposes?
What other questions should we be asking ourselves?
That’s what I’m interested in.
Current Mitigation Efforts
One of my visual artist friends shared The Glaze Project with me from the University of Chicago. In their own words, “Glaze is a system designed to protect human artists by disrupting style mimicry.” Their new tool, Nightshade, was designed to help creatives protect their images from generative AI training.
If you know of any meaningful efforts in the creative and nonfiction writing fields, please share them!
It is also worth noting that other services are actively working to conceal text generated with large language models like ChatGPT from AI detection software.
The Value of Human Creativity
(Or labor, in blunt terms.) The ethical and existential questions AI poses for humanity are not new. In 2014, CGP Grey created and shared the video “Humans Need Not Apply,” which addresses the concerns many of us share today. If someone was asking these questions in 2014, are we really nine years behind in addressing a meaningful solution on a larger scale?
I recommend watching the video in its entirety, but if you would rather skip to the section addressing the creative arts, click here.
There is a follow-up podcast to this video from this year if you want to listen to more.
Look, I’m not an economist or a statistician. I’m simply a writer trying to understand how I can continue to create art and connect with others meaningfully without being drowned out by a litany of computer-generated works. I also want to know how to reach the folks who don’t see using generative AI in our fields as a problem.
So, my last question is, do purveyors (read: consumers) of the arts value the creative process behind a work, or are they only interested in the final outcome?
Thanks for reading. Hopefully, this gave you some food for thought. Stay safe and stay spooky.
-G
I think the problem isn’t so much whether consumers value things made by humans more than the final product; it’s that the biggest players in the entertainment industry clearly don’t and are only interested in making a final product as cheaply as possible. They don’t care how the sausage is made, and never have. They only care whether they can make it inexpensively and sell it efficiently. It’s the age-old tension that has always existed between art and business when the artist isn’t in control of the business end, but it’s about to get amplified a thousandfold. The consumer will consume whatever is put in front of them. If they are only given AI-generated content, they’ll consume it rather than consume nothing.
I'm not personally a fan of AI art or text, and I find the outright copyright theft utterly immoral. However, what has been created with this theft is a set of tools. They could be good or bad; they have their uses and their limits. With that said, if the tool is worth $10bn, you've got to assume that it's worth that because it saves paying someone else (creators) more than $10bn.