7. Re: the angry takes, there’s a bit to unpack. A bunch of legal experts chimed in to say that you can’t actually copyright a “style” such as anime. However, OpenAI — and most of the other large AI labs — have been caught training their text/video/image models on copyrighted material. Altman said “we put a lot of thought into the initial examples” for the image generator and changed his profile pic to a Ghibli image. Clearly, if they trained on thousands of Ghibli frames, the studio should be compensated. This is all part of the real-time “figuring out how the economics of AI work” that we’ve dealt with over the past 3 years.
8. It makes total sense why artists are aggrieved. A lot of related creative work (graphic design, animation, drawing, etc.) was already facing headwinds from new tools and cheap labor prior to the ChatGPT Moment. A lot of these artists are freelancers without full job security. AI just adds to the pain. It's not just visual artists, though; writers and other knowledge workers are feeling the crunch too.
9. The skills I’ve spent my professional life building are very much in the bullseye. I tested Deep Research last month and was like “wow, this is as good as 76% of the things I did in the first 10 years of my career of knowledge work.”
10. Trust me, I’m taking the lessons of “The Starry Night” very seriously. Any longtime SatPost reader will know how much I like shoehorning my dad jokes into otherwise informative text. That’s me trying to differentiate my work from what AI can crank out in seconds.
11. Also important: distribution and relentlessly shilling your work because no one else will do it for you.
12. Back to the Studio Ghibli brouhaha. It’s not like the millions of people who made Ghibli images were going to have that artwork commissioned. Probably a single-digit number of people woke up on March 25th and said, “I’m going to find someone and pay them $50 to draw an image of me and my wife and my dog at the beach in Ghibli style”.
13. Studio Ghibli itself has been exposed to millions of people who had otherwise never heard of it.
14. On the last point, many pointed out that Studio Ghibli co-founder Hayao Miyazaki was once shown an AI art render in 2016 and replied with “I am disgusted. I would never wish to incorporate this technology into my work at all. I strongly feel that this is an insult to life itself.” That was based on a dogshit image, though.
15. Would Miyazaki change his mind now? Well, he’s 84 years old and has famously avoided technology, opting to draw and paint his films by hand. He is also friends with Pixar’s John Lasseter and loved the intro to Up. He almost certainly wouldn’t use the current AI tools, but if someone made something worthy with them, I don’t think he’d hate it either.
16. Similarly, Steve Jobs once said of Pixar: "no amount of technology will turn a bad story into a good story".
17. The best criticism I read of “Ghibli-fication” was by Erik Hoel. He refers to it as the “semantic apocalypse”. Basically, generative AI is getting so good at making decent-sounding text and decent-looking images that people are being flooded with the content and it is “draining meaning”. Imagine my kid. He hasn’t ever seen a Studio Ghibli film. What if he sees 10,000 “Ghibli-fied” images before he watches Spirited Away or My Neighbor Totoro? Those films — as beautiful as they are — will probably hit different than if he had never seen the images.
18. I think a relatable example for everyone is the Mona Lisa. The average person will probably have seen that image 10,000x in every type of context before actually visiting The Louvre. The magic and meaning are kind of taken away.
19. The meme-ification of everything does kind of suck in that light. I’m a huge culprit here, always trying to turn any piece of “art” or news-y moment into a piece of bite-sized content. Everything is flattened: culture, politics, finance, entertainment, all compressed into an endless feed with little time for reflection. Spend a day off the internet and you miss the “craziest meme story ever”, which is forgotten 12 hours later. This is unfortunately the internet in 2025. It is hella fun in the moment, though. So much dopamine.
20. On a related note, I’ve posted 100s of Succession, Breaking Bad and Mad Men memes online. Is this so different from people using ChatGPT to make “Ghibli-fied” images? In both cases, you’re taking a piece of content popularized by someone else and re-using it as a new form of media (out of context and without compensation to the original creator).
21. Someone will read the last point and say, “yeah dumbass, it’s a cheap way to get online engagement and isn’t art.”
22. That someone would be correct.
23. However, I think we need to separate the idea of “art” from these AI outputs.
24. It seems a majority of these AI outputs probably aren't meant to be "art". To be clear, I don't mean "art" as in "hang it up in the MoMA". I mean "art" as in someone creating something to express an emotion they feel, elicit an emotion from someone else, teach a lesson, or bring joy.
25. Which raises the question: what makes art? Don't some prompted AI works achieve this goal? I think the key is that the creator has to put thought and effort into the output. One framing I like is from legendary sci-fi author Neal Stephenson. In an article titled “Idea Having is not Art: What AI is and isn't good for in creative disciplines”, he argues that art requires microdecisions:
"An artform is a framework for a relationship between the artist and the audience. Artist and audience are engaging in activities that are extremely different (e.g. hitting a piece of marble with a chisel in ancient Athens, vs. staring at the finished sculpture in a museum in New York two thousand years later) but they are linked by the artwork. The audience might experience the artwork live and in-person, as when attending the opera, or hundreds of years after the art was created, as when looking at a Renaissance painting. The artwork might be a fixed and immutable thing, as with a novel, or fluid, as with an improv show.
But in all cases there is an artform, a certain agreed-on framework through which the audience experiences the artwork: sitting down in a movie theater for two hours, picking up a book and reading the words on the page, going to a museum to gaze at paintings, listening to music on earbuds.
In the course of having that experience, the audience is exposed to a lot of microdecisions that were made by the artist. An artform such as a book, an opera, a building, or a painting is a schema whereby those microdecisions can be made available to be experienced. In nerd lingo, it’s almost like a compression algorithm for packing microdecisions as densely as possible.
Artforms that succeed—that are practiced by many artists and loved by many audience members over long spans of time—tend to be ones in which the artist has ways of expressing the widest possible range of microdecisions consistent with that agreed-on schema. Density of microdecisions correlates with good art."
26. AI can help you make microdecisions. I use ChatGPT, Claude and Grok all the time as research assistants to find info and bounce ideas off of. But they're always an input to my process. AI is a tool. Don't let it stop you from making microdecisions. Here is a recent viral article from a professor at a regional university. He’s talking about his Gen-Z students and it’s grim: social media has sapped their attention and they’re outsourcing cognitive functions to ChatGPT. Just like your muscles, your mind needs exercise. Always be making microdecisions.
27. OpenAI says that 130m users have generated 700m images in the past week. I'm sure the majority weren't intended to be "art". That's fine. I've used countless photo apps over the years to have fun filtering some image and sending it to small group chats. Not everything has to be "art". There is obviously total bottom-of-the-barrel stuff too. A lot of the output is just straight-up slop: low-effort engagement bait with zero redeeming features (like Subway’s footlong cookie).
28. But if you do care about craft and art, then it’s important to always be making microdecisions. To wit: each Studio Ghibli film has 60,000-70,000 frames, and Miyazaki has to greenlight every single one. Example: it took his team 15 months to make this one 4-second clip. Insane.
29. Speaking of slop, the Ghibli trend was a wrap when the White House X account completely jumped the shark and posted this.
30. Here’s an example of using Studio Ghibli ChatGPT for something new. PJ Ace spent 9 hours “Ghibli-fying” 100+ scenes from the trailer for The Lord of the Rings. Somewhat ironically, Miyazaki hated the Peter Jackson trilogy because he thought the battle scenes were too video-gamey and didn’t show the complexity of war. But, clearly, there were many microdecisions made here. It’s not just a 3-second prompt output.
31. Also, this is pure stream of consciousness (so, sorry for any typos).
32. Media analyst Doug Shapiro has a good series of articles on Hollywood and the future of AI video. He says a useful way to think about AI and industry disruption is a 2x2 matrix with “technology development” and “consumer acceptance” as the axes. One study he cites finds that people who use AI more are more willing to accept AI outputs. Something to think about long term.
33. Shapiro also has a great quote on what is valuable in a world where AI makes infinite content but there is still finite demand: “The economic model of content creation shifts radically, as video becomes a loss leader to drive value elsewhere—whether data capture, hardware purchases, live events, merchandise, fan creation or who knows what else. The value of curation, distribution chokepoints, brands, recognizable IP, community building, 360-degree monetization, marketing muscle and know-how all go up.”
34. Aside from art, the new ChatGPT model has interesting use cases, as shared by Professor Ethan Mollick: visual recipes, homepages, textures for video games, illustrated poems, unhinged monologues, photo improvements, and visual adventure games, to name just a few.
35. Second-to-last post: an interesting take on how ChatGPT totally remakes the process of buying real estate.
36. Last post is this insightful thread from Balaji Srinivasan:
A few thoughts on the new ChatGPT image release.
(1) This changes filters. Instagram filters required custom code; now all you need are a few keywords like “Studio Ghibli” or Dr. Seuss or South Park.
(2) This changes online ads. Much of the workflow of ad unit generation can now be automated, as per QT below.
(3) This changes memes. The baseline quality of memes should rise, because a critical threshold of reducing prompting effort to get good results has been reached.
(4) This may change books. I’d like to see someone take a public domain book from Project Gutenberg, feed it page by page into Claude, and have it turn it into comic book panels with the new ChatGPT. Old books may become more accessible this way. [A rough code sketch of this pipeline follows the thread.]
(5) This changes slides. We’re now close to the point where you can generate a few reasonable AI images for any slide deck. With the right integration, there should be fewer bullet-point-only presentations.
(6) This changes websites. You can now generate placeholder images in a site-specific style for any <img> tag, as a kind of visual Lorem Ipsum.
(7) This may change movies. We could see shot-for-shot remakes of old movies in new visual styles, with dubbing just for the artistry of it. Though these might be more interesting as clips than as full movies.
(8) This may change social networking. Once this tech is open source and/or cheap enough to widely integrate, every “upload image” button will have a “generate image” button alongside it.
(9) This should change image search. A generate option will likewise pop up alongside available images.
(10) In general, visual styles have suddenly become extremely easy to copy, even easier than frontend code. Distinction will have to come in other ways.
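Quick aside on Balaji’s point (4): it’s really just a three-step pipeline. Fetch the text, have one model write panel descriptions, have another model render them. Here’s a minimal, hypothetical sketch in Python. To be clear, the Gutenberg URL, the model names, and the crude character-count “pages” are all my assumptions, not a tested recipe:

```python
# Hypothetical sketch of Balaji's point (4): public-domain book -> comic panels.
# The Gutenberg URL, model names, and crude "page" chunking are all assumptions.
# Requires: pip install requests anthropic openai (API keys in the environment).
import requests
from anthropic import Anthropic
from openai import OpenAI

BOOK_URL = "https://www.gutenberg.org/files/11/11-0.txt"  # Alice in Wonderland (assumed URL)
PAGE_CHARS = 3000  # a crude stand-in for "page by page"

text = requests.get(BOOK_URL, timeout=30).text
pages = [text[i:i + PAGE_CHARS] for i in range(0, len(text), PAGE_CHARS)]

claude = Anthropic()  # reads ANTHROPIC_API_KEY
oai = OpenAI()        # reads OPENAI_API_KEY

for n, page in enumerate(pages[:3]):  # demo: just the first three "pages"
    # Step 1: ask Claude to compress the page into one visual panel description.
    msg = claude.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": ("Describe one comic-book panel that captures this page. "
                        "One paragraph, visual details only:\n\n" + page),
        }],
    )
    panel_prompt = msg.content[0].text

    # Step 2: render that description in a consistent visual style.
    img = oai.images.generate(
        model="dall-e-3",  # swap in the newer image model if you have access
        prompt="Comic-book panel, consistent ink-and-watercolor style: " + panel_prompt,
        size="1024x1024",
    )
    print(f"Panel {n}: {img.data[0].url}")
```

The genuinely hard part, which this sketch ignores, is keeping characters and style consistent across hundreds of panels. That consistency is exactly the kind of microdecision-making that still needs a human in the loop.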
Below are my favourite Ghibli posts. At one point, you could post literally anything on social media — like a Ghibli-fied banana — and it would go nuclear. I appreciated the Ghibli posts that had a meta angle: