What Ghibli films and Ghibli memes share
The Wind Rises (2013)
The four seconds of film above is the result of fifteen months of painstaking animation by a single artist under Hayao Miyazaki’s watchful eye.
And this image, created in the same style, was likely made in less than a minute.
Created by Grant Slatton and GPT-4o
This image was followed by untold thousands of memes emulating the same illustrated look, and even a full trailer for the 2001 version of The Lord of the Rings.
Studio Ghibli and its resident visionary Hayao Miyazaki are the originators of this style. Their masterpieces include films like My Neighbor Totoro, Princess Mononoke, and Spirited Away.
The Ghibli memes that were suddenly everywhere were produced by OpenAI’s highly impressive GPT-4o image generation (which I will have much more to say about next week).
As social media overflowed with AI-generated images mimicking Ghibli’s distinctive aesthetic, we witnessed a fascinating and historic collision between one of animation’s most labor-intensive traditions and the instant gratification of AI art.
And yet, the difference between the Ghibli films and the Ghibli memes is not so much about methods as it is about effort and commitment.
To be clear, Hayao Miyazaki is a genius and the Ghibli meme makers are not. But they still share something deeply meaningful, not just superficially, but creatively.
Craft vs Click
When it comes to effort, there is a vast chasm separating Ghibli films from Ghibli memes.
The Ghibli memes are made quickly and easily. Even the Lord of the Rings trailer only took nine hours.
Studio Ghibli’s films are lovingly and laboriously hand-crafted over three to five years.
But as I covered in Everything is a Remix, meme-making is creative. It’s just the very first step on a very long and demanding path.
The meme makers did something simple but powerful: they copied, transformed, and combined existing work to create something new.
Grant Slatton, who created the family portrait above and sparked this craze, just loved Ghibli films and wanted to see his family transformed into Ghibli characters.
Later meme makers were more interested in combining the Ghibli style with other works, primarily memes. What would the gentle, nostalgic style of Ghibli look like applied to the devilish Disaster Girl meme?
The Ghibli memes came from regular people who took a moment from their day to have fun, and incidentally, they did something creative.
The meme Disaster Girl converted to Ghibli style
The Long, Long, Very Very Long Road
Hayao Miyazaki had a similar moment of incidental creativity decades ago: some rudimentary creative act performed while he was playing. But then he kept going. And going. And going.
Over time, these acts became bold and sophisticated. He merged the aesthetics of ukiyo-e woodblock prints into his work. The result can be seen in My Neighbor Totoro. He merged Alice in Wonderland and The Wizard of Oz with Japanese folklore. The result was Spirited Away.
What distinguishes Miyazaki is not so much how he created but how far he was willing to travel down the long, winding path of creativity until he finally started creating enduring art.
The rest of us may never get there, but we can travel at our own pace, go as far as we like, have fun along the way, and maybe even make enduring creative work for ourselves or our communities.
A woodblock print by Kawase Hasui. This style was one of the foremost influences on Hayao Miyazaki.
Poster for The Sting (1973) vs The Studio (2025)
The central image is totally different, but everything around it is taken from The Sting poster. In my opinion, it copies too much from one place.
Beyond Uncanny Valley
2D Akira vs 3D Holly
The uncanny valley is the sensation of unease we feel when a human character looks close to but not quite human.
For example, the character Akira is not realistic at all, yet we easily connect with and relate to him. No uncanny valley.
The character of Holly from The Polar Express looks more like a real person, but is not relatable. She feels strange and ghost-like.
This is the uncanny valley. When human characters are kinda real but obviously not real, they have an eerie or even repulsive effect on us.
The Luke Skywalker cameo in The Mandalorian was uber-uncanny
Computer-generated imagery has been around for at least forty years, but it has never conquered the uncanny valley. There has never been a truly believable CGI human character. Anybody who saw the conclusion of season 2 of The Mandalorian got treated to a Luke Skywalker cameo that didn’t look much better than The Polar Express.
The closest we’ve come to believable CGI humans are the human-like characters in movies like Avatar: The Way of Water and Kingdom of the Planet of the Apes. These characters don’t feel uncanny for a simple reason: they’re not human.
AI is doing what CGI could not – it is crossing the uncanny valley. AI is generating human characters that are indistinguishable from real people.
There are now AI-generated still images of people that look real. Head over to ThisPersonDoesNotExist, and with every refresh of the page you’ll see a new portrait of a synthetic person. There are often small imperfections but these images are good enough to fool basically everyone.
Real? Not real? He’s not real but I can’t tell. Via ThisPersonDoesNotExist.
And we are now getting AI-generated video of realistic people. For instance, a new feature in the AI video app Captions creates realistic vloggers. If you watch these clips long enough, the illusion doesn’t hold, but over shorter durations, you can be fooled.
The vloggers generated by Captions look real, especially in short clips.
We are traveling beyond the uncanny valley. What comes next?
I can foresee two realities that will co-exist.
We will still very much have the uncanny valley. AI-generated humans mostly do not look real and creators won’t be focused on making perfectly realistic people. This is mostly what we’re going to see for a long time to come.
But sprinkled out there among the uncanny will be fake people who look real. “They” will be sharing their experiences, instructing us, persuading us, selling us, and seducing us.
What does this mean? Like everybody else, I’m still grappling with this emerging reality. I’m happy to hear your thoughts in the meantime.
But this I’m certain of: those old uncanny valley characters like Holly will soon feel nostalgic, emblems of simpler times when you could trust your eyes.
AI is for Gruntwork
Stephen Root in Office Space
My big struggle with using AI for creative work has been this: how do we harness the benefits of AI, while preserving our souls and not churning out slop? How do we maintain our humanity in an era with more and more AI-generated content?
These are complex questions we’ll all be grappling with for years to come, but here’s an insight that has helped me: always remember that AI is for gruntwork.
When using AI is disrespectful
AI is for gruntwork, but what’s the opposite of gruntwork? For me, the opposite of gruntwork is human connection. This is a core feature of creative work. Whether you’re an artist or you work in advertising, you need to connect with your audience.
This connection with our fellow humans is sacred. When you’re trying to empathize with others, to share your struggles and your experiences, to teach and inform, you had better speak in your own voice. If you use AI, people will smell it.
If you send a personal email to a friend that you generated with ChatGPT, don’t be surprised if a chill descends upon your relationship afterwards. The realm of human connection is sacred territory. If you attempt to tread these grounds using AI tools, you might offend the person you’re trying to reach.
Alas, most of what we do is much more mundane. And that’s where AI can help the most.
Most work is gruntwork
Most of us don’t spend all day connecting with people. Most of what we do is gruntwork.
Gruntwork is time-consuming, tedious, and low-skill, but it’s also essential. It’s the necessary behind-the-scenes work for our larger projects and goals. Some examples of creative gruntwork:
Writing metadata, tags, or captions for content. You did the hard work of creating good content, but you still need to do chores like these when you publish.
Scheduling and project management. Many creatives are not strong at this sort of admin work. ChatGPT is very good at creating these templates.
Research and problem-solving. Most creative work requires figuring out a bunch of stuff you’re not familiar with. AI is simply the most efficient way to do this.
Emailing clients or collaborators for updates or approvals. Nope, this is not part of the sacred realm of human connection. This is just routine information exchange. People don’t expect to read your authentic voice.
In a nutshell, if a virtual assistant could do the task, try using AI. If you’re seeking to connect with someone, use your own authentic voice.
Exclusive new guide, Creative AI in 2025: A No-Hype Assessment
I’ve just published an exclusive new guide Creative AI in 2025: A No-Hype Assessment! Learn what AI will help you get real work done right now. Subscribe to the Everything is a Remix newsletter to get immediate access.
(If you’ve already subscribed to the newsletter, check your email for a download link!)
Creative AI in 2025: A No-Hype Assessment (Part 3 of 3)
#1: ChatGPT (and LLMs)
Welcome to Part 3 of my rundown of the creative AI tools that are useful right now.
Click here to read Part 1.
Click here to read Part 2.
The gold standard for creative work is ChatGPT and other LLMs, like the newly released free Chinese chatbot DeepSeek.
Why does text generation perform so well?
The text itself is often not directly used, so its blandness doesn’t matter. For example, we don’t directly use the text from chats about research, project management, scheduling, brainstorming, and learning new skills.
In addition to your primary content (your pillar content), you need to generate lots of supporting content: social media posts, FAQs, repurposed content, checklists, and cheatsheets. ChatGPT is very useful for this work.
But the primary advantage text has over all other forms of media is this: it’s instantly remixable. The text is too long? Cut it down. Don’t like the order a paragraph was written? Reorder it. Sounds generic? Rewrite it in your own voice.
Text is the most fluid form of media. AI often gives us boring writing that still conveys good information. You can simply rewrite that text in your own voice and give it personality. Images, video, and music can’t be rebuilt in the same way.
ChatGPT and other LLMs are the premier AI tool for creative work. If you’d like to master ChatGPT for creative use, check out my course, Create Content with ChatGPT and AI, which is now available at the new low price of $49.99.
How to Get Started
Large language models have all reached approximate parity. Unless you want top performance, you can use whatever you like. Or try them all and pick what you like best. My models of choice right now are Claude and DeepSeek.
Creative AI in 2025: A No-Hype Assessment (Part 2 of 3)
#2: AI Voice Synthesis
Welcome to Part 2 of my rundown of the creative AI tools that are useful right now. Click here to read Part 1.
The dark horse of generative AI is voice generation. In particular, I’m referring to a single platform here, ElevenLabs.
AI voice generation lets you enter text, choose a voice, and export narration. It mostly sounds real, but it also sounds generic. (Generic is a flaw of all AI generators.)
I wouldn’t recommend voice generation for your important projects, but it could be useful for less vital content, like instructional content or quick social media posts.
But that’s not the truly compelling feature of AI voice generation. What’s most compelling is voice cloning.
Yeah yeah, I know, cloning voices seems a bit Black Mirror. Here’s the way I look at it.
Cloning someone else’s voice is wrong (unless you’re making comedy). But cloning your voice is potentially a big time saver. For example, Sarah Dietschy cloned her voice and appears to have gotten superb results.
To be clear, I don’t want to narrate videos with AI. If you use a voice clone for the entirety of your project, the listener will sense it and they will tune out – and probably leave with a low opinion of your work.
So what is voice cloning useful for? Smaller tasks, like fixes and additions.
Making revisions to narration realistically takes 10 or 15 minutes, even for tiny edits. You have to get the recording equipment ready, record, export, import, and edit. With AI, you can type, export, import, done. Takes a couple minutes.
That is the killer feature of AI voice cloning: it lets you easily make fixes and additions.
Sidenote: another promising new platform is MMAudio. With this tool you can upload video, then it’ll analyze the scenes and add sound effects. It’s plenty quirky, but if it gives you usable results half the time, that’s a big win.
How to Get Started
If you’d like to get started with AI voice synthesis and cloning, ElevenLabs is in a league of its own. MMAudio is an interesting emerging platform that might appeal to video editors who need to add sound effects to their projects.
ElevenLabs (Free plan available)
Up next: who leads the way in creative AI?
Creative AI in 2025: A No-Hype Assessment (Part 1 of 3)
Echoes of Grace, a Sora promotional video by OpenAI
Generative AI progress has definitely slowed down. But that just means it’s gone from breakneck to merely full-tilt.
The most dramatically transformed realm in creative AI is video generation.
Sora is the biggest player and has many interesting and unique features.
Kling is the best video generator on the market right now.
Runway has made good advances and is a perennial player.
Google’s upcoming Veo 2 is looking like it could leap ahead of the competition.
AI video generation is fun but it still seems very much in the toy category. I struggle to find a purpose for AI video in my work. It’s too error-prone, too uncanny, too generic. I have similar feelings about image generation, although I think that technology is farther along.
So video and image generation aren’t that useful yet. What is?
There are three categories I’ll be covering in the next few posts. Here’s the first.
#3: AI Music Generators
In 2024, AI music generation suddenly made the leap from dreadful to decent. To be clear, you won’t be cranking out bangers with AI. But AI music can work well in soundtracks for video production and podcasts. In these contexts, you’re not seeking bangers. You want a mood, a beat, or even just filler. Tracks like this don’t need to sound good in isolation or in their entirety.
For example, this Suno-generated track has lots of issues. It’s flat, it doesn’t take you on a journey, parts are awkward, and it’s often boring. I wouldn’t listen to this. But as accompaniment for a video, it’s evocative and has a vibe. I could work with it.
Is AI music better than stock? Definitely not. But if you need music and have no budget, AI music might work for you.
How to Get Started
If you’d like to get started with AI music generation, Suno and Udio currently lead the way. Both offer free plans.
Up next: the dark horse of creative AI
OpenAI’s Sora: Toy or Tool?
All new technologies begin as toys. Cars, cameras, and personal computers were all once gadgets of unclear utility. But truly important technologies eventually cross the threshold from toy to tool.
Whenever you start working with an exciting new technology, you first need to know whether it's a toy or a tool.
If it’s a toy, you’re not expecting a lot. You’re looking to experiment, learn, and have fun.
If it’s a tool, the bar is way higher. You expect to get actual work done.
OpenAI’s Sora has been public for a while now. I’ve used it a lot and I’ve seen hundreds of clips generated by others. Is Sora a toy or a tool? The verdict is clear.
Video is like really hard
First let me say this: the mountain Sora is climbing is very, very high and very, very steep. Video is immensely complicated. Images are far less complex and yet AI image generators still regularly give us stuff like this.
AI image generators have come a long way and yet… (Generated by Leonardo.Ai)
AI video generation has all the same challenges as image generation… multiplied by at least a hundred.
Video generation requires creating hundreds of sequential images which must:
1. Look good
2. Look right
3. Flow together coherently
4. Convey your intent
That’s a lot of balls to juggle. How does Sora fare?
Sora is amazing
Sora is mind-bendingly amazing. If I had seen synthetic clips like the ones below ten years ago, I might have thought aliens created them.
If you look closely, you might find rendering issues, but overall these look good and could possibly be used in projects.
Now check this one out.
It also looks pretty good, but look at this bizarre still.
If you watch the clip above closely, it’s riddled with morphing glitches like that. And that’s just the start of the issues with Sora.
Sora is also awful
Clips like the first ones I showed are a rarity. Far more typical are clips like these.
Using Sora often feels like throwing spaghetti at the wall. I tried to create that in Sora and got this. Actually, this is one of the better ones.
The majority of clips Sora creates are unusable. They range from looking a little weird to totally surreal.
Now some of you might be thinking, Kirby, do you love slow motion or something? Why are you showing so much slow motion?
None of these prompts mention slow-motion. This is a weird quirk of current AI video generators. They all seem to create slow-motion video by default. That’s a big issue.
And the verdict is…
The verdict is pretty clear, right? Awesome as it is, Sora is firmly in the toy category. The quality of its output is just too erratic for it to qualify as a tool. Not only is Sora a toy, but it’s a very expensive toy: the full version is $200 per month.
But it’s still early in the game. What does the future hold?
My guess is that Sora and other video generators will become much more reliable and convincing. Even more importantly, they’ll get integrated into video-specific applications like After Effects, DaVinci Resolve, and CapCut, which will enable us to work around the glitches. Video generators will be capable of generating b-roll, backgrounds, and support elements.
Will video generators become good enough to wholly produce immersive films? I won’t be waiting for that. Again, the mountain Sora is climbing is very tall, very steep.
For now, Sora is a fun toy. But if you’re serious about leading-edge video production, dive in and enjoy.
Tools of the Trade 2025
The Software Stack Powering My 2025 Workflow
Obsidian is my workhorse
2024 was a year of software consolidation for me. I didn’t add many new apps to my system and I stopped using ones that weren’t providing enough value. The apps below are tried-and-true; I’ve used them all for countless hours. This is my arsenal heading into 2025.
Newcomers
New-ish tools that have become essential
Obsidian (Free for most users)
I’ve been using Obsidian for a few years and it’s now entrenched in my toolkit. The major feature of Obsidian (and Notion and other alternatives) is that you can arrange your text documents and link them together. Word, Google Docs, and other word processors only produce separate documents. Obsidian is a bit nerdy and you have to write in Markdown, which is fairly easy but not for everyone. But if you can get through the minor learning curve, you’ll never look back.
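For the curious, Markdown really is just plain text with a few lightweight conventions. Here’s a minimal, invented sketch of what a note might look like (the heading, link, and note names are my own illustrations, not anything from Obsidian itself):

```markdown
# Project Notes

Some **bold** and *italic* text, plus a regular [link](https://obsidian.md).

- A bullet point
- Another bullet, linking to [[Another Note]] using Obsidian's wiki-style internal links
```

The `#`, `**`, and `-` markers are all there is to learn for everyday writing; the double-bracket links are the Obsidian-specific part that connects your documents.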
Arc (Free)
For years, I’ve hopped among web browsers: Safari, Chrome, Brave, Firefox. I would make a mess of tabs, quit the browser out of frustration, and start making a new mess somewhere else. Arc addresses this issue by automatically archiving unused tabs. Overall, I love how it manages tabs (I have a tab problem). And it has loads of small features I rely on. I can open a link in a mini-window and quickly dismiss it. It has a mini video player that lets me keep watching a video after I leave a page. And I can summon a search bar from anywhere. Quite simply, Arc is the best browser I’ve ever used. Unfortunately, the developer of Arc is shifting its focus to a simpler browser product, so the future of Arc is hazy.
Readwise
Of everything here, Readwise is the most singular. It has no real competition. Readwise collects all your highlights into one place. These highlights can be from e-books, PDFs, the web, and even YouTube videos. (Alas, it has no good solution for podcasts or audiobooks.) You can then review these highlights in Readwise or export them to notes software like Obsidian, Notion, or others. If highlights are important to you, you need Readwise. Please note: this is still developing software that has some quirks.
Things (Mac only)
I’ve been using Things for over a year and it’s the only to-do list application I’ve ever stuck with. For me, Things has just the right balance of power and simplicity. It’s easy and pleasurable to use and not bogged down with complex features I don’t need. Having said this, I’ll admit I’m an inconsistent to-do list keeper. Sometimes my lists get cluttered with unnecessary tasks, sometimes I’ll switch to paper lists for a bit. Nonetheless, I keep returning to Things.
Cal.com (Free plan available)
Meeting scheduling software is essential for me because I’m a solopreneur and it saves me going back and forth with people to schedule (or re-schedule) meetings. Cal.com does everything I need and the plan I use is free.
Video Production and Graphics
These are the core of my video production workflow
DaVinci Resolve (Free version available)
DaVinci Resolve remains my video editor of choice and I think it’s the best product on the market. Adobe Premiere remains the standard for most video production (outside of film and TV at least). I used Adobe Premiere Pro for many years but ultimately left for two reasons: stability and price. The free version of Resolve is hugely powerful and more than most people need. The full version, which I use, is just $300 and then it’s yours forever. I’ll admit, though, I don’t really use Fusion, the motion graphics component, and prefer Adobe After Effects.
Affinity Suite
It’s simply incredible how powerful these apps are for what they cost. Each is just $70 and they can be found on sale for much less. Mostly I use Affinity Photo, Affinity’s version of Photoshop. It’s not as good as Photoshop but it’s very good, it does everything I need, and I’ve stuck with it for years now. I only do occasional vector art so Designer works great for me. I do even less publishing so Publisher is way more than enough for me. Affinity was bought by Canva this year. Here’s hoping they don’t switch to a subscription model.
Essential Utilities
Utilities I use every day
Claude (and ChatGPT) (Free plans available)
My go-to text generator has become Claude. However, I still use ChatGPT plenty. Actually, I also use Gemini, Perplexity, and Meta AI. They’re all way more alike than different, but overall, I think Claude is the best LLM out there – at least as I write this.
Raycast (Free plan available)
Raycast replaced Alfred for file launching and TextExpander for expanding text. I’ve not dug much deeper with Raycast this year and use it in a pretty elementary way. Still, I use it all day, every day and it’s a standby.
Drafts (Free plan available; Mac only)
Drafts is where I capture bits of text. If an idea pops up, I throw it into Drafts, which is very quick to launch. I also do transient bits of writing here, like messages or posts, temporary references, and notes from meetings. Everything I write in Drafts ultimately moves someplace else or doesn’t need to be archived.
Cleanshot X (Mac only)
I use Cleanshot X for screen capture of stills and videos, but another killer feature it has is letting you copy any text you see onscreen. This comes up a lot for me. This text might be in a video or in a part of the UI that can’t be selected, and I need to copy it and use it.
PullTube
For anyone needing to download videos from YouTube, TikTok, Reels, and other online sources, I’ve been using this app for years. I also use it for downloading poster frames. It’s superb and seems to get updates every couple weeks. I use it almost every day.
Notes (Free; Mac only)
Apple has done a really good job with improving its productivity apps over the years. My wife and I switched from Notion, which I found overkill, to Apple Notes for managing family information. If you have a Mac and you want easy note-taking, this is it.
The Hall of Fame
Apps I’ve used for over a decade
1Password
Essential for managing passwords and the family sharing feature works great. Apple is applying pressure here. They finally have a quality password app.
Default Folder X
Boy, I think I’ve used this app for maybe 15 years. I use Default Folder primarily for shortcuts to commonly used folders or apps. Raycast can do much of this now, but I also like some of Default Folder’s other features, like the ability to select open desktop folders as a destination in open and save dialogs.
Chronosync
This is how I back up my work drive. It does everything I need. I use Time Machine for my internal drive.
The cutting room
Apps I phased out in 2024
Spark
I used Spark as my daily email client for over six months and it didn’t add enough value for me to keep using it. There’s no great email client out there. The most interesting one is Hey, but I wasn’t aligned with how it works. I reverted to Gmail for personal email and Apple Mail for business mail.
Fantastical
This is the premier Mac calendaring app and again, it just didn’t offer many advantages over Apple Calendar or Google Calendar. Also, Apple Calendar has become more competitive. For instance, it now has a quick entry feature where you can just type in a date and time and it’ll schedule the event. I now use Apple Calendar and Google Calendar.
Gone but not forgotten
Shelved for now but maybe not forever
Tana (Free plan available)
I used Tana heavily for about a year. It was how I managed projects and small bits of information, tasks I didn’t find Obsidian well-suited to. I’ve since shifted a lot of this work over to Things, Apple Calendar, and Obsidian. Still, I like Tana and find its feature set unique. It doesn’t fit my workflow at the moment, but I might return to it in time.