Live Action Princess Mononoke Trailer - AI Film

#princessmononoke #studioghibli #studioghibliart #anime

This video is an educational project created to demonstrate the capabilities of generative AI technology. All rights to ‘Princess Mononoke’ are owned by Studio Ghibli. This remake is not intended for profit or distribution and is solely for the purpose of educational exploration. Adapted by PJ Accetturo.

A full breakdown of my process is on X, but I’ve also pasted it here in case you don’t want to read it there:

I’ve wanted to make a live action version of Studio Ghibli’s Princess Mononoke for 20 years now. I spent $745 in Kling credits to show you a glimpse of the future of filmmaking.

The Mononoke trailer is a shot-for-shot remake of the original trailer. This film has been in my head for two decades. I love this world so much, and I hope this meager adaptation inspires others to further explore their favorite worlds.

I’m sure there will be some criticism of this. I’ve heard Miyazaki is anti-AI. That’s okay. I made this adaptation mostly for myself, because his work makes me want to create new worlds. We should look for ethical ways to explore AI tools that help empower artists to create.

If you’re curious how I made this: a little bit of @Magnific_AI for the base characters and thousands of @midjourney generations for the scenes. The trailer is about 50 shots, and each shot took about $10-20 of @Kling_ai credits to get right (50 shots at roughly $15 each is how the total came to $745).

At first I experimented with @lipdubai for the talking. The results were fantastic, but their platform is better suited to extended talking sequences, not the 1-2 second shots you see in this trailer. In the end I just used Kling’s lip sync feature, and it was pretty good.

To get the scenes to match the original trailer in Midjourney, I uploaded still frames from the trailer to @ChatGPTapp and asked it to describe everything in the scene. Then I simplified each description so Midjourney could understand it. (If you’d rather script this step, there’s a rough sketch at the end of this post.)

After generating a number of images I liked, I created scene references for the overall aesthetic and character references from the @Magnific_AI base characters. Midjourney lets you reference both a scene and a character in a single prompt.

Pro tip: if you’re adding a character reference for the face, use --cw 10 to reference just the face and have Midjourney base the outfit and scene on your prompt. If I hadn’t used --cw 10 in this shot, it would have just been a portrait of her face. Also, use the 2x zoom out. (There’s an example prompt at the end of this post.)

When I brought the images into Kling 1.5, I just used simple prompts like “slow motion”, “she gallops quickly”, “explosions in background”, and set it to generate 2-3 videos per image. (This got very expensive, and some shots took 10 runs to get right.)

I also used negative prompts: ARTIFACTS, SLOW, UGLY, BLURRY, DEFORMED, MULTIPLE LIMBS, CARTOON, ANIME, PIXELATED, STATIC, FOG, FLAT, UNCLEAR, DISTORTED, ERROR, STILL, LOW RESOLUTION, OVERSATURATED, GRAIN, BLUR, MORPHING, WARPING.

Then I brought all the shots into FCPX and layered them on top of the existing trailer.

If you’re thinking about doing this on a budget, I’d suggest using Runway instead. It would have saved me $700, but I needed 1080p.

That’s it! Feel free to follow me on X for more AI films :)
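P.S. If you want to script the ChatGPT description step instead of uploading stills by hand, here’s a minimal sketch using the official openai Python SDK. The model name, prompt wording, and file path are illustrative placeholders, not exactly what I used:

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_frame(path: str) -> str:
    """Ask a vision-capable model to describe a trailer still."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe everything in this scene: subjects, "
                         "setting, lighting, camera angle, and mood. "
                         "Keep it short and concrete."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

# hypothetical frame grab exported from the original trailer
print(describe_frame("trailer_still_01.jpg"))
```

From there you can paste the output into Midjourney and trim it down by hand, which is basically what I did manually.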
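P.P.S. For the Midjourney step, a single prompt combining a scene reference and a character reference might look something like this (the URLs and scene description here are made up for illustration; --sref points at a style/scene reference image, --cref at a character reference, and --cw 10 keeps only the face):

young woman in a fur cloak riding a red elk through a burning forest, cinematic film still --sref https://example.com/scene-ref.png --cref https://example.com/character-ref.png --cw 10 --ar 16:9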