How I Survived Runway’s Gen:48: A 48-Hour AI Film Challenge
By Sophia Banton · May 2 · Published in Generative AI

I love films. But I'm also incredibly passionate about AI. When I learned that Runway ML hosts a periodic global film contest called the Gen:48 challenge, I knew I had to show up however I could: not as a filmmaker or even as an artist, but as an AI professional with minimal experience, just one AI film to my name.
I knew it would be brutal. I would need to complete a film in 48 hours, and not just any film: a film that met the requirements of the contest. After all, what's a challenge without rules?
An impressive 3,000 teams entered Gen:48, a testament to how AI tools are democratizing storytelling. I chose, arguably, to compete solo, without a team. "Arguably" because convincing my social network to sacrifice an entire weekend to create an AI film is still a formidable task. Nonetheless, what unfolded was a true test of creativity, resilience, and the evolving relationship between AI and storytelling.
You can view my submission to the competition here:
The Challenge Brief
The brief was released at the start of the 48-hour clock. Participants had to integrate three required elements: a location, a character, and an event. I selected the shipwreck as my location, the young explorer as my character, and a last meal as my event.
I chose the shipwreck simply because I knew I wanted to work with water. The other two seemed reasonable: a young explorer signaled, at least to me, a story about learning, and a last meal meant I could generate AI images of food, a skill I had sharpened in my previous AI film experiment.
The brief was hard. Crafting a coherent story in 48 hours made clear that the competition posed a twofold challenge: storytelling and technical execution. So I split my workflow into those two categories.
Day 1: Story Construction
I spent nearly all of Day 1 writing and refining the story. I drafted a full scene-by-scene outline based on the Gen:48 brief, then pressure-tested the overall story arc, individual scenes, and pacing using ChatGPT, Claude, and Gemini.
I also mapped out my Day 2 production workflow to fit the intense time constraint, carving the day into blocks for image generation, animation, sound, editing, and assembly. No footage was created on Day 1; it was entirely devoted to building a solid narrative structure and an execution plan.
Day 2: Production Sprint
Day 2 was a full sprint. I generated all visuals, animations, soundtrack, voiceover, and completed the final editing in under 24 hours. There was no sleep. Every decision — from prompting and re-generations to scene selection, cutting, and sequencing — had to be made rapidly and decisively. With my Day 1 plan as a roadmap, I moved methodically through each production phase despite the intense time pressure.
Time management and creative decisiveness became survival skills as the deadline approached. The solo workflow meant every technical hiccup and creative challenge fell squarely on my shoulders, reinforcing just how crucial that preparation day had been. As the hours ticked by, I leaned heavily on the narrative framework I had built, which let me focus on execution rather than rethinking the story under pressure.
Technical Challenges
Model Selection and Image Generation
Creating consistent, high-quality visuals required strategic use of different AI models. Each tool presented distinct advantages and limitations that shaped my creative process. While Google ImageFX delivered beautifully structured scenes for environments, maintaining character continuity demanded Runway Gen-4's reference image capabilities.
Character Consistency
Despite Runway’s reference image features, I frequently encountered distortions — smooshed faces and abnormal limb proportions — that required multiple generations and careful prompt refinement. Creating a consistent human character across an entire narrative revealed the current limitations of even advanced AI image models.
Animation Bottlenecks
The most significant challenge wasn’t image generation but animation. Unlike static images where prompt mastery could reliably produce desired results, animation demanded director-level technical direction. Each scene required precise instructions for camera movements, character actions, and scene transitions. The AI frequently misinterpreted these instructions, necessitating multiple generations for each sequence.
Strategic Workarounds
My decision to generate longer 10-second clips provided crucial editing flexibility, allowing me to salvage usable portions from otherwise flawed animations. This approach, while time-consuming, significantly improved the final quality of animated sequences.
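To make that salvage step concrete, here is a minimal sketch of the kind of trim it involves, assuming ffmpeg is installed on your machine. The file names and timestamps are hypothetical, not from my actual project.

```python
import subprocess

def salvage_clip(src: str, dst: str, start: float, duration: float) -> None:
    """Cut the usable stretch out of a longer AI-generated clip.

    Re-encoding (no stream copy) keeps the cut frame-accurate, which
    matters when only a few seconds of a 10-second generation are clean.
    """
    subprocess.run(
        [
            "ffmpeg",
            "-ss", str(start),    # seek to where the clip stops glitching
            "-i", src,            # the raw 10-second generation
            "-t", str(duration),  # keep only the clean stretch
            "-y",                 # overwrite the output if it exists
            dst,
        ],
        check=True,
    )

# Hypothetical example: keep seconds 3.5 to 7.0 of a flawed take.
salvage_clip("shipwreck_take04.mp4", "shipwreck_usable.mp4", 3.5, 3.5)
```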
Filmmaking Fundamentals Remain Essential
These technical hurdles reinforced an important lesson: even with AI-generated visuals, traditional filmmaking instincts remain crucial. Understanding framing, motion, pacing, and emotional resonance proved just as important as technical prompting skills. The tools may be new, but the fundamental principles of visual storytelling continue to matter deeply.
Technical Arsenal
AI Content Generation
Image Generation (Environments): Google ImageFX
Image Generation (Character Scenes): Runway Gen-4 Image Generator with reference images
Animation: Runway Gen-4 Turbo
Soundtrack: 3 original songs created with Suno AI
Sound Effects: Epidemic Sound (contest participants were granted a complimentary subscription)
Voiceover: AI-cloned voice from ElevenLabs (see the sketch below)
Assembly
Video Editing: CapCut Pro
This film was 100% AI-created: images, animation, music, and narration — all generated through AI, directed and assembled by a single creator.
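To give a flavor of how scriptable this kind of pipeline can be, here is a minimal sketch of rendering one narration line through the ElevenLabs text-to-speech REST API. It is an illustration, not my actual setup: the voice ID, model name, and script line are all placeholders, and you would substitute the ID of your own cloned voice.

```python
import os
import requests

VOICE_ID = "YOUR_VOICE_ID"  # placeholder: the ID of your cloned voice

def generate_voiceover(text: str, out_path: str) -> None:
    """Render one narration line to MP3 via the ElevenLabs REST API."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": os.environ["ELEVENLABS_API_KEY"]},
        json={
            "text": text,
            "model_id": "eleven_multilingual_v2",  # assumed model; check your plan
        },
        timeout=120,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # the API returns raw audio bytes

# Hypothetical narration line, not from the actual film script.
generate_voiceover("The sea keeps what it takes.", "scene01_vo.mp3")
```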
Final Output
Runtime: 3 minutes 59 seconds
Format: 1080p
File size: 250+ MB
Submitted to Runway’s Gen:48 Challenge
Perspective as an AI Professional
As an AI professional, I approached Gen:48 as a live experiment. Could technologists and tool builders master storytelling as effectively as trained creative professionals? Does AI literacy help, harm, or have a neutral effect when creating meaningful art and film?
These questions drove my participation. I am not a traditional filmmaker or an artist by training. This was my second AI film ever, and my second film overall. Gen:48 became a pressure test of whether deep familiarity with AI systems could translate into effective storytelling under the highest creative constraints. My AI literacy proved invaluable — allowing me to devote an entire day to story development while confidently planning for the technical execution that would follow.
As I worked through exhaustion, battling technical bottlenecks and making countless creative decisions, I began to see beyond just my individual project. This challenge wasn’t merely about completing a single film — it represented the beginning of something much larger in the creative landscape, a transformation that would soon reshape how we think about filmmaking itself.
The Wild West Won’t Last Forever
AI filmmaking is now in a chaotic, open, Wild West phase. Early stages will bring more quantity than quality, and that’s okay. New voices, genres, and styles will emerge from the noise. Studios will integrate AI into pipelines quietly, and soon “AI film” won’t even be a label anymore.
Eventually, someone as transformative as Walt Disney or Alfred Hitchcock will emerge from this AI-native world. Disney pioneered a new way to produce films; Hitchcock revolutionized how stories are told. AI is creating the breeding ground for innovation in both dimensions.
While today’s AI films are often short experiments with rough edges, tomorrow they will simply be cinema — immersive, emotionally resonant, and capable of the same transcendent beauty as any great film. Storytelling will endure through this technological evolution, preserving what makes cinema powerful while embracing new possibilities. The medium will evolve, but the magic of great storytelling remains timeless.
One day soon, we’ll stop asking if it’s an AI film — and simply ask if it’s a good one.
Join the Conversation
"AI is the tool, but the vision is human." — Sophia B.
👉 For weekly insights on navigating our AI-driven world, subscribe to AI & Me:
Let’s Connect
I’m exploring how generative AI is reshaping storytelling, science, and art — especially for those of us outside traditional creative industries.
About the Author
Sophia Banton works at the intersection of AI strategy, communication, and human impact. With a background in bioinformatics, public health, and data science, she brings a grounded, cross-disciplinary perspective to the adoption of emerging technologies.
Beyond technical applications, she explores GenAI’s creative potential through storytelling and short-form video, using experimentation to understand how generative models are reshaping narrative, communication, and visual expression.


