
The AI Advantage: How I Made Award-Winning AI Short Films Without a Film Degree (And You Can Too)

  • Writer: S B
  • Jun 30
  • 5 min read

Updated: Jul 10

Published in Generative AI


AI-generated image of a smiling Black woman filmmaker holding a clapperboard beside a humanoid robot dressed in an orange sweater vest, seated together on director’s chairs with a sign that reads “AI DIRECTS.”

AI will be the greatest coach and cheerleader many people ever experience.


I’ve won the Runway ML AI Daily Challenge multiple times. Not because I have a film degree or a background in animation. I don’t. I’ve never studied filmmaking formally. Everything I know, I learned by experimenting, asking questions, and using tools like ChatGPT.


Runway ML is a generative video platform designed for filmmakers, artists, and creators to bring their stories to life using AI. One of the standout features is its AI Daily Challenge, an ongoing creative contest where users submit short films based on a daily prompt.


Here’s an example of one of my winning submissions: ‘Pink Morning Routine,’ created for the theme ‘morning routine.’ The message: do something every morning to center yourself.



This article walks through how I consistently create award-winning AI films, from idea to final edit.



Blueprint for Aspiring AI Filmmakers


Below is a quick overview of the full workflow: the seven steps I follow every time I create a short film with AI.


Step 1: Idea Generation and Brainstorming


🕟 Time: ~1 hour


Every project starts with a question: What story do I want to tell? I consider the emotion I want to evoke, the theme I’m working within, and whether the narrative should be literal or abstract.


Sometimes the idea is clear. Other times it’s murky and only resolves itself after I begin generating images.


I also decide on the animation style. Will it be painterly and expressive? Or cinematic and grounded in realism? If there are characters, I define their traits early to maintain consistency across the production pipeline.



Step 2: Image Generation


🕟 Time: 1 to 4+ hours


Once the story and style are clear, I move into image generation.


The challenge here is no longer quality. It's vision. The image must capture exactly what you want, so don't sell yourself short; iteration is often a must.

For example: a waterfall in a painterly style can vary dramatically depending on the time of day. Bright skies signal hope and optimism, sunset hues suggest reflection and rest, and nighttime tones evoke solace and completion.


Plus, the better the images, the better the quality of your animation. Don’t take shortcuts.


The tool I choose depends on the visual goals:

  • Google Imagen 4: Best for photorealism and nuanced textures.

  • MidJourney: My go-to for stylized, artistic, or surreal aesthetics.

  • Runway: Essential when I need character consistency, ensuring that a character’s appearance (hair, clothing, facial features) remains recognizable across different shots.


Each has strengths, so picking the right one is critical to matching your intended tone.



Step 3: Animation


🕟 Time: 1 to 3 hours


Sometimes the AI gets it immediately. Sometimes it gets creative and goes beyond your instructions. And sometimes it’s confused.


Confusion looks like this: you ask for the water to move, and it doesn't. Usually the flaw isn't the model but your image. Something in the image isn't registering as water to the model. Refine the image and try again.


My primary tool for animation is Runway Gen-4. While Veo 3 (Google) and MidJourney Video v1 are also available, Runway currently has the most intuitive and filmmaking-ready interface.


What makes Runway Gen-4 stand out:

  • Support for reference images and character continuity

  • A timeline that feels more like editing film than generating random clips

  • Greater control over framing, motion, and cinematic sequencing
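For readers who want to script generations rather than use the web interface, Runway also offers an official Python SDK. The sketch below is a hypothetical example, not my actual process: the model name, parameters, and image URL are assumptions, so check Runway's API documentation for current values. The API call itself is commented out because it requires an API key; the prompt-building helper runs on its own.

```python
# Hypothetical sketch of scripting image-to-video with the RunwayML Python SDK.
# Model name, parameters, and URLs below are assumptions -- consult the
# official Runway API docs before running the commented-out portion.

def build_motion_prompt(subject: str, motion: str, camera: str) -> str:
    """Combine subject, motion, and camera direction into one animation prompt."""
    return f"{subject}, {motion}, {camera}"

prompt = build_motion_prompt(
    subject="a painterly waterfall at sunset",
    motion="water cascading gently, mist drifting",
    camera="slow push-in, cinematic framing",
)
print(prompt)

# from runwayml import RunwayML                       # pip install runwayml
# client = RunwayML()                                  # reads RUNWAYML_API_SECRET
# task = client.image_to_video.create(
#     model="gen4_turbo",                              # assumed model identifier
#     prompt_image="https://example.com/keyframe.png", # your generated still
#     prompt_text=prompt,
# )
# print(task.id)  # poll the task until the clip finishes rendering
```

Structuring the prompt as subject, motion, and camera keeps each generation deliberate instead of leaving framing to chance.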



Step 4: Assembly and Editing


🕟 Time: 10 to 30 minutes (assembly), 10 to 30 minutes (editing)


Once animation clips are rendered, I assemble everything using DaVinci Resolve. For simpler edits or social media posts, I’ve also used CapCut effectively.


This stage includes:

  • Matching cuts to the pacing of music

  • Adding narration or dialogue if needed

  • Including titles, credits, and final polish
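Matching cuts to the pacing of music is mostly arithmetic: once you know the track's tempo, you can compute exactly where each cut should land on the timeline. This small sketch (the eight-beats-per-cut default is just an illustrative choice, not a rule) shows the math I apply by ear in the editor.

```python
# Minimal sketch: given a track's tempo (BPM), compute timeline positions
# (in seconds) where cuts land on the beat. The beats-per-cut value is
# an illustrative default, not a fixed rule.

def beat_cut_points(bpm: float, num_cuts: int, beats_per_cut: int = 8) -> list[float]:
    """Return cut times (seconds) so each clip lasts `beats_per_cut` beats."""
    seconds_per_beat = 60.0 / bpm
    return [round(i * beats_per_cut * seconds_per_beat, 3) for i in range(1, num_cuts + 1)]

# A 120 BPM track with a cut every 8 beats means a cut every 4 seconds.
print(beat_cut_points(120, 4))  # → [4.0, 8.0, 12.0, 16.0]
```

Snapping cuts to these positions in DaVinci Resolve (or CapCut) is what makes an edit feel rhythmic rather than arbitrary.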



Step 5: Music Generation


🕟 Time: ~5 minutes


I create music using Suno AI. It allows you to generate original compositions that feel emotionally aligned with your story. You just decide on genre (pop, reggae, jazz), tone (upbeat or ambient), and whether you want vocals or instrumentals. Once you get comfortable with the prompts, it’s surprisingly easy to use.



Step 6: Sound Design


🕟 Time: ~10 minutes


For sound effects, I use Epidemic Sound, a curated library of high-quality, royalty-free effects. I find the sounds I need, like a coffee maker brewing or a woman humming, download them, and sync them to the right moments in DaVinci Resolve. It's quick, intuitive, and brings the scene to life.



Step 7: Narration (Optional)


If I want a character to speak or need voiceover narration, I use ElevenLabs. Their text-to-speech catalog lets me select from a wide range of realistic voices, whether it’s a news anchor, a grandmother, or a young narrator. It adds tone, texture, and personality to the film when needed.
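ElevenLabs also publishes a Python SDK for scripting voiceover generation. The sketch below is hedged: the client method and model name reflect my understanding of the SDK and may have changed, so verify against the official docs. The API portion is commented out (it needs an API key); the script-splitting helper, which prepares one audio clip per narration line, runs standalone.

```python
# Hypothetical sketch of batch-generating narration with the ElevenLabs
# Python SDK. Method and parameter names are assumptions -- check the
# official ElevenLabs documentation before uncommenting the client calls.

def narration_lines(script: str) -> list[str]:
    """Split a narration script into clean, non-empty lines, one per audio clip."""
    return [line.strip() for line in script.splitlines() if line.strip()]

script = """
Every morning, she begins the same way.
A cup of coffee. A quiet song. A moment to center herself.
"""
lines = narration_lines(script)
print(lines)

# from elevenlabs.client import ElevenLabs            # pip install elevenlabs
# client = ElevenLabs(api_key="...")
# for i, line in enumerate(lines):
#     audio = client.text_to_speech.convert(
#         voice_id="YOUR_VOICE_ID",                    # chosen from the voice catalog
#         model_id="eleven_multilingual_v2",           # assumed model name
#         text=line,
#     )
#     with open(f"narration_{i}.mp3", "wb") as f:
#         for chunk in audio:
#             f.write(chunk)
```

Generating one file per line makes it easy to slide each phrase to its matching shot in the edit.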



Tools Used in My Winning Film (Pink Morning Routine)


  • Images: Google Imagen 4 and Runway Gen-4 (character consistency)

  • Animation: Runway Gen-4

  • Music: Suno AI

  • Sound design: Epidemic Sound

  • Editing: DaVinci Resolve


I chose these tools because they consistently deliver professional-quality results and have proven reliable over months of experimentation, though the AI landscape evolves quickly.



Community and Belonging


Tools matter, but so does community. One of the most important factors in my growth has been the sense of belonging within the Runway and MidJourney communities, especially on Discord.


Runway has created a safe, supportive space for budding AI creators to share work, learn from others, and receive feedback. MidJourney has done something similar for artists exploring imaginative visual styles.


That kind of culture is essential. There’s still widespread resistance and criticism toward AI creative tools, but community helps counter that with encouragement, dialogue, and mentorship. Belonging is not a luxury in creative growth. It’s a requirement.



A Future Forged by Opportunity, Not Just Disruption


As I watch each finished film, I see a glimpse of a future where storytelling isn’t locked behind formal training or expensive equipment. AI is not just a productivity tool. It’s a door opener. A teacher. A coach.


I don’t have an art degree, a film degree, or any traditional filmmaking training. But with the right vision and tools, I’ve been able to make films that connect and win.


AI isn’t about disruption. It’s about opportunity and initiation.


If you’re thinking about making your first AI film, now is the time to try.



Join the Conversation


"AI is the tool, but the vision is human." — Sophia B.


👉 For weekly insights on navigating our AI-driven world, subscribe to AI & Me.

Let’s Connect

I’m exploring how generative AI is reshaping storytelling, science, and art — especially for those of us outside traditional creative industries.

About the Author


Sophia Banton works at the intersection of AI strategy, communication, and human impact. With a background in bioinformatics, public health, and data science, she brings a grounded, cross-disciplinary perspective to the adoption of emerging technologies.


Beyond technical applications, she explores GenAI’s creative potential through storytelling and short-form video, using experimentation to understand how generative models are reshaping narrative, communication, and visual expression.

