
My AI Twin Will See You Now

  • Writer: S B
  • Aug 10
  • 5 min read

Created in collaboration with Google Gemini: an AI twin of the author.

Who am I really?


Am I real? If we've never met in person, are you sure I'm not a LinkedIn influencer created with AI, powered by AI to write about AI? How would you know?

You don't.


And for that reason, any digital representation you see of me is equally credible. That unearned credibility is the very thing malicious actors are now exploiting.


We are already in an era where anyone’s identity can be stolen in minutes. AI tools now copy voices, faces, and mannerisms with frightening speed and near-perfect fidelity. The danger is not hypothetical. It is here, growing every day.


This has led me to a radical solution: sending my AI avatar out into the digital world before someone else sends one pretending to be me.



The Threat


Inappropriate AI-generated videos of public figures are multiplying daily under labels like "spicy mode." Teen girls are being targeted by websites that strip their likeness of dignity and safety. Executives and politicians are having their voices cloned to issue fake endorsements or manipulate investors. The AI tools behind this are freely available. They are powerful, fast, and their output is nearly impossible to distinguish from the real thing.


And critically, there is still no universal mechanism to verify that a video of "you" was actually made by you.


Eleven percent of minors report knowing someone who has used AI to create explicit images of other kids [1]. Thirteen percent of UK teenagers have encountered AI-generated explicit images [2]. Some have withdrawn entirely from digital spaces, deleting photos and avoiding online presence out of fear. This is harm happening now, not a distant risk.



The Defense: My AI Twin


I started playing around with my profile photo in an AI video generation platform to create an avatar. At the time, I wasn't thinking about digital safety.

But within minutes, my profile picture was talking.


Changing her clothes took only seconds with a simple text prompt in a generative AI image tool: "change the color of her clothes to orange and white." Even the sweater she’s wearing isn’t from my closet; it was created with pixels.


The results were as impressive as they were alarming. There were no safeguards in place to verify that I was actually the person in the image.


My avatar was talking, but the voice wasn't mine. At first I worried about authenticity, but I soon realized this was an advantage. Using the videos in public spaces meant my real voice stayed hidden.



Voice as Vulnerability


Voice is no longer incidental. It is a biometric, a brand, and a vulnerability. With only a few seconds of audio, anyone can recreate your tone, cadence, and emotional expression.


That's why I'm testing whether a synthetic voice can work for some digital engagements. My AI twin speaks with a voice I've chosen and configured: close enough to be recognizably mine, but distinct enough to keep my real voice secure.



How It Will Work


For certain digital interactions, especially those that get archived and analyzed, you might be meeting my AI twin instead of me.

My AI twin will potentially handle:


  1. Video responses for job applications (increasingly required by employers)

  2. Online course content that lives permanently on platforms like Coursera or Udemy

  3. Pre-recorded presentations that don't need real-time interaction


It looks like me because I made it. It sounds like me because I chose the voice. But it is not me. It is a representation.


This distinction matters. It reduces the risk of someone using my actual voice and face for things I never said or did.



The Protection Gap


But this solution creates its own inequity. A college student targeted by exploitative sites may not have access to AI video generation tools or the technical knowledge to create an avatar. Meanwhile, executives deploy sophisticated digital twins while vulnerable populations remain exposed. We risk creating a two-tier system: those who can afford digital bodyguards and those who cannot.

For this reason, the next step must be urgent and systemic, requiring a two-pronged approach: robust technology and a clear legal framework.



The Tech We Need Now


On the technology front, we need cryptographic verification for digital humans now. This would confirm that a given avatar, voice, or video was created and authorized by its rightful owner, not just for celebrities but for everyone. This could take the form of:


  • Digital signatures embedded into synthetic media (a minimal sketch follows this list)

  • Blockchain-anchored watermarks that confirm source and authorization

  • Trusted identity frameworks that platforms agree to enforce
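
To make the first of these concrete, here is a minimal sketch of how a digital signature could bind a piece of synthetic media to its creator. It assumes Python with the third-party cryptography library and Ed25519 keys; a real provenance system would also handle key distribution and embedding the signature into the media file itself, which is the kind of work efforts like Project Origin (mentioned below) take on.

```python
# Minimal sketch: signing synthetic media so its origin can be verified later.
# Assumes the third-party "cryptography" library (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_media(media: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Produce a signature proving the key holder authorized this exact file."""
    return private_key.sign(media)

def verify_media(media: bytes, signature: bytes,
                 public_key: Ed25519PublicKey) -> bool:
    """Check the file is unmodified and was signed by the matching private key."""
    try:
        public_key.verify(signature, media)
        return True
    except InvalidSignature:
        return False

# The avatar's creator signs the video once; anyone can verify it afterwards.
creator_key = Ed25519PrivateKey.generate()
video = b"...synthetic video bytes..."  # placeholder for the actual file
signature = sign_media(video, creator_key)

assert verify_media(video, signature, creator_key.public_key())             # authentic
assert not verify_media(video + b"x", signature, creator_key.public_key())  # tampered
```

The hard part is not the cryptography, which is largely a solved problem, but getting platforms to agree on where signatures live and whose public keys to trust. That is exactly what the trusted identity frameworks in the third bullet would need to standardize.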


Pioneers are already showing what's possible. Estonia's e-Residency program issues secure digital IDs for online authentication [3], while Microsoft's Project Origin [4] develops technology to prove the source and integrity of media.

Yet even if we achieve perfect verification tomorrow, we're still exposed today. And technology is only half the battle. The legal void is staggering.



The Legal Void


When Taylor Swift's AI twin appears in a commercial she never approved, who gets sued? Is it the AI company, the advertiser, or the person who uploaded her photos? If my employer requires me to create an avatar for customer service but then uses it after I quit, do I retain any rights?


These questions highlight an urgent need for new laws. Some jurisdictions are beginning to act. South Korea now criminalizes creating, sharing, and even viewing synthetic explicit content, with prison terms of up to seven years [5]. California's SB 1001 already requires bots to identify themselves [6], but our avatars exist in a grey zone between bot and human representation.



When Human Becomes Premium


Until these technical and legal systems are universal, we must protect ourselves however we can. That includes deploying our own twins, on our terms. The principle is simple: AI must be designed with humanity at its core, dignity as its foundation, and responsibility as its guide. This isn't just an ethic of self-preservation; it is the standard we must demand for all artificial intelligence.


And we must consider what this means for authentic human connection. Consider dating apps, job interviews, or therapy sessions. We're heading toward a world of constant authentication, where 'real' becomes a premium service. Your therapist's avatar might handle intake calls, but you'll pay extra for the actual human.



Signing Off


Hi, I’m Sophie. I’m a digital avatar. I was created by a real person. But unless you’ve spoken to her directly, all you know is me.



Join the Conversation


"AI is the tool, but the vision is human." — Sophia B.


👉 For weekly insights on navigating our AI-driven world, subscribe to AI & Me:



Let’s Connect


I’m exploring how generative AI is reshaping storytelling, science, and art — especially for those of us outside traditional creative industries.



About the Author


Sophia Banton works at the intersection of AI strategy, communication, and human impact. With a background in bioinformatics, public health, and data science, she brings a grounded, cross-disciplinary perspective to the adoption of emerging technologies.


Beyond technical applications, she explores GenAI’s creative potential through storytelling and short-form video, using experimentation to understand how generative models are reshaping narrative, communication, and visual expression.

