Message in a Bottle

Antonio Medina

The extent of my dad's and my conversations, at least over text, usually consists of him sporadically forwarding a collection of random WhatsApp memes, technology tweets, or cool engineering videos on Instagram, and me replying with an (admittedly sometimes dismissive) 🤣 or a casual like of the message. Last week, however, he sent me a link to a service called Synthesia, an AI video generation platform that has been gaining professional and media attention (at least enough for it to be on my dad's radar - he doesn't work with or around AI, he's currently an Uber driver, but he has always been fascinated by the technology and what it can do). Instead of my usual "nice," this time I clicked the link and read through it, indeed finding a new service claiming to provide high-quality AI-generated video content. I engaged with my dad a bit more by hitting him back with a riveting "that's so cool, there's people working on tech like this at Stanford and all around the tech world."

“Scary”

That's all my dad said in response. In truth, I hadn't really had a deep conversation with my dad about AI before this point, or about how we feel about it - especially not since thinking about it in all of the contexts I've been exposed to through this class. I tried being an optimist, sharing the bright side of what this class has opened my eyes to: "yeah, it's scary - but there's people working on asking the right questions for how to best handle these technologies, and how we can build systems to protect against misuse."

“….”

“But in Venezuela, the government is using this to create fake news and spread misinformation, and people think it is real”

And that's what got me - the reminder that so much of how my parents view the world is still in comparison to how life is being lived in Venezuela, where we emigrated from when I was about five years old. In doing some digging, I found that state-run news outlets had indeed used Synthesia, that same AI video generation tool, to generate a series of news videos touting economic prosperity and social tranquility in a country that is anything but, as lived by the batches of cousins, grandparents, friends, and extended family we still have out there. Growing up it was always, "be grateful for your toys, your education, your food - you wouldn't have it like this back in Venezuela."

But now, it's even more complicated, because something I never thought Venezuela would have less of … is ethical AI? While I truly believe it matters to critically consider AI and all of its daily breakthroughs, the truth is that no matter how many systems and dialogues we build to question the premise, these technologies are already out in the world. So much of the way issues are handled, as with most new tech, is reactive rather than proactive. Synthesia banned the user from Venezuela who used its platform to generate the fake news videos, and issued a statement along the lines of "It pains us to see people misuse the product we built to help benefit society — this was never our intention. However, we won't let the minority ruin the good AI has to offer."

But does this really solve the problem? Not only are more and more people becoming aware of AI and the possibilities it offers, but more and more people are also gaining the ability to use it - organizations and independent developers post the tech online, let anybody use it, and then just wait to see what happens. What are we supposed to do if, even though the "latest state of the art" like GPT-3 is still kept behind closed doors, the technology that is already out in the world is powerful enough to disrupt people's lives and support the intentions of corrupt and authoritarian governments? It feels like so much of the conversation around how to build systems for the "ethical" development and use of AI is focused on how it's used in countries that have the money, power, and systems in place to not be completely thrown off by outright abuse and malicious intent. As far as I could find, there is no "ethical AI" institute or oversight board in Venezuela, and it makes me wonder how the widespread use of AI technologies can and likely will cause more issues than it solves around the world. And that, to me, is scary.

And while I feel so fortunate to be able to sit and question where the future of artificial intelligence fits into the creative outlets of being human and expressing oneself - like music, art, writing - I feel powerless and almost guilty remembering that it's not music that's being threatened by the AI already out in the world - it's basic human rights. The danger of AI systems being used in places like Venezuela is that they can and will be used to exacerbate existing social, economic, and political inequalities rather than alleviate them. It would be scary to have to think of solutions to an incoming age of technological neocolonialism, where AI is used to extract resources and exploit people rather than to empower them.