With commercial tools like ChatGPT and Midjourney becoming widely available, AI was used for all things wonderful … and worrying.
2023 was the year that generative AI truly went global. Tools like Midjourney and DALL-E became part of the mainstream, getting more accurate with increased use. OpenAI’s ChatGPT, which launched in late 2022, fueled the imagination of tech enthusiasts, who put the tool to a range of new purposes: from a legal ruling in Colombia to a TikTok business built on the platform. Although visits to the ChatGPT site peaked in May and plateaued afterward, the page had been viewed a staggering 1.7 billion times by the end of November, according to Similarweb data.
Everyone seemed to find a different use case for AI in 2023. Politicians in India used it as an excuse to deny inconvenient truths, claiming a controversial leaked audio recording had been fabricated using the technology. Advertisers, meanwhile, ran wild with it: ad agency Ogilvy created an ad featuring Bollywood star Shah Rukh Khan that small shops could personalize with AI, turning him into their own brand ambassador. Photographers, getting ahead of the risk of losing their livelihood to AI, used it to create award-winning images. All year, Rest of World cut through the noise and looked at the wider implications of AI. From women left heartbroken by their AI companions to artists being pushed aside by their own AI copycats, here are some of the most unexpected ways AI was used around the world in 2023.
Religious chatbots that say killing is acceptable
Between January and March, at least five religious chatbots popped up in India. Powered by GPT technology, they provide answers based on the Bhagavad Gita, a 700-verse Hindu scripture. Known as GitaGPTs, these chatbots mimic the tone of the Hindu god Krishna and promise users an “AI spiritual companion.”
Experts say chatbots like these can help make religious texts more accessible, but they can also have unintended and dangerous consequences. Because they’re “playing god,” whatever these chatbots say could be perceived as gospel, even though they’re merely fabricating believable answers based on statistical probabilities. Rest of World found that some of the answers generated by the Gita bots opined freely, and dangerously, on topics such as casteism, misogyny, and even law. Three of these bots, for instance, said it was acceptable to kill another person if it is one’s dharma, or duty.
The real heartbreak of falling in love with an AI boyfriend
In March, Chinese AI voice startup Timedomain launched “Him,” an app with voice-synthesizing technology that provides virtual companionship to users. Most of these users were young women who customized the Him characters and then interacted with them as if they were their long-distance boyfriends. They received affectionate voice messages daily — morning calls, reminders to eat healthy, bedtime stories, and even poems.
Users quickly grew attached to their AI companions. So when Timedomain announced that Him would shut down in early July due to stagnant user growth, many women were devastated. “He died during the summer when I loved him the most,” a heartbroken user wrote on social media app Xiaohongshu. Others rushed to save voice messages; some even reached out to investors, hoping to raise enough money to save Him.
AI brings artists back to life, on demand
Sidhu Moosewala, one of South Asia’s most influential hip-hop figures, was shot dead by gunmen on May 29, 2022. A year after his death, dozens of new tracks featuring his voice were generated using AI and distributed across SoundCloud and YouTube. Some of the tracks garnered thousands of listens, fueling debates over copyright laws and royalties in the age of AI.
Moosewala’s AI tracks aren’t the only ones stirring up the music industry. Chinese-made AI programs like So-Vits-SVC, shared by programmers on platforms like GitHub, let internet users train deepfake voice models and re-create celebrity voices. From Singapore to Spain, living singers now have to compete with dead artists brought back to life by AI.
Singaporean artist Stefanie Sun is resigned to her fate: her AI version is more popular than she is. She penned a blog post in May with a warning that no human would be able to compete with AI: “How do you fight with someone who is putting out new albums in the time span of minutes?”
History rewritten by AI
What if Mexico had invaded the U.S.? Using AI-powered image-generation tools, people set out to reimagine reality.
On TikTok, accounts like @what.if_ai publish content inspired by decolonial curiosity. Commenters and followers suggest prompts to the account’s owner, who then uses ChatGPT to write a corresponding script, and Midjourney to produce accompanying visuals. The results are images depicting non-Eurocentric alternative realities — a world in which the U.K. is ruled by India, or in which Spain is invaded by the Philippines.
In Argentina, an Instagram account called IAbuelas posts AI-generated images that imagine what babies kidnapped 40 years ago during Argentina’s dictatorship would look like today. The images are made by splicing together photos of each child’s parents using Midjourney and then applying an aging filter. Some considered the images a useful tool to raise awareness, while others warned that they were simply artistic representations and should not be taken as real.