
Oct. 15, 2025 | Jaylin Emond-Hardin | Entertainment Editor
Artificial intelligence, or AI, has become more and more prevalent as technology advances. What was once a science fiction storyline is now our reality; AI chatbots, videos and photos are all too common now.
Most of Generation Z and millennials have watched AI grow with them. Siri first launched with the iPhone 4S in 2011, while Alexa followed three years later with the introduction of the Amazon Echo. These AI assistants have grown with us from childhood to adulthood on our phones, tablets and computers.
When it comes to generative AI, most people now approach the subject with the utmost caution and hesitancy. After all, it has been shown time and again that AI systems are “trained” on art taken from real people without their consent.
It was only in April that users began “Ghiblifying” their photos, recreating them in the style of Studio Ghibli’s films, despite co-founder and animator Hayao Miyazaki’s long-standing objections to the technology.
In a 2016 documentary, Miyazaki described AI-generated animation as “utterly disgusting” and “an insult to life itself” before stating he would “never wish to incorporate this technology into (his) work at all.” That didn’t stop users from generating the images on OpenAI’s platform ChatGPT, with thousands of accounts sharing them on social media. Even @WhiteHouse on X, formerly known as Twitter, shared “Ghiblified” images.
The exploitation of artists isn’t the only issue posed by AI. Data centers across the world use billions of gallons of water to cool their servers. In 2022, major tech companies reportedly used 580 billion gallons of water for their AI operations, and in 2024, a single Google data center in Iowa consumed 1 billion gallons on its own. Every time a user interacts with an AI system, be it Google Gemini, ChatGPT or Microsoft Copilot, roughly one-tenth of a gallon of water is used. That’s 12.8 fluid ounces, more than a cup and a half.
While most adults can actively choose whether or not to consume AI-generated content or to interact with the models, it’s much more difficult to teach children how to distinguish between what is and isn’t AI-generated.
I don’t think I ever fully understood how AI was being aimed at children until I began student teaching. My class was working on an “I Am” poem where they described who they were before choosing images that represented them.
The end result? Far too many students picked out AI-generated images they liked for their projects.
To be clear, it’s not their fault. The images they used were very clearly marketed to children: magical cats, animals playing sports, even some tied to the wildly popular Netflix movie “KPop Demon Hunters.”
At our weekly assembly, it became even more evident how AI is being marketed toward children. As a reminder of how to behave in line in the hall, students were shown a parody of “Golden” from “KPop Demon Hunters,” complete with AI-generated images, animation and singing, once again capitalizing on the movie’s popularity.
The video has already amassed 463,000 views in just one month on YouTube, with educators and students alike sharing and watching it to review the basics of standing in line.
It doesn’t stop at “KPop Demon Hunters,” however. A good majority of YouTube Shorts meant for children feature AI.
When scrolling through YouTube Shorts, I found that a majority of the videos that were created using generative AI had the hashtags “YouTube Kids” and “kids animation.” These videos typically feature anthropomorphic cats and have one of three subjects: cheating, revenge or pregnancy. Sometimes all three are featured in one video.
Pair this normalization of AI in children’s spaces with the AI-generated ASMR that is hugely popular on apps like TikTok and Instagram, from glass fruit spreading to gemstone cutting to slime videos, and you have a can of worms that nobody is ready to open.
However, we’re not here just to catalog the consequences of AI. We’re here to talk about what we can actively do to slow its use and consumption.
Use a critical eye — One of the biggest pieces of advice I can give is to view suspicious content with a critical eye. Count the fingers on hands and study the eyes; even as AI improves, it still struggles to render hands and eyes convincingly, and perhaps that’s part of what makes us human. If those aren’t a dead giveaway, watch the movements, which are often too smooth or too robotic. Text is another tell: no matter how sophisticated the model, letters and logos rarely come out quite right.
Don’t give these accounts the views they want — Most accounts that post AI-generated photos or videos want engagement, which may be the most obvious thing I’ve ever said, but it’s the truth. The more views and engagement they get, the more they can say, “Look, the people want this. We should make more of this.” That engagement also encourages the models’ creators, who work to make their tools more widespread and normalized. My best advice is to swipe away or tap “not interested” when these videos appear. That puts less AI-generated content on your feed and means fewer views for these accounts.
Monitor what children are consuming — With how common AI-generated media is in children’s spheres, it’s best to pay attention to what children are watching, just as with any other content they interact with. Have age-appropriate conversations with them about AI-generated content; kids are smart, and they’ll understand what’s going on. I’ve had wonderful conversations with my students about AI and its consequences. Granted, they are fourth and fifth graders, but talking with children about the dangers of AI is better than not educating them at all.
Teach the older generations to recognize AI-generated content — Generation X and baby boomers are two populations that are also susceptible to AI-generated content. Facebook is rife with AI-generated photos and videos, and the generations that weren’t raised on this kind of technology have a harder time telling reality from AI. Along with educating younger generations about AI and its dangers, it’s also important to help the older generations — our parents and grandparents — understand how to recognize when something isn’t real. Show them how to identify an AI-generated video. Talk to them about the consequences.
It doesn’t seem like generative AI models are going away. In recent research, Forbes found that the number of AI startups has multiplied 14 times over since 2000. While some AI is beneficial — Siri and Alexa, especially — the wave of generative AI has become increasingly detrimental to both the environment and the social media landscape.
As influencers, educators and artists, ours is the generation currently leading the workplace and the social media sphere in the digital age, and it’s increasingly important that we slow the wave of generative AI. We can do this by educating ourselves and other generations, and by not giving the AI models what they want: our attention, our art and our money.
Contact the author at howlentertainment@wou.edu