HOPE, DEEPFAKED: Science journalism and the efforts against AI deception
Josiah Ian Bumagat and Natalie Andrea Ayo
When Pandora opened her box, humanity obtained chaos and confusion. When we opened the internet, we added deepfakes and called it progress.
To many, artificial intelligence (AI) is a chronically online friend one can rely on at any given moment. From providing answers for a test to creating photos of an idea in one's head, AI can do just about anything.
However, what many do not realize is that these intelligent machines open a Pandora's box full of deceit and harmful consequences: deepfakes.
When science is used against itself
Once science is used against itself, people are left asking: “What happens when progress forgets its purpose?”
Years ago, there was excitement over the arrival of AI chatbots. The possibilities seemed endless.
However, Nina Schick, author of Deep Fakes and the Infocalypse, warned that this progress came alongside the emergence of deepfakes; while they open doors for creative and commercial use, they also present growing risks in the spread of disinformation.
Since then, AI has only become more powerful and more controversial.
Recently, a Facebook reel circulated showing a photo that claimed former Philippine president Rodrigo Duterte, now detained at the International Criminal Court (ICC) in the Netherlands, had become malnourished and frail.
Multiple AI detection tools were used to flag the viral photo, with Sight Engine rating it 98% likely AI-generated, Hive Moderation at 88.1%, and Decopy AI at 97.8%.
In a media interview at The Hague, Vice President Sara Duterte, his daughter, clarified, "That photo is not true; that photo was probably edited," claiming that it showed a different patient with her father's face inserted.
These deepfakes reveal how far AI-generated content has come in both realism and risk. What started as a novelty now helps shape public perception and political ideas.
So, what happens when progress forgets its purpose? Science finds itself fighting science. And those on the frontlines of truth, including journalists, are left to keep the facts in the spotlight.
Deepfakes, deeper lies, and the frontline against it
If a deepfake spreads faster than the real research, science is not just fighting lies; it is losing the race for trust.
Deepfakes are manipulated videos, images, or audio clips created using artificial intelligence and deep learning. These tools realistically swap a person’s face, voice, or body to make it seem like they said or did something they never actually did.
A simpler form of deception is the shallow fake, where media is altered using basic editing tools but can still mislead viewers.
In an article from CPI OpenFox, it was noted that around half a million deepfake videos and voice clips were shared globally in 2023, based on data from DeepMedia. If the trend continues, that number could reach 8 million by 2025.
The rise is driven by the easy access to powerful AI tools and large amounts of public data, making it easier than ever to create convincing fake content.
Meanwhile, a 2022 study by biometric firm iProov found that less than a third of global consumers could correctly identify a deepfake, showing how easily fake content can mislead the public.
With public trust at risk, journalists, watchdogs, and civic groups are becoming the digital frontline.
Media outlets such as Rappler, VERA Files, and Tsek.ph are among the foremost fact-checkers of AI-generated content in Filipino media.
These organizations flag deepfake and AI fake news through digital forensics, reverse searches, cross-referencing, and contacting primary sources for verification.
On July 3, 2025, VERA Files flagged a fake Facebook page for using an AI-edited video of actress Charo Santos supposedly advertising an eyecare supplement.
Using reverse image search, VERA Files found the original video from February 13, where Santos was simply giving relationship advice. It was later edited with AI to falsely make it seem like she was endorsing a product, according to the same article.
Moreover, in the world of politics and business, Rappler flagged on July 15 a deepfake video claiming that Senator Paolo Benigno "Bam" Aquino IV is planning an investment scheme with ABS-CBN journalists.
These clips also feature AI-generated versions of journalists Karen Davila and Bernadette Sembrano, alongside fake ABS-CBN websites that mimic real sections but contain non-functional links.
As deepfakes become more believable, people begin to question not just what they see, but if they can trust anything at all. When fakes spread faster than facts, curiosity fades, and cynicism takes its place.
When curiosity turns into cynicism
They say trust is not given, it’s forged. Why trust science if science itself is what’s causing the distortion?
This is not a battle between heroes and villains. It’s a crisis where truth, discovered over the centuries, is now up against imitation.
Deepfakes are making people question if they can still trust science and what they see.
A 2024 study in IEEE Computer revealed that exposure to deepfakes has increased by a staggering 550% since 2019, while public interest in them, measured via Google Trends, has surged yearly since 2020.
Most tellingly, nearly 90% of UK adults surveyed said they are "very" or "somewhat" concerned about deepfakes, and 40% now reflexively distrust online media, even when it is authentic.
Deepfakes exploit human psychology, making it difficult for people to distinguish real from fake, which can manipulate public perception and individual beliefs. As the technology advances, detection tools struggle to keep pace, leaving users vulnerable to deception.
The lack of public awareness, limited legal safeguards, and weak platform accountability allow deepfakes to spread unchecked. This undermines trust in democratic processes, threatens privacy, and raises serious ethical concerns about truth and authenticity in digital spaces, according to the same research.
Deepfakes are weakening public trust in online content, news, and even real events. People are becoming more skeptical, often doubting the media even when it’s real.
National Science Foundation-backed research from the United States also shows that deepfakes can damage trust in major systems like banking and communication, leaving people doubting what they see or hear.
However, studies show that people who are more media literate are better at spotting deepfakes, and less likely to share them. This means deepfake harm isn’t inevitable; it’s preventable.
A 2024 study by Rozendaal et al. highlights that media education, especially among the youth, builds strong resistance against digital lies. Empowered with the right tools, young people can be the frontliners in defending truth and press freedom online.
Hope in the midst of deception
The United Nations Educational, Scientific, and Cultural Organization (UNESCO) notes the strong demand for science journalism, as it has the power to disseminate scientific information and bridge the community of researchers to the public.
With its diverse methods and flexible content, ranging from research findings to health discoveries and technological innovations, science communication helps people better understand the world they live in and its challenges.
In efforts to keep the spirit of science communication alive, the Science and Technology Information Institute of the Department of Science and Technology (DOST-STII) continually holds science journalism training under its Science Journo Ako advocacy.
The program especially encourages the involvement of youth through campus-based journalism, helping schools adapt to a digital era where content creation and information dissemination are quick and accessible.
Additionally, to combat the widespread use of AI and deepfakes for misinformation, the government has launched a tool against them.
Through the National Deepfake Task Force, the tool was deployed during the recently concluded May elections to combat disinformation and election fraud.
Cybercrime Investigation and Coordinating Center (CICC) Undersecretary Alex Ramos emphasized that the approach is meant to catch misinformation early and will be used for independent fact-checking, not for government censorship.
Efforts such as this show that it is never too late as long as there are people advocating for change.
The future of science communication lies in the young minds eager to write for science and the community, keeping the spirit of hope alive in a world heavily reliant on something artificial.
Through these various efforts strengthening the cause of ethical journalism and science, this time, we may actually make real, human progress.
