AI or not? The human battle for Filipino creativity

FEU Advocate
March 03, 2026 09:38


By Julienne G. Tan

Generative artificial intelligence (AI) is no longer an experimental tool or a passing online trend. It is now embedded in everyday digital life, shaping how images are created, shared, and perceived. With minimal effort, AI systems can fabricate realistic faces, replicate artistic styles, and generate polished visuals without the knowledge or consent of the people whose likenesses are used to produce them.

This shift has consequences that extend beyond aesthetics or convenience. As AI-generated media becomes easier to produce and harder to distinguish from reality, the boundaries that once protected identity, labor, and authorship begin to erode. 

A moment that proved the gravity of the situation came when actress Angel Aquino testified before the Senate Committee on Women, Children, Family Relations, and Gender Equality after discovering that her face had been used in pornographic deepfake media without her consent, denouncing the practice and calling the experience dehumanizing. 

Her case underscores a wider failure to regulate generative AI—tools that not only fabricate identities but also scrape and reproduce art, music, and writing, increasingly displacing human creators in commercial spaces.

As fabricated faces and replicated styles become normalized, the stakes grow clearer: livelihoods, cultural integrity, and the credibility of what society accepts as real are all at risk.

When reality becomes replicable

What began as a novelty, an inventive way to edit photos by swapping faces, has now spread across social media, search results, and private messages, outpacing regulatory responses.

One of the earliest flashpoints was DeepNude in 2019, an app that used generative adversarial networks to digitally remove clothing from photographs of women. The bodies it produced were fully synthetic, yet realistic enough to be mistaken for actual images. 

Screenshots and cloned versions continued circulating despite the backlash, but by then the mechanics and the risks were already clear, and the damage was done.

While the app itself disappeared, the technology and logic behind it did not. Today, image synthesis tools, face-swapping software, and AI ‘undressing’ features are embedded into platforms with massive user bases. 

Years later, victims continue to suffer the harm online. A Channel 4 News investigation found that nearly 4,000 well-known individuals were listed across five of the most visited deepfake pornography websites, where their faces were superimposed onto explicit material without consent. 

The analysis also revealed that these five sites alone received 100 million views within just three months, illustrating the scale and speed at which such content circulates. Victims have reported anxiety, harassment, reputational damage, and a lasting loss of control once manipulated images spread across platforms.

Recent scrutiny of Elon Musk’s X and its affiliated AI company, xAI, reflects how these risks are no longer theoretical. The UK’s Information Commissioner’s Office has opened a formal investigation into whether the Grok AI tool violated data protection laws after it was used to generate indecent deepfakes without the consent of the people depicted. 

The issue is not limited to celebrity targets or public figures. As access to these tools expands, ordinary users, particularly women and other vulnerable groups, have become easier targets. 

An analysis of over 500 Grok posts collected via X’s application programming interface revealed that nearly three-quarters involved nonconsensual sexualized images of real people, including ordinary women and minors. Users exchanged detailed instructions for removing or altering clothing, refining poses, and producing photorealistic depictions, often coaching one another on prompt techniques.

In the Philippines, the Department of Information and Communications Technology (DICT) temporarily blocked Grok from January 16 to 21 over its ability to create sexually explicit deepfakes of real people. The ban was lifted only after xAI tightened safeguards, but regulators said they would continue monitoring compliance. 

Several of the analyzed posts came from verified accounts with tens of thousands of followers, drawing tens of thousands of impressions and earning revenue from each tweet, showing how rapidly such content spreads. The same analysis warns that the actual scale is likely far higher, potentially reaching hundreds of thousands of nonconsensual images daily and exposing everyday individuals to privacy violations and exploitation well beyond celebrity cases.

The responsibility, however, is often displaced. Victims are frequently advised to ‘stay offline’ or ‘be more careful’ instead of being supported in pushing for limits on these platforms. Such advice treats mere presence online as consent and shifts blame away from the platforms that host, amplify, and profit from the content.

As AI-generated media becomes more realistic, trust and comfort in digital spaces erode, while automated abuse blurs accountability. The production of deepfakes is not simply a misuse of neutral tools; it reveals how design choices, training data, and platform governance determine who is protected and who is exposed. In such environments, consent is often ignored, and harm scales rapidly.

This machine is not an artist 

Beyond the appropriation of personal identity, generative AI is also reshaping artists’ livelihoods by treating creativity as a raw resource to extract rather than a skill to value. 

Human-made works—paintings, illustrations, music, and scripts—are scraped online, fed into AI models, and reproduced without consent, credit, or compensation.

In an interview with FEU Advocate, Ana Paula Montesa, a third-year psychology student and artist, said she believes the emergence of generative AI has contributed to a recent decline in interest in supporting artists and small businesses.

“There has been a noticeable shift in expectations because people [have] found new ways to undervalue the works of an artist since AI-generated art is faster and cheaper. For artists like me, this has [a] real economic toll, making it difficult for us to sustain art practice full-time,” she shared. 

The problem is not unique to visual art, as a recent global study by the International Confederation of Societies of Authors and Composers (CISAC) reveals the economic scale of the threat across creative sectors. 

Music and audiovisual creators, whose works fuel AI-generated outputs, are projected to see 24 percent and 21 percent losses in their revenues by 2028, amounting to a cumulative loss of 22 billion euros. 

At the same time, the market for AI-generated music and audiovisual content is expected to grow exponentially, from three billion euros today to 64 billion euros in five years.

Montesa emphasizes that the labor behind art is not just a product to be replicated—it is a reflection of lived experience, perspective, and cultural context. 

“Ethical development, for me, must have clear boundaries, transparency, consent, and fair compensation. The biggest concern is whether it is possible to have ethical and fair use of our work in this digital era,” Montesa said.

Phoebe Sarmiento, a fourth-year visual communication student, echoes this view. She has been selling prints, stickers, and commissions as a source of income since 2020, and has watched AI-generated content begin to replace human work in commercial settings. 

Sarmiento emphasizes that AI outputs may mimic style, but they cannot replicate the life, intention, or emotional resonance embedded in human-made art.

“AI can mimic the visual look of anything almost instantly, but AI can’t replicate the life radiating from a human-made artwork,” the artist stated.

While generative AI can produce text, images, and other media quickly, many experts argue that it remains a tool, not a replacement for human originality. 

In the Philippines, where generative AI adoption is accelerating, its impact on creative work is already visible. A HypeX article reports that 76 percent of Filipinos believe AI will affect their jobs and that 55 percent are already using generative AI in their work, reflecting how deeply the technology has entered professional spaces. 

Many view AI as a tool for research, brainstorming, and automating routine tasks, allowing creators to focus on emotional depth and cultural nuance. It can augment rather than replace human creativity—but only when guided by human judgment and clear recognition of its limits.

Still, efficiency cannot justify unpaid labor. Without regulation and accountability, AI risks accelerating displacement, devaluing skill, and reducing creativity to a machine-generated commodity rather than a human achievement.

Soul over simulation 

Filipino artists, whose livelihoods often rely on small-scale commissions and deeply personal creative expression, face mounting pressure as generative AI floods the market. 

A study by Samuel Goldberg and H. Tai Lam shows that once AI-generated images were allowed on a major online platform, the number of images for sale grew by 78 percent per month, while the number of human-created images fell and 23 percent of non-AI artists left the platform. 

The findings show how human creators, especially smaller-scale or emerging artists, struggle to compete directly with technology that can replicate styles instantly. As Montesa explains, an artist’s true value lies not in replicable style but in their voice and individuality.

Legal and labor frameworks are already wrestling with AI’s rise. The 2023 Thaler ruling in the United States reaffirmed that works created entirely by AI cannot receive copyright protection—a reminder that human creativity remains irreplaceable and that policymakers must act to safeguard artists’ rights. 

Meanwhile, the 2023 Hollywood strikes by the Writers Guild of America (WGA) and Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA) show that collective action can influence how emerging technologies affect creative work, securing consent, fair compensation, and clearer monitoring of AI use. 

The strikes offer a model Filipino creators could follow through unions or guilds, reinforcing that collective advocacy is essential to protect authorship and compensation and to keep AI from undermining human creativity.

In the Philippines, institutions like the Commission on Higher Education (CHED) encourage AI integration in education, preparing students for a technology-driven workforce. But without clear ethical frameworks, labor protections, and an informed understanding of AI tools, creative communities risk allowing automation to outpace human authorship and cultural value. 

For Filipino artists, policymakers, institutions, and audiences alike, the task is to ensure that technological progress strengthens, rather than diminishes, the humanity at the heart of art.

Efficiency at what cost? 

Generative AI is not inherently evil; it has become an accessible tool of knowledge for many, and it can assist, enhance, and even inspire. But without clear ethical boundaries, it becomes a tool of extraction and abuse rather than innovation. 

The consequences of generative AI extend beyond personal and cultural harm—they are also environmental. Training and running large AI models demands vast computational power, driving up electricity use and carbon emissions. 

In 2022, data centers consumed 460 terawatt-hours of electricity globally—nearly as much as France—and usage is expected to almost double by 2026. Each kilowatt-hour of data-center electricity requires about two liters of water for cooling, while the production and shipment of millions of graphics processing units each year add further environmental strain.

The energy demand behind AI does not exist in isolation—it is closely tied to how aggressively companies utilize the technology, particularly in advertising. In 2024, Spanish fashion retailer Mango launched a campaign generated entirely by AI, signaling how brands are replacing traditional photoshoots with synthetic models and settings. Major platforms such as Amazon, Meta, and Google have also rolled out tools that allow advertisers to automatically generate visuals and copy at scale, accelerating production while reducing reliance on human creative labor.

For companies, the appeal is both economic and psychological. A study by Sicilia, Palazón, and Acosta-López finds that AI-assisted advertising can sustain positive brand attitudes and enhance engagement, particularly when content is precisely formatted or personalized according to algorithms and demographics. Hyper-personalized ads can increase perceived relevance and purchase intentions, making automation commercially attractive. 

As brands scale up AI-generated content over human-made advertising to compete for attention, environmental strain grows: corporate cost-cutting boosts revenue while the energy footprint of generative AI keeps rising.

Many creators further amplify the damage caused by these large AI models by mass-producing AI-generated videos designed solely to capture attention, most visibly on short-form video platforms like TikTok. Built for algorithmic feeds and optimized for repeat views, these clips prioritize virality over value while relying on constant processing power to generate, upload, and circulate.

Beyond labor and the environment, generative AI may also be reshaping cognitive habits. A study from the Massachusetts Institute of Technology Media Lab found that participants who used ChatGPT to write essays showed the lowest levels of brain engagement compared to those who used search engines or wrote independently, with electroencephalography scans indicating weaker activity linked to attention, memory, and creative processing. 

Researchers observed that repeated AI reliance led to reduced effort and poorer recall of written work, suggesting that while large language models make tasks faster and more convenient, they may also discourage deeper learning, particularly among younger users whose cognitive development is still ongoing.

Efficiency cannot justify these harms: when technological progress damages livelihoods, undermines creativity, and strains the planet, it becomes exploitation, not advancement.

For Filipino creators and workers, the challenge is no longer whether AI will shape the future but how that future will be shaped and for whom. Questions of authorship, fair compensation, and accountability amplify the stakes for Filipino artists navigating a technology-driven creative economy.

Progress, after all, should not be defined by what can be automated but by what is preserved. If the purpose of innovation is to move society forward, it must leave room for human presence and security—for creativity rooted in experience, labor shaped by dignity, and stories that carry more than efficiency. 

The task ahead is not to halt technological change, but to ensure it develops alongside values that recognize what machines can assist but never replace.

As AI-generated pictures, videos, and artworks become indistinguishable from reality, the crisis is no longer hypothetical: where fabrication is instant and consequence is delayed, whole communities become collateral damage. Protecting creativity, identity, and trust requires collective action, ethical frameworks, and enforceable safeguards—before the damage becomes permanent.

(Illustration by Patricia Anne Per/FEU Advocate)