Anyla’s Take: The urgent need for ethical regulation and public awareness of AI deepfakes

By Anyla McDonald, The Black Lens

Artificial intelligence continues to reshape modern society, offering new opportunities for creativity, communication, and innovation. However, one of its most dangerous byproducts, the rise of AI deepfakes, poses a significant threat to personal safety, democratic stability, and digital trust. Deepfakes, which are hyper-realistic but fabricated videos, images, or audio generated through deep learning, are advancing so quickly that many people can no longer distinguish authentic content from manipulated media. While some argue that AI should be allowed to evolve freely in the name of technological progress, the escalating misuse of deepfakes proves that public education and strong ethical regulation are necessary to protect individuals and communities.

Deepfakes have already demonstrated an alarming capacity for harm. A 2023 Deeptrace Labs study revealed that 96% of deepfakes circulating online were non-consensual, overwhelmingly targeting women, which shows how AI can reinforce and magnify existing forms of gender-based violence. In 2024, the FBI issued a public advisory after reports of AI-generated voice scams, including fabricated emergency calls mimicking family members to exploit emotions and extort money. Europol further predicts that by 2026, up to 90% of online content could be AI-generated or AI-altered, raising serious concerns about the future of information integrity. These statistics illustrate that deepfakes are not simply a technological curiosity; they are already being weaponized in ways that endanger real people.

Some critics claim that regulating AI could slow innovation or limit creative experimentation. However, leaving deepfakes unchecked ultimately threatens the functioning of democratic processes and the social stability that innovation depends on. Fabricated political videos can spread misinformation quickly, influencing public opinion based on lies. Once false information circulates widely, corrections rarely regain equal visibility or credibility. A 2024 Pew Research Center survey found that 62% of Americans are unsure whether they can reliably identify AI-generated content, and 78% favor stronger regulations. When an entire society becomes uncertain about what is real, institutional trust weakens and civic engagement declines.

To address these risks, public awareness must be strengthened. Digital literacy cannot remain optional in a world where manipulated media circulates at high speed. Teaching individuals to verify sources, examine visual inconsistencies, slow down emotional reactions, and report suspicious content can significantly reduce the spread of misinformation. Basic safety practices, such as enabling two-factor authentication or establishing family verification code words, also help protect against impersonation attempts fueled by deepfake audio or video.

However, education alone is not enough. Ethical regulation is essential to ensure that AI developers and institutions, rather than individuals alone, bear responsibility for the consequences of these technologies. UNESCO's ethical AI principles emphasize transparency, consent, accountability, and fairness, all of which should guide policy decisions. Regulations requiring labels on AI-generated media, penalties for non-consensual deepfakes, and mandatory bias testing in AI systems would protect human dignity while still allowing innovation to flourish responsibly.

In conclusion, the growing threat of AI deepfakes demands immediate and coordinated action. Strengthening public awareness and establishing ethical regulatory frameworks are not obstacles to technological progress; they are necessary safeguards for a society increasingly shaped by artificial intelligence. Without decisive intervention, deepfakes will continue to erode trust, harm individuals, and destabilize democratic institutions.