Manufacturing “realities”. The impact of AI in the age of disinformation

Authors

  • Ioan-Claudiu Farcaș Technical University of Cluj-Napoca, Romania

DOI:

https://doi.org/10.55765/atps.i28.3650

Keywords:

self-sustaining ecosystem, dissemination, cognitive warfare, AI-driven propaganda

Abstract

This article examines the effect of generative artificial intelligence (AI) on propaganda, which is driving a significant shift away from traditional methods. Twentieth-century propaganda relied on simplified narratives aimed at a mass audience. AI broadens that horizon, enabling the creation of hyper-personalized disinformation at very large scale. The analysis focuses on the relationship between AI-based bot networks and the algorithms behind social media platforms. It shows how these tools are used to suppress votes, discredit opponents, strengthen extremist movements, and fuel social polarization. To address this threat, a counter-strategy involving multiple stakeholders (governments, technology companies, civil society) is proposed, favoring a transition from a reactive posture to a proactive strategy centered on building long-term social resilience.

References

Bond, Shannon. “Fake viral images of an explosion at the Pentagon were probably created by AI”, [online], NPR, https://www.npr.org/2023/05/22/1177590231/fake-viral-images-of-an-explosion-at-the-pentagon-were-probably-created-by-ai, published 22 May 2023, accessed 28 Jun 2025.

Brenan, Megan. “Trust in Media at New Low of 28% in U.S.”, [online], Gallup, https://news.gallup.com/poll/695762/trust-media-new-low.aspx, published 2 Oct 2025, accessed 27 Oct 2025.

Busch, Kristen E. “Social Media Algorithms: Content Recommendation, Moderation, and Congressional Considerations”, [pdf], Congressional Research Service, published 27 Jul 2023.

Clayton, Abené. “Fake AI-generated image of explosion near Pentagon spreads on social media”, [online], The Guardian, https://www.theguardian.com/technology/2023/may/22/pentagon-ai-generated-image-explosion, published 23 May 2023, accessed 27 Oct 2025.

Elsner, Mark, Grace Atkinson, Saadia Zahidi. Global Risks Report 2025, published 15 Jan 2025, [pdf], https://reports.weforum.org/docs/WEF_Global_Risks_Report_2025.pdf, accessed 27 Oct 2025.

Europol, European Union Serious and Organised Crime Threat Assessment – The changing DNA of serious and organised crime, Publications Office of the European Union, Luxembourg, 2025.

Farcaș, Ana-Daniela. “Imagine și cultură: comunicarea vizuală persuasivă și publicitatea” (Image and Culture: Persuasive Visual Communication and Advertising), in Buletin științific, Fascicula Filologie, Seria A, vol. XXXI, 2022, pp. 409-419.

Garde, Sameer. “Driving Performance with Content Hyper-Personalization Through AI And LLMs”, [online], Forbes, published 23 Feb 2024, accessed 28 Jun 2025.

Hao, Karen. “The Facebook whistleblower says its algorithms are dangerous. Here’s why”, [online], MIT Technology Review, https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/, published 5 Oct 2021, accessed 18 Jun 2025.

Harding, Emily. “A Russian Bot Farm Used AI to Lie to Americans. What Now?”, [online], CSIS https://www.csis.org/analysis/russian-bot-farm-used-ai-lie-americans-what-now, published 16 Jul 2024, accessed 27 Jun 2025.

Jackson Schiff, Kaylyn, Daniel S. Schiff, Natália S. Bueno. “The Liar’s Dividend: Can Politicians Claim Misinformation to Evade Accountability?”, [online], Cambridge University Press, https://www.cambridge.org/core/journals/american-political-science-review/article/liars-dividend-can-politicians-claim-misinformation-to-evade-accountability/687FEE54DBD7ED0C96D72B26606AA073, published 20 Feb 2024, accessed 15 Jun 2025.

Khalil, Mohammed. “Deepfake Statistics 2025: AI Fraud Data & Trends”, [online], https://deepstrike.io/blog/deepfake-statistics-2025, published 8 Sep 2025, accessed 27 Oct 2025.

King, Ashley. “Generative Music AI Platform Suno Being Used to Spread Hate”, [online], https://www.digitalmusicnews.com/2024/06/20/suno-hateful-music-generated-by-ai/, published 20 Jun 2024, accessed 19 Jun 2025.

Kinnard, Meg. “Election disinformation takes a big leap with AI being used to deceive worldwide”, [online], Quartz, https://qz.com/election-disinformation-takes-a-big-leap-with-ai-being-1851334182, published 14 Mar 2024, accessed 19 Jun 2025.

Klepper, David. “China-linked ‘Spamouflage’ network mimics Americans online to sway US political debate”, [online], https://apnews.com/article/china-disinformation-network-foreign-influence-us-election-a2b396518bafd8e36635a3796c8271d7, published 3 Sep 2024, accessed 21 Jun 2025.

Klepper, David. “Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead”, [online], AP News, https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47, published 28 Nov 2023, accessed 22 Jun 2025.

Klincewicz, Michał, Mark Alfano and Amir Fard. “Slopaganda: The interaction between propaganda and generative AI”, in Filosofiska Notiser, Årgång 12, Nr. 1, 2025, pp. 135–162.

Masood, Adnan. “The most dangerous aspect of AI propaganda is its invisibility”, [online], UST, https://www.ust.com/en/insights/adnan-masood-the-most-dangerous-aspect-of-ai-propaganda-is-its-invisibility, published 6 May 2025, accessed 28 Jun 2025.

Ross Arguedas, Amy, Craig T. Robertson, Richard Fletcher, Rasmus Kleis Nielsen. “Echo chambers, filter bubbles, and polarisation: a literature review”, [online], https://reutersinstitute.politics.ox.ac.uk/echo-chambers-filter-bubbles-and-polarisation-literature-review, published 19 Jan 2022, accessed 18 Jun 2025.

Rubin, Victoria L. Misinformation and Disinformation: Detecting Fakes with the Eye and AI, Springer, 2022.

Saab, Beatrice. “Manufacturing Deceit. How Generative AI supercharges Information Manipulation”, [pdf], 2024.

Sami, Waleed. “The Perilous Role of Artificial Intelligence and Social Media in Mass Protests”, [online], Modern Diplomacy, https://moderndiplomacy.eu/2024/12/07/the-perilous-role-of-artificial-intelligence-and-social-media-in-mass-protests/, published 7 Dec 2024, accessed 20 Jun 2025.

Sedova, Katerina, Christine McNeill, Aurora Johnson, Aditi Joshi and Ido Wulkan. “AI and the Future of Disinformation Campaigns” [pdf], published by Center for Security and Emerging Technology, Dec 2021.

Smith, Michael. “AI Plus Social Media Bots = Large-Scale Disinformation Campaigns”, [online], Vercara, https://vercara.digicert.com/resources/ai-plus-social-media-bots-large-scale-disinformation-campaigns, published 15 Jul 2024, accessed 26 Jun 2025.

Sprenkamp, Kilian, Daniel Gordon Jones and Liudmila Zavolokina. “Large Language Models for Propaganda Detection”, in arXiv:2310.06422 [cs.CL], 27 Nov 2023.

Theobald, Emma, Alexis d’Amato, Joey Welles, Jack Rygg, Joel Elson and Samuel Hunter. “Examining the Malign Use of AI: A Case Study Report”, [online], Reports, Projects, and Research. 126, https://digitalcommons.unomaha.edu/ncitereportsresearch/126, published Apr 2025, accessed 28 Jun 2025.

Wack, Morgan, Carl Ehrett, Darren Linvill and Patrick Warren. “Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign” [pdf] PNAS Nexus, Volume 4, Issue 4, April 2025.

Wakefield, Jane. “Deepfake presidents used in Russia-Ukraine war”, [online], BBC, https://www.bbc.com/news/technology-60780142, published 18 Mar 2022, accessed 22 Jun 2025.

Walch, Kathleen. “How Generative AI Is Driving Hyperpersonalization”, [online], Forbes, published 15 Jul 2024, accessed 28 Jun 2025.

Yaojun Yan, Harry, Garrett Morrow, Kai-Cheng Yang, John Wihbey. “The origin of public concerns over AI supercharging misinformation in the 2024 U.S. presidential election”, [online], Misinformation Review, https://misinforeview.hks.harvard.edu/article/the-origin-of-public-concerns-over-ai-supercharging-misinformation-in-the-2024-u-s-presidential-election/, published 30 Jan 2025, accessed 25 Jun 2025.

Published

2025-12-30

How to Cite

Farcaș, I.-C. (2025). Manufacturing “realities”. The impact of AI in the age of disinformation. Revista Internacional animación, Territorios Y prácticas Socioculturales, (28), 259–270. https://doi.org/10.55765/atps.i28.3650