Voices from Beyond: The Ethical Dilemmas of AI-Generated Deepfakes in Public Advocacy
Role: Lead Research Planner
Overview/Problem Statement
Imagine hearing the voice of a loved one long after they’ve passed, not in a private memorial, but delivering a political message. AI-generated deepfakes allow us to recreate voices with startling accuracy, raising serious ethical questions about their use in public advocacy and policy. This project investigates the emotional and ethical dilemmas of using AI-generated voices in public spaces, especially in the context of political advocacy. The research focuses on how AI deepfakes interact with grief, memory, and public trust, as well as their potential to manipulate opinions in emotionally charged advocacy efforts.
Why This Matters: With AI technology rapidly evolving, there’s a growing trend in using AI-generated voices for both memorialization and public advocacy. Understanding how this technology impacts audiences emotionally, ethically, and politically is critical as its use becomes more widespread.
Research Aims & Goals
This study asks: How do AI-generated voice deepfakes of deceased individuals affect emotional states, influence public opinion, and raise ethical concerns in the context of public advocacy? Specifically, the research aims to:
Explore the ethical dilemmas (consent, privacy, posthumous rights) surrounding the use of deepfakes.
Investigate the emotional reactions of people exposed to AI-recreated voices of deceased individuals.
Examine the effectiveness of AI-generated deepfakes in shaping public opinion in advocacy settings.
Methodology
Qualitative Interviews
Semi-structured interviews with family members, AI ethicists, and members of bereavement groups to help uncover deep emotional and ethical concerns related to the technology. Interviews can capture emotional depth and ethical perspectives that cannot be accessed through quantitative data alone.
Quantitative Surveys
Surveys distributed to broader communities, measuring public comfort with AI-generated voices and assessing opinions on the ethical use of these voices in public advocacy. These surveys gauge broader public sentiment and track attitudinal trends.
Experimental Design
Participants divided into two groups: one exposed to traditional advocacy content, and the other to AI-generated deepfake audio. Emotional responses and opinion shifts measured before and after exposure. This design empirically assesses the persuasive power of AI-generated voices compared to traditional methods, adding a quantifiable measure to the research.
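The pre/post, two-group comparison described above can be sketched as a minimal analysis. This is an illustrative sketch only: the variable names, the 1–7 Likert agreement scale, and all the pilot numbers below are hypothetical assumptions, not data from the study.

```python
from statistics import mean, stdev
from math import sqrt

def opinion_shift(pre, post):
    """Per-participant change in agreement with the advocacy position
    (scores assumed to be on a 1-7 Likert scale)."""
    return [b - a for a, b in zip(pre, post)]

def welch_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    nx, ny = len(x), len(y)
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / sqrt(vx / nx + vy / ny)

# Hypothetical pilot data: pre/post agreement scores for each group.
control_pre,  control_post  = [4, 3, 5, 4, 4, 3], [4, 4, 5, 4, 5, 3]
deepfake_pre, deepfake_post = [4, 3, 5, 4, 4, 3], [6, 5, 6, 5, 6, 5]

control_shift  = opinion_shift(control_pre,  control_post)
deepfake_shift = opinion_shift(deepfake_pre, deepfake_post)

t = welch_t(deepfake_shift, control_shift)
print(f"mean shift (control):  {mean(control_shift):.2f}")
print(f"mean shift (deepfake): {mean(deepfake_shift):.2f}")
print(f"Welch t statistic:     {t:.2f}")
```

A larger shift in the deepfake group, with a sizeable t statistic, would indicate the persuasive effect the experiment is designed to detect; in practice the analysis would also report a p-value, effect size, and the emotional-response measures.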
Recruitment Criteria & Process
Participants recruited from bereavement support forums, AI ethics communities, and social media platforms. These channels were selected because they offer access to both those with direct emotional connections to the subject matter and individuals with a more detached, ethical focus. This combination ensured diverse perspectives, from deeply emotional responses to more objective ethical considerations.
Possible Challenges: Recruiting participants for such an emotionally charged topic posed significant ethical concerns. Ensuring that participants fully understood the emotional risks of engaging with AI-generated voices, especially in bereavement settings, was a critical consideration in designing the recruitment process.
Possible Findings & Insights
Emotional Impact: Participants exposed to deepfake audio may express a wide range of emotional responses. Some may find comfort in hearing familiar voices, while others could be deeply unsettled by the potential for emotional manipulation, especially without the deceased's consent.
Ethical Concerns: Participants could raise concerns about privacy, posthumous rights, and the ethical appropriateness of using deepfakes in public advocacy. Many may feel discomfort with the possibility of AI-generated voices being used for political persuasion without proper ethical guidelines.
Influence on Public Opinion: The group exposed to deepfake audio could be more likely to shift their opinions on policy issues, demonstrating the power of familiar voices in shaping public sentiment, even when artificially generated.
Possible Recommendations
- ETHICAL DISCLOSURE: Suggest platforms using AI-generated voices in public advocacy integrate transparent disclosure systems to inform audiences when voices are artificially generated.
- INFORMED CONSENT PROTOCOLS: Propose that organizations seeking to use AI-generated voices obtain explicit consent from the deceased's family before using their voice in any public or political campaign.
- REGULATORY GUIDELINES: Develop industry-wide ethical standards for the use of deepfakes in public advocacy, ensuring protection of privacy, emotional well-being, and informed consent.

Reflection & Lessons Learned
While AI voice-cloning tools offer powerful opportunities for advocacy, the risks to public trust and emotional well-being are significant. This research plan emphasizes the need for clear ethical guidelines, transparent disclosure, and consent protocols to prevent misuse. Designing it taught me the importance of balancing technological innovation with human dignity and empathy, particularly in public and political discourse.
Challenges: Ensuring the emotional safety of participants when engaging with such sensitive technology would be a key challenge. Handling participant distress and ensuring ethical transparency throughout the study would also be necessary to maintain the integrity of the research.