Elon Musk is facing backlash after sharing a contentious video that sparked accusations of spreading misinformation about Vice President Kamala Harris. Posted last Friday, the video featured an AI-generated voice purportedly belonging to Harris, making derogatory remarks about President Biden and herself. Although the original “Mr. Reagan” account labeled the video as parody, Musk’s repost included no disclaimer about the manipulated content, raising concerns that it violated the platform’s policies against misleading media.
The video pushes numerous right-wing attacks against Harris, including a fabricated assertion that “I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate. I was selected because I am the ultimate diversity hire. I’m both a woman and a person of color, so if you criticize anything, I say you’re both sexist and racist.”
The Harris campaign swiftly condemned Musk’s actions, emphasizing the importance of truth in public discourse.
“We believe the American people want the real freedom, opportunity, and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump,” the Harris campaign said in an email.
California Governor Gavin Newsom joined the criticism, pledging to sign legislation that would ban voice manipulation in political advertisements.
“Manipulating a voice in an ‘ad’ like this one should be illegal,” he wrote on X. “I’ll be signing a bill in a matter of weeks to make sure it is.”
Rob Weissman, co-president of Public Citizen, expressed concern that the video could deceive the public, remarking to the AP, “I don’t think that’s obviously a joke. I’m certain that most people looking at it don’t assume it’s a joke.”
Public Citizen has been advocating for federal oversight of generative AI technologies, describing the video as precisely the type of content they have warned against.
The controversy highlights mounting apprehensions over deepfakes and AI’s role in political advertising, with such technology already making inroads in the 2024 U.S. election cycle and abroad.
Last week, the Federal Communications Commission (FCC) moved forward with a proposal to mandate disclosures in TV and radio advertisements using AI. However, this regulation would not extend to online and streaming platforms, where Musk’s video was shared.
FCC Chair Jessica Rosenworcel emphasized, “Malicious actors are already using AI in robocalls to mislead consumers and misinform the public. We need rules that empower consumers to avoid misinformation and make informed choices.”
In response to the controversy, Musk defended his actions with a tongue-in-cheek remark, stating, “I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America.”
His subsequent repost included a clarification identifying the content as parody, seemingly in response to criticism over the initial omission.
However, users on X were quick to slam Musk, accusing him of hypocrisy and of contributing to misinformation during a sensitive election period.
User @thejcole21 pointed out, “Except you didn’t label it as that to start with.”
“Bro you literally cried about people parodying you and suspended their accounts… this isn’t parody, this is a deepfake disinformation campaign during an election season by a prominent figure who has also decried Google being ‘election interference’ cuz you couldn’t find ‘Donald,’” responded user @NiQ_108.
@JacobRAdkins3 replied: “This ‘ad’ will be used by right-wing groups to push lies about liberals. There are definitively handfuls of people who believe the lies this ‘ad’ is pushing. Do better!”
The incident has reignited debate over the responsibility of tech leaders to safeguard the integrity of digital media and the need for robust regulatory frameworks governing emerging AI technologies in public discourse.