It’s common knowledge that technology played a role in swaying voters in the 2016 and 2020 elections. Adding another layer of complication to the upcoming U.S. elections, AI is likely to play an even heavier hand. While AI has been put to many uses across society, there are growing concerns that generative AI could be used this election season to manipulate voters and undermine the elections.
What is generative AI?
Generative AI is artificial intelligence capable of producing photos, written information, and other data. It relies on models trained on large amounts of raw data and generates new content in response to user prompts.
How can generative AI be misused in this year’s election?
For every candidate using AI as a cost-saving measure, there are others who can use it for more malicious purposes. While AI can be used to flag ineligible voters on registries and to match signatures, it may end up suppressing turnout by removing voters who are actually eligible, whether intentionally or by mistake.
Chatbots and algorithms can be used to feed voters incorrect information, swaying them against certain candidates and issues. In the worst-case scenario, AI can amplify hot-button issues and potentially stir up violence.
How tech and AI companies are failing to protect election integrity
Tech companies aren’t investing in election integrity initiatives, and AI companies lack the connections and funding to manage the risks of how their tools are used in elections. The result is less and less human oversight of what AI generates and of how that AI-generated information is used.
The very nature of the U.S. Constitution will be in tension with AI this election season: free speech is part of the very fabric of American ideals, yet preventing and stopping misinformation is crucial to ensuring a fair election.
Not only is the classic mud-slinging between candidates likely, but other countries such as China, Iran, and Russia have recently been caught trying to use AI-created content to manipulate U.S. voters.
Ways to prevent misuse of AI
Social media has undoubtedly changed the way election campaigns are run. Various platforms have their own processes in place to deal with election information and misinformation. YouTube has changed its policy and states that “We will stop removing content that advances false claims that widespread fraud, errors, or glitches occurred in the 2020 and other past US Presidential elections.”
YouTube’s parent company, Google, requires election advertisers to prominently disclose when their ads include realistic synthetic content that’s been digitally altered or generated, including by AI tools. In the coming months, YouTube will also require creators to disclose when they’ve created realistic altered or synthetic content and will display a label indicating to viewers that the content they’re watching is synthetic.
Meta, which owns Facebook, Instagram, and Threads, will put labels on images and ads that were made with AI. They say this is to help people know what is real and what is not, and to stop false or harmful information from spreading, especially during elections.
Additionally, several US states have passed laws regulating the use of political deepfakes, including California, Michigan, Minnesota, Texas, and Washington.