
The Rising Impact of AI on Election Scams This Season

by admin

In 2022, malicious emails targeting Pennsylvania county election workers surged around the state's May 17 primary elections, rising more than 546% in six months. With nefarious large language models (LLMs) now layered on top of these traditional phishing attacks, there is a high likelihood that the everyday American will be the target of an even more realistic scam this election season.

Governments are starting to take notice, especially as AI becomes integrated into our daily lives. For instance, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) launched a program to boost election security, reflecting growing demand from both the government and the public to protect themselves, and their data, from potential bad actors this election season.

And even more recently, at the 2024 Munich Security Conference, 20 technology and AI companies signed a “Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” which lays out guiding principles to protect elections and the electoral process, including prevention, provenance, detection, responsive protection, evaluation, and public awareness. With major tech players including Microsoft, Amazon, and Google among the signatories, the accord signals an important shift in the industry: beyond political affiliations, data security is a topic that will concern citizens and cyber experts alike throughout the rest of this election year. Moreover, generative AI will greatly change how bad actors carry out their attacks, making highly realistic scams far easier to produce.

Types of Election Scams

While election season is not the only time we see an increase in scams, when it comes time to vote, whether in the primaries or the general election, several methods and techniques become more common. Each of these is typically used to gain access to an individual's accounts or for monetary gain, and falling for them can carry serious consequences. In fact, deepfake fraud alone has cost the U.S. more than $3.4 billion in losses.

Some examples of scams we see around election season include:

  • Phishing: Phishing involves the use of phony links, emails, and websites to gain access to sensitive consumer information – usually by installing malware on the target system. This data is then used to steal identities, gain access to valuable assets, and flood inboxes with spam. In an election season, phishing emails can be disguised as donation requests, getting a citizen to click a link thinking they are donating to a candidate while actually playing into a bad actor's scheme (a simple illustrative link check follows this list).
  • Robocalls, Impersonations, and AI-generated voice or chatbots: As seen in New Hampshire, where a robocall impersonated President Biden urging citizens not to vote, election season will bring a rise in impersonations of pollsters or political candidates designed to falsely earn trust and extract sensitive information.
  • Deepfakes: With the rise of AI, deepfakes have become incredibly realistic today and can be used to impersonate a boss or even your favorite celebrity. Deepfakes are videos or images that utilize AI to replace faces or manipulate facial expressions or speech. Many of the deepfakes we encounter daily will be in the form of a video, with a doctored clip depicting the person saying or doing something they may have never done. This is expected to be especially prevalent this election season with the risk of deepfakes being created to impersonate candidates. Even outside of the U.S., such as in the UK, there are fears deepfakes could be used to rig elections.
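
As referenced in the phishing item above, many election-season lures hinge on a phony link. Below is a minimal, hypothetical sketch of the kind of heuristic link check a mail filter or cautious reader could apply; the trusted-domain list, the rules, and the function name are illustrative assumptions, not a production phishing detector.

```python
# Hypothetical heuristic check for suspicious donation-style links.
# The trusted-domain list and rules below are illustrative assumptions only.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"donate.example-campaign.org", "irs.gov", "usa.gov"}  # assumed list

def suspicious_link(url: str) -> list[str]:
    """Return a list of red flags found in the URL (an empty list means none were found)."""
    flags = []
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()

    if parsed.scheme != "https":
        flags.append("not using HTTPS")
    if host.replace(".", "").isdigit():
        flags.append("raw IP address instead of a domain name")
    if "@" in url.split("//", 1)[-1].split("/", 1)[0]:
        flags.append("'@' in the authority section (possible obfuscation)")
    if host and host not in TRUSTED_DOMAINS:
        # Lookalike check: a trusted name embedded in an unrelated domain,
        # e.g. "irs.gov.secure-payments.info"
        if any(t.split(".")[0] in host for t in TRUSTED_DOMAINS):
            flags.append("lookalike of a known domain")
        else:
            flags.append("domain not on the trusted list")
    return flags

print(suspicious_link("http://irs.gov.secure-payments.info/donate"))
```

Real filters use far richer signals (sender reputation, URL rewriting, sandboxing), but even simple checks like these catch the "misspelled hyperlink" pattern discussed later in this piece.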

AI’s Impact on Elections

On top of these scams, AI algorithms are being used to generate more convincing and engaging fake messages, emails, and social media posts to trick users into giving up sensitive information.

Microsoft and OpenAI published a threat briefing, “Navigating Cyberthreats And Strengthening Defenses In The Era Of AI,” which noted that five threat actors from Russia, North Korea, Iran, and China are already using GenAI to find new and innovative ways to enhance their operations against soft targets.

Scams like chatbots, voice cloning, and more are taken a step further with AI as a tool to spread misinformation, develop malware, and impersonate individuals. Voice cloning and deepfake tools, for example, can create near-perfect replicas of an election figure's voice or face. AI could also be used to flood call centers with fake voter calls, overwhelming them with misinformation.

Social media will be on the highest alert, as it is a main vehicle for campaigns this election season. Voters will share that they've voted and may even show support for their favorite candidate on their pages. However, this year poses a new threat as we see an increase in AI-driven phishing scams, including smishing and vishing.

Consider someone who posts support for a specific candidate on their social media account. A few minutes later, they get an email appearing to be from a campaign manager, thanking them for their support. That potential victim could engage with the email by clicking a link, opening themselves up to credential harvesting, financial loss, or malware installation. Because AI can monitor, create, and deliver targeted phishing campaigns in near real time, seemingly innocent social media posts now expose users to a new level of realistic phishing schemes.

Remaining Vigilant this Election Season

Phishing will continue to be a common way for bad actors to craft realistic scams that can slip past even the most knowledgeable users, and in the age of generative AI these attacks have only accelerated, giving bad actors quicker access to sensitive information.

While businesses deploy technology to protect their data and employees, consumers need to also be aware of techniques to spot and avoid scams. Some of these include:

  • Looking out for random or misspelled hyperlinks or email subject lines
  • Not clicking on a link from an unknown sender
  • Employing two-factor authentication or biometric authentication wherever possible
  • Making social media accounts private
  • Reporting malicious activity
  • Educating other colleagues or family members
  • Looking for a .gov website domain to verify the authenticity of official election information
  • If you have an IT team at your workplace, you can also ask about:
    • Zero Trust networks
    • Phishing-resistant two-factor authentication
    • Email security tools (DMARC, DKIM, SPF) – see the sketch after this list
    • Methods to digitally sign content (or another cryptographic way to verify your communications)
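
On the email security tools mentioned above: SPF and DMARC policies are published as plain DNS TXT records, so anyone can inspect them. The sketch below uses the third-party dnspython package and a placeholder domain; in practice it is the receiving mail server, not the end user, that evaluates these records, and DKIM verification also requires the message itself, so it is omitted here.

```python
# Minimal sketch: look up a domain's published SPF and DMARC policies via DNS TXT records.
# Requires the third-party "dnspython" package (pip install dnspython).
# "example-campaign.org" is a placeholder domain, not a real sender.
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Fetch TXT records for a name, returning an empty list if none exist."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return [b"".join(r.strings).decode("utf-8", "replace") for r in answers]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example-campaign.org"

# SPF lives in a TXT record on the domain itself and starts with "v=spf1".
spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]

# DMARC lives in a TXT record on the _dmarc subdomain and starts with "v=DMARC1".
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```

A sending domain that publishes a strict DMARC policy (for example, p=reject) makes it much harder for scammers to spoof emails that appear to come from that campaign.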

Although election seasons are a time to be on high alert, attacks can happen at any time, so it’s important to ensure your cybersecurity foundations are strong and reliable year-round.
