
The impact of Artificial Intelligence (AI) on social engineering and cyber attacks

Updated: Jun 19

Social engineering attacks continue to be a major threat to businesses worldwide. According to Splunk, roughly 98% of cyber-attacks involve some form of social engineering, and the average business faces more than 700 social engineering attacks each year.

While organizations can and sometimes do invest in sophisticated cyber security and threat detection solutions to flag anomalous network activity, the root cause of a social engineering attack is typically human: a malicious actor exploiting an untrained employee. The explosion of AI has made these attacks more sophisticated, more covert, and far harder to detect. Even the most security-aware and technically advanced teams often fall victim to an AI-driven attack.

AI's learning algorithms and advanced processing capabilities give malicious actors the ability to develop far more complex attacks. Below are a few examples:

  • Hyper-personalized phishing – with AI, hackers can reproduce familiar names and logos down to the smallest detail and infiltrate social media platforms while masquerading as legitimate users. AI can also quickly learn an individual's typical tone and word choice, then mimic them to make deceptive messages far more convincing.

  • Natural language generation – AI lets a malicious attacker study thousands of pages across the internet and generate human-like writing and dialogue tailored to how the phishing campaign is designed. Hackers can craft persuasive social engineering content at scale and push it to as many targets as possible, all within seconds.

  • Detection evasion – AI has already proven effective at raising the bar, outsmarting various security tools, and quickly identifying their blind spots, all while capturing vital information and returning it to the attacker in a readable format within seconds, which lets the attacker go after a much larger audience.

  • AI-generated deepfakes – AI can clone someone's voice and likeness so that fraudulent messages appear to come from an employee's boss, and then use them to spread misleading information or blackmail the organization. By analyzing large data sets, AI can review, sort, and refine that material at the malicious actor's discretion to target specific audiences. The goal is to use the organization's image as leverage while bypassing initial security detection tools.

As AI gains more relevance, businesses of every size are prime targets, and companies with limited technical expertise can be sitting ducks for malicious actors. One baseline defense now used regularly by companies hoping to offset AI-driven attacks is multi-factor authentication (MFA).
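To make MFA concrete, the sketch below shows time-based one-time passwords (TOTP), the mechanism behind most authenticator apps. It is a minimal illustration, not a production design: it assumes the open-source pyotp library, and the user name, issuer name, and the step of feeding the current code straight back into verification are purely illustrative placeholders.

  # Minimal TOTP sketch using the pyotp library (RFC 6238 one-time codes).
  import pyotp

  # In practice the secret is generated once per user at enrollment
  # and stored securely on the server side.
  secret = pyotp.random_base32()
  totp = pyotp.TOTP(secret)

  # The provisioning URI is what the user scans into an authenticator app.
  # "user@example.com" and "ExampleCorp" are placeholder values.
  print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

  # At login, the server checks the six-digit code the user submits.
  code = totp.now()              # stand-in for the code typed by the user
  print("Code accepted:", totp.verify(code))

The point of the extra factor is simple: even if an AI-crafted phishing email harvests a password, the attacker still needs the short-lived code derived from a secret they do not have.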

Finally, organizations are gradually rolling out more advanced security awareness training programs and implementing Sender Policy Framework (SPF) to prevent malicious actors from using AI to send emails on the organization's behalf. It is also often advised to block risky file downloads from outside the organization's known servers and to track IP addresses. With this multi-layered approach, companies can hopefully offset a threat landscape in which malicious actors are using AI for their own nefarious purposes.
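For context, an SPF policy is simply a DNS TXT record such as "v=spf1 include:_spf.example.com -all" that lists which servers are allowed to send mail for a domain. The sketch below is a small audit helper, assuming the dnspython package and a placeholder domain, that looks up whether a domain publishes an SPF record at all.

  # Minimal sketch: check whether a domain publishes an SPF policy,
  # i.e. a TXT record beginning with "v=spf1". Requires the dnspython package.
  import dns.resolver

  def get_spf_record(domain):
      """Return the domain's SPF policy string, or None if none is published."""
      try:
          answers = dns.resolver.resolve(domain, "TXT")
      except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
          return None
      for rdata in answers:
          # TXT data may arrive as several byte strings; join before decoding.
          txt = b"".join(rdata.strings).decode("utf-8", errors="replace")
          if txt.lower().startswith("v=spf1"):
              return txt
      return None

  # "example.com" is a placeholder; substitute your own domain.
  print(get_spf_record("example.com") or "No SPF record found")

A domain with no SPF record (or one ending in a permissive "+all") is easier for an attacker to spoof, which is why publishing and tightening this record is part of the layered defense described above.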

 
