A password is often the only thing preventing a cybercriminal from gaining unauthorized access to sensitive data stored on a secure network. Consequently, cybercriminals deploy a wide range of schemes designed to obtain passwords from system users.
Phishing scams have become one of the most popular cyberattacks aimed at obtaining passwords. These scams attempt to convince targets that the attacker is a legitimate user — IT support personnel, a representative from a well-known company, or an executive at the user’s company, for example — who can be trusted with their password. If the targeted user falls for the scam, the attacker gains access and the system is breached.
Phishing has grown in popularity in recent years, primarily due to the rise of artificial intelligence. Cyberattackers armed with the power of AI can make phishing scams more effective by increasing their authenticity, complexity, and persuasiveness.
“Generative AI makes it much easier for cyber attackers to develop phishing campaigns,” says Marcelo Barros, Director of Global Operations for Hacker Rangers. “The power AI provides to create deepfakes also empowers new variations of phishing, such as vishing attacks that use AI to generate voice calls mimicking a boss or other person in authority.”
Clients around the world trust Barros and his team at Hacker Rangers to provide cutting-edge cybersecurity solutions aimed at preventing phishing and other common attacks from succeeding. The Hacker Rangers platform is built upon an innovative approach to cybersecurity training that leverages gamification to make cyber awareness fun and engaging for employees. With Hacker Rangers, companies can enhance in-house cybersecurity programs with training exercises that keep employees up to date on the latest cybersecurity threats and the most effective ways to neutralize them.
As Barros explains, today’s organizations must pay attention to the rise in phishing attacks and take steps to improve their security. “Nine out of ten organizations report that they fell prey to phishing attacks in 2023,” Barros recently said in Cyber Defense Magazine, “with nearly seven out of ten employees saying they contributed to the attacks’ success by knowingly taking risky actions such as handing over credentials to untrustworthy sources.”
On the attack: AI increases the authenticity of phishing attacks
While phishing scams may seem simple, they have proven highly effective because they sidestep a system’s technical security controls altogether, targeting people instead. These scams trick users into handing over their passwords by playing on their fears or mimicking the routine messages they receive throughout their workday.
“One of the primary reasons phishing is effective is its focus on deep-rooted human emotions,” Barros says. “Rather than seeking to overcome cyber defenses with computing power or zero-day exploits, it overcomes them by exploiting empathy, fear, and greed.”
For example, a phishing scam might send a company’s employees a message announcing that it’s time to update their passwords, with instructions that route the new credentials straight to the attacker. Employees can easily fall for this, since password resets are a routine occurrence.
In years past, the weakness of phishing scams was that they often had flaws that a careful user could spot, such as typos, poor grammar, or inconsistencies in formatting. Training provided to users on identifying phishing messages usually focused on these types of issues as telltale signs that a cyberattacker was targeting them.
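Those classic telltale signs can be captured as simple rules. The sketch below is purely illustrative — the keyword list, the credential check, and the link-mismatch rule are stand-ins for the kinds of cues awareness training taught, not a real filter:

```python
import re

# Illustrative only: these rule-of-thumb cues mirror what traditional
# awareness training taught users to look for; they are not a real filter.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "expire"}

def phishing_warning_signs(message: str) -> list[str]:
    """Return the classic warning signs found in an email body."""
    signs = []
    lowered = message.lower()
    if any(word in lowered for word in URGENCY_WORDS):
        signs.append("urgent or threatening language")
    if re.search(r"password|credential|login", lowered):
        signs.append("asks about credentials")
    # Link text that displays one URL but actually points somewhere else.
    for href, text in re.findall(r'<a href="([^"]+)">([^<]+)</a>', message):
        if text.startswith("http") and not href.startswith(text):
            signs.append("link text does not match its target")
    return signs
```

A crude message that demands an urgent password check through a disguised link trips all three rules — which is exactly why well-crafted, error-free messages are so much more dangerous.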
Today, however, AI has given cyberattackers the power to create messages that are error-free and easy to fall for. Additionally, AI can be used to gather and include details that make messages feel remarkably legitimate.
For example, a scammer can use AI to scan a person’s social media accounts, learn their writing style, and then craft phishing messages that appear to come from that person. Cyberattackers can likewise use AI to research their targets on social media and other public platforms, allowing them to develop personalized messages that feel more authentic.
On the defense: AI can be used to uncover next-level phishing
One option for addressing next-level phishing attacks is to use artificial intelligence to bolster defenses. If AI can be trained to identify the telltale signs of phishing, it can support employees’ defense efforts by pointing out suspicious messages. To ensure that AI-powered systems are kept up to date on phishing schemes, companies should empower employees to contribute to the intelligence the organization gathers about the phishing attacks it faces.
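As a rough illustration of that feedback loop — and emphatically not a depiction of any vendor’s actual platform — the toy filter below learns word statistics from messages employees report and scores new mail with a naive Bayes log-likelihood ratio. Every class name and label here is hypothetical:

```python
import math
import re
from collections import Counter

class ReportDrivenFilter:
    """Toy naive Bayes filter fed by employee reports (illustrative only)."""

    def __init__(self):
        self.counts = {"phish": Counter(), "legit": Counter()}
        self.totals = {"phish": 0, "legit": 0}

    @staticmethod
    def _tokens(text):
        return re.findall(r"[a-z']+", text.lower())

    def report(self, text, label):
        """Record a message an employee reported as 'phish' or 'legit'."""
        tokens = self._tokens(text)
        self.counts[label].update(tokens)
        self.totals[label] += len(tokens)

    def score(self, text):
        """Log-likelihood ratio with add-one smoothing; positive = phish-like."""
        total = 0.0
        for tok in self._tokens(text):
            p = (self.counts["phish"][tok] + 1) / (self.totals["phish"] + 2)
            q = (self.counts["legit"][tok] + 1) / (self.totals["legit"] + 2)
            total += math.log(p / q)
        return total
```

A production system would use far richer features and models; the point of the sketch is the loop itself — each employee report makes the next suspicious message easier to flag.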
“An organization’s best defense against phishing will be employees who are trained on the threat,” Barros says. “The training should provide a general understanding of how phishing works, how to identify it, and how to report it when it’s suspected. As employees add to an organization’s understanding of the threat it’s facing, they enhance its overall ability to keep systems secure.”
Artificial intelligence has significantly increased the risk of phishing attacks by making them harder to detect, meaning targets must increase their vigilance in response. Taking the time to carefully scrutinize every message and address even the slightest suspicion is critical to keeping today’s cyberattacks from succeeding.