A concerning new AI tool called WormGPT has emerged, posing a significant threat to ordinary users: it empowers hackers to target personal information and easily forge convincing emails.
Generative AI and AI chatbots have gained immense popularity, with millions of people using them for everything from finding solutions to everyday conversation. As more of these tools are released to the public, it becomes increasingly difficult to distinguish reliable ones from those that carry serious risks. Worryingly, some AI tools can be exploited by malicious actors to launch cyber attacks on unsuspecting individuals.
A recent addition to the generative AI family, similar to ChatGPT, is WormGPT. Its creator openly claims the tool is a direct rival to ChatGPT, imposing no restrictions on generating unethical or illegal text.
But what exactly is the threat posed by WormGPT?
Those inclined towards criminal activities can utilize WormGPT to facilitate malicious endeavors. According to findings from SlashNext, the tool can generate phishing and business email compromise (BEC) attacks.
Security researcher Daniel Kelley stated, “This tool presents itself as a blackhat alternative to GPT models, specifically designed for malicious activities. Cybercriminals can employ such technology to automatically create highly persuasive fake emails that are personalized to the recipient, thereby significantly increasing the likelihood of a successful attack.”
By leveraging WormGPT’s chat memory retention and code formatting features, hackers can effortlessly craft sophisticated phishing emails, making their messages more convincing. Notably, cybercriminals don’t need advanced skills to exploit WormGPT for nefarious purposes.
Unlike ChatGPT, WormGPT lacks predefined boundaries on the content it will generate. This lack of guardrails makes it a serious enabler of cyber attacks and is likely to fuel a rise in online scams and cybercrime.
To guard against such AI tools, ordinary individuals should remain vigilant about the emails they receive and the chats they encounter on social media and other platforms. The best strategy is to exercise caution and avoid clicking links from, or interacting with, unfamiliar individuals online.
Cybercriminals are not limited to manipulating ordinary messages and chats; they are also using AI tools to create AI-generated videos that impersonate friends, family members, or other known people. These videos are then used to deceive individuals on platforms like WhatsApp and trick them into parting with their money.