Researchers have warned users to avoid chatbots that do not appear on a company’s official website or app, and to be cautious about sharing any personal information with someone they are chatting with online, a new report said on Tuesday.
According to the Norton Consumer Cyber Safety Pulse report, cybercriminals can now quickly and easily craft email or social media phishing lures that are even more convincing by using AI chatbots like ChatGPT, making it more difficult to tell what’s legitimate and what’s a threat.
“We know cybercriminals adapt quickly to the latest technology, and we’re seeing that ChatGPT can be used to quickly and easily create convincing threats,” said Kevin Roundy, Senior Technical Director of Norton.
The report also said that bad actors can use AI technology to create deepfake chatbots.
These chatbots can impersonate humans or legitimate entities, such as a bank or government agency, to manipulate victims into handing over personal information, which attackers can then use to access sensitive accounts, steal money or commit fraud.
To stay safe from these new threats, experts advise users to think before clicking on links that arrive via unsolicited phone calls, emails or messages.
They also recommend keeping security software updated and ensuring it includes a full set of protection layers that go beyond known-malware detection, such as behavioural detection and blocking.