The security team at Meta is warning about widespread fake ChatGPT malware designed to hack user accounts and take over business pages.
In the company’s new Q1 security report, Meta shares that malware operators and spammers are following trends and high-engagement topics that get people’s attention. Of course, the biggest tech trend right now is AI chatbots like ChatGPT, Bing, and Bard, so tricking users into trying a fake version is now in fashion — sorry, crypto.
Meta security analysts have found about 10 forms of malware posing as AI chatbot-related tools like ChatGPT since March. Some of these exist as web browser extensions and toolbars (classic) — even being available through unnamed official web stores. The Washington Post reported last month about how these fake ChatGPT scams have used Facebook ads as another way to spread.
Some of these malicious ChatGPT tools even have AI built in so they pass as legitimate chatbots. Meta went on to block over 1,000 unique links to the discovered malware iterations that were shared across its platforms. The company has also provided technical background on how scammers gain access to accounts, which includes hijacking logged-in sessions and maintaining access, a method similar to the one that brought down Linus Tech Tips.
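To make the session hijacking angle concrete: once malware exfiltrates a victim's session cookies, an attacker can simply replay them to act as the logged-in user, no password required. The Python sketch below is purely illustrative; the cookie name, token value, and endpoint are hypothetical stand-ins, not Facebook's actual internals.

```python
import requests

# Hypothetical illustration only: the cookie name, token value, and
# endpoint below are made up and do not reflect Facebook's internals.
STOLEN_SESSION_COOKIE = {"session_id": "abc123-exfiltrated-by-malware"}

# Replaying a stolen cookie lets an attacker act as the logged-in user
# without ever knowing the password, which is why session hijacking can
# bypass even strong credentials.
response = requests.get(
    "https://example.com/account/settings",  # placeholder endpoint
    cookies=STOLEN_SESSION_COOKIE,
    timeout=10,
)

# If the server checks only the cookie, the attacker is now "logged in."
print(response.status_code)
```

This is also why "maintaining access" matters in Meta's description: as long as the stolen session stays valid, changing the password alone may not evict the attacker.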
For any business that’s been hijacked or shut down on Facebook, Meta is providing a new support flow to help it recover and regain access to its account. Business pages generally succumb to hacking because individual Facebook users with access to them get targeted by malware.
Now, Meta is deploying new Meta work accounts that support an organization’s existing, and usually more secure, single sign-on (SSO) credential services and don’t link to a personal Facebook account at all. Once a business account is migrated, the hope is that it’ll be much more difficult for malware like the bizarro ChatGPT to attack.
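For readers unfamiliar with how SSO decouples business access from personal logins, here is a rough sketch of the OpenID Connect authorization-code flow that generic SSO providers use. The issuer, client ID, and redirect URI below are placeholder assumptions, not Meta's actual work account configuration.

```python
from urllib.parse import urlencode

# Hypothetical sketch of a generic OpenID Connect login kickoff; the
# issuer, client_id, and redirect_uri are placeholders, not Meta's
# actual work account setup.
ISSUER = "https://sso.example-org.com"
params = {
    "client_id": "meta-work-placeholder",
    "response_type": "code",
    "scope": "openid profile email",
    "redirect_uri": "https://business.example.com/sso/callback",
    "state": "random-anti-csrf-token",
}

# The user authenticates with their organization's identity provider,
# so no personal Facebook password (or account) is ever in the loop.
login_url = f"{ISSUER}/authorize?{urlencode(params)}"
print(login_url)
```

The design point: if the credential that unlocks a business page lives with the organization's identity provider rather than an employee's personal Facebook login, malware that compromises that personal account has nothing useful to steal.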