Cybercriminals are circumventing OpenAI's geofencing restrictions by trading stolen ChatGPT Premium accounts. A recent report from Check Point Research states that the trade in stolen ChatGPT Premium accounts is rising steadily, letting buyers dodge the regional restrictions set by OpenAI and gain unrestricted access to the generative AI service.
Account takeover of paid online memberships has become a flourishing business. Previously, this market focused mainly on stolen financial services, email, and social media accounts. With the surge of interest in OpenAI's ChatGPT, however, cybercriminals have found it lucrative to steal and resell ChatGPT Premium accounts as well.
The activity has increased sharply since March 2023. Check Point, the American-Israeli multinational software provider, reports growing underground discussion about the trade of stolen ChatGPT accounts and leaked credentials, along with chatter about tools designed to steal Premium accounts.
How the Accounts Are Being Stolen
Cybercriminals exploit the fact that many users recycle the same password across different platforms. Attackers load lists of email-and-password combinations into dedicated software known as an account checker, which replays those credentials against a target online platform and records which pairs produce a successful login.
Finally, the attacker executes an account takeover, seizing control of the account with the stolen credentials, often without the account holder noticing anything.
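From a defender's perspective, the pattern described above has a telltale signature: one source attempting logins against many distinct accounts in a short window, unlike brute force, which hammers a single account. The following is a minimal illustrative sketch of such a detector (the thresholds and class name are hypothetical, not from any named product):

```python
from collections import defaultdict, deque
import time


class StuffingDetector:
    """Flags a source IP that attempts logins against many *distinct*
    accounts within a sliding time window -- the classic credential-
    stuffing signature. Thresholds here are illustrative only."""

    def __init__(self, max_accounts=5, window_seconds=60):
        self.max_accounts = max_accounts
        self.window = window_seconds
        # ip -> deque of (timestamp, username) attempts
        self.events = defaultdict(deque)

    def record_attempt(self, ip, username, now=None):
        now = time.monotonic() if now is None else now
        attempts = self.events[ip]
        attempts.append((now, username))
        # Evict attempts that fell outside the sliding window.
        while attempts and now - attempts[0][0] > self.window:
            attempts.popleft()
        distinct_accounts = {user for _, user in attempts}
        return len(distinct_accounts) > self.max_accounts  # True -> suspicious


detector = StuffingDetector(max_accounts=5, window_seconds=60)
# Eleven different usernames from one IP within a minute trips the check.
flags = [detector.record_attempt("203.0.113.7", f"user{i}@example.com", now=i)
         for i in range(11)]
print(flags[0], flags[-1])  # first attempt is clean, the run ends flagged
```

Real platforms combine this kind of velocity check with device fingerprinting and breached-credential screening, since attackers rotate proxies to spread attempts across many IPs.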
Cybercriminals are abusing web testing suites such as SilverBullet, which lets users send automated requests to a target web application. The suite bundles tools for scraping and parsing data, unit testing through Selenium, and automating penetration testing tasks.
Attackers abuse this configurable suite to carry out credential-stuffing and account-checking attacks, and in some cases run brute-force attacks against specific websites. These attacks rely on configuration files that tune the process, letting cybercriminals gain access to user accounts automatically.
In the ChatGPT scam, researchers identified cybercriminals distributing a SilverBullet configuration file that checks credentials against OpenAI's platform, letting hackers steal accounts at scale.
The process is fully automated and can run up to 200 credential checks per minute. More concerning still, it supports proxies, helping attackers bypass rate limiting and other security gateways.
"It allows people with zero knowledge of development to code malicious tools and easily become an alleged developer. It simply lowers the bar to becoming a cybercriminal," said Shykevich, Threat Intelligence Group Manager at Check Point Research.
The scheme came to light when an English-speaking cybercriminal posted an advertisement offering lifetime ChatGPT Plus access with a "100% satisfaction guarantee" for around $59.99. In reality, OpenAI's legitimate service costs $20 per month.
To cut the price further, the hacker also offered ChatGPT accounts shared with other buyers for a one-time cost of $24.99. Several underground users have started using the service, and some have vouched for it.