AI becomes an anti-fraud tool: Indian engineer turns the tables on a scam ring

In New Delhi, India, an information engineer received a message from someone claiming to be a "former classmate who had become a senior official," saying that a friend was about to be transferred and urgently needed to sell furniture and appliances at a low price. This type of "military transfer" scam is common in the area: the scammers typically use fake identities, product photos, and QR codes to lure victims into paying up front. This engineer, however, was not fooled; instead, he decided to use AI to turn the tables on the scammers, sparking heated discussion in the Indian community.

According to a post by the user u/RailfanHS on Reddit, he used ChatGPT to generate a tracking website disguised as a payment page, luring the scammer into opening it and granting access to the camera and location.

The engineer said the page runs to only about 80 lines of PHP, yet when the scammer taps to upload a QR code it can simultaneously collect GPS coordinates, the IP address, and clear photos taken with the front camera.
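The post does not include the engineer's actual code, but the three data points fall into two categories: the IP address is exposed by any page visit with no prompt at all, while GPS coordinates and camera photos can only be obtained through the browser's explicit permission dialogs (the standard JavaScript Geolocation and getUserMedia APIs), which is why the scammer had to be lured into tapping "allow." As a minimal, hypothetical sketch of the first category only, written in Python rather than the reported PHP, a toy server can record what a bare visit already reveals:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Every plain HTTP request reveals the caller's IP address and
# User-Agent to the server, with no permission prompt involved.
visits = []

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record what the visit alone exposes about the client.
        visits.append({
            "ip": self.client_address[0],
            "user_agent": self.headers.get("User-Agent", ""),
            "path": self.path,
        })
        body = b"payment page placeholder"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port, serve in the background, then visit once.
server = HTTPServer(("127.0.0.1", 0), LoggingHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
urllib.request.urlopen(f"http://127.0.0.1:{port}/pay").read()
server.shutdown()

print(visits[0]["ip"])  # the visitor's address; here the loopback 127.0.0.1
```

The GPS and camera steps are deliberately omitted: they happen client-side and only work because the visitor consents to the browser's permission prompts, which is exactly the social-engineering half of the trick.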

Once the scammer granted the permissions without suspicion, the engineer immediately received his location and photos, and sent the intercepted information back to him. According to the post, the scammer panicked, repeatedly calling and messaging to plead for mercy and even promising to 'quit his ways,' which caught the attention of Indian netizens.

The ChatGPT-generated tracking page triggered a wave of verification, and the tech community confirmed its feasibility.

The 'anti-fraud counterattack' quickly spread across India's Reddit communities, and many developers and AI enthusiasts set out to reproduce the engineer's method. Several users reported successfully generating pages with the same logic using ChatGPT.

User u/BumbleB3333 replied that he had built a simplified HTML page that obtains location information when the user uploads an image, noting that 'ChatGPT does not generate illegal surveillance programs, but it will produce web code that requests normal permissions, which is precisely why fraudsters fall into the trap.' Another user, u/STOP_DOWNVOTING, said he had developed an 'ethical version' of the code for research purposes.

The original author later added in the comments that he works as an AI product manager and has long been familiar with steering ChatGPT's output through prompts and deploying the generated program on a VPS.

The tech community generally agrees that this method exploits no vulnerability; it simply combines social engineering with AI-generated code, leading fraudsters to expose their own information through carelessness.

The community applauded, but experts warn that 'countering fraud' treads into legal gray areas.

As the incident went viral, many Indian netizens hailed the engineer as a 'modern Robin Hood,' praising him for using AI to strike back at fraudsters, with some even calling him 'more efficient than the police.' Cybersecurity experts, however, caution that this kind of reverse tracking is 'hack-back' behavior, which remains a gray area in many countries' laws: collecting another person's information and inducing them to grant permissions may itself be illegal.

Even so, for many Indians who have long suffered from fraud, the incident carries strong symbolic weight, showing that ordinary users can also use AI tools to strengthen their own defenses.

In recent years, fraud cases in India have grown rapidly, from fake investment schemes and 'military transfer' scams to cross-border money-laundering networks, and 'you must understand technology better than the fraud groups do' has become something of a social consensus.

The story of this engineer choosing to 'fight fraud with the fraudsters' own tactics' also reflects the rapidly evolving role of generative AI in online offense and defense. Fraud groups use AI to mass-produce deceptive content, but defenders are beginning to use AI to identify, counter, and expose suspicious behavior. For the tech community, this case has become vivid teaching material for discussing the safety and ethics of generative AI.

AI has become the new battleground of fraud offense and defense, and digital literacy a necessary skill for the public.

With the widespread use of AI technology, fraud groups are heavily utilizing generative tools to create fake websites, forge documents, and conduct voice scams, with increasingly 'industrialized' attack methods. Conversely, this incident demonstrates that the general public can also use the same tools for self-defense, even turning the tables.

Experts point out that future online offense and defense will shift from 'tool versus tool' to 'model versus model' competition. Raising the public's digital literacy, including the ability to recognize suspicious situations and understand how AI operates, will matter more than simply adding security hardware.

The Indian engineer's story is certainly dramatic, but the message behind it is more profound: in the age of AI, the information gap is the risk gap. To avoid becoming the next victim, you must understand the technology better than the fraudsters do.

'Engineers use ChatGPT against fraud! Tracking techniques amaze the entire internet, exposing the fraudsters' images and locations' was first published on 'Crypto City'