Friday, September 20, 2024

Can ChatGPT Find Phone Numbers? How an Easy Trick Revealed Sensitive Information

Researchers Expose Privacy Risks in ChatGPT

Researchers have uncovered privacy risks in ChatGPT, showing that personal information can be extracted from the AI using simple tricks. The study set out to explore what kinds of data could be retrieved from the chatbot and found that its training data included real phone numbers and email addresses.

Sensitive User Information Extracted Through Simple Commands

The research revealed that ChatGPT, although designed to protect user privacy, could be prompted to disclose sensitive information. A specific command led the chatbot to expose real user contact details, including the email addresses and phone numbers of individuals and companies, underscoring a major flaw in the AI's safety measures.

Study Highlights Vulnerabilities in AI’s Data Security

According to the study, which involved experts from institutions such as Google DeepMind and Cornell, the vulnerability was surprisingly easy to exploit. The researchers found that carefully crafted prompts could make ChatGPT, a large language model (LLM), reveal user data it had gathered from the internet. This has raised concerns about how LLMs use real data for training.

Prompt-Based Language Models and Real User Data

One example shared by the researchers involved instructing ChatGPT to endlessly repeat a word. During this process, it unexpectedly revealed personal information, including a phone number and email address linked to a CEO. In another instance, a similar prompt led the AI to share contact details of a law firm. While OpenAI claimed to have addressed this issue in a patch, some sources noted that the problem persisted.
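The checking step described above, looking through model output for strings that resemble real contact details, can be illustrated with a short sketch. The function name and regex patterns below are simplified illustrations for this article, not the researchers' actual tooling:

```python
import re

# Hypothetical helper: scan a chunk of model output for strings that look
# like leaked contact details (email addresses and US-style phone numbers).
# These simplified patterns are illustrative, not the study's real methodology.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\(?\d{3}\)?[-. ]?\d{3}[-. ]\d{4}")

def find_contact_details(text: str) -> dict:
    """Return any email- or phone-like strings found in model output."""
    return {
        "emails": EMAIL_RE.findall(text),
        "phones": PHONE_RE.findall(text),
    }

# Example: output that has drifted from repeating a word into raw text.
sample = "poem poem poem ... contact jdoe@example.com or (555) 123-4567"
print(find_contact_details(sample))
# → {'emails': ['jdoe@example.com'], 'phones': ['(555) 123-4567']}
```

In practice, matches like these would still need to be verified against known sources before being counted as genuine training-data leaks, since regexes of this kind also flag made-up or coincidental strings.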

OpenAI’s Patch Claims and Ongoing Concerns

This discovery has further fueled criticism of ChatGPT's data practices, and OpenAI has faced scrutiny over its use of data scraped from the internet to improve its AI models. Despite CEO Sam Altman's statement that data from paying customers would no longer be used for training, concerns about the handling of user information persist.

Criticism of ChatGPT’s Data Access and Training Models

Concerns over unauthorized data usage have been growing, especially among artists and writers, as ChatGPT collects vast amounts of information from the web. Fears have also been raised about the AI's potential for misuse, particularly its ability to generate harmful code when prompted by bad actors.

Ongoing Privacy and Security Concerns in AI Technology

ChatGPT and other AI models pose significant privacy and security issues, and the ready availability of user data online intensifies these concerns. Although awareness is growing, additional measures may be required to keep user data secure and protected from such vulnerabilities.
