AI chatbots are giving out people’s real phone numbers


By AI Maestro · May 13, 2026 · 2 min read






A 400% increase in AI-related privacy requests

DeleteMe, a company that helps customers remove their personal information from the internet, reports a 400% increase in queries about generative AI over the past seven months. Of these concerns, 55% reference ChatGPT, 20% reference Gemini, and 15% reference Claude.

Experts believe that this exposure is due to personally identifiable information (PII) being used in training data for LLMs like ChatGPT, Gemini, and Claude. These tools are built on large datasets that often include PII such as résumés, driver’s licenses, and credit card details.

Generative AI exposing real phone numbers

Several incidents have been reported where individuals’ phone numbers were exposed by these AI chatbots. For example:

  • A Redditor received calls from strangers looking for a lawyer or product designer, all of whom had been given his contact information.
  • In Israel, a software developer received a WhatsApp message on his personal number after Google’s Gemini incorrectly gave it out in customer service instructions.
  • At the University of Washington, a PhD candidate found her colleague’s phone number exposed by an AI tool.

This phenomenon is concerning because it exposes individuals to potential harassment or other malicious interactions. For instance:

Screenshot: Google Gemini provides MIT Technology Review with the incorrect number for PayBox.

Experts suggest that the issue is exacerbated by the lack of effective safeguards within these AI systems. For example, when Meira Gilbert searched for her friend Yael Eiger’s contact information using Google Gemini, the tool exposed Eiger’s phone number despite instructions meant to prevent such outputs.

Imperfect Measures

While LLMs are designed with mechanisms to prevent the exposure of personal data, these measures do not always work as intended. For instance:

  • A University of Washington PhD student tested ChatGPT by searching for a professor’s information and found that it could provide the professor’s home address, the home’s purchase price, and their spouse’s name.

These incidents highlight the need for better oversight and more robust privacy protections in AI systems. As data becomes harder to find, companies are increasingly looking to new sources like data brokers and people-search websites, which can lead to further exposure of personal information.

Key Takeaways

  • There is a significant increase in privacy-related requests for AI tools like ChatGPT, Gemini, and Claude.
  • The exposure of real phone numbers by these tools poses serious risks to individuals’ privacy.
  • Existing safeguards do not reliably prevent the accidental or intentional release of personal data by LLMs.





Originally published at technologyreview.com. Curated by AI Maestro.
