Stay Safe

Is AI Safe to Use? Privacy, Data, and What to Know

What happens to the information you share with AI tools, and how to protect yourself. A plain-English guide to AI privacy for adults over 50.

ConqueringAI Editorial Team · 7 min read · AI-assisted content

Reviewed against FTC.gov and CISA.gov guidance · AI-assisted content (see our editorial standards)


Quick answer

Quick answer: AI tools like ChatGPT and Claude are safe for everyday tasks such as asking questions, drafting letters, and planning trips. The main rule: never share your Social Security number, Medicare ID, bank account details, or passwords with any AI tool. Treat it like a knowledgeable stranger, helpful and worth talking to, but not someone you would hand your wallet.


More than half of adults over 60 say privacy is their top concern when it comes to using AI. That concern is reasonable. These tools read everything you type, and many people aren't sure where that information goes.

The good news: for the kinds of tasks most people use AI for (answering questions, drafting letters, understanding confusing documents) the privacy risk is low if you follow a few simple rules. This guide explains exactly what happens to your data, what the real risks are, and what you can safely share.

56%

of adults 60+ cite privacy as their top AI concern

Source: AARP 2024

$0

cost of using the free tier of ChatGPT, Claude, or Gemini

30 days

how long ChatGPT stores conversations by default before you can delete them

Source: OpenAI Privacy Policy 2026


What actually happens when you type something into ChatGPT

When you send a message to ChatGPT, Claude, or any major AI tool, here is what happens:

Your message travels over an encrypted connection to the company's servers, using the same type of encryption your bank uses. The AI processes your message and generates a response. Depending on your settings and the company's policy, your conversation may be stored for a period of time and in some cases used to improve future versions of the AI.

That last part is what concerns most people. And it is worth understanding carefully.

OpenAI (ChatGPT): By default, ChatGPT stores your conversations and may use them to improve its models. You can turn this off. Go to Settings → Data Controls → toggle off "Improve the model for everyone." When you do this, your conversations are not used for training.

Anthropic (Claude): Claude.ai does not use your conversations to train its models by default for paid users. Free users should check the current privacy policy at anthropic.com/legal/privacy.

Google (Gemini): Conversations may be reviewed by human reviewers for safety and quality. You can turn off "Gemini Apps Activity" in your Google account to prevent storage.

💡 Tip

You can always delete your conversation history. In ChatGPT, click the three dots next to any conversation and select Delete. For Claude, go to Settings → Privacy. This removes what's stored, though it may not erase data already used for training.


What is safe to share, and what isn't

The line is not complicated. Think of it this way: what would you say out loud to a helpful stranger in a public place?

Safe to share:

  • General health questions ("what does metformin do?")
  • Your general situation without identifying details ("I'm 68 and on Medicare, what should I ask my doctor about Part D?")
  • The text of a confusing document, with your name and account numbers removed
  • Travel plans, recipe questions, letter drafts, family situations

Never share:

  • Your Social Security number or Medicare ID number
  • Bank account or credit card numbers
  • Passwords or security codes
  • Your full date of birth combined with your full name and address
  • Anything you would not say out loud in a coffee shop

⚠ Important

If a website or app claims to be an AI assistant and asks for your Social Security number, Medicare ID, or banking details, close the tab immediately. Legitimate AI tools never need this information to help you.


The real risks, and the ones that aren't

The risk people worry about most: "Will the AI company sell my data?"

The major companies (OpenAI, Anthropic, Google) do not sell your conversation data to advertisers. Their business models are subscription and API fees, not data brokering. This is verifiable in their public privacy policies.

The risk people underestimate: Sharing too much identifying detail in a single conversation.

If you paste in a letter that includes your full name, address, date of birth, Medicare ID, and diagnosis, that complete picture is more sensitive than any single piece. Describe your situation in general terms instead. "I received a Medicare denial for a knee replacement" tells the AI everything it needs to help you without exposing the identifying details.

The other real risk: Using an unofficial or fake AI tool.

Scammers create fake websites that look like ChatGPT or Claude. They collect everything you type. Always go directly to chat.openai.com, claude.ai, or gemini.google.com, never through a link in an email or ad.

🚫 Rule

Always navigate to AI tools directly by typing the address in your browser. Never access ChatGPT, Claude, or any AI tool through a link in an email, text message, or advertisement; these may be scam sites designed to harvest what you type.


Step-by-step: how to use AI more privately

1

Go directly to the official site

type chat.openai.com, claude.ai, or gemini.google.com directly into your browser. Bookmark it.

2

Create a free account with just an email

you do not need to provide your real name, phone number, or any sensitive details to sign up.

3

Turn off training data sharing

in ChatGPT: Settings → Data Controls → toggle off "Improve the model for everyone." This prevents your conversations from being used to train future models.

4

Remove identifying details before pasting documents

cross out or delete your name, account numbers, and ID numbers before uploading or copying document text.

5

Delete conversations when done

especially if you discussed anything personal. This limits what's stored long-term.

6

Use a separate email for AI tools

if privacy is a priority, create a free Gmail or ProtonMail account specifically for AI sign-ups.


What about the Document Analyzer on this site?

Our Document Analyzer processes your document entirely in memory; it is never written to a server, database, or file system. The moment your session ends, the document is gone. We built it this way specifically because of the sensitivity of Medicare and insurance documents.

You still accept a disclosure before submitting, because the document does travel over the internet to reach Claude's API. If you are not comfortable with that, our written guides can help you understand your documents without uploading anything.

📄

Have a document you want explained in plain English?

Upload it to our Document Analyzer for a plain-English explanation in under 20 seconds. Free, nothing stored.

Try Document Analyzer →

A real example

Dorothy, 71, from Florida, wanted to ask AI about a new blood pressure medication her doctor prescribed. She was nervous about privacy.

She typed: "I'm a 71-year-old woman. My doctor just prescribed lisinopril. What are the most common side effects and what should I watch out for?"

She did not include her name, her doctor's name, her insurance information, or any account numbers. The AI gave her a thorough answer about lisinopril. She printed it out and brought questions to her follow-up appointment.

That is exactly how AI is meant to be used for health questions: general enough to protect your privacy, specific enough to be genuinely useful.


Frequently asked questions

Is ChatGPT listening to my phone calls or microphone?

No. ChatGPT and other text-based AI tools only process what you actively type or paste into them. They do not access your microphone, camera, contacts, or any other part of your device unless you explicitly grant permission for a specific feature (like voice input, which you would have to activate manually).

Can someone else see my conversations with ChatGPT?

OpenAI employees may review conversations for safety and quality purposes. This is disclosed in their privacy policy. They are not reading your conversations in real time; it is more like how email providers can technically access your emails. Turning off training data sharing reduces (but does not eliminate) this possibility.

What if I already shared something sensitive with an AI tool?

First, don't panic: isolated pieces of information are rarely useful to anyone. Delete the conversation from your history. If you shared something like a full Social Security number, consider placing a credit freeze as a precaution. Per the FTC, a credit freeze is free and can be done at all three bureaus (Equifax, Experian, TransUnion) in under 10 minutes online.

Are the AI tools on this site HIPAA-compliant?

No, and we are upfront about that. ConqueringAI is not a healthcare provider or health plan, so HIPAA does not apply to us. Our tools use Claude's API, which is subject to Anthropic's privacy policy. You are informed of this and given the choice to proceed before submitting anything.

Is it safe to use AI on public Wi-Fi?

The connection between your device and the AI tool is encrypted, so someone snooping on the same network cannot read what you type. The larger risk on public Wi-Fi is using other, non-encrypted services at the same time. For AI use specifically, public Wi-Fi is not a significant concern.



Have a confusing document?

Upload it for a plain-English explanation - free.

Document Analyzer · Letter Writer