Between February 2024 and March 2026, every major AI assistant — ChatGPT, Claude, Gemini, Copilot — flipped the switch on long-term memory. They went from forgetting your last conversation the moment you closed the tab to retaining a running, personal context about you indefinitely.[1][2]
The feature is real, and for a lot of users it's genuinely useful. The privacy question is real too, and most of the coverage you'll find about it is a checklist comparison: which platform stores what, where you can turn it off, what each provider claims about training. That's worth knowing. But the question I find more interesting is this: what should AI memory actually look like if it's designed for the person, not the platform?
I build Amicai, an AI relationship intelligence app whose entire value depends on remembering things about you and the people you care about. So I've spent more time than most thinking about how to make a memory feature that doesn't quietly become a privacy liability. Here's what I'd look for.
The two kinds of memory most products are blending
When a current AI product says "memory," it usually means one of two things — and most products mash them together without distinguishing between them.
Episodic memory is "what we talked about." A running, additive list of the things you've said in past sessions. Useful for continuity ("you mentioned last week that you were prepping for a half marathon"). High volume. Mostly low-stakes.
Semantic memory is "what's true about you." Distilled facts the AI has decided are stable and worth carrying forward. Your dietary preferences. Your job. Your kids' names. Your medical conditions. Lower volume. Much higher stakes.
The thing that makes consumer AI memory feel uneasy is that the distillation step happens silently. You don't see the moment the model decides something you said in passing should become a long-term fact about you. It just shows up later, weeks down the line, in a context you didn't expect.
The right design separates the two and lets you see — and control — what got promoted to a fact.
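To make that concrete, here's a rough sketch of the separation in TypeScript. The types and names are hypothetical, not any vendor's actual schema; the point is that nothing becomes a long-term fact except through an explicit, named promotion step.

```typescript
// Hypothetical schema: the episodic log and the semantic facts are
// separate types, so promotion to a "fact" is a visible, auditable event.
interface EpisodicEntry {
  id: string;
  sessionId: string;
  timestamp: Date;
  text: string; // raw "what we talked about"
}

interface SemanticFact {
  id: string;
  statement: string;    // distilled "what's true about you"
  promotedFrom: string; // id of the episodic entry it came from
  promotedAt: Date;
}

// Promotion is a named step rather than a side effect buried in the
// chat loop. That is what makes it possible to surface and control.
function promoteToFact(entry: EpisodicEntry, statement: string): SemanticFact {
  return {
    id: crypto.randomUUID(),
    statement,
    promotedFrom: entry.id,
    promotedAt: new Date(),
  };
}
```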
What to actually look for
Forget the marketing. If you're evaluating an AI memory product (including ours), here are the things that actually matter:
1. Can you see what it remembers?
Not a settings page that says "Memory: ON." A list. Every fact the AI considers stable about you. With timestamps. With the source — was this from a journal entry you wrote? A passing remark in a chat? Something you confirmed?
The thing you want is full visibility into the semantic layer. If you can't see the list, the system is making decisions about you that you can't audit.
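Concretely, the record behind each row of that list might look something like this. Field names are illustrative:

```typescript
// Illustrative record for a user-facing memory audit list: every stable
// fact carries its provenance and a timestamp, so it can be inspected.
type FactSource = "journal_entry" | "chat_remark" | "user_confirmed";

interface AuditableFact {
  statement: string;  // e.g. "training for a half marathon"
  source: FactSource; // journal entry? passing remark? confirmed?
  storedAt: Date;     // when it was promoted to a long-term fact
}
```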
2. Can you delete a fact, and does it actually leave?
"Delete" is one of those words that does a lot of work. There's a difference between removed from your memory view and purged from the system, including from the underlying provider's logs. The latter is the one that matters. Some providers retain logs for 30 days even after you delete locally;[3] some retain longer.
If you can't get an honest answer about provider-side retention, the answer is "longer than you think."
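In code terms, an honest delete is two distinct operations, and only the first is fully under the app's control. A sketch, with a hypothetical provider client:

```typescript
// Hypothetical two-step delete: removing the local record is easy;
// the provider-side purge is a separate call with its own semantics,
// and some providers retain logs for a window even after it.
interface FactStore {
  deleteLocal(factId: string): Promise<void>;
}

interface ProviderClient {
  // Assumption: the provider exposes a deletion endpoint at all.
  requestPurge(factId: string): Promise<{ retentionWindowDays: number }>;
}

async function deleteFact(
  store: FactStore,
  provider: ProviderClient,
  factId: string
): Promise<void> {
  await store.deleteLocal(factId); // gone from your memory view
  const { retentionWindowDays } = await provider.requestPurge(factId);
  if (retentionWindowDays > 0) {
    // Honest UI: tell the user the data outlives the button press.
    console.warn(`Provider retains logs for ${retentionWindowDays} more days`);
  }
}
```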
3. Did the AI ask before remembering?
This is the design choice I care about most. There are roughly three modes a memory system can operate in:
- Auto-store, no review. AI silently extracts facts and stores them. Default for most consumer chatbots.
- Auto-store, post-hoc review. AI extracts facts, you can see them later, you can delete them.
- Propose, then store on approval. AI surfaces a candidate fact, you approve or decline, only then does it persist.
The third one is uncommon. It's also the only one that puts the user in the loop before a sensitive fact becomes part of the AI's persistent model of them.
Amicai's Soul File works this way by default. The AI proposes insights — "you tend to pull back when conversations get emotionally direct" — and you approve, edit, or reject each one before it becomes part of the long-term profile. That's a deliberate design choice. The cost is that fewer things get stored automatically. The benefit is that you know what's in there because you put it there.
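Here's roughly what the third mode looks like structurally. This is an illustrative sketch, not Amicai's actual implementation: the useful property is that a candidate fact and a stored fact are different types, so there is no code path into the persistent store that skips the user's decision.

```typescript
// Illustrative propose-then-approve loop. A candidate fact is a distinct
// type from a stored fact, so the type system enforces the approval gate.
interface CandidateFact {
  statement: string;
  proposedAt: Date;
}

interface StoredFact extends CandidateFact {
  approvedAt: Date; // only exists once the user has said yes
}

type UserDecision = "approve" | "edit" | "reject";

function resolveCandidate(
  candidate: CandidateFact,
  decision: UserDecision,
  editedStatement?: string
): StoredFact | null {
  switch (decision) {
    case "reject":
      return null; // nothing persists
    case "edit":
      return {
        statement: editedStatement ?? candidate.statement,
        proposedAt: candidate.proposedAt,
        approvedAt: new Date(),
      };
    case "approve":
      return { ...candidate, approvedAt: new Date() };
  }
}
```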
4. Is sensitive content masked before the model sees it?
This is the question almost no consumer AI memory product answers cleanly. If your raw phone numbers, email addresses, payment details, or named contacts get fed into the LLM as part of the memory context, then any successful jailbreak — or any model that happens to repeat its context in an unusual way — can leak that data.
The defense is to strip or mask sensitive content before the prompt is built. Not just to instruct the AI not to repeat it. Instructions are not defenses.
For more on how we think about that boundary specifically, see Is My AI Chatbot Data Safe? Here's How to Tell.
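Mechanically, the simplest version of that masking pass runs over the text before the prompt string is ever assembled. This is a deliberately naive sketch: the regex patterns are illustrative, and real PII detection is considerably harder.

```typescript
// Simplified masking pass, run BEFORE the prompt string is assembled.
// Patterns are illustrative; production PII detection needs far more.
const MASKS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],
  [/\+?\d[\d\s().-]{7,}\d/g, "[PHONE]"],
];

function maskSensitive(text: string): string {
  return MASKS.reduce((t, [pattern, label]) => t.replace(pattern, label), text);
}

function buildPrompt(memoryContext: string, userMessage: string): string {
  // The model only ever sees masked versions. An instruction like
  // "don't repeat phone numbers" would not be a defense; absence is.
  return `Context:\n${maskSensitive(memoryContext)}\n\nUser:\n${maskSensitive(userMessage)}`;
}
```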
5. Is there a category for things you said you don't want remembered?
People say things in passing they don't want carried forward. Venting. A bad day. Something about a family member that was true that morning and not true by dinner. A category for "noticed and intentionally not stored" is the difference between a memory feature that respects you and one that takes notes on you.
The current state on most platforms: this category doesn't exist. Either it's stored, or it's not.
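Structurally, that category is a third state, not the absence of a record. One hypothetical way to model it:

```typescript
// Hypothetical three-state outcome for anything the AI noticed.
// "suppressed" is a real record: the system saw the thing and committed
// to not carrying it forward, which is different from never seeing it.
type MemoryDecision =
  | { kind: "stored"; statement: string; storedAt: Date }
  | { kind: "suppressed"; reason: "user_opt_out" | "venting" | "transient"; decidedAt: Date }
  | { kind: "ignored" }; // never rose to the level of a candidate fact
```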
What this looks like for relationships
Memory privacy gets sharper when the AI isn't just remembering things about you — it's remembering things about other people in your life. Friends, family, partners, coworkers. None of them signed up for an AI to retain facts about them.
This is the reason Amicai treats sensitive contacts as a first-class feature. You can flag any contact as off-limits, and that flag propagates through the whole stack — their messages don't get analyzed, their name doesn't appear in prompts, and existing data tied to them is purged across more than 20 internal tables. (Background: Some Conversations Are Off Limits. Your AI Should Know That.)
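For illustration only (this is not Amicai's actual code), here's the shape of what that kind of propagation implies: the flag gets checked before anything is analyzed, and the purge has to enumerate every table that could hold the contact's data.

```typescript
// Illustrative enforcement points for an off-limits contact flag.
// Table names and check sites here are hypothetical.
const TABLES_WITH_CONTACT_DATA = [
  "messages", "insights", "prompt_log", "embeddings", // ...and the rest
];

interface Contact {
  id: string;
  offLimits: boolean;
}

function shouldAnalyze(contact: Contact): boolean {
  return !contact.offLimits; // consulted before any analysis runs
}

async function purgeContact(
  db: { run(sql: string, id: string): Promise<void> },
  contactId: string
): Promise<void> {
  // A purge that only hides rows in the UI is not a purge. Every
  // table that can reference the contact has to be covered.
  for (const table of TABLES_WITH_CONTACT_DATA) {
    await db.run(`DELETE FROM ${table} WHERE contact_id = ?`, contactId);
  }
}
```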
Most consumer AI memory products treat third parties — the people you mention but don't control — as part of your context. That's a privacy decision they're making on behalf of people who didn't consent. Worth asking how an AI app you're considering handles it.
The version of memory worth trusting
The question is not "does AI memory exist." It does. Every major model has it now, and the trend is toward retaining more, by default, with longer windows.[4]
The question is whether the memory you've been given is a feature or a habit. A feature is something you control — visible, editable, deletable, gated by your approval. A habit is something the system does to you that you can't easily change. The two look identical from the outside. They're very different from the inside.
The standard I'd hold any AI memory product to: I should be able to read every persistent fact, edit any of it, delete any of it cleanly, and trust that what I said I didn't want stored, isn't.
If a product can't pass that test, the memory isn't yours.
References
[1] Glasp. "AI Memory Wars: How ChatGPT, Claude, and Gemini Remember You." Glasp, 2026.
[2] LumiChats. "ChatGPT Memory vs Claude vs Gemini vs Grok: Which AI Actually Remembers You in 2026." LumiChats, 2026.
[3] Anuma. "2026 AI Chat Privacy Report: How 15 Leading Platforms Handle Your Data." Anuma Blog, 2026.
[4] ShareUHack. "How Claude Memory Works in 2026: Free Tier Setup, ChatGPT Import, and Privacy Controls." ShareUHack, 2026.