Before I get into the updates, three things from the last six weeks have been sitting with me.
One. In January, a federal court ordered OpenAI to produce 20 million ChatGPT output logs as part of the New York Times copyright case. In March, the same court compelled additional batches — 78 million and 10 million — over OpenAI's objections.[1] Over a hundred million conversations that users thought of as private are now sitting in litigation discovery. Not because anyone did anything wrong. Because the company that holds the data is the company being sued, and the data went to that company by default.
Two. Anthropic — whose models Amicai runs on — quietly reduced API log retention from 30 days to 7 days last September.[2] That's a real improvement, and the kind of thing the better labs are doing without a press release. But it still means: the conversation you had this morning lives somewhere outside your account for a week, regardless of how careful you are. The defaults at the frontier labs are "data leaves your device, lands in someone else's logs, gets deleted on their schedule, not yours."
Three. Apple has spent the last year pushing Apple Intelligence toward on-device-first, with a hybrid Private Cloud Compute layer for the harder requests.[3] I don't think they've gotten it right yet — the model is small, the integration with iMessage is shallow, and the developer story is still controlled. But the direction is correct. The future of AI in your personal life is not "every message you've ever sent flows to one company's logs." It's a layered thing: small models that run on your phone for the work that fits, larger models you choose for the work that doesn't.
These aren't isolated stories. They're the same story from three angles: your data path matters, and the default path is not the one you'd pick if you were paying attention.
That's the gap Amicai is being built into. The ship this week is two things, and both of them are about giving you control of the data path.
You can now bring your own AI provider key. If you have an Anthropic or OpenAI account, you can plug it into Amicai and the inference runs through your account, with your retention agreement, on your bill. Your conversations don't get pooled into our usage. We never see the key — only that one is set. And if you don't have an account or don't want one, the managed default still works exactly like it did.
On Android, four of the most sensitive jobs — your daily reflection, the contact-scoped Navigator situation, journal enrichment, and your evening journal prompt — now run on the device. No cloud round-trip for those jobs. iOS will follow when the on-device story tightens up. This isn't "everything runs locally" — it's a first wave for the jobs where local makes the most sense.
Now, what shipped.
Pick your AI provider
You can now choose how Amicai runs:
- Managed default — what we've always done, maintained by us.
- Cloud only with your key — bring an Anthropic or OpenAI key; inference runs through your account.
- Local preferred — use your device for the jobs that fit, fall back to cloud for the rest.
- Local only — only run jobs that can stay on the device.
Settings → AI tab. Onboarding has the same choice now. There's a status panel that tells you which provider was used for the last run, what the cost was, and what fell back where.
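The four modes reduce to a small routing decision. A minimal sketch, with assumed mode names rather than the shipped code:

```python
# Hypothetical sketch of provider-mode resolution; the mode names are
# illustrative, not Amicai's shipped identifiers.
def resolve_route(mode: str, job_runs_locally: bool) -> str:
    """Map a user-chosen mode and a job's local capability to a route."""
    if mode == "local_only":
        # Fail closed: a job that can't run on-device simply doesn't run.
        return "local" if job_runs_locally else "skipped"
    if mode == "local_preferred":
        return "local" if job_runs_locally else "cloud"
    # "managed" and "byok" both go to the cloud; they differ only in whose
    # account (and whose retention policy) the inference runs under.
    return "cloud"
```

The interesting design point is that "local only" skips rather than degrades — the strictest mode trades capability for a guarantee, instead of quietly breaking the guarantee.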
The path here is:
- Today: pick your provider.
- Soon: more providers (Gemini and self-hosted endpoints are next).
- Eventually: the best open-source models on your phone, and you decide whether the cloud is even part of the loop.
Aligned with the direction Apple is pushing — but with you choosing the provider, not us choosing it for you.
On-device AI on Android (first wave)
The four most sensitive job types now run on-device on Android, gated by manifest and device state with fail-closed provenance checks:
- Daily reflection — your morning summary, generated locally.
- Situation Navigator — the contact-scoped warm-start when you have a hard conversation coming up.
- Journal enrichment — when you write about someone, the contact pickup runs locally.
- Journal reminder prompts — the evening "what came up today?" prompt.
If your device can't run a job locally, it falls back to your chosen cloud provider. We don't silently downgrade — the run logs tell you what happened.
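The shape of that "gated, fail-closed, never silent" behavior can be sketched like this — the check names and log fields are assumptions for illustration, not Amicai's actual Android code:

```python
# Illustrative fail-closed gating with a logged fallback; field names are
# assumptions, not Amicai's actual implementation.
def run_job(job: str, device_ok: bool, manifest_allows: bool,
            run_local, run_cloud) -> dict:
    """Try on-device first; fall back to cloud and record what happened."""
    if manifest_allows and device_ok:
        return {"job": job, "ran_on": "device", "result": run_local(job)}
    # Fail closed on-device, but never silently: log why we downgraded.
    reason = "manifest" if not manifest_allows else "device_state"
    return {"job": job, "ran_on": "cloud",
            "fallback_reason": reason, "result": run_cloud(job)}
```

The point of the `fallback_reason` field is the "we don't silently downgrade" promise: every cloud run that should have been local carries an explanation you can see in the run logs.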
iOS isn't there yet. Apple's developer-facing on-device APIs are opening up, but the integration story for a third-party app like Amicai isn't tight enough yet. We're tracking it. As soon as the path is real, iOS will get the same first wave.
Reflect on iPhone
The iOS Reflect dashboard now matches web. Source-aware Calls & FaceTime card, full Trajectory widget, Goals summary, Soul summary, Life Events nested into Digest, final card ordering pass — and the branded loading state across all of it. If you've been using web-only because iOS was missing pieces, this closes that gap.
A new way to describe Amicai
The landing page got rewritten this week around what Amicai actually is — a private AI companion for your real relationships. A second opinion on the people you care about. Not a tool to manage friendships, not a feed, not a streak. The longer founder note on this — Amicai is your digital best friend — is going up next.
Behind the scenes: contact identity cleanup
Less visible but worth noting. We finished a long-running cleanup of how Amicai resolves who's who across your contacts — the same person showing up with three different identifiers (a phone number, a UUID, an email) used to occasionally cross-wire. We shipped a canonical resolver, an audit dashboard for the rare case where identities still cross-wire, and ran a full repair pass across all production accounts. Twelve users scanned, zero affected, 288 corrective writes total.
You almost certainly will not notice this. That's the point. The reason for flagging it: this is the kind of work that doesn't get done in companies optimizing for growth. We did it because the alternative was your daily reflection occasionally calling your sister by your coworker's name, and that's not acceptable.
Smaller stuff that also shipped
- Chat agent now resolves contact names through the canonical resolver across the board — fewer "wait, who?" moments.
- Source-aware call insights on Reflect: cellular vs FaceTime audio vs FaceTime video shown distinctly.
- BYOK security copy on the landing page made more honest. The old version implied stronger guarantees than we technically have. The new version says what's actually true.
- Multi-provider rollout monitoring + admin observability for which provider ran what.
- A bunch of behind-the-scenes routing work — chat memory embeddings, transcription, goals depth measurement — all migrated through the shared inference router so the provider choice flows everywhere.
One more thing
The thing I keep coming back to is this: the AI in your personal life should run on a path that you understand. Not because you'll audit it — most people won't — but because the option to should exist, and the company building the tool should make that option real.
Court orders like the one covering those OpenAI logs are going to keep happening, in different forms, to different companies, for years. The right answer isn't to step away from AI. It's to use tools where the data path is something you chose, not something you defaulted into.
More soon.
— Wylie
References
[1] Lawyer Monthly. "OpenAI Discovery Breach: 20M Chat Logs Mandated in SDNY (2026 Analysis)." January 2026.
[2] Anthropic. "How long do you store my data?" Anthropic Privacy Center.
[3] Apple. "Apple Intelligence and privacy on iPhone." Apple Support, 2026.