Before I get into the updates, I want to explain why we build Amicai the way we do. Three stories from the last six weeks have been sitting with me.
One. In February, an 18-year-old in Tumbler Ridge, British Columbia, killed eight people. OpenAI's staff had flagged concerning conversations from his ChatGPT account and banned it in June 2025 — eight months before the shooting. They never told law enforcement. Sam Altman apologized yesterday in a public letter. The province's premier called the apology "necessary, and yet grossly insufficient."[1] One of the victims' families is suing, alleging OpenAI had specific knowledge of the shooter's long-range planning of a mass casualty event.
Two. Starting in March, Claude Code — the coding tool I personally pay Anthropic for and use to build Amicai — quietly degraded. Paid subscribers were burning through monthly quotas in nineteen minutes instead of five hours. One user reverse-engineered the binary and found caching bugs silently inflating costs by 10 to 20x.[2] Anthropic denied any intentional degradation, but only after weeks of complaints did they acknowledge three product-layer changes that had shipped without user-facing notice. The fixes landed April 20. That's the kind of asymmetry I don't want Amicai to replicate: the company selling you the tool knows something about it that you don't, and stays quiet because the incentive is to stay quiet. I'm a paying customer of the same company whose models Amicai runs on, and I still found out about this from a Hacker News thread.
Three. Last October, Senator Bernie Sanders' office published a report estimating that AI could eliminate nearly 100 million U.S. jobs over the next decade.[3] Amazon, Walmart, UnitedHealth, and JPMorgan have now told investors explicitly that AI will let them cut payroll while posting record profits. The founder class is promising a utopia they don't plan to share.
These aren't isolated accidents. They're what happens when the incentives point away from the user. The big platforms are going to do what they do — hook you, keep you scrolling, sell your data, or in the API providers' case, quietly charge more for less. I want Amicai to be the opposite of that.
Amicai shouldn't be something you open and scroll. It shouldn't be a black box either. The more time I spend in this AI world, the more I'm focused on transparency, on you owning your data, and on making it easy to take it with you whenever you want. AI should feel like magic, but you should still be able to see what's powering it and stay in control.
The more I look at where this is heading, the less I trust what the major labs and the news are saying. The technology can do real good or real harm. My ambition — for me and for you — is to push it toward the good.
Now, what shipped this week. Six things worth flagging.
Android is live
Amicai now runs on Android. You install it directly from getamicai.com/android — signed APK, hosted updater, push notifications, the works. Onboarding pulls in your messages, and the same intelligence kicks in.
We chose direct download over a Play Store placeholder. The full write-up is at We Built the Android Friendship App We Wanted.
Turn a journal entry into a thread
You can now spin any journal entry into a thread — a running line of follow-up entries on the same situation, person, or thought. Instead of one isolated entry that gets buried, you get a thread you can return to as the story develops, and the AI surfaces what's connected. Each new entry enriches the contact behind it.
More on why the journal-to-AI loop matters: Your Journal Now Shows Up in Your Chat.
Photo journaling on iOS
If you write in a real notebook, you can now snap a photo of the page and we'll handle it like any other journal entry — transcription, contact enrichment, the whole pipeline. Use the camera or pull from Photos. iOS first; Android coming.
More on the case for handwritten notebooks: Your Handwritten Journal, Now Legible to the AI That Knows Your People.
Journal works offline
Typed journal entries on iOS and Android now save instantly even with no signal. The app queues them and syncs when you're back online. Retries are safe — we won't duplicate anything if a save replays. Useful on planes, the subway, hikes — anywhere you'd otherwise just lose the thought.
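For the curious, the "safe retries" piece boils down to a standard idempotency-key pattern. This is a minimal TypeScript sketch of that idea, not Amicai's actual code — the `OfflineJournalQueue` class and its method names are hypothetical, and the in-memory `saved` set stands in for whatever server-side dedup a real backend would do:

```typescript
import { randomUUID } from "crypto";

type QueuedEntry = { key: string; text: string; createdAt: number };

class OfflineJournalQueue {
  private pending: QueuedEntry[] = [];
  private saved = new Set<string>(); // stands in for server-side dedup

  // Saving always succeeds locally, even with no signal:
  // the entry just joins the queue with a client-generated key.
  save(text: string): QueuedEntry {
    const entry = { key: randomUUID(), text, createdAt: Date.now() };
    this.pending.push(entry);
    return entry;
  }

  // Called when connectivity returns. Replaying is safe because a key
  // the server has already seen is acknowledged, not inserted again.
  flush(upload: (e: QueuedEntry) => boolean): number {
    let uploaded = 0;
    this.pending = this.pending.filter((entry) => {
      if (this.saved.has(entry.key)) return false; // duplicate replay, drop
      if (upload(entry)) {
        this.saved.add(entry.key);
        uploaded++;
        return false; // uploaded, remove from queue
      }
      return true; // upload failed, keep for the next retry
    });
    return uploaded;
  }
}
```

The key is generated once, on the device, at save time — so however many times a flush is retried, the server sees the same identity for the same entry and nothing duplicates.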
Post-call reflections
After a meaningful FaceTime or phone call, you'll see a small card asking what came up. Save the note straight to that contact — it shows up later in chat, daily reflection, and the contact's intelligence feed. Live on web and iOS.
The thinking behind it: The Hardest Part of a Phone Call Is the Ten Minutes After.
Mac sync downtime alerts
If your Mac stops syncing for too long — sleep, app crash, permissions reset — you'll get an email so you can fix it instead of finding out a week later that nothing's been processed. You can opt out under Settings.
Smaller stuff that also shipped
- Daily reflections were occasionally cross-wiring group activity to the wrong contact. Fixed end to end.
- Contacts page is smarter: better sibling deduplication, accurate Key Contact badge (only when you've explicitly hearted someone), default filter is now "All".
- Journal entries now appear in your timeline the instant you save them (was sometimes a 60-second delay).
- Situation Navigator now shows timestamps in your local timezone.
- Regenerating a daily reflection is now async with progress — no more long blank wait.
- You can pull any journal entry into chat with "Talk about this".
- iOS chat replies now appear read-first, so you can scan a reply before tapping in.
- iOS handwritten-review keyboard trap fixed.
One more thing
If you read the news about Tumbler Ridge, or the Claude Code rollback, or the Sanders report, and felt the same mix of "not surprised" and "this is bad" that I did — that's the feeling I'm trying to build against. The answer isn't to step away from AI. It's to build it differently. I'd rather you use a tool where you can see exactly what it's doing with your data than a black box that's cheaper or shinier.
More soon.
— Wylie
References
[1] CBC News. "OpenAI's Sam Altman writes apology to community of Tumbler Ridge." CBC, April 24, 2026.
[2] The Register. "Anthropic admits Claude Code quotas running out too fast." The Register, March 31, 2026.
[3] U.S. Senate Committee on Health, Education, Labor & Pensions. "Sanders Releases Report on Big Tech Oligarchs' War Against Workers." U.S. Senate, October 6, 2025.