
Your AI Assistant Doesn't Work For You

April 2026

"Hey Siri, what should I do today?"

"There's a brand new ice cream shop down the street — you should give it a try!"

"Hey assistant, what should I do today?"

"You haven't been spending enough time with your daughter recently. How about a hike?"

Same question. Completely different answers. Because they're optimizing for different things.

The answer depends on who's paying

Siri's suggestion isn't random. Neither is Google's. When your assistant recommends something, that recommendation was shaped by a profile — of who you are, what you do, where you go, what you're likely to respond to.

The question nobody asks is: who built that profile, and what are they using it for?

If the company that built the profile makes money from advertising, the profile serves advertisers. Not always overtly. It's not "buy this product."

It's subtler. It's the ice cream shop surfacing instead of the hike. The new restaurant instead of the home-cooked meal. The purchase instead of the walk. Every recommendation carries the invisible weight of someone else's business model.

Your assistant knows you haven't seen your daughter much this week. It knows you like hiking. It knows she's been asking about the creek trail.

But it will never suggest the hike, because there's no money in it. The ice cream shop is a new business that benefits from foot traffic. That's not a conspiracy. It's just the math working as designed.

How they know you

Every push notification on your Android phone is delivered through Google's servers, via Firebase Cloud Messaging. Every notification on your iPhone goes through Apple's, via the Apple Push Notification service. That's not optional; it's how the technology works.

When your bank sends you "You spent $47.23 at Whole Foods," that text typically rides inside the notification payload itself, which means it passes through Google's servers before it reaches your lock screen.

When your pharmacy says "Your prescription is ready," Google routes it the same way. End-to-end encrypted apps like WhatsApp are the partial exception: their push is often just a wake-up signal, and the "Mom: are you coming for dinner?" preview is decrypted on your phone. But most apps take the simpler path and ship the plaintext.

Every app on your phone is broadcasting a structured stream of your life through infrastructure owned by the same company that controls your AI assistant.

They don't need to be inside the Chase app. Chase tells them what you're doing, voluntarily, every time it sends you a notification. That's the only way to reach your lock screen.

Now think about every time an app has begged you to turn on notifications. Every "You're missing out!" dialog. Every "Allow notifications?" prompt that defaults to yes.

That's not about your convenience. That's about feeding the stream.

The profile and the interface

Having a profile of someone is powerful. Controlling what they see is powerful. Having both is something else entirely.

If you know what someone worries about, and you control the answer when they ask for help — you don't just influence their decisions. You shape their reality.

Search engines did this with links. You could at least see ten options and choose.

AI assistants do it with answers. There are no options. There's just the answer. And most people don't fact-check their assistant. Why would they? The whole point is to trust it.

When you ask "what should I do today?" and the assistant says "try this ice cream shop" — that becomes your day. Not because you were manipulated in some dramatic way. Just because you asked a question and got an answer and followed it.

The way everyone does, dozens of times a day, about things much more consequential than ice cream:

What restaurant should I go to? What neighborhood should I move to? What should I think about this political issue? Is this medical symptom serious? What school should my kid go to?

Each question answered by a system optimizing for someone else's objective function. Not yours.

"Don't be evil"

Google's original code of conduct opened with "Don't be evil." In 2018, they quietly removed it.

This matters less as a symbolic gesture and more as a structural one. A constraint was removed.

When you have the most comprehensive behavioral profile of a significant fraction of all humans alive, combined with direct control over the primary interface those humans use to make decisions — removing a constraint on what you do with that power is not a small thing.

They didn't become evil because bad people took over. The structure permits it. The profile plus the interface plus the incentive produces the outcome deterministically. The motto was the only thing that didn't fit, so the motto went.

What changes

I'm not going to tell you what to do about this. I don't have a product to sell you. I just think people should see the mechanism clearly.

When your AI assistant tells you something, ask yourself: who is this answer for?

Is it for me — based on what I actually need, what I actually care about, what would actually make my life better?

Or is it for someone else — shaped by an advertiser's budget, a business model's incentive, a platform's need to keep me engaged, spending, scrolling?

The answer is usually obvious once you ask the question. The trick is that the interface is designed so you never ask it.

Your assistant knows about your daughter.

It just doesn't think she's worth mentioning.

Someone else is paying for that slot.