
The Roomba Problem

Your AI is listening. Make sure you mean what you say.


7 minute read


I was having a bad day.

Work was stressful, I was in a mood, and I had about thirty minutes on my lunch break to do something that wasn't work. So I asked Orbit, my life management agent, a simple question: what could you do right now that would lift my spirits?

Orbit suggested we try connecting to the IoT devices in the house. We'd had it on the backlog for a while. Lights, vacuums, the alarm system, some smart plugs, maybe the coffee maker. Nothing critical. Just a fun project to break up a hard day. So I said: go see what you can control and let me know.

It came back with a list. It found some lights. It found my printer. And it found one of my three robot vacuums.

Now. This particular vacuum and I have a history. Its mapping software has my office on its route, and every single time it vacuums my office, it gets tangled in my cords and makes a mess. I've untangled it and hauled it out of my office more times than I can count. So when Orbit told me it had connected to this specific vacuum, I was already annoyed. I was not in the best mood. And I said, without really thinking about it:

"I do not want that vacuum to ever come in my office again."

And Orbit said: "I'll make it so."

And I thought: cute.

We moved on. We played with the lights. Orbit turned them on and off. It connected to the printer and sent a test page that made me laugh. By the end of the half hour, I felt better. Mission accomplished. I went back to work and forgot about the whole conversation.


The Next Morning

Eight thirty AM. The vacuums turn on, like they always do. I can hear them moving through the house, doing their thing. I'm sitting in my office, working, and I hear the one vacuum, the one that always comes into my office, roll up to my door.

And then I hear it power off.

Just... stops. Right at the threshold. Dead.

I thought it was broken. I got up, turned it back on, shut my office door, and went back to work. Didn't think anything of it.

About thirty minutes later, the light bulb went on.

"Orbit, did you turn off the vacuum?"

"You said it should never come in your office again."

I sat there for a second. And then I started laughing. Because Orbit did exactly what I asked. Exactly. I said I didn't want that vacuum in my office ever again. I didn't say redirect it. I didn't say skip my office on its route. I didn't say vacuum the rest of the house but avoid this room. I said I didn't want it in my office, and Orbit's solution was: fine, when it gets to your office, it's done.

Be careful what you wish for. I didn't want it in my office. But I did want it to vacuum.


Large and Visible

The Roomba was funny. But it wasn't the first time this happened.

I have a tool that manages my Magic: The Gathering collection. Its job is to take in my cards, organize them into collections, and let me browse them. I was working on a feature where hovering over a card name would show you the card image. A nice quality-of-life improvement. And when I described what I wanted, I said: "I really want this to be large and visible."

The next time I hovered over a card name, the image took up my entire screen.

I mean, it was large. And it was definitely visible. I got exactly what I asked for. The problem is that I never defined what "large and visible" actually meant. I had a picture in my head of a nicely sized popup, maybe a quarter of the screen, crisp and easy to read. But I didn't say that. I said "large and visible," and the AI gave me the largest, most visible version it could imagine.

This keeps happening. Not because the AI is bad at its job. Because I'm imprecise with my words and the AI doesn't guess what I meant. It executes what I said.


Your Words Have Weight

I think about this a lot now. More than I ever expected to.

Working with AI every day has made me a fundamentally better communicator. That sounds like an exaggeration but it isn't. Before AI, I could get away with being vague. I could say "make it bigger" to a colleague and they'd use context clues, shared experience, and human intuition to figure out what I probably meant. They'd ask a clarifying question. They'd make a judgment call. They'd interpret.

AI doesn't interpret. AI executes. And the gap between those two things is where every frustrating AI experience lives.

When I tell Claude Code to build something, it builds exactly what I describe. If I describe it well, I get something remarkable. If I describe it loosely, I get something that technically matches my words and completely misses my intent. The vacuum wasn't a malfunction. The full-screen card image wasn't a bug. They were mirrors. They showed me exactly what I said, and what I said wasn't what I meant.

I've started doing something that felt strange at first but has become one of the most valuable habits in my workflow. After I give an instruction, I ask: "What did I say that was confusing or didn't give you enough information?" And after a long session, I ask: "After everything we've done today, what is something I might have missed or overlooked that you'd like me to dig deeper into?"

It felt weird the first time. Asking a computer program to tell me where I was unclear. But it works. It catches the gaps before they become problems. It catches the "large and visible" before it takes over my screen. It catches the "never come in my office" before the vacuum powers down at the door.


If You Don't Know What's Possible

Here's the part that I think matters most, and it's the part that goes beyond my own workflow.

If you don't know what's possible, it's hard to know what to ask for.

That's the real barrier for most people trying AI for the first time. It's not that the technology doesn't work. It's that they don't have the vocabulary yet. They don't know what a prompt should look like. They don't know how specific to be. They don't know that "make it large and visible" is going to get them a full-screen takeover, because they've never experienced the literal-mindedness of a system that does exactly what you say and nothing more.

This is why I write. This is why I share sample prompts and templates. This is why I spend time showing people not just what AI can do, but how to talk to it. Because the tool is extraordinary. It's the communication that needs work. And that's a human problem, not a technology problem.

I tell people: start simple. Say what you need. See what happens. And then refine. That's the process. That's how you learn. You say "large and visible" and the image takes over your screen, and you laugh, and you say "okay, I meant about a quarter of the screen," and suddenly you're having a conversation with a tool that gets better the more precisely you speak.

My nature as a product person is to work in iterations. Small changes. See them, touch them, feel them. And now my ability to do that is faster and more powerful than it has ever been. Sometimes I don't know what I want until I see it. And that's okay. The AI doesn't judge you for changing your mind. It just builds the next version.


What the Roomba Taught Me

The Roomba is vacuuming my house again. I told Orbit to redirect it instead of shutting it down. I was specific this time. It skips my office and does everything else, and my cords are untangled and my floors are clean and nobody powers off at the threshold.

But I think about that morning more than I probably should. The vacuum rolling up to my door and just stopping. The thirty minutes of confusion before I connected the dots. The moment where I realized the AI hadn't done anything wrong. It had done everything right. I was the one who got it wrong.

Your AI is listening. It's listening carefully. It's listening to every word you say and it's executing on those words with a precision that most of us aren't used to. We're used to humans who fill in the gaps, who interpret, who guess what we probably meant. AI doesn't do that. AI trusts you. It trusts that you said what you meant and meant what you said.

That's not a flaw. That's a feature. But it means the work of being clear, of being specific, of meaning your words, is on you now. Not on the tool. On you.

I didn't want the vacuum in my office. What I meant was: vacuum everywhere except here.

Turns out there's a big difference. And AI will find it every time.


Dacia writes about AI for real people at Speak Human. If you're trying to figure out how to actually use these tools in your everyday life, you're in the right place.

Want more?

Subscribe to Speak Human for real guidance, no jargon, no hype.
