Someone asks you a question. You don’t know the answer off the top of your head, so you paste it into Claude, copy the response, and send it back.

That’s not a human interaction. That’s a very slow API call with extra steps.

I keep seeing this pattern (at work, on forums, in social communities), and it makes me wonder if we’ve completely missed the point. Not of AI. Of ourselves.

Here’s what the pattern tells the person on the other end: I could not be bothered to engage with your question. The response might be accurate. It might even be helpful. But it carries a clear signal: you were not worth the effort of actual thought. The sender probably didn’t mean it that way. It doesn’t matter. That’s what arrived.

If I wanted an LLM answer, I would have asked the LLM. I asked you.

The useful version of this technology isn’t a faster copy-paste. It’s a forcing function. The interactions that don’t require a human (the FAQ, the status update, the “what does this acronym mean”) should go away entirely. Build a better doc. Point to a bot. Remove the friction. That’s the job.

What should be left over is the stuff that actually requires a person. Judgment calls. Messy context. The question behind the question. The moments that only work if someone actually gives a shit.

Those interactions deserve more, not less. If AI is buying back any of your time, that’s where it goes. And if you’re one of the people who already gets that, start talking about it. The people around you are probably already losing the thread, and they don’t know it yet.

What’s happening instead is the opposite. The low-effort questions get a low-effort LLM response dressed up as a human answer. The hard questions get the same treatment. And slowly, the expectation of genuine engagement just… drops.

There’s a word for what this looks like at scale: enshittification. And I don’t think the people doing it are cynical. I believe they’re trying. They picked up - or were forced to use - a powerful tool and pointed it at a real problem. Nobody handed them a manual for which problems it should and shouldn’t touch.

But good intentions don’t change what lands on the other end. The person who asked you something real still got a hollow answer. The gap between meaning well and doing well is exactly where the work is.

The technology isn’t the problem. Mistaking the shortcut for the improvement is. And that mistake doesn’t land evenly. For some people, a hollow answer isn’t an inconvenience - it’s confirmation of something they were already afraid was true. Neither the impact nor the responsibility to fix it is distributed equally.