<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>LLM on Coffee or Blog</title>
    <link>/tags/llm/</link>
    <description>Recent content in LLM on Coffee or Blog</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Fri, 10 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="/tags/llm/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>The Pass-Through Problem</title>
      <link>/posts/2026-04-10---the-pass-through-problem/</link>
      <pubDate>Fri, 10 Apr 2026 00:00:00 +0000</pubDate>
      
      <guid>/posts/2026-04-10---the-pass-through-problem/</guid>
      <description>Someone asks you a question. You don&amp;rsquo;t know the answer off the top of your head, so you paste it into Claude, copy the response, and send it back.
That&amp;rsquo;s not a human interaction. That&amp;rsquo;s a very slow API call with extra steps.
I keep seeing this pattern - at work, on forums, in social communities - and it makes me wonder if we&amp;rsquo;ve completely missed the point. Not of AI. Of ourselves.</description>
      <content>&lt;p&gt;Someone asks you a question. You don&amp;rsquo;t know the answer off the top of your head, so you paste it into Claude, copy the response, and send it back.&lt;/p&gt;
&lt;p&gt;That&amp;rsquo;s not a human interaction. That&amp;rsquo;s a very slow API call with extra steps.&lt;/p&gt;
&lt;p&gt;I keep seeing this pattern - at work, on forums, in social communities - and it makes me wonder if we&amp;rsquo;ve completely missed the point. Not of AI. Of ourselves.&lt;/p&gt;
&lt;p&gt;Here&amp;rsquo;s what the pattern tells the person on the other end: &lt;em&gt;I could not be bothered to engage with your question.&lt;/em&gt; The response might be accurate. It might even be helpful. But it carries a clear signal: you were not worth the effort of actual thought. The sender probably didn&amp;rsquo;t mean it that way. It doesn&amp;rsquo;t matter. That&amp;rsquo;s what arrived.&lt;/p&gt;
&lt;p&gt;If I wanted an LLM answer, I would have asked the LLM. I asked &lt;strong&gt;you&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The useful version of this technology isn&amp;rsquo;t a faster copy-paste. It&amp;rsquo;s a forcing function. The interactions that &lt;em&gt;don&amp;rsquo;t&lt;/em&gt; require a human (the FAQ, the status update, the &amp;ldquo;what does this acronym mean&amp;rdquo;) should go away entirely. Build a better doc. Point to a bot. Remove the friction. That&amp;rsquo;s the job.&lt;/p&gt;
&lt;p&gt;What should be left over is the stuff that actually requires a person. Judgment calls. Messy context. The question behind the question. The moments that only work if someone actually gives a shit.&lt;/p&gt;
&lt;p&gt;Those interactions deserve more, not less. If AI is buying back any of your time, that&amp;rsquo;s where it goes. And if you&amp;rsquo;re one of the people who already gets that — start talking about it. The people around you are probably already losing the thread, and they don&amp;rsquo;t know it yet.&lt;/p&gt;
&lt;p&gt;What&amp;rsquo;s happening instead is the opposite. The low-effort questions get a low-effort LLM response dressed up as a human answer. The hard questions get the same treatment. And slowly, the expectation of genuine engagement just&amp;hellip; lowers.&lt;/p&gt;
&lt;p&gt;There&amp;rsquo;s a word for what this looks like at scale: &lt;a href=&#34;https://pluralistic.net/2023/01/21/potemkin-ai/#hey-ho-lets-go&#34;&gt;enshittification&lt;/a&gt;. And I don&amp;rsquo;t think the people doing it are cynical. I believe they&amp;rsquo;re trying. They picked up - or were forced to use - a powerful tool and pointed it at a real problem. Nobody handed them a manual for which problems it should and shouldn&amp;rsquo;t touch.&lt;/p&gt;
&lt;p&gt;But good intentions don&amp;rsquo;t change what lands on the other end. The person who asked you something real still got a hollow answer. The gap between meaning well and doing well is exactly where the work is.&lt;/p&gt;
&lt;p&gt;The technology isn&amp;rsquo;t the problem. Mistaking the shortcut for the improvement is. And that mistake doesn&amp;rsquo;t land evenly. For some people, a hollow answer isn&amp;rsquo;t an inconvenience - it&amp;rsquo;s confirmation of something they were already afraid was true. The impact isn&amp;rsquo;t equally distributed. Neither is the responsibility to fix it.&lt;/p&gt;
</content>
    </item>
    
  </channel>
</rss>
