<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Dhrubajyoti’s Newsletter]]></title><description><![CDATA[My views on science & tech.]]></description><link>https://codewdhruv.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!mI5g!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F442002ac-b03e-43d5-8e43-0b35eb1e7d41_400x400.png</url><title>Dhrubajyoti’s Newsletter</title><link>https://codewdhruv.substack.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 09 Apr 2026 07:03:20 GMT</lastBuildDate><atom:link href="https://codewdhruv.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dhrubajyoti Chakraborty]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[codewdhruv@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[codewdhruv@substack.com]]></itunes:email><itunes:name><![CDATA[Dhrubajyoti Chakraborty]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dhrubajyoti Chakraborty]]></itunes:author><googleplay:owner><![CDATA[codewdhruv@substack.com]]></googleplay:owner><googleplay:email><![CDATA[codewdhruv@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dhrubajyoti Chakraborty]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Why your LLM stack needs a semantic layer]]></title><description><![CDATA[Build the cache. Save the compute. 
Make your agents fast.]]></description><link>https://codewdhruv.substack.com/p/why-your-llm-stack-needs-a-semantic</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/why-your-llm-stack-needs-a-semantic</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Sat, 03 Jan 2026 23:34:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/24365240-e61e-4c5f-a481-9f2124d2ce6d_2816x1536.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>If you are running an AI product in production right now, you are likely bleeding money. Not just in the obvious way (API bills from OpenAI or Anthropic), but also in the subtle, compounding way that kills retention: latency.</p><p>We have normalized waiting 3k to 5k milliseconds for a computer to respond. In the world of distributed systems, that is an eternity. If your Postgres query took 4 seconds, you would fire your DBA. But because it&#8217;s &#8220;Gen AI,&#8221; we kinda accept it.</p><p>Well, maybe we shouldn&#8217;t.</p><p>As we move from &#8220;chatbots&#8221; to autonomous agents that execute multi-step reasoning loops, latency isn&#8217;t just a UX annoyance; it becomes a compounding tax on performance. If an agent needs five &#8220;thoughts&#8221; to solve a problem, and each thought takes 2 seconds, your user is now waiting 10 seconds for a basic output. That is non-viable software.</p><p>The right solution might not be GPT-6/7 or Claude AGI. It might be rethinking the internal architecture. 
It&#8217;s about realizing that 40-60% of the queries hitting the expensive inference endpoint are semantically identical to queries you answered yesterday.</p><p>This is <strong>Semantic Caching</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7AT7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7AT7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!7AT7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!7AT7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!7AT7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7AT7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4809816,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/183306685?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7AT7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!7AT7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!7AT7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!7AT7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F22d71963-14ae-4777-8740-1ac1362db0c9_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3>The failure of the exact match (hash-based caching)</h3><p>In traditional web development, caching is trivial. You take a request, say <code>GET /product/123</code>, hash it, and store the payload in Redis. If the hash matches, you serve the data. O(1) complexity. Fast.</p><p>This breaks down in NLP because human language is high-entropy.</p><ul><li><p>User A: &#8220;How do I change my password?&#8221;</p></li><li><p>User B: &#8220;I need to update my login credentials.&#8221;</p></li><li><p>User C: &#8220;Password reset steps.&#8221;</p></li></ul><p>To a standard SHA-256 hash, these are three distinct keys. 
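</p><p>To make this concrete, here is a minimal, illustrative sketch (plain Python, standard library only — not a real cache implementation) showing that the three phrasings above produce three unrelated cache keys:</p>

```python
import hashlib

queries = [
    "How do I change my password?",
    "I need to update my login credentials.",
    "Password reset steps.",
]

# A lexical cache keys each request by an exact hash of its text.
keys = [hashlib.sha256(q.encode("utf-8")).hexdigest() for q in queries]

# Three semantically identical questions yield three distinct keys,
# so a hash-based cache misses on everything but exact repeats.
assert len(set(keys)) == 3
```

<p>Swap any word, add a typo, or change punctuation and the digest changes completely; that brittleness is the whole problem.</p><p>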
You will pay for three separate inference calls to generate the exact same semantic response.</p><p>To solve this, we move from <strong>lexical matching</strong> (exact text) to <strong>semantic matching</strong> (intent).</p><h3>The architecture: vectors and similarity</h3><p>Semantic caching relies on <strong>vector embeddings</strong>. Before your user&#8217;s query ever touches an LLM, it should pass through an embedding model (like OpenAI&#8217;s <code>text-embedding-3-small</code> or a local HuggingFace model like <code>all-MiniLM-L6-v2</code>).</p><p>This model compresses the &#8220;meaning&#8221; of the input text into a fixed-size vector (a list of floats).</p><p>Instead of doing a key-value lookup, we perform a <strong>vector search</strong> (specifically, Approximate Nearest Neighbor or ANN) against our cache database. We represent the new query as a vector in multi-dimensional space and ask the database: <em>&#8220;What other vectors are located within a specific radius of this one?&#8221;</em></p><p>If we find a vector that is &#8220;close enough,&#8221; we don&#8217;t generate a new answer. We return the stored completion associated with that neighbor.</p><h3>The engineering challenge: Precision vs. Recall</h3><p>This is where it gets technical and where most implementations fail. It is not binary. It is probabilistic.</p><p>You define &#8220;closeness&#8221; using a distance metric, usually <strong>Cosine Similarity</strong> (measuring the angle between vectors) or <strong>Euclidean Distance</strong>.</p><p>You must define a <strong>Similarity Threshold</strong>. This is a hyperparameter you have to tune.</p><ul><li><p><strong>Threshold too loose (e.g. 0.7):</strong> You get high recall (lots of cache hits), but low precision. The user asks &#8220;How do I delete my account?&#8221; and the cache matches it with &#8220;How do I edit my account?&#8221; because they are semantically close. The user gets the wrong instructions. 
This is a catastrophic failure.</p></li><li><p><strong>Threshold too tight (e.g. 0.99):</strong> You get high precision, but the cache rarely triggers. You are back to paying for redundant compute.</p></li></ul><h3>The 2-pass system</h3><p>To run this in production without serving garbage answers, you cannot rely solely on a single cosine similarity check. You need a ranking logic.</p><p><strong>1. The Retrieval Step (Fast):</strong></p><p>Use a bi-encoder (standard embedding model) to retrieve the top 5 candidates from your Vector DB based on a generous threshold. This is extremely fast because bi-encoders map sentences to independent vectors.</p><p><strong>2. The Re-ranking Step (Accurate):</strong></p><p>Pass the user&#8217;s query and the potential cached question into a Cross-Encoder.</p><p>A cross-encoder processes both inputs simultaneously. It pays attention to the interaction between the two sentences. It outputs a score from 0 to 1 indicating how much sentence A entails sentence B.</p><ul><li><p><em>Bi-encoder:</em> &#8220;Apple&#8221; and &#8220;Orange&#8221; are close (both fruit).</p></li><li><p><em>Cross-encoder:</em> &#8220;Can I eat an apple?&#8221; vs &#8220;Can I eat an orange?&#8221; -&gt; High similarity. 
&#8220;I hate apples&#8221; vs &#8220;I love apples&#8221; -&gt; Low semantic alignment for response reuse.</p></li></ul><p>This second step adds latency (maybe 50ms) but it increases your <strong>Cache Precision</strong> dramatically.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WJ58!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WJ58!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!WJ58!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!WJ58!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!WJ58!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WJ58!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5346538,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/183306685?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WJ58!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!WJ58!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!WJ58!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!WJ58!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc30d7e60-03dc-4570-802e-eb18cacab678_2816x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h3>LLM validation</h3><p>If you are operating in a high-risk vertical (fintech, health), even a cross-encoder might feel too risky.</p><p>You can implement an <strong>LLM Validator</strong>. When you get a cache hit, before serving it to the user, you send a prompt to a smaller, faster model (like a distilled Llama 3 or GPT-3.5):</p><blockquote><p><em>&#8220;User asked: [New Query]. Cached question was: [Old Query]. Does the answer to the cached question satisfy the new query? Reply YES or NO.&#8221;</em></p></blockquote><p>This is still faster and cheaper than generating a full reasoning response from a frontier model like Gemini 3, but it provides a semantic safety rail.</p><h3>Why this matters for agents</h3><p>We are entering the era of Agentic workflows. 
Agents are recursive.</p><p>An agent tasked with &#8220;Analyze this competitor&#8217;s pricing&#8221; might break that down into 10 sub-steps. Three of those steps might be generic queries it has performed a hundred times before for other clients.</p><p>Without a semantic cache, the agent re-derives first principles every single time. It is slow, and it is expensive.</p><p>With a semantic cache, the agent acts as if it has <strong>Long-Term Memory</strong>. It &#8220;remembers&#8221; it solved a similar sub-problem 10 minutes ago, or 10 days ago. It retrieves the result instantly and moves to the next step.</p><p>This is how you build agents that feel &#8220;smart&#8221; and responsive. You minimize the cognitive load on the LLM by offloading memory to the cache.</p><h3>The metrics that matter</h3><p>Do not just track &#8220;Hit Rate.&#8221; That is a vanity metric. You need to instrument:</p><ol><li><p><strong>Latency Savings:</strong> (Avg Inference Time) - (Avg Cache Retrieval Time).</p></li><li><p><strong>False Positive Rate:</strong> How often did a user reject or downvote a cached response?</p></li><li><p><strong>Cache Recall:</strong> Of all the queries that <em>could</em> have been cached, how many did we actually catch?</p></li></ol><h3>Closing thoughts</h3><p>Stepping back a bit, it&#8217;s helpful to look at this from first principles.</p><p>The user doesn&#8217;t care about your parameter count. They don&#8217;t care if you&#8217;re using a mixture-of-experts model or a quantized 7B parameter local model. They care about the interaction loop. Latency is friction, and friction kills the magic of Software 3.0.</p><p>Right now running a raw LLM application is a bit like having a human re-derive the laws of physics every time they want to catch a ball. It&#8217;s a massive waste of cognitive compute. 
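</p><p>Pulling the pieces of this post together, here is a hedged, in-memory sketch of the lookup loop. The <code>embed_fn</code> argument, the brute-force scan, and the 0.9 threshold are all stand-ins: in production you would plug in a real embedding model, an ANN index in a vector database, a tuned threshold, and the cross-encoder re-rank described earlier.</p>

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Toy semantic cache. A real system would use a vector DB with ANN search."""

    def __init__(self, embed_fn, threshold=0.9):
        self.embed_fn = embed_fn    # placeholder: maps text -> vector
        self.threshold = threshold  # too loose hurts precision, too tight hurts hit rate
        self.entries = []           # list of (vector, cached_response)

    def lookup(self, query):
        qv = self.embed_fn(query)
        best_score, best_response = 0.0, None
        for vec, response in self.entries:  # brute-force scan stands in for ANN
            score = cosine(qv, vec)
            if score > best_score:
                best_score, best_response = score, response
        # Cache hit: skip inference entirely. Miss: caller falls through to the LLM.
        return best_response if best_score >= self.threshold else None

    def store(self, query, response):
        self.entries.append((self.embed_fn(query), response))
```

<p>On a miss, the caller invokes the model and then <code>store()</code>s the fresh completion, so every novel question paid for once becomes a free answer for its future paraphrases.</p><p>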
Semantic caching effectively gives your AI a &#8220;System 1&#8221; mode of thinking that is fast, instinctive, and cheap, while reserving the expensive &#8220;System 2&#8221; (the actual LLM inference) only for novel, complex problems that actually require reasoning.</p><p>We are moving away from the era where &#8220;it works&#8221; is the benchmark.</p><p>We are moving into the optimization phase. We are taking O(n) compute costs, where cost scales linearly with usage, and collapsing them toward O(1) efficiency for redundant tasks.</p><p>If you want to build agents that can actually run in the real world without burning a hole in your GPU budget or your user&#8217;s patience, you really need this layer.</p><p>Build the cache. Save the compute. Make your agents fast.</p><div><hr></div><p><em>&#8212; Dhrubajyoti<br>Product @ Harness</em></p>]]></content:encoded></item><item><title><![CDATA[AI prototyping as a PM]]></title><description><![CDATA[Why I build the prototypes first, before I write the specs]]></description><link>https://codewdhruv.substack.com/p/ai-prototyping-as-a-pm</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/ai-prototyping-as-a-pm</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Fri, 05 Dec 2025 01:39:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HslM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There is a specific, uncomfortable phase in every product cycle. I call it the &#8220;Interpretation Gap.&#8221;</p><p>It happens right at the beginning. You have an insight, you have data, and you have a vision. You write a PRD, create some rough wireframes, and present it to your team. Everyone nods &amp; says they understand.</p><p>But they don&#8217;t. 
Not precisely.</p><p>The designer imagines a specific interaction pattern. The engineering lead mentally starts architecting a database schema based on an assumption you didn&#8217;t mean to imply. Stakeholders start to picture a feature set twice as large as the one you described.</p><p>Traditionally, we tried to fix this with more documentation. We wrote longer specs, held more alignment meetings, and created more detailed tickets. We tried to legislate clarity into existence.</p><p>Over the last few months, I&#8217;ve stopped doing that. I have fundamentally changed how I start products. Instead of documents, I now start with AI-generated prototypes.</p><p><strong>V0</strong> for interfaces and <strong>Claude Code</strong> for logic. I have moved from writing about products to building working prototypes before I set up the kickoff meeting.</p><p>This isn&#8217;t about doing the engineer&#8217;s job. It&#8217;s about respecting their time by bringing them a validated reality rather than a vague hypothesis.</p><p>In this post I try to capture the playbook I used at Harness to adopt an AI-first approach to product design &amp; delivery.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HslM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HslM!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!HslM!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HslM!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HslM!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HslM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg" width="697" height="500" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:500,&quot;width&quot;:697,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:79367,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/163716235?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!HslM!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg 424w, https://substackcdn.com/image/fetch/$s_!HslM!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg 848w, https://substackcdn.com/image/fetch/$s_!HslM!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!HslM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F409b16b8-f0bd-43a1-9452-e153b20722a3_697x500.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" 
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2><strong>The problem with &#8220;Imagined&#8221; software</strong></h2><p>The traditional kickoff meeting usually centers on a &#8220;blank page&#8221; problem &amp; the &#8220;Interpretation Gap&#8221; I mentioned earlier.</p><p>These disconnects exist, but we usually don&#8217;t discover them until weeks later, during the first design review or, worse, the first staging build. By then, time has been wasted.</p><p>The solution is to stop describing the product and start showing it.</p><h2><strong>Start with V0/Bolt/Lovable</strong></h2><p>Describing a user interface in text is incredibly inefficient. This is where generative UI services like <strong>V0</strong> help. V0 allows me to bypass the &#8220;lo-fi wireframe&#8221; stage entirely. Instead of drawing boxes on Excalidraw, I prompt the system with my intent.</p><p>This is especially useful for PMs who do not have prior coding experience, or who are not technical enough to read code or build code-driven prototypes on their own.</p><p><strong>Why this matters:</strong> It creates an immediate &#8220;Straw Man.&#8221; It is rarely perfect. The padding might be wrong, the color contrast might be off, or the information hierarchy might be strange. 
But it is <strong>tangible</strong>.</p><p>When I put this screen in front of a team, the feedback changes from abstract to concrete:</p><ul><li><p><em>&#8220;That chart takes up too much vertical space.&#8221;</em></p></li><li><p><em>&#8220;We are missing a user profile entry in the sidebar.&#8221;</em></p></li><li><p><em>&#8220;The &#8216;Done&#8217; column needs a collapse state.&#8221;</em></p></li></ul><p>We no longer debate philosophy; we critique the product.</p><h2><strong>Claude Code: The logic layer</strong></h2><p>If V0 provides the skin, <strong>Claude Code</strong> provides the nervous system.</p><p>This is the phase where most PMs would probably hesitate, but it is also where the highest value exists. A V0 generation looks good, but in most scenarios it does not capture functional logic. To truly validate a product idea, you really need to test the behavior.</p><p>Claude Code acts as an agentic interface that runs in your terminal. It allows you to manipulate the code without needing to know it line by line.</p><p><strong>The workflow:</strong> Once I have the V0 design, I usually pull it into a local environment and use Claude Code to customize the logic.</p><ul><li><p><em>&#8220;Make the &#8216;In Progress&#8217; column sortable by date.&#8221;</em></p></li><li><p><em>&#8220;What happens if the user tries to drag a &#8216;Done&#8217; item back to &#8216;Draft&#8217;? Add a confirmation modal.&#8221;</em></p></li><li><p><em>&#8220;Simulate a data delay so I can see what the loading state looks like.&#8221;</em></p></li></ul><p>This allows me to answer functional questions that usually stall development later on. I do not assume how the sort function works; I use it.</p><h2><strong>A new PM skillset</strong></h2><p>This shift necessitates a change in how we view the PM skillset.</p><p>For years, we have debated whether PMs need to be technical. 
In this AI-first model, the answer is yes, with a caveat. You do not need to be an engineer. You do not need to know how to optimize database shards or manage memory.</p><p>But you must be able to <strong>read and reason about code.</strong></p><p>You need to look at the artifact Claude produced and understand:</p><ul><li><p>How are we handling edge cases?</p></li><li><p>What is the logic flow for an error state?</p></li></ul><p>If you can read code, these systems give you significant leverage.</p><p>If you cannot, you will struggle to direct the AI effectively. The modern PM needs to be &#8220;code-conversational,&#8221; capable of inspecting the mechanics of the product they own.</p><h2><strong>My playbook</strong></h2><h3><strong>1) Choose a baseline (production screenshot or Figma frame)</strong></h3><p>Start with something real: a production screenshot or a well-structured Figma frame gives the AI an anchor. This helps with component recognition and gives you a clear &#8220;source of truth&#8221; to generalize from.</p><p><strong>Why:</strong> The model maps components more accurately from a high-fidelity reference than from a vague description.</p><h3><strong>2) Component-first extraction (prompt the model)</strong></h3><p>Tell the tool to analyze the image/frame for repeatable components and build a component library first, rather than just generating a single page.</p><p><strong>Key instruction:</strong> &#8220;Identify component candidates, output a component list with props, and generate a minimal component library before composing the page.&#8221;</p><p><strong>Why:</strong> A component-first approach yields artifacts that are reusable and maintainable, which makes later engineering adoption far easier.</p><h3><strong>3) Generate an initial prototype composition</strong></h3><p>Have the model recompose the screen from the generated component library. 
This gives you a working structure and identifies mismatches quickly.</p><h3><strong>4) Iterate using two parallel loops</strong></h3><ul><li><p><strong>Prompt-based iteration:</strong> Change layout, swap icons, or alter labels using prompts. Useful for quick experiments (e.g. &#8220;Make filters vertical on desktop, horizontal on mobile&#8221;).</p></li><li><p><strong>Code-based iteration (Optional):</strong> Open the generated files and move DOM nodes, adjust flexbox and Tailwind classes, add new components for missing bits. This is where you get from &#8220;close&#8221; to &#8220;useful&#8221;.</p></li></ul><p><strong>Why both loops:</strong> Prompts are fast for large changes while code edits ensure structural correctness and predictable behavior. </p><h3><strong>5) Try sketches selectively (and simplify)</strong></h3><p>You can feed sketches into the model, but keep them simple and zone-focused. Avoid overloading a sketch with micro-details as the model interprets simple zones (header, left-nav, primary content, right pane) far more reliably.</p><h3><strong>6) Map well-structured Figma (using MCP server) to components</strong></h3><p>If you have an organized Figma file (good layer names, components, variables), point the model at that frame and instruct it to implement using your prototype components.</p><p><strong>Tip:</strong> The more consistent your Figma naming / tokens, the fewer surprises you&#8217;ll see in the generated code.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!W-R6!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe9ba436e-068b-40f3-b02e-35b4fae9cce1_948x910.png" width="948" height="910" alt=""></figure></div><h3><strong>7) Add mocked data &amp; states</strong></h3><p>Wire up fake fetch calls, loading states, and empty/error states. That is where product behaviors reveal themselves and you can test edge cases with real interactions.</p><h3><strong>8) Share live preview early and iterate with stakeholders</strong></h3><p>Send a sandbox link to design, engineering, and product marketing. Ask them to try common flows and give concrete feedback and they&#8217;ll return actionable items rather than vague &#8220;looks fine.&#8221;</p><h3><strong>9) Validate with customers (if appropriate)</strong></h3><p>Use the prototype in discovery calls or usability sessions. Watch where people hesitate, what labels confuse them, and which controls they ignore.</p><h2><strong>Why this is better for the business</strong></h2><p>At the company level, moving from documents to working prototypes reduces waste.</p><p><strong>Faster Alignment</strong> A prototype forces decisions. You cannot hide behind vague language in a coded interface. The button either works or it doesn&#8217;t. This forces stakeholders to align on specific functionality days or weeks earlier than usual.</p><p><strong>Better Customer Conversations</strong> Showing a customer a PDF and asking &#8220;Would you use this?&#8221; yields polite, unreliable data. Letting a customer click through a live prototype yields actual behavioral data. You learn where they get stuck and what they ignore.</p><p><strong>Risk Mitigation</strong> The biggest risk in product development is building the wrong thing. 
By prototyping the logic early, we expose the complexity before we commit expensive engineering resources to the final build.</p><h3><strong>Final thoughts</strong></h3><p>We are moving away from an era where PMs were judged by the length and clarity of their documents. We are entering an era where PMs are judged by the clarity of their prototypes.</p><p>AI is not replacing the product manager. It is removing the administrative friction that sits between having an idea and seeing that idea function.</p><p>If you&#8217;re a PM on enterprise SaaS and are even moderately comfortable reading or editing code, this will become table stakes. You don&#8217;t need to be an engineer. You need curiosity and a willingness to express product ideas in code even when the code is rough.</p><p>Next time you&#8217;re stuck writing a PRD to explain a UX flow or waiting a week for mocks, try prompting instead. You might be surprised how quickly your idea becomes something real.</p><div><hr></div><p><em>&#8212; Dhrubajyoti<br>Product @ Harness.io</em></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[A more honest way to think about developer relations]]></title><description><![CDATA[My thoughts on where DevRel&#8217;s effort goes today and why so much of it misses the work that actually helps developers succeed]]></description><link>https://codewdhruv.substack.com/p/a-more-honest-way-to-think-about</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/a-more-honest-way-to-think-about</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Fri, 05 Dec 2025 00:24:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!tZVK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa764635b-7fe5-4e47-9635-bb99846779a2_3008x2000.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Every few months, I try to sit with teams who are trying to understand why their 
DevRel efforts feel extremely busy yet fail to influence the product in any measurable way. These are not disengaged or traditional teams. They care about their work, they care about developers, and they consistently put in the hours. The challenge isn&#8217;t effort. It&#8217;s clarity about what actually moves the business forward.</p><p>From a distance, DevRel can look like a sequence of well-intentioned activities. Travel. Conference talks. A steady stream of blog posts. Engaging with the community. Updating documentation. It&#8217;s easy to assume that a full calendar equals a healthy program. That assumption is comforting, but it is also wrong.</p><p>If you strip the work down to its purpose, the definition becomes straightforward. DevRel should make it easier for a developer to go from curious to productive. That is the job. Everything else is secondary. And when you evaluate DevRel through this mental model, much of the work that looks impressive at first glance turns out to have little long-term value.</p><p>The observations below are patterns I see repeatedly. 
Each one reflects the difference between activity and impact, and what DevRel should be doing instead.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!tZVK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa764635b-7fe5-4e47-9635-bb99846779a2_3008x2000.png" width="1456" height="968" alt=""></figure></div><h3><strong>It should treat conferences as experiments, not annual commitments</strong></h3><p>Teams often feel obligated to attend a long list of conferences each year. The energy of these events can create the illusion of progress. Booths and talks give a sense of presence. But when you track outcomes, the yield is usually low. Very few conversations turn into adoption. Most talks are forgotten by the time someone leaves the venue.</p><p>A better approach is simple. DevRel should select only the events where the audience and product truly overlap, run activation experiments, and return only if the event produces qualified usage. The goal isn&#8217;t visibility. The goal is measurable impact. 
Everything else is optional.</p><h3><strong>It should write to remove friction, not to fill a content calendar</strong></h3><p>Content is an easy way to signal productivity. Publishing frequently looks like discipline and creates something tangible to share internally. The problem is that much of it never reaches developers at the moment they actually need it.</p><p>Writing should begin with a real problem a developer is facing. The intent should be to remove confusion, not to produce volume. High quality, targeted content almost always outperforms a high-frequency schedule. What matters is whether the writing helps a developer succeed faster.</p><h3><strong>It should build communities that function without constant intervention</strong></h3><p>Community involvement is important, but it becomes counterproductive when the DevRel team becomes the first responder for every question. This unintentionally trains the community to wait for official answers rather than helping each other.</p><p>A strong community should not depend on the DevRel team for constant engagement. The team&#8217;s job is to create the conditions for peer-to-peer support, highlight contributors, and step in only when a conversation stalls. This model scales. Constant hand-holding does not.</p><h3><strong>It should show product teams where the journey breaks</strong></h3><p>Many DevRel teams see themselves as conduits for feature requests. The intent is good, but the signal is weak. Developers frequently request features they do not ultimately use. Teams get caught in loops of opinion-based conversations that rarely lead to the right outcomes.</p><p>DevRel should focus on where the journey actually breaks. Where developers abandon onboarding. Where activation collapses. Where documentation fails. These points of friction carry far more weight than feature requests. 
When DevRel brings clarity instead of noise, product teams can make faster and more confident decisions.</p><h3><strong>It should track the numbers that reveal reality</strong></h3><p>It is easy to report vanity metrics: traffic, impressions, follower counts, event attendance. Internally, these numbers look positive and make teams feel productive. The issue is simple: they do not predict adoption or revenue.</p><p>DevRel should measure activation, time to first value, retention, and support deflection. These numbers are harder to move but directly tied to product success. They also expose uncomfortable truths, which is exactly why they matter.</p><h3><strong>It should avoid adding new programs until foundational issues are fixed</strong></h3><p>When DevRel feels stuck, teams often reach for new initiatives: a hackathon, a podcast, an ambassador program. These ideas can make a team feel innovative, but they rarely solve the real problem when the developer journey itself is still broken.</p><p>The more disciplined approach is to identify the single largest friction point and fix it fully before doing anything else. This kind of work is not glamorous, but it compounds. You earn more from removing one major blocker than from launching five new programs.</p><h3><strong>It should treat documentation as a critical product interface</strong></h3><p>Documentation is often maintained as an operational task. But updates alone do not translate into better adoption. Even well organized docs can fail if they ignore how developers actually behave.</p><p>Documentation should be treated as one of the core product surfaces. It should be structured around real tasks, informed by data, and revised based on where developers get stuck. This is slow, methodical work, but it pays off more consistently than almost anything else DevRel can do.</p><h3><strong>Why this matters</strong></h3><p>If you simplify DevRel to its essence, the work is not complicated. 
The team exists to improve the path developers take to adopt the product. It should reduce friction, increase clarity, improve activation, and support long term success.</p><p>When DevRel keeps this objective at the center, the work stops being a performance and becomes an operating function. The team focuses on outcomes, not optics. They make decisions rooted in data, not habit. And the entire company benefits from the clarity.</p><h3><strong>A question every team should ask</strong></h3><p>When the program feels disorganized or overloaded, one question cuts through noise:</p><blockquote><p>If we stopped doing this for a month, would developers reach value slower?</p></blockquote><p>If the answer is no, the work is nonessential.<br>If the answer is yes, the work is DevRel.</p><p>Most teams discover they have been doing far more nonessential work than they realized. Letting go of that work is uncomfortable, but it creates space for real impact.</p><h3><strong>Closing thoughts</strong></h3><p>DevRel should feel like a steady, durable function that strengthens the foundation developers rely on. It should not rely on charisma or constant activity. It should be intentional, data-informed, and focused on long term value.</p><p>When DevRel operates this way, it becomes part of the company&#8217;s infrastructure. You may not see it every day, but you feel the difference in how consistently developers succeed. 
That is what true DevRel should be doing.</p><div><hr></div><p><em>&#8212; Dhrubajyoti<br>Product @ Harness.io</em></p>]]></content:encoded></item><item><title><![CDATA[Gemini 3, reasoning, and the evolution of visual intelligence]]></title><description><![CDATA[My learnings from stress testing the Gemini 3 model]]></description><link>https://codewdhruv.substack.com/p/gemini-3-reasoning-and-the-evolution</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/gemini-3-reasoning-and-the-evolution</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Wed, 26 Nov 2025 15:30:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!FnwM!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f68267-7bb0-4795-bd2d-1b44c9f9286f_1408x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I have spent the past few days running Gemini 3 through a wide set of prompts I&#8217;ve collected while running discovery on reasoning-aware generative models. The model is far better at image generation than I expected, not just in terms of how the images look, but also how well they&#8217;re structured and logically put together.</p><p>It becomes really interesting to understand why it performs this well. We obviously can&#8217;t see what&#8217;s happening inside the model, but based on how it responds, a few consistent patterns start to make sense.</p><h3><strong>Reasoning before rendering</strong></h3><p>Gemini 3 consistently produces images that reflect a pre-generation reasoning step. You see this most clearly in prompts that require:</p><ul><li><p>Concept composition (e.g. an isometric robotic arm with three distinct coordinate frames)</p></li><li><p>Spatial alignment (e.g. maintaining RGB&#8594;XYZ axis conventions across different joints)</p></li><li><p>Multi-step geometric constraints (e.g. 
merging or overlaying shapes without distorting their relationships)</p></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!K2GY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff59ac191-1b10-4e7b-a418-e16c4216c09f_1742x1622.png" width="1456" height="1356" alt=""></figure></div><p>Where typical diffusion models often blend objects together or lose positional accuracy, Gemini 3 preserves:</p><ul><li><p>Clean separation between components</p></li><li><p>Consistent orientation across the entire scene</p></li><li><p>Correct relative positioning (base &#8594; joint &#8594; tool frame)</p></li><li><p>Meaningful adherence to geometric conventions</p></li></ul><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!FnwM!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f68267-7bb0-4795-bd2d-1b44c9f9286f_1408x768.jpeg" width="1408" height="768" alt=""></figure></div><p>Based on the outputs, the model behaves as if it creates a <strong>coherent layout</strong> before adding detail, because the global structure remains stable across the render. This doesn&#8217;t imply any specific internal mechanism, but the final images reliably reflect the logic of the prompt rather than drifting into visual noise.</p><p>The pattern resembles what prior research like HuggingFace&#8217;s <strong><a href="https://diffusion-cot.github.io/reflection2perfection/">ReflectionFlow</a></strong> observed: introducing reasoning-like steps improves compositional integrity in complex multimodal tasks. Seeing this pattern in a production-scale model is encouraging.</p><h3><strong>The STEM visual training signal matters</strong></h3><p>A clear improvement shows up in tasks involving diagrams, plots, math figures, and other structured visuals. Most generators treat these formats as just &#8220;images.&#8221; Gemini 3 doesn&#8217;t.</p><p>I tested it on:</p><ul><li><p>multi-panel technical infographics</p></li><li><p>annotated geometry constructions</p></li><li><p>multivariate scatter/line plots</p></li><li><p>nested flowcharts</p></li><li><p>software/ML architecture diagrams</p></li><li><p>symbolic math diagrams</p></li></ul><p>Across all of these, it keeps axes straight, labels aligned, shapes consistent, and region boundaries intact. 
It behaves like a model that has actually <strong>seen</strong> diverse STEM diagrams during training, not like one improvising from natural-image priors.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!AcX2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!AcX2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!AcX2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!AcX2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!AcX2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!AcX2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg" width="1408" height="768"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1408,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:512957,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/179817811?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!AcX2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg 424w, https://substackcdn.com/image/fetch/$s_!AcX2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!AcX2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!AcX2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F480b5d2f-7d59-4158-bd76-83730f2f23d7_1408x768.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>This lines up with what <strong><a href="https://structvisuals.github.io/">StructVisuals</a></strong> found:</p><ul><li><p>Models break when they don&#8217;t learn structural priors for STEM diagrams</p></li><li><p>Small geometric distortions compound across edits</p></li><li><p>Technical imagery requires different supervision than natural imagery</p></li></ul><p>Gemini 3 narrows that gap.
It understands, for example, that a scatter plot shouldn&#8217;t collapse into a heatmap, that component labels must stay anchored, and that flowchart lines can&#8217;t wander or cross randomly.</p><p>None of this looks like diffusion &#8220;luck.&#8221; It looks like exposure to the right data and objectives.</p><h3><strong>Consistency across variants</strong></h3><p>A subtle but important detail: Gemini 3 maintains consistency across a series of related prompts.<br>Examples:</p><ul><li><p>&#8220;Generate the diagram.&#8221;</p></li><li><p>&#8220;Rotate it 90&#176; and keep all labels in place.&#8221;</p></li><li><p>&#8220;Remove the bottom-left component while keeping everything else fixed.&#8221;</p></li></ul><p>Most diffusion models drift after each modification but Gemini 3 is noticeably more stable. The behavior suggests an internal representation that abstracts the diagram rather than storing a single image sample. For technical and research workflows, this is a meaningful step forward.</p><h3><strong>Performance on the &#8220;long-tail&#8221; prompts</strong></h3><p>I&#8217;ve accumulated a library of prompts built specifically to expose failure cases:</p><ul><li><p>Deeply compositional blueprint instructions</p></li><li><p>Hybrid symbolic&#8211;visual queries</p></li><li><p>Transformations involving coordinate geometry</p></li><li><p>Nested constraints that break typical diffusion models</p></li></ul><p>Gemini 3 succeeds on far more of these than expected. Not flawlessly. 
There are still failure modes but the success rate is high enough that it materially shifts prior assumptions about what mainstream models can handle.</p><h3><strong>The overall impression</strong></h3><p>The model feels like the result of three converging efforts:</p><ol><li><p><strong>A solid reasoning substrate</strong> likely trained with some form of planning traces or structured supervision.</p></li><li><p><strong>Extensive multimodal STEM exposure</strong> giving the model robust priors about diagrams, charts and other structured content.</p></li><li><p><strong>Improved architecture + training stability</strong> which reduces drift across variants and preserves compositional fidelity.</p></li></ol><p>None of this is mysterious. This is what happens when the training data actually reflects the difficulty of the tasks we expect the model to perform.</p><p>For people who have been studying structured visual generation, the direction is not surprising but in practice, the improvements are real and substantive.<br>Gemini 3 is far from &#8220;solved reasoning + vision,&#8221; but the trajectory is clearly in the right direction.</p><div><hr></div><p><em>&#8212; Dhrubajyoti<br>Product @ Harness </em></p>]]></content:encoded></item><item><title><![CDATA[Don’t ship AI Agents without this]]></title><description><![CDATA[Security, governance, and all the things you really need to think about before releasing AI Agents into production]]></description><link>https://codewdhruv.substack.com/p/dont-ship-ai-agents-without-this</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/dont-ship-ai-agents-without-this</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Mon, 05 May 2025 16:17:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!E1es!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a kind of magic to watching an AI agent handle a workflow autonomously. A few weeks ago, I watched a demo where an agent took a product spec, created Jira tickets, generated a Slack update, and even created a feature branch with boilerplate in GitHub &#8212; all without a single click from a human. The whole thing took under 45 seconds.</p><p>It was fast. It was slick. It was terrifying.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!E1es!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!E1es!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg 424w, https://substackcdn.com/image/fetch/$s_!E1es!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg 848w, https://substackcdn.com/image/fetch/$s_!E1es!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!E1es!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!E1es!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg" width="1400" height="860" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:860,&quot;width&quot;:1400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:94798,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162808020?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!E1es!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg 424w, https://substackcdn.com/image/fetch/$s_!E1es!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg 848w, https://substackcdn.com/image/fetch/$s_!E1es!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!E1es!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F38b3b14e-e3d6-4f88-b555-c8441667ce24_1400x860.jpeg 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Because the more autonomy we give these systems, the more we expose ourselves <em>and our customers</em> to a completely new class of risk. And let&#8217;s be honest: most of us are still figuring out how to build reliable <em>human-owned</em> systems.
Giving software the freedom to act on its own, without the right safety checks, is like handing your intern the AWS root credentials and saying, &#8220;Try not to break anything.&#8221;</p><p>If you&#8217;re a product manager, engineering leader, or security partner evaluating the use of AI agents, here&#8217;s the uncomfortable truth:</p><blockquote><p><em>You are not deploying a smarter chatbot. You&#8217;re deploying a decision-making system with access to real infrastructure, data, and users.</em></p></blockquote><p>This post is my attempt to help you frame that decision the right way and avoid the mistakes so many teams are on the verge of making.</p><h1>What is an AI Agent really?</h1><p>Before we get into the complexities of governance and security, let&#8217;s start by clarifying something fundamental: <strong>What exactly is an "AI agent"?</strong></p><p>If you&#8217;ve been following the AI space, you&#8217;ve probably heard the term thrown around a lot.
But it&#8217;s worth pausing for a moment to define what we mean when we say &#8220;AI agent.&#8221;</p><p>At its core, an AI agent is a system that exhibits three key capabilities:</p><ul><li><p><strong>Observes</strong>: It can ingest and process external data, anything from documents and APIs to telemetry. It takes in the world around it.</p></li><li><p><strong>Plans</strong>: Based on that data, it then makes decisions. It can use planning frameworks like ReAct or chain-of-thought to figure out what needs to happen next.</p></li><li><p><strong>Acts</strong>: After processing and planning, it actually does something, whether that&#8217;s writing a file, making an API call, or sending a Slack message.</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5KHx!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5KHx!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png 424w, https://substackcdn.com/image/fetch/$s_!5KHx!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png 848w, https://substackcdn.com/image/fetch/$s_!5KHx!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png 1272w,
https://substackcdn.com/image/fetch/$s_!5KHx!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5KHx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png" width="1456" height="652" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:652,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:117170,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162808020?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5KHx!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png 424w, https://substackcdn.com/image/fetch/$s_!5KHx!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png 848w, 
https://substackcdn.com/image/fetch/$s_!5KHx!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png 1272w, https://substackcdn.com/image/fetch/$s_!5KHx!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fec76a164-d653-40ca-866d-af84a2d80eba_2390x1071.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>But that&#8217;s just the basic setup.
AI agents can also do a lot more:</p><ul><li><p><strong>Persist memory</strong>: This means storing information like facts or insights in a vector database, enabling them to "remember" things over time.</p></li><li><p><strong>Use tools and plugins</strong>: Think integrations with platforms like GitHub, Jira, or Notion that extend the agent&#8217;s functionality and scope.</p></li><li><p><strong>Operate in a loop</strong>: The agent can improve its outputs over time, revising and adjusting its actions until it hits the desired goal.</p></li></ul><p>At this stage, we&#8217;re talking about something way more powerful than just summarizing meeting notes. These agents aren&#8217;t just passive observers. They have the potential to change how systems work, manage production environments, engage with customers, or even handle large-scale infrastructure.</p><p>And here&#8217;s the catch: when we&#8217;re working with AI agents, the old product checklist of "Does it pass QA?" is no longer enough. These are systems that aren&#8217;t just running code; they&#8217;re making decisions, executing actions, and learning from the process.
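</p><p>The observe/plan/act loop described above can be sketched in a few lines. This is a hypothetical toy, not any particular framework: the function names are mine, the planner is a stand-in where a real agent would call an LLM, and the only "action" is an audit-log entry:</p>

```python
# Toy observe/plan/act loop. Illustrative only: a real agent would put an
# LLM behind plan() and real side effects (APIs, files, tickets) behind act().

def observe(inbox):
    """Ingest the next piece of external data (a doc, an API payload, telemetry)."""
    return inbox.pop(0) if inbox else None

def plan(observation):
    """Decide the next action; a real planner would use ReAct / chain-of-thought."""
    if observation is None:
        return ("done", None)
    return ("act", f"handle:{observation}")

def act(action, log):
    """Execute the action. Here the only side effect is an audit log entry."""
    log.append(action)

def run_agent(inbox, max_steps=10):
    """Loop until the planner says done, with a hard step budget so the
    agent can never consume unbounded resources."""
    log = []
    for _ in range(max_steps):
        step, action = plan(observe(inbox))
        if step == "done":
            break
        act(action, log)
    return log

print(run_agent(["spec", "ticket"]))  # ['handle:spec', 'handle:ticket']
```

<p>Even in a toy like this, the <code>max_steps</code> budget is the detail that matters: it is the difference between &#8220;operates in a loop&#8221; and &#8220;loops forever.&#8221;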
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xXt8!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xXt8!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png 424w, https://substackcdn.com/image/fetch/$s_!xXt8!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png 848w, https://substackcdn.com/image/fetch/$s_!xXt8!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png 1272w, https://substackcdn.com/image/fetch/$s_!xXt8!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xXt8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png" width="1456" height="579" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:579,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:775116,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162808020?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xXt8!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png 424w, https://substackcdn.com/image/fetch/$s_!xXt8!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png 848w, https://substackcdn.com/image/fetch/$s_!xXt8!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png 1272w, https://substackcdn.com/image/fetch/$s_!xXt8!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F82c31d15-2e8d-433e-ae9b-6eb6d0b94701_6956x2767.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h2>Security - The invisible blast radius</h2><p>One of the hardest parts about AI agent security is that the attack surface is... everything. It&#8217;s not just the obvious stuff like prompt injection or unauthorized access &#8212; though those are very real risks. It&#8217;s that agents, by design, are built to take action. Autonomously. That&#8217;s powerful, but it also means the potential impact of a mistake or an exploit can be massive.</p><p>Let&#8217;s try to walk through some of the big security questions we need to ask when building or integrating AI agents.</p><h3><strong>Who does the Agent think it is? (Identity &amp; Permissions)</strong></h3><p>Most agents run under a service identity or API key.
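</p><p>In practice, that identity defines the agent&#8217;s entire blast radius. As a hypothetical sketch (the agent name, scope strings, and <code>ScopeError</code> are all illustrative, not any vendor&#8217;s API), an explicit per-agent allowlist between the planner and its tools makes that blast radius visible and enforceable:</p>

```python
# Hypothetical scope gate between an agent and its tools. The agent name and
# scope strings are illustrative; map them onto whatever IAM you actually use.

class ScopeError(Exception):
    """Raised when an agent calls a tool outside its declared scopes."""

# One deliberate, reviewable allowlist per agent identity.
AGENT_SCOPES = {
    "release-notes-agent": {"jira:read", "github:read", "slack:post"},
}

def invoke_tool(agent, scope_needed, fn, *args, **kwargs):
    """Refuse any tool call the agent was never granted, loudly."""
    if scope_needed not in AGENT_SCOPES.get(agent, set()):
        raise ScopeError(f"{agent} lacks scope {scope_needed!r}")
    return fn(*args, **kwargs)

# A granted read goes through; an ungranted write fails closed.
print(invoke_tool("release-notes-agent", "jira:read", lambda: "3 open issues"))
try:
    invoke_tool("release-notes-agent", "github:write", lambda: "merged!")
except ScopeError as e:
    print(e)  # release-notes-agent lacks scope 'github:write'
```

<p>The point isn&#8217;t the few lines of code; it&#8217;s that the allowed set is written down, reviewable, and enforced at the call site.</p><p>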
But here&#8217;s the problem: we often plug them in quickly to get things working, without slowing down to ask what <em>permissions</em> that key actually has.</p><p>I&#8217;ve made this mistake. I&#8217;ve seen others make it too:<br>Giving the agent full, org-wide permissions just to move fast.</p><p>It works. Until it doesn&#8217;t.</p><p><strong>What we should be doing instead:</strong></p><ul><li><p><strong>Apply least privilege, always: </strong>The agent should only access what it absolutely needs. Nothing more.</p></li><li><p><strong>Scope access by project, environment, and action: </strong>Read-only access for dev? Full write for a single prod service? Be deliberate.</p></li><li><p><strong>Create dedicated service accounts: </strong>Avoid shared keys or inherited roles. Make it clear who did what, and why.</p></li><li><p><strong>Rotate credentials. Monitor usage: </strong>Keys age, scopes change, and agents evolve. Don&#8217;t let access go stale.</p></li></ul><h3><strong>What can it remember and where? (Data Storage &amp; Leakage)</strong></h3><p>One of the most exciting (and risky) trends in agent design is persistent memory. Whether it&#8217;s storing documents, embeddings, or structured logs, the agent&#8217;s ability to &#8220;remember&#8221; past interactions can be super helpful for personalization and efficiency.</p><p><strong>But here&#8217;s the grey area: </strong></p><p>That memory can become a liability fast.<br>It only takes one misstep for an agent to absorb sensitive info (say a customer's personal information or a secret) and then <em>accidentally</em> resurface it in the wrong context.</p><p><strong>Here&#8217;s what you need to ask yourself:</strong></p><ul><li><p><strong>Is the agent&#8217;s memory actually secure? </strong>That means encrypted both <em>at rest</em> and <em>in transit</em>, with no shortcuts.</p></li><li><p><strong>Do you have retention policies in place? </strong>How long should this memory live?
What happens after that? Define it upfront.</p></li><li><p><strong>Can you inspect or purge what it remembers? </strong>If something sensitive gets stored, can you find it and remove it quickly?</p></li><li><p><strong>Are your vector databases treated like real data stores? </strong>Spoiler: they <em>are</em> real data stores. That means:</p><ul><li><p>Schema discipline</p></li><li><p>Role-based access control</p></li><li><p>Logging and audit trails</p></li><li><p>Avoiding wild-west dumps of unstructured data</p></li></ul></li></ul><h3><strong>What happens when things go wrong? (Isolation &amp; Fail-Safes)</strong></h3><p>Let&#8217;s be honest: AI agents don&#8217;t always fail gracefully.<br>They don&#8217;t throw neat 500 errors or crash with a stack trace. Instead, they might:</p><ul><li><p>Get stuck in an infinite loop</p></li><li><p>Hammer your CI/CD pipeline with nonsense requests</p></li><li><p>Auto-close a critical customer escalation based on a bad assumption</p></li></ul><p>These aren&#8217;t traditional bugs. They&#8217;re breakdowns in reasoning. And with agents, <em>they&#8217;re not rare.</em></p><p><strong>So, how do you protect your system (and your team) when the agent goes off the rails?</strong></p><ul><li><p><strong>Rate-limit everything. </strong>Whether the agent is hitting APIs, running jobs, or triggering workflows, impose strict QPS and concurrency limits. This should be enforced at the gateway or orchestrator level.</p></li><li><p><strong>Use sandbox environments for testing. </strong>Run them in sandboxed namespaces, containers, or virtual projects with scoped credentials. Never give an experimental agent write access to production. Treat it like an intern on day one.</p></li><li><p><strong>Set timeouts and retry limits. </strong>If an agent is looping through a planner or calling external services repeatedly, use circuit breakers and fail-fast logic.
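</p><p><em>As a rough sketch of that fail-fast idea (the helper name <code>call_with_budget</code> and the limits are illustrative, not from any particular framework), a hard budget around an agent&#8217;s tool calls might look like this in Python:</em></p>

```python
import time

class BudgetExceeded(Exception):
    """Raised when a task exhausts its retry or wall-clock budget."""

def call_with_budget(fn, *, max_attempts=3, deadline_s=10.0):
    """Run fn() with hard caps on attempts and elapsed time.

    Illustrative sketch: cap every agent step so no single task can
    loop or retry forever.
    """
    start = time.monotonic()
    last_err = None
    for _attempt in range(max_attempts):
        # Check the wall-clock budget before each attempt.
        if time.monotonic() - start > deadline_s:
            raise BudgetExceeded(f"deadline of {deadline_s}s exceeded")
        try:
            return fn()
        except Exception as err:
            last_err = err  # remember the failure, retry within budget
    raise BudgetExceeded(f"gave up after {max_attempts} attempts") from last_err
```

<p>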
No task should be allowed to consume unbounded resources.</p></li><li><p><strong>Add human-in-the-loop gates. </strong>Block sensitive or irreversible actions unless a person explicitly approves. You can use policy engines (e.g. OPA, AWS IAM conditions) or workflow tools (e.g. Argo, Airflow) to require human sign-off before performing destructive actions.</p></li><li><p><strong>Log everything. </strong>Full traceability of decisions, inputs, outputs, and side effects isn&#8217;t optional. You&#8217;ll need it the moment something misfires.</p></li></ul><h2>Governance: Making the invisible visible</h2><p>Security is about prevention. Governance is about explanation.<br>It&#8217;s what helps you answer the question everyone asks <em>after</em> something goes wrong: <br><br><strong>&#8220;Why did this happen?&#8221;</strong></p><p>And if your team can't answer that, then your system isn't just broken; it's ungovernable.</p><p>This is the part a lot of teams skip in the rush to ship. And they <em>always</em> regret it later, usually when leadership, legal, or a customer is asking for answers you can&#8217;t give.</p><h3><strong>Can you explain the Agent&#8217;s behavior? (Observability)</strong></h3><p>The most basic governance question is: <em>Why did the agent do that?</em></p><p>If you can&#8217;t answer that question, your system is effectively ungovernable.</p><p>You need:</p><ul><li><p><strong>Full logs of every input, output, and tool/action invocation. </strong>That includes prompts, API calls, function executions, and any state changes the agent made. Think audit trail, but for intent and action.</p></li><li><p><strong>Traces of the reasoning path, </strong>especially for multi-step agents using planning or chain-of-thought logic. You should be able to replay how the agent arrived at a decision, even if it was the wrong one.</p></li><li><p><strong>Dashboards that non-engineers can use.
</strong>Ops, support, and compliance teams shouldn&#8217;t need to read JSON logs or trace vector math. Give them timelines, decision trees, and search filters they can actually use.</p></li></ul><p>Put differently: your compliance team should be able to reconstruct an incident <em>without knowing what a vector embedding is</em>.</p><h3><strong>Do you know when to get a human involved? (Autonomy Modes)</strong></h3><p>Autonomy is not binary. I recommend thinking in levels:</p><ul><li><p><strong>Level 0 &#8211; Manual: </strong>Human does everything. The agent observes, maybe logs.</p></li><li><p><strong>Level 1 &#8211; Assistive: </strong>Agent suggests, but a human approves. Think copilots, draft generators, or PR suggestions.</p></li><li><p><strong>Level 2 &#8211; Guarded Autonomy: </strong>Agent can take action, <em>but only within strict boundaries.</em> Maybe it can restart a service or reassign a ticket, but not delete anything or change customer data.</p></li><li><p><strong>Level 3 &#8211; Full Autonomy: </strong>Agent operates independently. It acts, logs, and <em>you review the impact later.</em></p></li></ul><p><strong>Production environments should start at Level 1.</strong> Graduate to Level 2 only after months of observation and incident-free behavior. Treat Level 3 like launching a rocket. If it fails, there&#8217;s no undo. You need confidence, containment, and a whole lot of telemetry.</p><h3><strong>Are you auditing and improving over time?</strong></h3><p>Governance isn&#8217;t a checkbox you tick once and forget.
It&#8217;s a <strong>feedback loop</strong>, and it only works if you commit to running it consistently.</p><p>Here&#8217;s the basic lifecycle:</p><ol><li><p><strong>The agent takes action</strong></p></li><li><p><strong>You log what happened</strong></p></li><li><p><strong>You review and audit those actions</strong></p></li><li><p><strong>You improve the agent or adjust its constraints</strong></p></li></ol><p>Skip that last step, and your agent will drift from &#8220;safe&#8221; to &#8220;unpredictable&#8221; faster than you think.</p><p><strong>What should this look like in practice?</strong></p><ul><li><p><strong>Immutable audit logs, stored externally. </strong>Don&#8217;t rely on the agent&#8217;s internal memory or runtime to preserve critical history. Store logs in a secure, centralized system, something your security team already trusts.</p></li><li><p><strong>Behavioral drift detection. </strong>Over time, agents can change due to upstream LLM updates, data shifts, or configuration changes. You need a way to track deviations in how the agent behaves across similar tasks, before those deviations cause a regression.</p></li><li><p><strong>Scheduled governance reviews. </strong>Block time (monthly or quarterly) to sit down with security, product, and engineering. Review incidents, trends, and any &#8220;weird&#8221; agent behavior.
This is where you decide if it&#8217;s time to level up autonomy or roll it back.</p></li></ul><h2>The pre-prod checklist you should be using</h2><p>If you&#8217;re serious about evaluating an AI agent before production, here&#8217;s a checklist I&#8217;ve used internally:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7lCc!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7lCc!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png 424w, https://substackcdn.com/image/fetch/$s_!7lCc!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png 848w, https://substackcdn.com/image/fetch/$s_!7lCc!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png 1272w, https://substackcdn.com/image/fetch/$s_!7lCc!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!7lCc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png" width="1456" height="911" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:911,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1220962,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162808020?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7lCc!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png 424w, https://substackcdn.com/image/fetch/$s_!7lCc!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png 848w, https://substackcdn.com/image/fetch/$s_!7lCc!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png 1272w, https://substackcdn.com/image/fetch/$s_!7lCc!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6063af0d-33ee-41a0-8536-7414305e2370_4611x2886.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Closing thoughts</h2><p>It&#8217;s easy to move fast especially when the prototype works and the results feel magical. But moving fast doesn&#8217;t protect production systems. <strong>Discipline does.</strong></p><p>Here&#8217;s what&#8217;s worth remembering:</p><p>An AI agent isn&#8217;t just a tool. <strong>It&#8217;s a system.</strong><br>It&#8217;s like hiring a new team member, one with no ethics, no intuition, and no context.<br>It just has access. And it moves <em>fast.</em></p><p>That&#8217;s power. But it needs <strong>boundaries</strong>.</p><p>So instead of asking, <em>&#8220;Can we automate this?&#8221;</em><br>Ask <em>&#8220;Should this run unattended?&#8221;</em> And if yes, <em>why now?</em></p><p>When you do ship, make sure the agent is aligned, observable, and well-contained. 
Then let it run.</p><p>Because responsible autonomy isn&#8217;t about saying &#8220;no.&#8221;<br>It&#8217;s about saying <em>&#8220;not yet&#8221; </em>until you&#8217;re sure what you&#8217;re trusting it to do.</p><div><hr></div><p><em>Curious how other orgs are building secure AI agents? I&#8217;m compiling case studies for a future post. Subscribe or <a href="https://www.linkedin.com/in/dhruvvjyoti/">drop me a note</a> if you want to share your own lessons.</em></p><p><em>&#8212; Dhrubajyoti<br>Product @ Harness | Thinking about agent infrastructure, developer experience, and LLM applications.</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[PLG vs SLG – What should you be using for your next product?]]></title><description><![CDATA[Maybe it doesn&#8217;t matter where you start because you might need both.]]></description><link>https://codewdhruv.substack.com/p/plg-vs-slg-what-should-you-be-using</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/plg-vs-slg-what-should-you-be-using</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Sun, 04 May 2025 09:59:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let me start with the answer nobody likes: <strong>it depends</strong>.</p><p>In my first startup, we tried to sell a productivity tool for remote teams. We were convinced it was a PLG play. &#8220;Just build it, let them try it, and they&#8217;ll love it&#8221;.</p><p>We threw up a landing page, added a free trial, pushed it on Reddit and Product Hunt, and waited. People signed up. Some even used the product. But most never came back. And the ones who did? 
They asked for things we never intended to build&#8212;or they came from markets we never intended to serve.</p><p>It took us 2&#8211;3 months to realize that <strong>growth is not just about distribution, but about fit</strong>. That&#8217;s when I learned this truth the hard way: <strong>PLG isn&#8217;t always the go-to-market strategy, and SLG isn&#8217;t just for enterprises. And choosing between them isn&#8217;t a binary decision </strong>&#8212; it's a strategy grounded in your product's complexity, your customer&#8217;s buying behavior, and the current stage of your product&#8217;s maturity.</p><p>In this post, we&#8217;ll break them down, figure out what actually works, and more importantly &#8212; <strong>when</strong>.</p><h2>What really is PLG and SLG?</h2><h4><strong>Product-Led Growth (PLG)</strong></h4><p>This is where the <em>product</em> does most of the heavy lifting. Users discover your product, sign up, explore its value (often via a free trial or freemium tier), and ideally convert &#8212; <em>without</em> ever needing to talk to sales.<br>Think: Notion, Figma, Calendly. Clean onboarding, intuitive UX, and clear aha moments are critical.
If the product&#8217;s good, it sells itself.</p><p>You invest more in growth loops, activation, self-serve funnels, and usage-based upgrades. It's great for bottoms-up adoption and tends to start with individuals or small teams.</p><h4><strong>Sales-Led Growth (SLG)</strong></h4><p>Here, sales is the driver. Prospects usually enter through marketing or outbound outreach, then a sales team engages &#8212; running demos, navigating procurement, working through objections, negotiating pricing, etc.<br>Think: Salesforce, Workday, Oracle. Complex, high-ticket deals where the buyer isn&#8217;t always the end user. You optimize for lead qualification, stakeholder alignment, and enterprise requirements.</p><p>This motion typically suits products with long sales cycles, high ACVs (average contract value), or heavy customization.</p><h3>Most great companies today use both</h3><p>The best companies build <em>products that sell themselves</em> &#8212; and then add <em>humans to help when it matters</em>.</p><p>Take <strong>Zoom</strong>. You can sign up, host a meeting, and share a link &#8212; all within minutes. Classic PLG: fast, frictionless, and user-first. But if you&#8217;re an IT manager at a 500-person org looking for SSO, detailed analytics, or compliance features? There&#8217;s a sales team ready to walk you through procurement, security reviews, and volume pricing. That&#8217;s SLG.</p><p>Same with <strong>Slack</strong>. It started with viral, bottoms-up adoption &#8212; teams inviting each other in like wildfire. But once it hit a certain scale, the need for admin controls, integrations, and company-wide rollouts made SLG a natural layer.</p><p>Even newer players like <strong>Linear</strong> are catching on. 
Most users find them through word of mouth or social, but when a high-intent lead from a large team comes in, they&#8217;re not afraid to jump on a call and help close the loop.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!unZl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!unZl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png 424w, https://substackcdn.com/image/fetch/$s_!unZl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png 848w, https://substackcdn.com/image/fetch/$s_!unZl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png 1272w, https://substackcdn.com/image/fetch/$s_!unZl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!unZl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png" width="1456" height="851" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:851,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:270129,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162637290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!unZl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png 424w, https://substackcdn.com/image/fetch/$s_!unZl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png 848w, https://substackcdn.com/image/fetch/$s_!unZl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png 1272w, https://substackcdn.com/image/fetch/$s_!unZl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d272df4-ad92-4ab9-afa9-deb067ab179b_2609x1525.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The 0&#8211;1 phase - Everyone starts with Sales, even if they don&#8217;t admit it</h2><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RxQN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RxQN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png 424w, 
https://substackcdn.com/image/fetch/$s_!RxQN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png 848w, https://substackcdn.com/image/fetch/$s_!RxQN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png 1272w, https://substackcdn.com/image/fetch/$s_!RxQN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RxQN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png" width="827" height="460" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:460,&quot;width&quot;:827,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:407498,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162637290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RxQN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png 
424w, https://substackcdn.com/image/fetch/$s_!RxQN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png 848w, https://substackcdn.com/image/fetch/$s_!RxQN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png 1272w, https://substackcdn.com/image/fetch/$s_!RxQN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7fb08cb8-f24c-4ded-bb00-a9384172b958_827x460.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Here&#8217;s the part no one puts on their launch blog:<br><strong>Every early-stage product is sales-led.</strong><br>It just happens to be the founder (or the first PM, or the designer-engineer duo) doing the selling.</p><p>I&#8217;ve been on teams where we sank weeks into building simplified onboarding flows, user-friendly tooltips, and the perfect freemium dashboard.<br>But none of it really mattered &#8212; because we didn&#8217;t yet know <em>what problem we were really solving</em> or <em>who actually cared</em>.</p><p>So we did what works: we dropped the playbook and hit the ground running.</p><ul><li><p>Picked 30 ideal customer profiles (ICPs).</p></li><li><p>Wrote thoughtful cold emails.</p></li><li><p>Got on Zooms.</p></li><li><p>Did white-glove onboarding.</p></li><li><p>Rewrote our pitch every single day based on what landed (and what didn&#8217;t).</p></li></ul><p>Was it scalable? Not even close.<br>Did it work? Absolutely.</p><p>Because in the 0&#8211;1 phase, you don&#8217;t need scale.<br>You need <em>clarity</em>.<br>You need <em>10 users who deeply love what you&#8217;re building</em> &#8212; not 1,000 drive-bys who never come back.</p><p>You&#8217;re not optimizing for conversions yet.<br>You&#8217;re optimizing for <strong>insight</strong> &#8212; and insight comes from conversations, not dashboards.</p><h2>So&#8230; What should you use for <em>your</em> next product?</h2><p>Here&#8217;s how I like to think about it:</p><h3>Are you pre-product-market fit? Do the unscalable things.</h3><p>Seriously &#8212; you don&#8217;t &#8220;choose&#8221; between PLG or SLG at this stage. You do whatever it takes to get your first 10&#8211;20 customers across the line.</p><p>I have known companies where the CEO personally cold-emailed engineering leaders for weeks.<br>No marketing site. No pricing page.<br>Just grit.</p><p>He ran every demo.
Took every call. Did follow-ups. Tweaked the pitch live.<br>It wasn&#8217;t scalable &#8212; but it <em>worked</em>. And more importantly &#8212; it gave the team a priceless signal about who cared, what resonated, and where the real pain was.</p><p>Only <em>after</em> that do you build your self-serve onboarding.<br>Only <em>then</em> does PLG make sense &#8212; because now, you know exactly what the product needs to <em>show</em>, <em>say</em>, and <em>do</em> to convert.</p><blockquote><p>Early on, <strong>SLG disguised as founder-led hustle</strong> is your best friend.<br>It&#8217;s a forcing function to learn fast and tighten your value prop.<br>Once you&#8217;ve nailed your ICP and their pain, <em>then</em> you can scale that insight into a product-led motion that converts.</p><p>Build the engine <em>after</em> you know where you&#8217;re going.</p></blockquote><h3>Is your product easy to adopt, or does it need hand-holding?</h3><p>This is one of the first questions I ask when thinking about go-to-market strategy.</p><p>If you&#8217;re building something <em>horizontal</em>, intuitive, and habit-forming &#8212; PLG can work wonders.</p><p>A friend of mine recently launched an AI writing tool for recruiters. The aha moment is instant: paste a job description, hit a button, and candidates get personalized outreach. People go from trial to paid in minutes. No call. No pitch. Just product value doing the talking.</p><p>Now compare that with something like a cost optimization engine for Kubernetes clusters. You could have the cleanest UI in the world &#8212; but most DevOps teams won&#8217;t <em>feel</em> the value until someone walks them through how it fits into their existing setup.
That&#8217;s where human guidance matters.</p><blockquote><p>&#128161; <strong>Rule of thumb</strong>:<br>If a user can find value in under 10 minutes, PLG might be your wedge.<br>If not, lean into SLG &#8212; or a hybrid motion.</p></blockquote><h3>Can your product <em>talk back</em>? Maybe try product-led sales?</h3><p>PLG works best when your product feeds signals back to your team. For example:</p><ul><li><p>Who activated?</p></li><li><p>Who invited others?</p></li><li><p>Who hit a usage threshold?</p></li><li><p>Who&#8217;s stuck?</p></li></ul><p>This is where things get interesting.</p><p>In a PLS model, the product still drives the initial motion &#8212; trial, freemium, self-serve. But behind the scenes, your team is watching for intent signals:<br>Who activated? Who invited teammates? Who&#8217;s bumping into usage limits?</p><p>And when the right signals show up &#8212; that&#8217;s when your sales team steps in.</p><p>Not with a cold pitch. But with help.</p><blockquote><p>&#8220;Hey, noticed your team&#8217;s close to the user limit &#8212; want to hop on a quick call to see how we can unlock more value?&#8221;</p></blockquote><p>It&#8217;s consultative. It&#8217;s human. And most importantly, it&#8217;s <em>timely</em>.<br>No cold outbound. No spam. Just value &#8594; insight &#8594; human support.</p><h3>When SLG is the only way</h3><p>That said, there are products where PLG alone just isn&#8217;t enough.</p><ul><li><p>Your product is complex or mission-critical.</p></li><li><p>Time-to-value is long.</p></li><li><p>Your buyer isn&#8217;t your user.</p></li><li><p>You sell to enterprises with legal, compliance, and procurement gates.</p></li></ul><p>Take a developer security platform. Your end users might be engineers. But your buyer is probably a CISO or a VP of Eng. They&#8217;re not going to swipe a card after a free trial. They want a demo. A whitepaper. Maybe an RFP. 
They want to trust the <em>people</em> behind the tool as much as the product itself.</p><p>This is where a classic sales-led motion shines. You need outbound, solution engineers, consultative selling &#8212; the full playbook.</p><p>But even here, a <strong>PLG mindset still helps</strong>.<br>Give your sales team a playground. A sandbox. A limited-feature free trial. Let prospects <em>see</em> some value, even if the deal still runs through procurement.</p><h3>Don&#8217;t be religious. Be pragmatic.</h3><p>One thing I&#8217;ve learned building and shipping products:<br><strong>No one cares how you close deals &#8212; just that you do.</strong></p><p>If your product is simple but the market is crowded, PLG helps you stand out.<br>It gets you in the hands of users fast, builds love, and scales reach.</p><p>If your product is complex and high-stakes, SLG gives you the structure to navigate the buying maze &#8212; from procurement to security reviews to IT handoffs.</p><p>And if you&#8217;re like most of us?<br>You&#8217;ll do both.</p><p>A hybrid strategy isn&#8217;t a compromise &#8212; it&#8217;s a progression.<br>You lead with the product. Layer in sales when signals show intent.<br>And evolve your motion as your customer base, product surface, and deal size grow.</p><p><strong>Use what works. Drop what doesn&#8217;t.</strong><br>That&#8217;s not flip-flopping. That&#8217;s just good product sense.</p><h2>Use this cheat sheet to guide your motion</h2><p>I&#8217;ve put together a quick summary of what we&#8217;ve discussed, which can help you decide whether PLG or SLG is the best fit for your model. 
This cheat sheet is a simple way to check where you stand.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HgK5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HgK5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png 424w, https://substackcdn.com/image/fetch/$s_!HgK5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png 848w, https://substackcdn.com/image/fetch/$s_!HgK5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png 1272w, https://substackcdn.com/image/fetch/$s_!HgK5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HgK5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png" width="1476" height="783" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/96996212-d818-4f74-93df-091f0dee715d_1476x783.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:783,&quot;width&quot;:1476,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1629054,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162637290?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc842b0c5-d9ba-4014-b33f-5a0d5eddbedb_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HgK5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png 424w, https://substackcdn.com/image/fetch/$s_!HgK5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png 848w, https://substackcdn.com/image/fetch/$s_!HgK5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png 1272w, https://substackcdn.com/image/fetch/$s_!HgK5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F96996212-d818-4f74-93df-091f0dee715d_1476x783.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Closing thoughts</h2><p>The next time you find yourself debating PLG vs. SLG, flip the question around.</p><p>Ask yourself:</p><ul><li><p><strong>How does my product create value?</strong></p></li><li><p><strong>How do my customers buy?</strong></p></li><li><p><strong>Where does my team have leverage?</strong></p></li></ul><p>Once you have answers to those, try choosing the strategy that best supports your reality. Don&#8217;t get stuck in the analysis paralysis. PLG and SLG do not come with strict guidelines. They&#8217;re frameworks. 
Use both when it makes sense.</p><p>And above all, keep talking to your customers.</p><p>They&#8217;ll always show you the way.</p><div><hr></div><p><em>&#8212; Dhrubajyoti<br>Product @ Harness | Thinking about agent infrastructure, developer experience, and LLM applications.</em></p>]]></content:encoded></item><item><title><![CDATA[How to scale AI adoption across developers in your organization?]]></title><description><![CDATA[Why developers don't trust AI yet (and how you can fix it)]]></description><link>https://codewdhruv.substack.com/p/how-to-scale-ai-adoption-across-developers</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/how-to-scale-ai-adoption-across-developers</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Mon, 28 Apr 2025 19:49:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Mc6B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you&#8217;re at a large enterprise and recently rolled out AI coding assistants like <strong>Cursor</strong> or <strong>Windsurf</strong> to your developer teams&#8230; and now you're looking at the adoption numbers and seeing way less usage than you expected &#8212; you're not alone.</p><p>Rolling out new tools is hard. Rolling out <em>AI</em> tools that fundamentally change how developers work? Even harder.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Dhrubajyoti&#8217;s Newsletter! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Mc6B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Mc6B!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png 424w, https://substackcdn.com/image/fetch/$s_!Mc6B!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png 848w, https://substackcdn.com/image/fetch/$s_!Mc6B!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png 1272w, https://substackcdn.com/image/fetch/$s_!Mc6B!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Mc6B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png" width="1000" 
height="536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:536,&quot;width&quot;:1000,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:456609,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162209857?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F779ecc33-e109-4a33-a4e1-08c5e6054c6f_1000x562.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Mc6B!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png 424w, https://substackcdn.com/image/fetch/$s_!Mc6B!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png 848w, https://substackcdn.com/image/fetch/$s_!Mc6B!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png 1272w, https://substackcdn.com/image/fetch/$s_!Mc6B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff38514cb-b67f-4dfc-bcd1-f0b3e3f535b2_1000x536.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>This week, I came across a research paper that really helped put a lot of the things I&#8217;ve been noticing into perspective: <em><a href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&amp;arnumber=10705659">"Understanding and Designing for Trust in AI-Powered Developer Tooling"</a></em>, the latest release from Google&#8217;s <em>Developer Productivity for Humans</em> series.</p><p>And here&#8217;s the key takeaway that hit home for me:</p><p>Adoption isn&#8217;t just about giving people access &#8212; it&#8217;s about trust, real workflows, team culture, and the day-to-day pain points developers actually <em>feel</em>. If the rollout didn&#8217;t address those, low adoption isn&#8217;t surprising &#8212; it&#8217;s a signal that there&#8217;s more work to do.</p><p>The good news: this is fixable. 
But it needs a different approach than just another training session or email announcement.</p><h2>You gave developers smarter tools &#8212; but that&#8217;s not enough</h2><p>Rolling out tools like Cursor or Windsurf seems like a no-brainer, right?</p><ul><li><p>Code suggestions get faster.</p></li><li><p>Reviews get tighter.</p></li><li><p>Bug fixes get surfaced earlier.</p></li><li><p>Teams should be more productive.</p></li></ul><p><strong>On paper</strong>, it all adds up.<br><strong>In reality?</strong> Developers are hesitant.<br>Usage doesn&#8217;t really take off immediately.<br>Skepticism creeps in.</p><p>And if you ask developers &#8212; really ask them &#8212; it&#8217;s usually not about disliking AI, or being resistant to change.</p><p>It&#8217;s about something much deeper: <strong>trust</strong>.</p><h2>Developers don&#8217;t blindly trust new apps &#8212; and they shouldn&#8217;t</h2><p>Picture a developer in a typical enterprise environment:</p><ul><li><p>They&#8217;re working in highly complex, often poorly documented, hybrid stacks.</p></li><li><p>They&#8217;re handling customer PII and multiple services with dependent systems under tight regulatory oversight.</p></li><li><p>They&#8217;re operating in environments where one wrong code change can trigger not just outages &#8212; but compliance breaches, audits, and career risks.</p></li></ul><p>Now imagine an AI tool that promises &#8220;faster coding&#8221; and &#8220;smarter suggestions.&#8221;</p><p>The natural instinct isn&#8217;t "Awesome!" &#8212; it&#8217;s <strong>caution</strong>:</p><ul><li><p><em>"Does this tool actually understand the internal frameworks?"</em></p></li><li><p><em>"Will it suggest something that breaks compliance or security?"</em></p></li><li><p><em>"Will I be held responsible if an AI suggestion causes an incident?"</em></p></li></ul><p>And honestly? 
They&#8217;re right to ask.</p><p>Because in an enterprise setting, it&#8217;s not just productivity at stake &#8212; it&#8217;s <strong>security, stability, and professional reputation</strong>.</p><p>And if an AI assistant:</p><ul><li><p>Hallucinates a bad API call,</p></li><li><p>Suggests a method that violates SOC2 rules,</p></li><li><p>Or autocompletes code that leaks customer data,</p></li></ul><p>then trust isn&#8217;t just damaged &#8212; <strong>it&#8217;s broken</strong>, possibly for years.</p><h2>Security and access are fundamental frictions</h2><p>And there&#8217;s another layer: <strong>access friction</strong>.</p><p>In enterprises, getting an AI tool rolled out isn&#8217;t just "turning it on."</p><ul><li><p>There are <strong>network restrictions</strong> and <strong>proxy issues</strong>.</p></li><li><p>There are <strong>internal approval workflows</strong> for tool usage.</p></li><li><p>There are <strong>security reviews</strong> that slow or block third-party integrations.</p></li><li><p>There are <strong>IP protection concerns</strong>: "Where does the code I type go? Is it stored?"</p></li></ul><p><strong>If trying the tool feels risky or painful, adoption dies early &#8212; no matter how good the tech is.</strong></p><h2>How to start fixing the trust gap</h2><p>If you&#8217;re facing low AI adoption today, here&#8217;s what you should focus on:</p><h3><strong>Find early champions, not enforcers</strong></h3><p>Start small.<br>Identify developers with influence who are curious about AI &#8212; not just the most senior people, but the ones others trust technically. Don&#8217;t force adoption top-down. <em>Curiosity beats compliance.</em></p><p>Invest in white-glove onboarding for them. 
Help them find real wins, and let them share stories organically.</p><h3><strong>Make AI&#8217;s decision-making visible</strong></h3><p>Developers hate black boxes.<br>Cursor and Windsurf both have features that can surface <em>why</em> a suggestion was made &#8212; amplify those.<br>Push for features that show confidence scores, references, or traceability back to real documentation.</p><p>If your developers can say, "I see why it suggested that," you&#8217;re halfway to trust.</p><h3><strong>Customize to local context aggressively</strong></h3><p>Enterprise developers want AI that understands <em>their</em> stack &#8212; their private libraries, their weird config setups, their internal naming patterns.<br>Set up fine-tuning, embedding retrieval, or local context expansion wherever possible.<br>The closer the AI feels to "this was trained on <em>our</em> systems," the better.</p><h3><strong>Respect developer agency, always</strong></h3><p>Cursor and Windsurf &#8212; and honestly, any good AI dev tool &#8212; let users easily accept, reject, or modify suggestions.</p><p><strong>Hammer this home in your rollout messaging:</strong></p><blockquote><p><em>"You&#8217;re always in control. AI assists; it never replaces your judgment."</em></p></blockquote><p>Nothing builds resentment faster than feeling overridden.</p><h3><strong>Invest in long-term enablement, not a one-time launch</strong></h3><p>Adoption isn&#8217;t a light switch.<br>Plan for live demos, office hours, and async feedback channels.</p><ul><li><p>Collect usage patterns &#8212; not to judge, but to learn.</p></li><li><p>Treat rollout like a slow, ongoing open beta.</p></li></ul><p>Iterate based on real-world feedback &#8212; <em>not just what the vendor&#8217;s success slides say should happen</em>.</p><h2>Closing thoughts</h2><p>Rolling out AI tooling across an organization is still a very new experience for most teams. 
It&#8217;s a trust-building exercise.</p><p>We&#8217;re still early in this journey, and we&#8217;ll continue to learn as these tools gradually become a natural part of every developer&#8217;s daily workflow.</p><p>You&#8217;re not just introducing Cursor or Windsurf.<br>You&#8217;re introducing a new way of working where a machine becomes a true collaborator in the coding process.</p><p>That&#8217;s a huge psychological shift.<br>It&#8217;s messy. It&#8217;s slow. It&#8217;s deeply human.</p><p>But if you invest the time to build trust rather than demand it, the payoff is massive:<br>Teams move faster. Code quality improves. Engineers feel empowered, not threatened.</p><p>And your AI tooling doesn&#8217;t end up as <em>"yet another installed thing."</em></p><p>It becomes how real work gets done, with confidence and clarity.</p><div><hr></div><p><em>Thanks for reading. If you&#8217;re building in this space &#8212; or just exploring &#8212; feel free to <a href="https://www.linkedin.com/in/dhruvvjyoti/">connect with me</a>. I&#8217;d love to hear how you&#8217;re thinking about agent systems, protocol standards, and where it&#8217;s all headed.</em></p><p><em>&#8212; Dhrubajyoti<br>Product @ Harness | Thinking about agent infrastructure, developer experience, and LLM applications.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Dhrubajyoti&#8217;s Newsletter! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[What is good writing really?]]></title><description><![CDATA[Hacks to writing well &#8212; for devrel, product & engineering]]></description><link>https://codewdhruv.substack.com/p/what-is-good-writing-really</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/what-is-good-writing-really</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Fri, 25 Apr 2025 08:08:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bwif!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you&#8217;re anything like me, you didn&#8217;t get into tech thinking you&#8217;d spend so much time writing.</p><p>You wanted to build great products. Solve real problems. Work with smart engineers. Talk to users. Ship things. Writing? That was supposed to be a side effect, not the main event.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Dhrubajyoti&#8217;s Newsletter! 
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>But over time &#8212; without even realizing it &#8212; you start writing everything.</p><ul><li><p>Product specs.</p></li><li><p>Release notes.</p></li><li><p>DevRel blogs and tutorials (if you&#8217;re in the advocacy seat).</p></li><li><p>Engineering RFCs.</p></li><li><p>Internal docs.</p></li><li><p>Strategy briefs.</p></li><li><p>Team updates.</p></li><li><p>Twitter threads (&#128517;).</p></li></ul><p>And whether you realize it or not &#8212; you&#8217;re judged by the clarity of your words <em>a lot</em> more than by your thoughts, or even your ideas.</p><p>Because if you can&#8217;t make it understandable, it&#8217;s as good as not said.</p><p>So that got me thinking: what actually makes writing <em>good</em> &#8212; especially when you&#8217;re surrounded by dev tools, APIs, and B2B software?</p><p>After spending almost 3 years in devrel, product, and just enough engineering to get myself in trouble, I&#8217;ve started to see some patterns. Here&#8217;s what I&#8217;ve learned &#8212; and a few writing hacks I keep coming back to &#128071;</p><h3>Why does writing feel so hard?</h3><p>Here&#8217;s my theory: Writing feels hard because it forces you to think clearly. No ambiguity, no vibe-checking in meetings, no shoulder taps to clarify. 
Just you and your thoughts on a blank page.</p><p>It exposes fuzzy ideas.</p><p>It reveals when you're not sure.</p><p>And it makes you ask: <em>Who am I even writing this for &amp; why?</em></p><p>The moment that clicked for me &#8212; the &#8220;aha&#8221; &#8212; was when I stopped asking &#8220;Is this good writing?&#8221; and started asking:</p><blockquote><p><strong>&#8220;What decision does this unblock?&#8221;</strong></p></blockquote><p>Every doc, spec, tweet, or thread exists to move something forward. It&#8217;s either helping someone learn, decide, align, or act. If it&#8217;s not doing that &#8212; it&#8217;s noise.</p><h3>What makes writing "good"?</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bwif!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bwif!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!bwif!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg 848w, https://substackcdn.com/image/fetch/$s_!bwif!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!bwif!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bwif!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:129996,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162051811?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bwif!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg 424w, https://substackcdn.com/image/fetch/$s_!bwif!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!bwif!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!bwif!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff055f841-2129-4e36-bb08-e0539ab61bbd_1920x1080.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Let&#8217;s be real &#8212; writing isn&#8217;t something most of us got into this job to do. 
I didn&#8217;t go to college thinking <em>&#8220;I want to write 8-page Google Docs and 6-sentence Slack updates for a living.&#8221;</em> But here we are.</p><p>Whether you're in DevRel, Product, or Engineering, writing is the glue. It's the thing that makes your ideas move, your team align, and your users understand what you just shipped. And the moment I stopped treating it like a chore and started treating it like a <em>tool</em>, things got better.</p><p>So what even counts as &#8220;good&#8221; writing in our world?</p><p>Let&#8217;s bring it down to the essentials &#8212; no jargon, no fluff:</p><ul><li><p><strong>Clear</strong>: The number one thing. If your reader can&#8217;t immediately tell what you're saying or what you want them to do, nothing else matters. Clarity is empathy. It shows you&#8217;ve done the work so they don&#8217;t have to.</p></li><li><p><strong>Concise</strong>: We&#8217;re all swimming in tabs and context switches. If you can say something in fewer words &#8212; do it. Every sentence you cut is a little gift to your reader.</p></li><li><p><strong>Useful</strong>: Is what you're writing going to <em>help</em> someone? Will it unblock a teammate, teach a user something new, or help someone make a decision? If yes, you're on the right track.</p></li><li><p><strong>Contextual</strong>: Know your audience. Writing an internal eng spec is very different from writing a changelog for customers. What do they already know? What do they care about? Writing without context is like giving someone directions without asking where they&#8217;re coming from.</p></li><li><p><strong>Unboring</strong>: This one's underrated. No one wants to read something that sounds like a compliance email. We&#8217;re still humans reading this stuff. Inject a little personality, a little rhythm. Be real. 
Even a dry subject can have a clear, kind voice.</p></li></ul><p>If your doc, post, spec, or tweet hits <em>those five </em>&#8212; clear, concise, useful, contextual, and unboring &#8212; you&#8217;re already ahead of 80% of the internet.</p><p>And honestly it&#8217;s not about being a great &#8220;writer.&#8221; It&#8217;s about being a clear thinker who knows how to get ideas across. That&#8217;s the real skill.</p><h3>My writing hacks (context matters)</h3><h4>DevRel writing - <em>(Tutorials, blogs, guides)</em></h4><p>DevRel writing is a strange but beautiful mix of teaching, storytelling, and subtle product marketing. The trick? It should <em>never</em> feel like marketing. It should feel like someone generous and sharp is walking you through something they&#8217;ve figured out the hard way.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZsQn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZsQn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png 424w, https://substackcdn.com/image/fetch/$s_!ZsQn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png 848w, https://substackcdn.com/image/fetch/$s_!ZsQn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ZsQn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZsQn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png" width="1456" height="905" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:905,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:914556,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/162051811?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZsQn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png 424w, https://substackcdn.com/image/fetch/$s_!ZsQn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png 848w, 
https://substackcdn.com/image/fetch/$s_!ZsQn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png 1272w, https://substackcdn.com/image/fetch/$s_!ZsQn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6fa12b0-505f-4aa2-b121-b321fdba12fc_7427x4618.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Here&#8217;s what I&#8217;ve learned from watching great DevRel folks (and borrowing a few tricks myself):</p><p><strong>Start with 
the &#8220;why.&#8221;</strong> Assume your reader landed on your post while frustrated or curious. Lead with empathy:<br><em>&#8220;You&#8217;re probably running into this error when deploying on XYZ. Here&#8217;s a cleaner way to solve it.&#8221; </em>or <em>&#8220;This new SDK shortcut can save you 40 minutes every deploy.&#8221;</em></p><p><strong>Code first, words second.</strong> Show them the working snippet early. People trust code more than promises. Once they see it works, they&#8217;ll actually read what you wrote around it.</p><p><strong>Structure = respect.</strong> Use headers, bullets, bold text, copy-paste blocks. Add a TLDR if it&#8217;s longer than a screenful. Most readers are skimming at 11 PM with 20 tabs open.</p><p><strong>Always test it yourself.</strong> If your walkthrough breaks on step 3, your reader&#8217;s gone. And so is their trust. I&#8217;ve seen brilliant posts with massive drop-offs because the author never ran through it fresh.</p><p><strong>Sound like a mentor, not a manual.</strong> You don&#8217;t need to be overly polished. It&#8217;s fine &#8212; actually it&#8217;s <em>helpful </em>&#8212; to say: <em>&#8220;This part&#8217;s a little weird. Took me a few tries to get it right, but here&#8217;s what finally worked.&#8221;</em></p><h4>Product writing - <em>(PRDs, changelogs, release notes)</em></h4><p>Product writing isn&#8217;t just &#8220;PM homework.&#8221; It&#8217;s the single source of truth for everyone else who has to work with what you're building &#8212; engineers, design, support, marketing, sales, and even your future self three months from now.</p><p>Here&#8217;s how I try to approach it:</p><ul><li><p><strong>Start with the job-to-be-done.</strong> Right at the top, one or two lines:<br><em>&#8220;This feature lets users schedule deployments without needing to coordinate manually with ops.&#8221;</em><br>If that&#8217;s not clear, the rest of the doc won&#8217;t help. 
Your team can&#8217;t build what they don&#8217;t understand.</p></li><li><p><strong>Show, don&#8217;t just tell.</strong> Even a rough Figma or Excalidraw mock, or a screenshot, beats a block of descriptive text. People process visuals way faster than paragraphs.</p></li><li><p><strong>Use concrete workflows.</strong> Instead of saying <em>&#8220;Improves team collaboration&#8221;</em> say:<br><em>&#8220;Imagine Alex, a backend engineer, wants to test a feature branch in staging. Now they can trigger an environment preview without waiting on DevOps.&#8221;</em><br>Suddenly, it clicks.</p></li><li><p><strong>Write for out-loud reading.</strong> Whether it&#8217;s a spec review or a stakeholder sync, someone will read this doc in a meeting. If your writing is stiff or filled with jargon, people will tune out &#8212; especially execs.</p></li><li><p><strong>Avoid buzzwords like the plague.</strong><br>If you find yourself typing <em>&#8220;streamlines the end-to-end feedback lifecycle&#8221;</em>, stop.<br>Try: <em>&#8220;Makes it easier to leave and respond to feedback in one place.&#8221;</em><br>Clear beats clever every single time.</p></li></ul><blockquote><p><strong>Pro tip:</strong><br>Write the changelog version <em>before</em> you write the spec.</p><p>If you can&#8217;t explain what changed and why it matters in two lines, you&#8217;re not ready to spec it.</p><p>Example:</p><ul><li><p><strong>Bad changelog line:</strong><br><em>&#8220;Introduced a multi-tenant data abstraction layer to enhance platform modularity.&#8221;</em></p></li><li><p><strong>Better:</strong><br><em>&#8220;You can now manage workspaces across multiple teams from a single dashboard.&#8221;</em></p></li></ul><p>If the second one&#8217;s easier to read &#8212; it&#8217;s because it&#8217;s closer to what the user actually experiences.</p></blockquote><h4>Engineering writing <em>(RFCs, internal docs)</em></h4><p>Your job here is to <em>transfer clarity</em>, not impress 
people.</p><ul><li><p><strong>Define terms at the top.</strong> Even &#8220;what is a satellite&#8221; or &#8220;what is &lt;service_name&gt;&#8221;</p></li><li><p><strong>Show tradeoffs early</strong> &#8211; engs love to know &#8220;why this and not X.&#8221;</p></li><li><p><strong>State the goal like a test case</strong> &#8211; &#8220;This works <em>if</em>&#8230;&#8221;</p></li><li><p><strong>Don&#8217;t bury the risks</strong> &#8211; call them out in a &#8220;Things to Watch&#8221; section.</p></li><li><p><strong>Use diagrams.</strong> Seriously. A 5-minute sketch saves a 500-word explanation.</p></li></ul><p>Most underrated move? Drop a Loom + Notion summary with your RFC. Eng leads <em>love</em> it.</p><h3>A few hard-won habits</h3><p>I&#8217;ve learned these the slow way:</p><ul><li><p><strong>Write your 1-pager before you pitch the project.</strong> If you can&#8217;t explain the problem, context, tradeoffs, and desired outcome on a page, you're not ready to start.</p></li><li><p><strong>Never surprise in writing.</strong> Use writing to <strong>confirm alignment</strong>, not create it. Talk to people first. </p></li><li><p><strong>Default to edit over ideate.</strong> Most people, PMs especially, get stuck because they try to write the perfect doc on the first try. Start ugly. Hit send. Feedback is strategy.</p></li><li><p><strong>Name your documents like a dev.</strong> I&#8217;ve shipped more projects with <code>why-now.md</code>, <code>tradeoffs-v2.md</code>, and <code>launch-readiness.md</code> than any fancy doc template.</p></li><li><p><strong>Own the narrative.</strong> If you don&#8217;t write it, someone else will &#8212; and it might not be accurate.</p></li></ul><p>And the most human trick of all? <em>Write as if you&#8217;re DMing a smart friend.</em></p><p>Because let&#8217;s face it &#8212; writing isn&#8217;t about sounding smart. 
It&#8217;s about <em>being understood</em>.</p><div><hr></div><h3>Final thoughts</h3><p>We&#8217;re all accidental writers now. And writing <em>is</em> thinking. The clearer you write, the clearer you think. And in product/devrel/eng &#8212; <em>clarity wins</em>.</p><p>So next time you open Notion, Google Docs, or even Twitter...</p><p>Remember: <strong>write to help someone get unblocked</strong>. That&#8217;s where the magic is.</p><p>That&#8217;s the bar. That&#8217;s the win.</p><div><hr></div><p><em>Thanks for reading! If this resonates, or if you&#8217;ve found writing hacks that actually work in fast-moving orgs, I&#8217;d love to hear about them.</em></p><p><em>&#8212; Dhrubajyoti<br>Product @ Harness | Thinking about agent infrastructure, developer experience, and LLM applications.</em></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Dhrubajyoti&#8217;s Newsletter! Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[MCP vs. 
A2A: How & why Google&#8217;s new protocol is a big deal for multi-agent systems]]></title><description><![CDATA[And why it might not be a showdown, but a handshake.]]></description><link>https://codewdhruv.substack.com/p/mcp-vs-a2a-how-and-why-googles-new</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/mcp-vs-a2a-how-and-why-googles-new</guid><pubDate>Tue, 22 Apr 2025 13:18:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0N4P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A couple of days ago, Google quietly (or not-so-quietly, depending on your Twitter/X feed or whether you were at Google Cloud Next) released a new open protocol called <strong><a href="https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/">A2A &#8212; Agent-to-Agent</a></strong>. It is designed to standardize communication in multi-agent systems, and, as expected, the AI/infra corner of the internet is buzzing.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0N4P!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0N4P!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!0N4P!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg 848w, https://substackcdn.com/image/fetch/$s_!0N4P!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!0N4P!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0N4P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg" width="800" height="445" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:445,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:45875,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/161803038?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!0N4P!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg 424w, https://substackcdn.com/image/fetch/$s_!0N4P!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg 848w, https://substackcdn.com/image/fetch/$s_!0N4P!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!0N4P!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc057195-f381-4098-8ad7-b50a2e47d36d_800x445.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I&#8217;ll admit: my gut reaction was <em>&#8220;Wait, doesn&#8217;t MCP already kind of cover this?&#8221;</em><br>And if you&#8217;re already building LLM-based applications, you might be asking the same thing.</p><p>But as I went deeper into A2A, I started to see how it fills some of the gaps MCP leaves behind &#8212; and how these two protocols might actually complement each other in a pretty elegant way.</p><p>This post is a breakdown of what&#8217;s going on here, why people are excited, and what this means if you&#8217;re building agentic apps or thinking about the future of how AI systems will interact.</p><h3>What&#8217;s MCP again?</h3><p>If you&#8217;ve been anywhere near open-source LLM infra over the past few months, you&#8217;ve probably heard of <strong>MCP (Model Context Protocol)</strong>.</p><p>At its core, MCP is about creating a consistent and modular way for LLM-based apps to interact with tools and data sources. 
Instead of manually wiring up every API and context window, MCP gives you a structured way to expose capabilities to your model&#8212;making it easier to scale and generalize how agents (or apps) access tools.</p><p>Here&#8217;s the typical breakdown of components in an MCP-powered environment:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nKQd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nKQd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png 424w, https://substackcdn.com/image/fetch/$s_!nKQd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png 848w, https://substackcdn.com/image/fetch/$s_!nKQd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png 1272w, https://substackcdn.com/image/fetch/$s_!nKQd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nKQd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png" width="1456" height="552" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:552,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:339857,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://codewdhruv.substack.com/i/161803038?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nKQd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png 424w, https://substackcdn.com/image/fetch/$s_!nKQd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png 848w, https://substackcdn.com/image/fetch/$s_!nKQd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png 1272w, https://substackcdn.com/image/fetch/$s_!nKQd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffd75129d-2a5b-4b62-8e79-9dcc89492fd6_4188x1588.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h4>MCP Host</h4><p>At the center of the architecture is the <strong>MCP Host</strong>. This is your main application &#8212;likely powered by a LLM &#8212; and it&#8217;s responsible for orchestrating everything. It receives requests, makes decisions, and determines which tools or services to invoke. Think of it as the brain that interprets user intent and turns it into meaningful action.</p><h4>MCP Client</h4><p>The <strong>MCP Client</strong> is the bridge between the Host and the broader ecosystem. It standardizes how the Host communicates with tools, services, and external systems. 
Think of it as the adapter layer&#8212;it knows how to speak both the language of the Host and the protocols of the tools it needs to use.</p><h4>MCP Server</h4><p><strong>MCP Servers</strong> are lightweight service wrappers that expose specific <strong>tools or capabilities</strong> to the MCP network. Each Server speaks the MCP protocol, making it easy to plug in new capabilities&#8212;like running a SQL query, accessing a calendar, or parsing a PDF.</p><p>These are stateless, composable, and easy to distribute. You can run as many as you need, where you need them.</p><h4>Local Data Sources</h4><p>These are the systems and data stores available on your local machine or network. For example:</p><ul><li><p>File systems (e.g. reading a <code>.csv</code>)</p></li><li><p>Local databases (e.g. SQLite, local Postgres)</p></li><li><p>Services that an MCP Server can access behind a firewall</p></li></ul><h4>Remote Data Sources</h4><p>This includes any <strong>cloud-based or API-accessible system</strong>, such as:</p><ul><li><p>Google Calendar</p></li><li><p>Salesforce</p></li><li><p>Notion</p></li><li><p>Any custom RESTful service</p></li></ul><p>In short, MCP gives your LLM superpowers by standardizing how it talks to different tools and services. But there's a catch...</p><h3>Where MCP Falls Short</h3><p>MCP shines in structured environments where a single LLM agent (or &#8220;host&#8221;) is accessing tools in a controlled way. 
But what if you&#8217;re building systems where <em>multiple</em> agents need to communicate, negotiate, share state, or divide and conquer tasks?</p><p>That&#8217;s where things start to get messy.</p><p>MCP doesn&#8217;t natively handle:</p><ul><li><p>Agent-to-agent coordination</p></li><li><p>Secure handshakes between agents</p></li><li><p>Negotiation around user preferences or constraints</p></li><li><p>Maintaining or syncing state across multiple independent actors</p></li></ul><p>And that's <em>exactly</em> where A2A steps in.</p><h3>A2A (Agent-to-Agent)</h3><p>Google&#8217;s new <strong>A2A</strong> protocol proposes a solution to the decentralized, real-time nature of multi-agent systems.</p><p>While MCP focuses on LLM-to-tool interaction, A2A focuses on <strong>agent-to-agent collaboration</strong>. Think of it as a protocol that lets multiple intelligent agents talk to each other, understand each other&#8217;s capabilities, and coordinate securely and effectively.</p><p>Here&#8217;s how it works:</p><ul><li><p><strong>Security Layer</strong> &#8211; A2A adds authentication and identity, which MCP doesn&#8217;t natively offer. You can now trust <em>who</em> you're talking to in multi-agent setups.</p></li><li><p><strong>Task &amp; State Management</strong> &#8211; Agents can share, delegate, and sync tasks and state. 
This is huge for applications where multiple agents are working toward a shared goal or context.</p></li><li><p><strong>User Experience</strong> &#8211; Agents can communicate about user preferences, time constraints, or alternative approaches to solving a problem.</p></li><li><p><strong>Capability Discovery</strong> &#8211; Just like MCP lets a host discover tools, A2A enables agents to discover <em>each other&#8217;s abilities</em>, making the system more dynamic.</p></li></ul><p>In other words, <strong>A2A makes agents truly agentic </strong>&#8212; giving them the language and structure to function as autonomous entities in a shared environment.</p><h3>So&#8230; why this protocol debate?</h3><p>Honestly? There doesn&#8217;t have to be one.</p><p>I&#8217;ve seen a lot of folks online pitting MCP and A2A against each other as if we&#8217;re heading for some kind of VHS vs. Betamax showdown. But the more I think about it, the more I believe they were <em>always meant to converge</em>.</p><p>If I had to bet, I&#8217;d say the original creators of MCP probably <strong>anticipated</strong> many of these A2A-style capabilities &#8212; task negotiation, agent communication, identity layers &#8212; but didn&#8217;t prioritize them in the early spec. Why? Because it made sense to first focus on the lower layer: tool interoperability and data access.</p><p>Now, A2A picks up where MCP leaves off, addressing the higher-level orchestration layer between agents.</p><p>Here&#8217;s how I see the two working together:</p><ul><li><p>Capability &amp; Tool Access &#8594; What can I do? &#8594; MCP</p></li><li><p>Agent Coordination &#8594; Who else is out there? How do we work together? &#8594; A2A</p></li></ul><p>Together, they could form a powerful stack for future agentic applications:<br><strong>MCP for accessing tools. 
A2A for collaborating to get things done.</strong></p><h3>Final thoughts - What does this mean for you?</h3><p>If you&#8217;re building agent-based systems &#8212; whether it&#8217;s for internal workflow automation, customer support agents, or research tools&#8212;this development is worth paying attention to.</p><p>My take?</p><ul><li><p><strong>If you&#8217;re already using MCP</strong>, A2A could layer beautifully on top of what you&#8217;ve built, adding support for multi-agent coordination without reinventing your whole stack.</p></li><li><p><strong>If you&#8217;re new to agent systems</strong>, you now have two maturing protocols to build with &#8212; one focused on execution (MCP), the other on collaboration (A2A).</p></li></ul><p>Either way, we&#8217;re watching the early formation of something big here: the standardization of how intelligent agents talk, think, and work together.</p><p>And that&#8217;s not just hype. That&#8217;s essentially the infrastructure for the next generation of software.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/p/mcp-vs-a2a-how-and-why-googles-new?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Dhrubajyoti&#8217;s Newsletter! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/p/mcp-vs-a2a-how-and-why-googles-new?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://codewdhruv.substack.com/p/mcp-vs-a2a-how-and-why-googles-new?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div><hr></div><p><em>Thanks for reading. 
If you&#8217;re building in this space &#8212; or just exploring &#8212; feel free to <a href="https://www.linkedin.com/in/dhruvvjyoti/">connect with me</a>. I&#8217;d love to hear how you&#8217;re thinking about agent systems, protocol standards, and where it&#8217;s all headed.</em></p><p><em>&#8212; Dhrubajyoti<br>Product @ Harness | Thinking about agent infrastructure, developer experience, and LLM applications.</em></p>]]></content:encoded></item><item><title><![CDATA[How to grow through feedbacks]]></title><description><![CDATA[Hacks that can help you receive and use feedback to improve and essentially use it as a growth opportunity.]]></description><link>https://codewdhruv.substack.com/p/how-to-grow-through-feedbacks</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/how-to-grow-through-feedbacks</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Wed, 31 Jul 2024 20:55:56 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4872bfd5-3559-4adc-a260-7176e6bf92fd_500x281.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As software developers, we put our hearts and souls into writing code that we're proud of. So, when someone provides feedback on our work, it's natural to feel defensive. However, it's essential to recognize that feedback is a vital growth opportunity, not a personal attack.</p><p>In this blog, I&#8217;ll share some life hacks I use to learn and grow through the feedback I have received so far in my career.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Dhrubajyoti&#8217;s Newsletter! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h3><strong>The initial reaction: You get defensive</strong></h3><p>When it comes to receiving feedback, most of us have an immediate emotional reaction that can be difficult to control. Whether it's a colleague or a senior manager providing their input on our work, our default response may be to feel defensive, dismissed, or even angered by their comments. This is because our brain's automatic response is to protect our ego and self-image.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!nfsz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!nfsz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nfsz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg 848w, https://substackcdn.com/image/fetch/$s_!nfsz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!nfsz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!nfsz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg" width="500" height="281" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:281,&quot;width&quot;:500,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:47748,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!nfsz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg 424w, https://substackcdn.com/image/fetch/$s_!nfsz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg 848w, https://substackcdn.com/image/fetch/$s_!nfsz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!nfsz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F88a14f36-059e-41d0-8f2d-cdd81d54a4ea_500x281.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Imagine you've spent hours writing a new feature for your team's project. However, when you present it to the team, one of your team members points out a few issues with the code. Maybe function names are not descriptive, or maybe the behind-the-scenes logic is not efficient.</p><p>The first thought you would probably get is - "But I spent hours on this! 
It's perfect just the way it is. This person just doesn't understand my vision."</p><p>Well, that could occasionally be the case, but most of the time it isn't.</p><h4><strong>Hack 1: The 2-minute rule</strong></h4><p>Initially, I had the same feeling. To overcome what I call the reflex reaction, I tried the 2-Minute Rule: when you receive feedback, take a 2-minute pause before responding. This is a simple yet very effective hack.</p><ul><li><p>A brief pause allows your emotions to settle, which helps you think rationally. Research shows that even short breaks can significantly reduce stress responses.</p></li><li><p>Taking a step back helps you view the feedback without personal bias, focusing on its content rather than delivery.</p></li><li><p>It allows you to fully absorb the information and assess its value to your growth.</p></li></ul><p>What you essentially do here is create a buffer between your initial reaction and your response. This allows you to listen carefully, which is probably one of the most important skills when it comes to receiving feedback, and helps you respond in a more constructive and open-minded way.</p><h4>Hack 2: Frame feedback as a question</h4><p>Let's go back to the previous example. You've just received feedback on your code, and you're feeling defensive. To channel your initial reaction, try reframing the feedback as a question. For example:</p><ul><li><p>Instead of "Your code is not efficient," reframe it as "What can I do to improve the efficiency of my code?" or 
&#8220;What is the impact it causes?&#8221;</p></li><li><p>Instead of "Your function names are not descriptive," reframe it as "What are the other possible function names that are more descriptive and clear?"</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Qj8e!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Qj8e!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png 424w, https://substackcdn.com/image/fetch/$s_!Qj8e!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png 848w, https://substackcdn.com/image/fetch/$s_!Qj8e!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png 1272w, https://substackcdn.com/image/fetch/$s_!Qj8e!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Qj8e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png" width="600" height="400" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:400,&quot;width&quot;:600,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:24336,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Qj8e!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png 424w, https://substackcdn.com/image/fetch/$s_!Qj8e!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png 848w, https://substackcdn.com/image/fetch/$s_!Qj8e!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png 1272w, https://substackcdn.com/image/fetch/$s_!Qj8e!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffda7a61f-568e-4a1c-aae0-803d37e21ccc_600x400.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Ask yourself the reframed question, try to come up with a solution, and then use the solution as a starting point for your response. This shift in perspective creates the foundation of a growth mindset, encouraging problem-solving over blame.</p><h3>Try to understand the usefulness of the feedback</h3><p>Once you've processed your initial reaction, it's essential to understand the usefulness of the feedback. Not all feedback is equally helpful. Remember, you're the expert on your work. While others offer valuable perspectives, the ultimate goal is improvement. At the end of the day, you are the one who worked on the code and who understands it best. The best feedback can be yours.</p><p>Let's say you've received feedback on your code from a colleague. They've pointed out a few issues with the logic, and they've suggested some alternative approaches. 
However, as you review the feedback, you realize that the colleague's suggestions are not relevant to the specific problem you're trying to solve.</p><p>In this case, you need to assess the usefulness of the feedback and decide whether to act on it or not.</p><h4>Hack 3: The 3-Part feedback filter</h4><p>I use this almost every day. To assess the usefulness of feedback, try using the 3-Part feedback filter. This hack can help you:</p><ul><li><p>Understand the feedback objectively</p></li><li><p>Identify what is valuable and what is not</p></li><li><p>Prioritize your work items based on the feedback</p></li></ul><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!V-P-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!V-P-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png 424w, https://substackcdn.com/image/fetch/$s_!V-P-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png 848w, https://substackcdn.com/image/fetch/$s_!V-P-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png 1272w, https://substackcdn.com/image/fetch/$s_!V-P-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!V-P-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png" width="1456" height="203" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:203,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:239555,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!V-P-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png 424w, https://substackcdn.com/image/fetch/$s_!V-P-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png 848w, https://substackcdn.com/image/fetch/$s_!V-P-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png 1272w, https://substackcdn.com/image/fetch/$s_!V-P-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2626df8-c907-4d3e-b0e8-db2ed784f240_5431x756.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>To use the 3-Part feedback filter get answers to the following 
questions:</strong></p><ol><li><p>Is the feedback specific, clear, and actionable? Does it provide concrete examples or suggestions?</p></li><li><p>Is the feedback relevant to the specific problem or goal you're trying to achieve? Does it align with your priorities and, in some cases, your values?</p></li><li><p>Do you have a clear action item based on the feedback? Is it feasible to implement the suggested changes or improvements?</p></li></ol><p>You can use this method to consistently evaluate feedback from colleagues, managers, and mentors. For this method to work, you will have to be honest with yourself, since some feedback may not be worth acting on.</p><p>The way I see it, feedback is not always simply right or wrong. It's often a matter of perspective, and you need to use your judgment to decide what to do with it.</p><h3>Use feedback as a source of constant growth</h3><p>Let&#8217;s consider the previous example where you have received feedback from your colleagues and manager on your code, and you've used it to make significant improvements. As a result, you've become more confident in your abilities and more efficient in your work.</p><p>However, you realize that you're not getting as much feedback as you used to. Your colleagues and manager are not pointing out as many issues, and you're not sure if you're still improving.</p><p>In this case, you can use the Feedback Loop method to drive your growth using continuous feedback.</p><h4><strong>Hack 4: The feedback loop</strong></h4><p>Imagine you're driving a car, and you're trying to reach your destination. You're not just looking at the road ahead, but also checking your rearview mirror, side mirrors, and GPS to ensure you're on track. 
This process of checking, adjusting, and re-checking is similar to a feedback loop.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fe6h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fe6h!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png 424w, https://substackcdn.com/image/fetch/$s_!fe6h!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png 848w, https://substackcdn.com/image/fetch/$s_!fe6h!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png 1272w, https://substackcdn.com/image/fetch/$s_!fe6h!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!fe6h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png" width="280" height="282.44897959183675" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1038,&quot;width&quot;:1029,&quot;resizeWidth&quot;:280,&quot;bytes&quot;:86351,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fe6h!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png 424w, https://substackcdn.com/image/fetch/$s_!fe6h!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png 848w, https://substackcdn.com/image/fetch/$s_!fe6h!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png 1272w, https://substackcdn.com/image/fetch/$s_!fe6h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0523a1a-3169-438d-9512-617cd0f7983b_1029x1038.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p><strong>A Feedback Loop is a cycle of:</strong></p><ul><li><p><strong>Action</strong>: You take an action, like driving the car or writing a piece of code.</p></li><li><p><strong>Feedback</strong>: You receive feedback, like a GPS alert or a colleague's comment, that tells you how you're doing.</p></li><li><p><strong>Adjustment</strong>: You use the feedback to adjust your action, like turning the steering wheel or refactoring your code.</p></li><li><p><strong>Repeat</strong>: You repeat the cycle, taking a new action, receiving feedback, and adjusting again.</p></li></ul><p>Now in software engineering let&#8217;s understand this using an example where you're working on a new feature and you've written a piece of code, but you're not sure if it's the best solution. 
You want to make sure it's efficient, scalable, and meets the requirements.</p><p>Now in this scenario, the Feedback Loop becomes a cycle of:</p><ol><li><p><strong>Writing Code</strong>: You write a piece of code, like a function or a module. </p></li><li><p><strong>Get Feedback</strong>: You receive feedback from your peers, like a code review or a test report, that tells you how your code is performing.</p></li><li><p><strong>Refactor</strong>: You use the feedback to refactor your code, making improvements and optimizations.</p></li><li><p><strong>Repeat</strong>: You repeat the cycle, writing new code, getting feedback, and refactoring again.</p></li></ol><p><strong>Examples of Feedback Loops in Action</strong></p><ul><li><p>A dev team using CI/CD pipelines to automate testing.</p></li><li><p>A developer using a code review tool, or an AI assistant such as GitHub Copilot, to get feedback and improve code quality.</p></li><li><p>A DevOps team using monitoring and logging tools to receive feedback on system performance and make adjustments.</p></li></ul><p>To use this method, narrow your focus: identify a specific area where you want to improve.</p><h3>Closing thoughts</h3><p>Receiving feedback can be challenging, but it's essential for your growth as a software engineer. Try using the hacks mentioned in this blog and share your experience. </p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/p/how-to-grow-through-feedbacks?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thank you for reading the 10x Engineering Newsletter. 
If you find this post useful feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/p/how-to-grow-through-feedbacks?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://codewdhruv.substack.com/p/how-to-grow-through-feedbacks?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p>Most importantly remember that affirmations can always help us stay on track, focusing on the type of software engineer we want to be, the thoughts and behaviour we aim to display, and the kind of person we want to be.</p><p>So, the next time you receive feedback, take a deep breath, repeat the affirmation, and use it as a growth opportunity. Your future self will thank you!</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[What is LangChain and why should you care?]]></title><description><![CDATA[Langchain &#129436; has quickly grown in the open-source space, experiencing exponential growth.]]></description><link>https://codewdhruv.substack.com/p/what-is-langchain-and-why-should</link><guid 
isPermaLink="false">https://codewdhruv.substack.com/p/what-is-langchain-and-why-should</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Sat, 09 Mar 2024 18:02:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c88a65e9-2e80-4018-9480-663259a2ee61.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Langchain &#129436; has grown exponentially in the open-source space. One of the major reasons behind this surge is the recent interest in Large Language Models (LLMs). </p><p>Let me explain it in simpler terms. Langchain provides a platform for developers to connect data to language models, such as GPT models from OpenAI and various others, through their APIs. It provides a collection of modular components and utilities that simplify the process of building applications that leverage the capabilities of LLMs.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading the Cero AI Newsletter! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h2>Why do we need LangChain?</h2><p>Developers often lack the necessary tools to deploy language models in real-world use cases, as the ecosystem for building GenAI applications is still evolving. 
Starting from scratch to work with LLMs directly can be complex, with tasks such as prompt engineering, data preprocessing, managing context and memory, and handling different model APIs and output formats. LangChain abstracts away much of this complexity, providing a higher-level interface for interacting with LLMs.</p><p>Many real-world applications require combining multiple operations or tasks involving LLMs, such as retrieving information from databases, processing text data, and generating outputs based on specific prompts. LangChain's concept of "chains" and "agents" makes it easier to create and manage these complex workflows.</p><p>The best part is that you can do all of this within a single interface. No more wildly scaling code bases just to support different providers!</p><h2>The community behind it</h2><p>When it comes to the future of any technology framework, one of the big factors is the community supporting it. And for projects like Langchain, this is even more crucial. You want to know you're not flying solo, right?</p><p>Langchain has a massive fan base! As of today, it has over 51k stars on GitHub, which is a pretty credible stat for its popularity in the open-source space. <br>It's racking up a million downloads every single month with some serious love from the developer community.</p><p>The community also maintains an active Discord channel. So, if you ever hit a blocker or just want to experiment, you know that you've got a place to hang out.</p><p><strong>LangChain Community: <a href="https://discord.com/invite/cU2adEyC7w">Click Here</a></strong></p><p><strong>LangChain Documentation: <a href="https://python.langchain.com/docs/get_started/introduction">Click Here</a></strong></p><p>Bottom line: with that kind of following and engagement, you can bet Langchain is doing something right. 
Let's now try to understand what exactly all of this hype is about.</p><h2>What does LangChain really do?</h2><h3>Complexity Abstraction</h3><p>At its core, LangChain acts as an abstraction layer, allowing developers to interact with various LLMs through a consistent and user-friendly interface. This abstraction simplifies the process of working with different LLM providers, APIs, and models, enabling seamless model interoperability and facilitating the integration of multiple LLMs within a single application.</p><h3>Chaining</h3><p>Beyond abstraction, LangChain introduces the concept of "chains," which are sequences of operations that can be applied to inputs or outputs of LLMs. These chains can be combined and orchestrated to create more complex workflows, enabling developers to build sophisticated applications that leverage the strengths of LLMs in various capacities. For example, a chain could involve retrieving data from a database, processing and summarizing that data using an LLM, and then generating a natural language response based on the processed information.</p><p>In code, you can define a chain that retrieves relevant information from various data sources, processes it using an LLM, and generates a natural language response:</p><pre><code>from langchain.llms import OpenAI, GooglePalm, Anthropic
from langchain.chains import RetrievalQA

# load models

openai_llm = OpenAI(model_name="text-davinci-003")
palm_llm = GooglePalm()
claude_llm = Anthropic(model="claude-v1")

# chain definition -- any of the LLMs loaded above can be plugged in here

chain = RetrievalQA.from_chain_type(
    llm=openai_llm,
    chain_type="stuff",
    retriever=...,  # retriever for fetching relevant information
)

# use the chain to answer questions

result = chain.run("What is the capital of India?")</code></pre><p>The above code is an example of how you can configure multiple LLM providers within the same application without having to worry about the complexities of interacting with each provider's API directly. The <code>RetrievalQA</code> chain handles fetching relevant documents through the retriever and passing them to whichever LLM you plug in; swapping <code>openai_llm</code> for <code>palm_llm</code> or <code>claude_llm</code> is a one-line change.</p><h3>Memory and Context Management</h3><p>LangChain can also manage memory and context effectively. Most LLMs struggle to maintain coherent and consistent responses across multiple interactions because of their limited context retention. LangChain addresses this limitation by providing tools for managing and persisting context, allowing applications to retain relevant information and produce more coherent, contextually aware responses.</p><pre><code>from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

# initialize the LLM and memory

llm = OpenAI(temperature=0)
memory = ConversationBufferMemory()

# set up the conversation chain

from langchain.chains import ConversationChain
conversation = ConversationChain(llm=llm, memory=memory)

# start the conversation

print(conversation.predict(input="Hi there!"))
# -&gt; 'Hi there! It's nice to meet you. How can I assist you today?'

print(conversation.predict(input="I'm looking for information about the French Revolution."))
# -&gt; 'Sure, I'd be happy to help you with that. The French Revolution was a major event in European history...'

print(conversation.predict(input="What were some of the key causes?"))
# -&gt; 'Some of the key causes of the French Revolution included...'

print(conversation.predict(input="And what were the major outcomes?"))
# -&gt; 'The major outcomes of the French Revolution were...'</code></pre><p>In this example, we first initialize an OpenAI LLM and a <code>ConversationBufferMemory</code> object, which will be used to store and manage the conversation context.</p><p>Next, we create a <code>ConversationChain</code> and pass it the LLM and memory objects. This chain is designed to maintain and utilize the conversation history to provide more coherent and contextually aware responses.</p><p>As we interact with the <code>ConversationChain</code> through the <code>predict</code> method, the chain stores the input prompts and generated responses in the <code>ConversationBufferMemory</code>. When a new input is provided, the chain retrieves the relevant context from the memory and prepends it to the prompt before sending it to the LLM.</p><p>You'll notice that in the example, even though the prompts "What were some of the key causes?" and "And what were the major outcomes?" don't explicitly mention the French Revolution, the LLM can still provide relevant responses based on the context maintained in the <code>ConversationBufferMemory</code>.</p><p>LangChain provides several different memory implementations, including <code>ConversationBufferMemory</code>, <code>ConversationBufferWindowMemory</code> (which limits the context to a specific number of interactions), and <code>ConversationEntityMemory</code> (which can extract and store specific entities from the conversation). You can also implement custom memory classes to suit your specific use case.</p><h3>Model Agnostic</h3><p>Different LLMs have different APIs, input/output formats, and capabilities. 
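</p><p>The idea can be sketched without LangChain at all: a toy wrapper that normalizes two hypothetical provider clients (the class and method names below are made up for illustration) behind one call signature, which is roughly what LangChain's LLM wrappers do for you:</p><pre><code>class OpenAIClient:
    # hypothetical provider SDK with its own method name and arguments
    def complete(self, prompt, max_tokens):
        return "[openai] " + prompt

class AnthropicClient:
    # another hypothetical SDK with a different calling convention
    def create_message(self, prompt, limit):
        return "[anthropic] " + prompt

class UnifiedLLM:
    # normalizes both providers behind a single callable interface
    def __init__(self, client):
        self.client = client

    def __call__(self, prompt):
        if isinstance(self.client, OpenAIClient):
            return self.client.complete(prompt, max_tokens=256)
        return self.client.create_message(prompt, limit=256)

llm = UnifiedLLM(OpenAIClient())
print(llm("What is the capital of India?"))

# swapping providers is one line; every call site stays the same
llm = UnifiedLLM(AnthropicClient())
print(llm("What is the capital of India?"))</code></pre><p>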
LangChain provides a unified interface for working with various LLMs, allowing developers to switch between models or use multiple models in the same application without significant code changes.</p><h3>Data Ingestion &amp; Monitoring</h3><p>LangChain simplifies the process of ingesting and working with diverse data sources, such as text files, PDFs, websites, and databases. It provides utilities for preprocessing and formatting data, ensuring that it is compatible with the input requirements of LLMs. This data integration capability enables developers to leverage the power of LLMs with real-world data sources, unlocking new possibilities for processing and generating insights from complex and unstructured data.</p><p>It also offers utilities for evaluating and monitoring the performance of LLM-based applications, including tools for generating and analyzing metrics. This feature allows developers to assess the effectiveness of their applications, identify areas for improvement, and optimize their use of LLMs.</p><pre><code>from langchain.document_loaders import TextLoader, PyPDFLoader, WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.evaluation.qa import QAEvalChain
import json

# ingest the data
loaders = [
    TextLoader('data/text_files/notes.txt'),
    PyPDFLoader('data/pdfs/report.pdf'),
    WebBaseLoader('https://example.com')
]

documents = []
for loader in loaders:
    docs = loader.load()
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    texts = text_splitter.split_documents(docs)
    documents.extend(texts)

# create a vector store (an embedding model is needed to index the documents)
vectorstore = Chroma.from_documents(documents, OpenAIEmbeddings(), persist_directory='vectorstore')

# create a retrieval chain
llm = OpenAI(temperature=0)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever()
)

# evaluate and monitor
with open('data/qa_dataset.json') as f:
    examples = json.load(f)  # a list of {"query": ..., "answer": ...} pairs

predictions = qa.apply(examples)
eval_chain = QAEvalChain.from_llm(llm)
graded = eval_chain.evaluate(examples, predictions)
print(graded)</code></pre><p>In the above code example, we start by ingesting data from various sources: text files, PDFs, and a website. We use LangChain's <code>TextLoader</code>, <code>PyPDFLoader</code>, and <code>WebBaseLoader</code> to load these documents, then split them into smaller chunks with the <code>CharacterTextSplitter</code> so that each chunk fits within the input limits of the LLM.</p><p>Next, we create a vector store using Chroma, which embeds the document chunks and stores them in a format suitable for retrieval and querying.</p><p>With the vector store in place, we create a <code>RetrievalQA</code> chain, which combines a retriever (in this case, the vector store) with an OpenAI LLM to answer questions based on the ingested data.</p><p>Finally, we use LangChain's evaluation tools to assess the performance of our question-answering application. We load a dataset of question-answer pairs from <code>data/qa_dataset.json</code>, run the chain over it, and grade the predictions with <code>QAEvalChain</code>, which uses an LLM to judge whether each predicted answer matches the reference answer. The graded results let us monitor and optimize the performance of the application.</p><h2>Closing thoughts</h2><p>LangChain has proven to be a valuable tool for working with large language models (LLMs) by simplifying the complexities involved. That said, the APIs for these language models are constantly evolving. Who knows, maybe down the line providers will just bake in a bunch of the functionality that LangChain provides right now. 
That could make LangChain a little redundant, or at least force it to switch things up a bit.</p><p>What I do think is that LangChain's model-agnostic approach could position it as a standard interface for LLMs across different platforms. If it becomes the go-to interface for working with any language model out there, that'd be a pretty sweet spot to be in.</p><p>Whichever way it shakes out, you have to appreciate the LangChain team for putting in the work to build something that's legitimately useful right now. They've helped a lot of folks build use-case-driven apps with LLMs. Even if LangChain ends up evolving or getting replaced down the line, their contribution to pushing this space forward is solid.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Dhrubajyoti&#8217;s Newsletter! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[What is this newsletter about?]]></title><description><![CDATA[I occasionally write about what I am building, researching, or thinking about.]]></description><link>https://codewdhruv.substack.com/p/what-is-this-newsletter-about</link><guid isPermaLink="false">https://codewdhruv.substack.com/p/what-is-this-newsletter-about</guid><dc:creator><![CDATA[Dhrubajyoti Chakraborty]]></dc:creator><pubDate>Tue, 27 Feb 2024 10:41:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/98725592-8f96-4571-9bbf-c0812af055ce_1313x938.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I occasionally write about what I am building, researching, or thinking about.<br>Topics usually sit at the intersection of neural networks, cloud-native systems, MLOps, and applied machine learning, particularly how we train, deploy, and reason about models at scale.<br>If you're interested in real-world ML, systems thinking, and the infrastructure behind AI, this might be useful.</p><h2>About me</h2><p>I am currently a Product Manager at Harness.io working on Software Engineering Insights &#8212; building software that helps teams understand and improve how software gets built.</p><p>Before that, I spent time in Developer Relations, where I focused on translating feedback into product and helping developers succeed with the platform.</p><p>Most of my work revolves around making ML systems more usable in production. I have explored a range of ideas, from training large models to building scalable MLOps architectures. 
I try to explain complex things simply.</p><p>Background in physics. Learned AI the long way.<br>Built some things, broke others. Still learning.</p><p>You can find some of my experiments <a href="#">on HuggingFace</a><br>Always open to interesting conversations &#8212; reach out at <strong><a href="mailto:me@codewdhruv.com">me@codewdhruv.com</a></strong> or connect on <a href="https://www.linkedin.com/in/dhruvvjyoti/">LinkedIn</a>.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://codewdhruv.substack.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://codewdhruv.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Dhrubajyoti&#8217;s Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item></channel></rss>