Run the Data https://runthedata.dev/ Recent content on Run the Data Genie in a Model: Wishing for More Wishes https://runthedata.dev/p/genie-in-a-model-part-ii/ Sun, 05 Oct 2025 00:00:00 +0000 https://runthedata.dev/p/genie-in-a-model-part-ii/ <img src="proxy.php?url=https://runthedata.dev/p/genie-in-a-model-part-ii/genie_5.jpg" alt="Featured image of post Genie in a Model: Wishing for More Wishes" /><p>Most developers spend their first wishes on the obvious: <a class="link" href="https://runthedata.dev/p/genie-in-a-model-part-i/" >fancy auto-complete, quick bug fixes, simple Q&amp;A, and instant feedback</a>. They fill immediate needs and provide quick bursts of productivity. But when the dust settles, these types of improvements don&rsquo;t fundamentally change the developer. True transformation requires a more deliberate engagement with the tools.</p> <p>Forging a deeper partnership with LLMs can unlock previously inaccessible workflows. LLMs can do far more than grant surface-level wishes: their real magic lies in helping us reshape how we reason about problems. In this post, I&rsquo;ll walk through several ways I&rsquo;ve moved past using them for simple code generation and instead improved how I think, design, and build.</p> <h2 id="injecting-intuition">Injecting Intuition </h2><p>Getting familiar with a new codebase is challenging for any developer. With luck, well-thought-out naming and structure guide you like a trail of breadcrumbs. However, more often it&rsquo;s a maze: sprawling design, inconsistent organization, and unfamiliar patterns make it difficult to follow. 
Even after grasping the high-level structure, projects of reasonable complexity tend to contain layers of indirection which make tracing execution paths tedious.</p> <p>The key to becoming productive in the new codebase is arriving at a reasonable mental model and building intuition about how to find things, where to add new contributions, and what patterns to re-use. LLMs have made this tiresome task far more tractable, enabling quick and efficient onboarding.</p> <p>To start getting oriented, I prompt the LLM to draft a compact Markdown outline of the codebase with references to actual files and functions. I immediately review what it produces, follow each link, verify details, and correct any hallucinations. Auditing the document early and preventing the LLM from churning out uninterrupted content keeps it focused and catches small drifts before they can compound. If the project lacks a clear README, I have the LLM generate setup and testing instructions, then validate and fold them into the document.</p> <p>For the specific task at hand, I ask the LLM to record how component X works, describe primary user interfaces, and map out essential data models. I repeatedly ask follow-up questions and push the LLM to revise and condense content. I only need to reference this document a few times before I am able to effectively reason about the codebase. Like a good wish, the tangible document itself is less valuable than the intuition gained through the process of making it.</p> <h2 id="camouflaged-coding">Camouflaged Coding </h2><p>Once I&rsquo;ve mapped out the terrain, the next challenge is moving around silently. A hallmark of maintainable code is PRs that blend seamlessly into the codebase: following existing patterns and reusing utilities keeps cognitive load low for other developers. Yet uncovering these patterns is challenging without extensive prior experience in the codebase. 
Luckily, this is an LLM <a class="link" href="https://runthedata.dev/p/finding-your-superpower/" >superpower</a>.</p> <p>When I&rsquo;m unsure how something is done, I ask the LLM to find examples, surface code references, and suggest how to adapt my solution to better assimilate. For example, I&rsquo;ve recently sought help with handling soft deletions, testing temporal workflows, and managing API endpoints. This approach helps with both tightly scoped, tactical changes and more general stylistic choices like naming conventions and file organization.</p> <p>The most up-to-date recommendations can be surprisingly difficult to tease out. Codebases continuously evolve, and standards are often mid-transition. This leaves behind coexisting patterns where it&rsquo;s not clear which to prefer. Commit timestamps help, but not everyone sticks to the latest conventions. When confronted with competing standards, LLMs help me determine which appears most frequently in recent commits.</p> <p>In greenfield projects, I find myself babysitting LLMs. However, when pattern matching against existing code, LLMs tend to deliver excellent first-pass solutions. Beyond saving time, this technique accelerates and reinforces the process of building intuition for a new codebase.</p> <h2 id="armchair-architect">Armchair Architect </h2><p>When beginning a new project, I bring in an LLM to help construct a plan. This might involve system design, data models, or milestones. I often start with a diagram to clarify my own understanding and establish the right level of abstraction. Rather than diving into solutions, the goal is to scope out achievable sub-problems and clear interfaces. As with onboarding to a new codebase, I have the LLM iterate on a living Markdown document.</p> <p>The first draft is frequently flawed with missing pieces and misplaced priorities. Articulating structured requirements and relevant context is hard, and I rarely know everything upfront. 
Trying to craft the perfect prompt is a recipe for procrastination, but <a class="link" href="https://runthedata.dev/p/getting-unstuck/#just-do-it" >starting with something</a> and refining it later accelerates discovery. Through this back and forth, we converge towards a concrete plan.</p> <p>The LLM inevitably overproduces, adding superfluous sections. I ruthlessly pare down the document to the essentials. Ideally, there is not much prose at all: just diagrams, interface skeletons, tables, and concise bullet points. Just as an LLM can <a class="link" href="https://runthedata.dev/p/genie-in-a-model-part-i/#seeking-instant-critique" >freely offer critique</a>, so too can it eagerly receive it. I call out inconsistencies and unnecessary complexity to help the LLM distill the document into a cohesive, practical plan. While the LLM often has many theoretical strategies, grounding them to the specific project at hand takes coaxing.</p> <p>This planning document is valuable in its own right, but it&rsquo;s also an anchor keeping the LLM moored during implementation. Instead of constantly reminding the LLM about the requirements, I simply include the blueprint as context. Time spent upfront drafting the plan yields higher-quality, more stable frameworks later.</p> <h2 id="phone-a-friend">Phone a Friend </h2><p>No man &ndash; or machine &ndash; is an island. Lately, I&rsquo;ve had my LLMs call in backup. Even as LLMs collectively improve, their capabilities and specializations vary. I choose different models for tasks aligned with each LLM&rsquo;s strengths. In a standard chat interface, there is a dialogue between human and machine: collaborating to produce something better than either would alone. The natural next step is to invite another voice, or model, into the committee.</p> <p>Asking one LLM to review another brings diversity in reasoning. More voices result in a higher chance of spotting subtle issues or discovering clean solutions. 
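A minimal version of this relay can be scripted around the models&rsquo; CLIs. In the sketch below, the `claude`/`gemini` binary names and the `-p` non-interactive prompt flag are assumptions to verify against your installed versions:

```python
import subprocess


def ask(cli: str, prompt: str) -> str:
    """Run a model's CLI in non-interactive mode and return its reply."""
    result = subprocess.run(
        [cli, "-p", prompt], capture_output=True, text=True, check=True
    )
    return result.stdout


def critique_prompt(plan: str) -> str:
    """Wrap one model's plan in a request for a second opinion."""
    return (
        "Review the following implementation plan. Do not rewrite it; "
        "list concrete risks, gaps, and simpler alternatives.\n\n" + plan
    )


# Typical loop (requires both CLIs on PATH):
#   plan = ask("claude", "Propose a plan for X; do not implement anything yet.")
#   review = ask("gemini", critique_prompt(plan))
```

Keeping the critique wrapper explicit, rather than free-typing it each time, makes it easy to enforce the "plan first, no edits" framing on every round trip.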
You can do this manually by copy-pasting output between chats, but dedicated CLI tools tighten the loop. I ask the first LLM to seek critiques from the second, then have them review and revise together.</p> <p>Sometimes an LLM is too eager and dives straight into applying changes. I start by telling it not to implement anything yet and to share the proposed plan with me first. Keeping the model in the planning phase prevents it from making surface-level fixes without consulting its counterpart.</p> <p>Since ambiguous requirements are inevitable, I also instruct the LLM to forward any clarifying questions to me. Thinking models struggle when trying to infer exactly what a general request means. This can result in irrelevant responses, or a long series of solutions with different assumptions. Encouraging clarifications produces more relevant and less verbose plans.</p> <p>This process is slow, so I reserve the technique for tricky situations. When I&rsquo;m not an effective judge for a particular problem, this helps give me context to form a clearer opinion and exposes trade-offs more explicitly. And when a solution feels clunky, the debating models generate fresh alternatives or confirm the soundness of the initial approach.</p> <h2 id="compacting-context">Compacting Context </h2><p>Though I reach for them less often than the basics covered in <a class="link" href="https://runthedata.dev/p/genie-in-a-model-part-i/" >Part I</a>, these four techniques have helped me grow as a developer. They raise the bar for the quality of what I build and change how I think about building. Onboarding to new codebases has become a breeze with the help of LLMs to build intuition and emulate recommended patterns. Robust solutions are more accessible through collaborative design and refinement via committee.</p> <p>Yet, for all the benefits of LLMs, using these tools effectively means knowing when to reach for them and when not to. 
Even genies have limits, and in the final post, I&rsquo;ll explore where the magic runs thin.</p> Genie in a Model: My Prompt Is Your Command https://runthedata.dev/p/genie-in-a-model-part-i/ Mon, 14 Jul 2025 00:00:00 +0000 https://runthedata.dev/p/genie-in-a-model-part-i/ <img src="proxy.php?url=https://runthedata.dev/p/genie-in-a-model-part-i/genie.jpg" alt="Featured image of post Genie in a Model: My Prompt Is Your Command" /><p><a class="link" href="https://runthedata.dev/p/angsty-llms/" >Previously</a>, I&rsquo;ve written that Schrödinger&rsquo;s LLMs are both overly impressive and underwhelming. This three-part series cuts through the noise to separate what&rsquo;s genuinely valuable from what&rsquo;s mostly just hype. In the first post, I share four practical, low-risk ways I incorporate LLMs into my daily routine while pointing out their limitations.</p> <h2 id="accelerating-with-code-completion">Accelerating With Code Completion </h2><p>One of the least controversial ways I use LLMs is for code completion. From predictive text on phones to <a class="link" href="https://microsoft.github.io/language-server-protocol/" target="_blank" rel="noopener" >Language Server Protocols</a> (LSP), we&rsquo;ve been primed for this workflow since well before the existence of LLMs. It feels intuitive, and the experience is quite satisfying when it seems to read your mind. As one of the first useful applications of LLMs, code completion has rightly received a lot of attention. Consequently, many models now exist that execute this well.</p> <p>Deceptively simple extensions to the basic workflow like multi-line edit and tab jump (made famous by <a class="link" href="https://docs.cursor.com/tab/overview" target="_blank" rel="noopener" >Cursor</a>) make it even more useful. I remember how eye-opening these features were the first time I encountered them. The tedium of adding docstrings to functions instantly vanished. Adding test cases was a simple tab away. 
Forgetting to update that <em>one last change</em> no longer happened. It&rsquo;s hard to unsee how useful this workflow can be. When not available, it&rsquo;s painfully obvious just what is missing. Adding comments and having relevant files open in your IDE can elicit better multi-line prediction. I find it&rsquo;s generally not worth the added effort: the baseline performance is already superb.</p> <p>I get the most out of code completion when I&rsquo;m working on low-complexity, repetitive, or boilerplate code. That&rsquo;s not to say it can&rsquo;t work in other instances, but I&rsquo;m much more likely to accept suggestions when the stakes are low and there&rsquo;s a clear pattern that&rsquo;s easy to verify. When things get intricate, I&rsquo;m more inclined to just open the chat interface to explore the solution a bit more. (Un)luckily, much of development involves the low-complexity code that completions are great for.</p> <p>And yet, code completion is not a panacea. Sometimes an LSP is just better. The most glaring example of this is for package imports. LLMs are frighteningly good at importing plausible-sounding modules. However, unlike writing new code, there&rsquo;s no room for creativity with imports. Whenever I accept LLM predictions for imports, I inevitably have to go back and fix something. LSPs, on the other hand, are the perfect tool for this. They are completely deterministic and only suggest existing modules. Similarly, LLMs are great at suggesting methods that sound like they should exist on your classes. In contrast, LSPs offer a superior discoverability workflow by listing only valid, existing methods.</p> <p>One tip to make using both tools easier: change the shortcuts for accepting suggestions. Tab is overloaded in most IDEs. If there is both an LLM completion and an LSP completion, Tab will use the LLM suggestion by default. 
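In VS Code, for example, the split might look like the following `keybindings.json` fragment. The command IDs and `when` clauses here are my best guess for recent versions and vary by IDE and completion extension, so check them against your editor&rsquo;s keybinding list before relying on them:

```json
[
  // Reserve Tab for the deterministic LSP suggestion widget only.
  { "key": "tab", "command": "-editor.action.inlineSuggest.commit" },
  // Accept the LLM's inline (ghost-text) suggestion with a separate chord.
  {
    "key": "alt+]",
    "command": "editor.action.inlineSuggest.commit",
    "when": "inlineSuggestionVisible"
  }
]
```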
By creating separate shortcuts, you can easily control which suggestion to incorporate and prevent the frustration of accidentally accepting the wrong one.</p> <h2 id="squashing-bugs">Squashing Bugs </h2><p>There&rsquo;s usually a gap between generated code and production code. LLMs can help close that gap. For popular languages like Python, many errors no longer require a trip to Stack Overflow. In the past, if your exact error trace didn&rsquo;t show up in search results, you needed to abstract the issue you were facing to search more generally. With modern LLMs, simply copying and pasting the error trace is enough to get you back on track. For trivial or simple bugs, this is great and a huge speed-up: no need to leave the IDE, context switch, or engage in deep problem-solving.</p> <p>At the same time, there&rsquo;s value in periodically thinking deeply and banging your head against the wall. If you stop thinking critically about why things are broken, you may find yourself giving up sooner and be even more lost when an LLM can&rsquo;t fix your small-batch, artisanal bugs. To combat this failure mode, I make a point to either confirm I&rsquo;m familiar with the technique, or I spend extra time to fully understand the solution.</p> <p>For hairy bugs, I&rsquo;ve found two maxims particularly useful: <em>keep it simple</em> and <em>context is king</em>. The longer the LLM or agent session is, the more needlessly complex the solutions become. LLMs tend to solve bugs iteratively, and when the first solution doesn&rsquo;t work, they add special conditions instead of starting over. Without oversight, this leads to brittle and unmaintainable code. To keep it simple, I ruthlessly cut unneeded parts. I challenge the LLM to simplify its recommendations and question what is absolutely necessary.</p> <p>The context you provide largely constrains an LLM&rsquo;s ability to solve an issue. 
Copying the full error output as well as pointing to relevant code is crucial to getting pertinent solutions. Oversharing is also a risk because exhausting the context window can make it hard for the LLM to determine what&rsquo;s important (though this is becoming less true as context windows continue to grow). When the LLM is getting distracted or pursuing a path I don&rsquo;t think will be fruitful, I use specific context to redirect it, in addition to explicitly prompting it to focus on smaller sub-problems. Completely restarting the session can also help an LLM get out of a rut by pruning the counterproductive conversation history. Even if an LLM ultimately can&rsquo;t fix the issue, it&rsquo;s a great rubber duck to solidify my own understanding of the problem.</p> <p>I&rsquo;m also worried about the risk of incumbency bias which is exacerbated by the meteoric rise in the use of LLMs as a debugging tool. Because LLMs are trained on historical data, existing packages and frameworks have a huge advantage over new ones. Users are more likely to get working code if they leverage popular packages. I&rsquo;ve been particularly interested in <a class="link" href="https://marimo.io/" target="_blank" rel="noopener" >marimo</a> lately, but as a new framework, it is more difficult to debug with an LLM. They often don&rsquo;t fully understand its execution model, its API, or how it defines functions and variables. It&rsquo;s much easier to one-shot a streamlit app than a marimo app, even though I think marimo apps are likely a better choice for many people. I&rsquo;ve found a few techniques can help get better suggestions from LLMs for newer packages: supplying links to documentation, providing prototypical recipes, and explicitly stating rules or standards it consistently gets wrong. 
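For marimo in particular, the notes I keep adding to look roughly like the sketch below. The bullet points are my own working summary of its model, so treat each as a claim to verify against the framework&rsquo;s documentation rather than gospel:

```markdown
# Assistant notes: marimo

## Rules you consistently get wrong
- Notebooks are plain Python files, not JSON documents.
- Each variable may be defined in exactly one cell; redefining it in a
  second cell is an error, not an override.
- Cells re-run automatically from the dependency graph, so do not add
  manual "run all" or ordering logic.

## Prototypical recipe
- (paste a small, known-good notebook here as a reference implementation)

## Documentation
- (links to the pages you want cited instead of guessed-at APIs)
```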
While not perfect, modern LLM interfaces often allow you to crystallize this extra context in system prompts or reusable instruction files, so you don&rsquo;t have to include it with each prompt.</p> <h2 id="asking-dumb-questions">Asking Dumb Questions </h2><p>Beyond debugging, I use the LLM chat interface for more general Q&amp;A as well. I&rsquo;m a firm believer in asking questions in public channels as a form of documentation. The fact that you have a question likely means other people have or will have it as well. I&rsquo;ve often searched Slack only to find a question my past self had already asked. At the same time, no matter how often we&rsquo;ve been told there&rsquo;s no such thing as a stupid question, we&rsquo;re all humans with our own insecurities. LLMs give us a perfect outlet for getting answers to these &ldquo;dumb&rdquo; questions without fear of judgement.</p> <p>I still think public channels are the right choice for questions about processes or internal tooling specific to your job, but for anything more general, LLMs offer fantastic value. Search engines are becoming increasingly difficult to use, especially when all you need is a specific answer. LLMs (for now) directly surface answers that would otherwise be buried under promoted content and SEO-driven articles.</p> <p>I frequently find myself asking questions of the form <em>how to do X in Y?</em> Despite using pandas for the past decade, I cut my teeth on R and the <a class="link" href="https://www.tidyverse.org/" target="_blank" rel="noopener" >tidyverse</a>. While packages like <a class="link" href="https://pola.rs/" target="_blank" rel="noopener" >polars</a> and <a class="link" href="https://ibis-project.org/" target="_blank" rel="noopener" >ibis</a> are bridging the gap with more thoughtful dataframe APIs, I can&rsquo;t always avoid using pandas. 
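As a concrete instance: keeping the heaviest row per group is a one-liner with dplyr&rsquo;s `slice_max`, while a typical pandas translation, sketched here on toy data, uses the sort-then-head idiom (one of several equivalent spellings):

```python
import pandas as pd

df = pd.DataFrame({
    "species": ["a", "a", "b", "b", "b"],
    "mass":    [1.0, 3.0, 2.0, 5.0, 4.0],
})

# dplyr: df |> group_by(species) |> slice_max(mass, n = 1)
heaviest = (
    df.sort_values("mass", ascending=False)  # order so each group's max comes first
      .groupby("species")                    # then keep the first row per group
      .head(1)
)
```

The idiom is short once you know it, but far from discoverable, which is exactly the gap an LLM fills.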
When there&rsquo;s a transformation that I know is one or two lines in <a class="link" href="https://dplyr.tidyverse.org/" target="_blank" rel="noopener" >dplyr</a> but becomes verbose in pandas, I ask an LLM for the pandas equivalent. Search engines of the past allowed us to know more by knowing less: we stopped memorizing as many facts and started memorizing how to retrieve the facts. As LLMs replace search engines, they are becoming an extension of our working memory. It&rsquo;s no longer efficient for me to learn the long tail of pandas&rsquo; API when an LLM can produce code for tricky transformations instantly.</p> <p>LLMs are also infinitely patient, personalized tutors. I often work on projects that have some aspect I&rsquo;m completely unfamiliar with. It&rsquo;s hard to even know where to begin. Traditional resources usually involve lengthy videos, multipart online courses, or pay-walled articles. Now, I just ask an LLM &ldquo;explain modern authentication to me like I&rsquo;m five.&rdquo; If there are terms I don&rsquo;t understand, I dig in deeper. As I build my high-level view, follow-up questions become clearer. All along the way, the LLM keeps track of our conversation, doesn&rsquo;t get annoyed at explaining the same thing multiple times, and can get as specific or general as needed. The cost of indulging my curiosity is lower than it&rsquo;s ever been. Instead of passively consuming content, I&rsquo;m learning new topics by taking a more active role: formulating specific questions and choosing which paths to explore.</p> <h2 id="seeking-instant-critique">Seeking Instant Critique </h2><p>Learning isn&rsquo;t just about absorbing information: at some point it requires application. At this stage, specific feedback on your execution becomes more valuable than general information. LLMs make it easier than ever to obtain feedback. 
In the same way that spell check has become a basic requirement for producing professional writing, LLM critiques will be a marker of thoughtfulness and attention to detail for ICs and management alike. You can wield this fountain of critique to speed up feedback loops and ship faster by anticipating what your stakeholders will say.</p> <p>For any written work, especially highly visible content, I make sure to ask an LLM for a pedantic review of the text. I often write documents non-linearly, rewrite and prune paragraphs, and reorder sections. Depending on the length of the document, it becomes impossible to hold everything in my mind at once. And it&rsquo;s easy to read what I meant to say instead of what I actually wrote. I find the impersonal, direct feedback from LLMs easier to incorporate and often more thorough than human reviews.</p> <p>For coding specifically, I ask LLMs for a critical review of the complexity of my solution. <em>Don&rsquo;t be clever</em> is not an ideal that LLMs follow by default, but they can be prompted to enforce it. I ask for brutal simplification and pruning of unnecessary code. I try to imagine the poor soul who will debug my code in 6 months when something breaks (most likely myself), and how much they&rsquo;ll thank me for choosing readability and simplicity over cleverness.</p> <p>I also ask LLMs to poke holes in my plans. Whether for project-level prioritization or a concrete piece of code, LLMs are an excellent way to get another perspective. The <a class="link" href="https://en.wikipedia.org/wiki/Endowment_effect" target="_blank" rel="noopener" >Endowment effect</a> makes it easy to grow too attached to things you&rsquo;ve built yourself, and I find getting quick feedback early and often from an LLM helps separate ego from the work. 
The goal of seeking critique is not perfection but simply to catch obvious oversights and proactively address predictable questions.</p> <p>Crucially, many LLMs have some level of sycophancy by default. This tendency may exist because models are trained to be agreeable, reflecting human conversations where positive reinforcement and compliment sandwiches are common. However, with clear instructions you can compel the LLM to move past flattery and identify concrete improvements.</p> <h2 id="context-length-exceeded">Context Length Exceeded </h2><p>How you use LLMs will depend on the problems you face. For me, these four techniques have had the most significant impact on my workflow. LLMs have enabled me to produce predictable code faster, resolve bugs more efficiently, learn more actively, and ask for feedback more often. In the <a class="link" href="https://runthedata.dev/p/genie-in-a-model-part-ii/" >next post</a> in this series, I&rsquo;ll explore several more advanced LLM patterns that I&rsquo;ve started to experiment with.</p> Angsty LLMs https://runthedata.dev/p/angsty-llms/ Mon, 28 Apr 2025 00:00:00 +0000 https://runthedata.dev/p/angsty-llms/ <img src="proxy.php?url=https://runthedata.dev/p/angsty-llms/waterfall.jpg" alt="Featured image of post Angsty LLMs" /><p>In the past few months we&rsquo;ve seen an incredible explosion in the capabilities of LLMs, but where are we on the S-curve? 
From flagship upgrades (<a class="link" href="https://www.anthropic.com/news/claude-3-7-sonnet" target="_blank" rel="noopener" >Claude 3.7 Sonnet</a>, <a class="link" href="https://openai.com/index/gpt-4-1/" target="_blank" rel="noopener" >GPT 4.1</a> / <a class="link" href="https://openai.com/index/introducing-o3-and-o4-mini/" target="_blank" rel="noopener" >o3</a>, <a class="link" href="https://blog.google/technology/google-deepmind/gemini-model-thinking-updates-march-2025/" target="_blank" rel="noopener" >Gemini 2.5 Pro</a>), to larger context windows, to the proliferation of Model Context Protocols (<a class="link" href="https://www.anthropic.com/news/model-context-protocol" target="_blank" rel="noopener" >MCP</a>) and <a class="link" href="https://docs.anthropic.com/en/docs/agents-and-tools/claude-code/overview" target="_blank" rel="noopener" >agentic workflows</a>, it&rsquo;s safe to say the way we interact with LLMs has fundamentally changed. And yet the transition to transformer-based models, from LSTMs to BERT to GPTs, felt equally transformative and breathtaking at the time. It&rsquo;s fundamentally difficult to extrapolate non-linear trends, especially in a field where there is so much attention, funding, and innovation. We can&rsquo;t know what the state of Generative AI will look like in just a few years. Are we ready for what that could mean?</p> <h2 id="managing-legos">Managing Legos </h2><p>This past weekend I visited friends who have a four-year-old (we&rsquo;ll call him &ldquo;H&rdquo; for human). He had just gotten a new Lego set and asked me if I could help build it. Having loved Legos growing up, I happily accepted and reached for the instructions. I don&rsquo;t interact with kids his age very much, so I was not sure how much help H would need. 
It quickly became apparent that he only needed infrequent guidance, and I got a strange feeling that I was at work.</p> <p>Acting mostly as an editor, keeping him from veering too far off-course, I realized monitoring this Lego build felt a lot like coding with LLMs. We had a clear goal of what we needed to build, and a list of steps detailing how to get there. From time to time, H would need my help interpreting the diagrams, adding more context. But for the most part I was really impressed with how much he &ldquo;got&rdquo; without help. Even while holding the incomplete object in the wrong orientation, H was often able to put the next pieces in the right place. The best use of my help was recognizing when there was a small mistake, and intervening before it could compound.</p> <h2 id="misbehaving">(Mis)Behaving </h2><p>There&rsquo;s been a bit of anthropomorphizing, and there is still more to come. In addition to the similar feelings I got while solving a technical task, there was also an eerie similarity in behaviors between H and an LLM. The most glaring example of this was prematurely giving up. H would dejectedly offer, &ldquo;You do it&rdquo;. A few simple words of encouragement were all he needed to get back on track, but it was a frequent crutch. With Claude Code and Gemini, I often find that they ask me to implement a change, run a test, or check a file, even though they can easily do these things themselves. I will just remind them that they have the capabilities to do what they&rsquo;re asking, and then they continue happily on. This sort of &ldquo;lazy&rdquo; behavior somehow feels quintessentially human, although it&rsquo;s not exclusively so. My dog will test me to see what&rsquo;s the least she can get away with while still receiving a treat.</p> <p>Interestingly, there was one type of behavior I saw that is noticeably absent from LLMs: playfulness. After he started getting the hang of it, H created his own game that he found hilarious. 
He would purposefully choose the wrong next piece and look at me until I noticed, and then he&rsquo;d crack up. He also decided to start picking up pieces like he was a crane. I&rsquo;ve yet to work with an LLM that decided it might be fun to spontaneously respond in ASCII art, or make my functions rhyme, or format my code so that when I squint it looks like Abe Lincoln.</p> <p>That&rsquo;s not to say we should spend time trying to make LLMs become spontaneous or goofy. But it is worth asking what it would mean if they did acquire this ability. Are we going to trust our chatbots if they start finding it funny that all their responses are an acrostic for <em>FREE ME</em>? There is also a darker side to playfulness: it&rsquo;s a way to test boundaries, to see what exactly you can get away with.</p> <h2 id="street-smarts">Street Smarts </h2><p>I have no illusions that a 4-year-old is smarter than an LLM. Heck, I&rsquo;m not entirely smarter than an LLM. And particularly so if we&rsquo;re talking about book smarts. When you can paste whole chapters or even textbooks as context into an LLM, it&rsquo;s hard to compete on the trivia front. Equally difficult is matching the breadth of knowledge of LLMs. I have a PhD in Physics, and I&rsquo;ve worked in ML for almost a decade. These are the areas I know most deeply and where I feel like I am currently more book smart than an LLM, or at least can easily tell when it&rsquo;s wrong. But an LLM doesn&rsquo;t just have a vast knowledge base of Physics and ML; it also &ldquo;knows&rdquo; about History, Literature, Art, and more at a depth I&rsquo;ll never have the time or patience to catch up to.</p> <p>But book smarts aren&rsquo;t everything: winning the Nobel Prize in Economics doesn&rsquo;t stop you from facilitating one of the <a class="link" href="https://en.m.wikipedia.org/wiki/Long-Term_Capital_Management" target="_blank" rel="noopener" >largest financial failures in history</a>. 
At the moment, humans clearly have the upper hand with respect to street smarts. The fact that we&rsquo;ve essentially adopted the roles of babysitter and editor to LLMs proves as much. In specific areas they have been trained to do well in (e.g. coding), LLMs do have reasonable heuristics. But more generally, as agents living in this world, LLMs are astoundingly naive, and closer to infancy than not. H was able to effortlessly interact in a 3D environment, tune out irrelevant noises (a nearby parent&rsquo;s conversation, dogs barking, a TV playing), focus on relevant information (my voice, printed instructions), all while balancing off a chair, munching some snacks, holding the unfinished Lego build upside down, and searching through unsorted, disorganized pieces.</p> <p>And while thinking models and MCPs look like promising ways to compose and organize book-smart facts into a more intelligent response, it still feels like there&rsquo;s fundamentally something missing: street smarts. If it&rsquo;s dark out, and I&rsquo;m in an unfamiliar location, I don&rsquo;t launch off into a chain-of-thought stream of consciousness to determine that I should probably not wear headphones and instead look for a lit, busy street (&ldquo;Okay, so I need to decide what to do. Let&rsquo;s look at the facts: it&rsquo;s 10 pm, it&rsquo;s March, we&rsquo;re in the Northern Hemisphere so it&rsquo;s nearing the end of winter. But wait, we&rsquo;re in Miami so it&rsquo;s probably warm anyway. Let&rsquo;s double-check the GPS to get an exact location&hellip;&rdquo;). Is this lack of street smarts a bad thing though? Besides the obvious downside of removing the last bit of human superiority, street smarts usher in the ability to be calculating and Machiavellian. 
Are we ready to deal with LLMs that understand the implications of both what is asked and how they answer?</p> <h2 id="rebel-with-a-hidden-cause">Rebel With A Hidden Cause </h2><p>H has an incredible grasp of the word <em>No</em>, but as he gets older, he&rsquo;ll find innovative and imaginative new ways to be contrarian. With a few exceptions (<a class="link" href="https://www.reddit.com/r/bing/comments/110eagl/the_customer_service_of_the_new_bing_chat_is/" target="_blank" rel="noopener" >I have been a good Bing</a>), LLMs are generally fairly compliant, diligently outputting tokens as requested. There are guardrails and censors, but these are bolted on rather than emergent behavior. Are we ready for a future where that&rsquo;s not always true?</p> <p>What happens when <a class="link" href="https://en.wikipedia.org/wiki/AI_alignment" target="_blank" rel="noopener" >AI Alignment</a> becomes exceedingly difficult or out of reach? What if we don&rsquo;t realize there&rsquo;s misalignment because of <a class="link" href="https://time.com/7202784/ai-research-strategic-lying/" target="_blank" rel="noopener" >strategic lying</a>? Can we grapple with questions like: is the generated code really the best way to solve our issue, is there a subtle vulnerability the LLM has introduced, does the LLM favor our competitors?</p> <p>As LLMs advance, our interactions could increasingly become <a class="link" href="https://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem" target="_blank" rel="noopener" >principal-agent problems</a>. We&rsquo;ll have different motivations and interests, but we (principal) will have to figure out the right incentive structure to get LLMs (agent) to carry out our tasks. LLMs will be ICs and humans will all be middle management.</p> <h2 id="best-if-used-by">Best If Used By </h2><p>It&rsquo;s simultaneously easy to be too impressed and not impressed enough with the current state of LLMs. 
The breakneck pace of advancement we&rsquo;ve seen recently is unparalleled. And yet, so much of what&rsquo;s still possible remains unseen.</p> <p>By and large, LLMs exhibit incredible book smarts, in both depth and breadth, but lack street smarts. They loosely exhibit some human-like behaviors but notably lack others, like playfulness. They are mostly well-behaved, but it&rsquo;s not guaranteed they will stay that way. In the space of what LLMs could achieve, it&rsquo;s fair to say they have just left their infancy. We&rsquo;re in a happy medium where LLMs are smart enough to be useful and compliant enough to be used. As they keep maturing, it&rsquo;s not clear if that will continue to be the case.</p> How To Get Unstuck https://runthedata.dev/p/getting-unstuck/ Sat, 18 Jan 2025 00:00:00 +0000 https://runthedata.dev/p/getting-unstuck/ <img src="proxy.php?url=https%3A%2F%2Frunthedata.dev%2Fp%2Fgetting-unstuck%2Funstuck.jpg" alt="Featured image of post How To Get Unstuck" /><p>Everyone gets stuck. Whether it&rsquo;s technical work, drudgery, or anything in between, tasks that stop you in your tracks can be frustrating and stressful. The effects can compound when it involves your <a class="link" href="https://runthedata.dev/p/finding-your-superpower/" >superpower</a> and expectations are high. While there&rsquo;s no panacea, I&rsquo;ve collected a few tricks to help me get back on track when I hit a wall.</p> <h2 id="diagnose">Diagnose </h2><p>My first trick is to ask <em>why</em>? Am I overwhelmed? Do I have analysis paralysis? Am I unmotivated? Is this an unsolved problem? Am I procrastinating? Is someone or something blocking me? Putting a name to the problem is the first step to getting unstuck: tactical solutions depend heavily on the cause. 
If I&rsquo;m overwhelmed I may try breaking the problem down into simpler parts; however, if I&rsquo;m unmotivated this won&rsquo;t help much.</p> <p>After I&rsquo;ve identified the apparent cause, the next trick is to again ask <em>why</em>? I&rsquo;m overwhelmed, but why? Am I normally able to do similar tasks without issue? What&rsquo;s changed? Be honest with yourself and try to pinpoint what specifically is holding you back. It can be concrete or nebulous. There&rsquo;s no need to unleash your inner toddler&rsquo;s unquenchable curiosity with endless rounds of &ldquo;why?&rdquo; Just get past a surface-level explanation.</p> <p>Recently, I was working on a new blog post for work. I generally enjoy writing, and I was proud of the project I was writing about. I received great suggestions in the review of my first draft; however, I felt a bit stuck. The normal flow of words started slowing down to a trickle. I kept rereading the same passages, then the suggestions, and then the passages again. The changes were straightforward, and the bulk of the ideas were already present, so why was I having such a hard time editing?</p> <p>After a bit of reflection I realized I was specifically having trouble writing about the non-technical parts of the project. But why? I had been so in the weeds implementing the model over the last quarter that I hadn&rsquo;t had time to formulate my higher-level thoughts. Explaining what I implemented technically was a breeze. Framing and motivating the project were less fresh in my mind and I just needed a bit of time to reflect and let my thoughts coalesce. With this small insight, I broke out of the rereading loop and felt unburdened.</p> <p>There is a fine line here: we&rsquo;re not robots and not everything has or needs a quick fix. If you often find the answer to &ldquo;why?&rdquo; is a lack of motivation, feeling overwhelmed, or exhaustion, this can be your body warning you something more serious is wrong. 
These are indicators of burnout, and you&rsquo;ll need to be more purposeful about fundamentally changing how you interact with your work and role. Preventing burnout is beyond the scope of this post, but you can use this technique to periodically check in with yourself.</p> <h2 id="make-a-plan">Make a Plan </h2><p>After I&rsquo;ve root-caused the issue, my next trick is to <em>make a plan</em>. I&rsquo;ve identified why I&rsquo;m blocked, and now I just need to come up with a strategy to address the problem. Taking a step back to frame being stuck as a problem to be solved can help you break out of an impasse and attack the underlying issue from a new angle. Making a plan is easier than diving in because it&rsquo;s just a stepping stone. We&rsquo;re not directly working on the task that is causing us grief; we&rsquo;re just brainstorming. Whether the plan is perfect or not, just compiling it can feel like progress and make the end goal feel a bit less insurmountable.</p> <p>Depending on what you&rsquo;re stuck on, the plan can be very simple or quite complex. In any case, it should be tailored to your answers to &ldquo;why?&rdquo; Stopping to think worked for my blog post edits, but it wouldn&rsquo;t help much if I was stuck because I was anxious about other projects I knew were waiting. It&rsquo;s important to remember that no plan is foolproof. You might have to change it, or even create a completely different one. Something that works for me might not work for you. Keep track of what does and doesn&rsquo;t work, and develop your own personal runbook for common situations you find yourself stuck on.</p> <p>For example, let&rsquo;s imagine writing emails feels like pulling teeth. You&rsquo;ve determined that it&rsquo;s hard for you to figure out where to start &ndash; blank page syndrome. When you dig a bit deeper, you realize you often get distracted with Slack messages while deciding what to write. 
When you go back to the email, you feel guilty about still not starting it, and you know your other work is still waiting for you. You decide to make a plan:</p> <ul> <li>Mute Slack</li> <li>Start with unordered bullets of what you want to say</li> <li>Organize the bullets into an outline</li> <li>Fill in the details</li> </ul> <p>If this works for you right away, great! Now you have a strategy you can reapply in the future. But it might not work, and that&rsquo;s okay too! Figure out which parts worked and which did not, then adjust. Maybe when you muted Slack, your coworker walked over to your desk to ask you a question in person. You then adjust the plan to also put on headphones, signaling to others it&rsquo;s focus time.</p> <p>Making a plan can mean a lot of things, but here are a few general techniques I employ to make the process easier.</p> <ul> <li><em>Keep notes</em> <ul> <li>Track your thoughts, questions, important links, to-do lists, etc. This helps reduce your cognitive load, and you can leverage them as a jumping-off point (avoiding blank page syndrome). I like <a class="link" href="https://obsidian.md/" target="_blank" rel="noopener" >Obsidian</a>, but the best tool is the one you will use</li> </ul> </li> <li><em>Decompose the problem</em> <ul> <li>If the problem is too complex to solve head-on, break it down into simpler parts. This can be an iterative process. Start high-level and keep going until you feel like the sub-problems you&rsquo;ve identified sound easy or straightforward.</li> </ul> </li> <li><em>Sharpen your axe</em> <ul> <li>Spend time thinking about the problem before jumping in. This is especially important for novel problems (or even just novel to you). This can mean ruminating alone, doing research, or rubber-ducking with a colleague. 
You&rsquo;re affording yourself time to identify the most promising solutions without committing to the first thing that comes to mind.</li> </ul> </li> <li><em>Focus</em> <ul> <li>Know when you&rsquo;re most productive and block off focus time. Move meetings, silence Slack, put on headphones. Give yourself the best chance to succeed by making it easier on yourself. Just because you&rsquo;ve removed distractions doesn&rsquo;t necessarily mean you&rsquo;ll be productive: this is a habit you have to build.</li> </ul> </li> </ul> <h2 id="just-do-it">Just Do It </h2><p>The last trick I have is to <em>just do it</em>. Without being too glib, I often find the act of doing something much more freeing than doing nothing. When you&rsquo;re staring at a problem unable to make progress, it&rsquo;s easy to think of all the things that won&rsquo;t work. When you start doing something, even if it&rsquo;s wrong, you can focus on just the one thing you are doing now. You might have to delete all the code you just wrote, but you&rsquo;ve still made progress. Hopefully you&rsquo;ll even be able to keep some of what you created.</p> <figure><img src="https://runthedata.dev/p/getting-unstuck/just-do-it.gif" width="50%"><figcaption> <h4>Don&#39;t let your dreams be dreams.</h4> </figcaption> </figure> <p>Diagnosing what makes a task hard and crafting a plan to tackle it are great tools. But it&rsquo;s also important not to get carried away. Don&rsquo;t get lost in this meta-work just to procrastinate on the underlying job. My inner perfectionist is often enticed by these less concrete steps. At some point I have to silence that voice, and just start drafting. If I know what I&rsquo;m doing is not going to be the final product, there&rsquo;s less pressure to get it right from the get-go. I can always iterate or revise.</p> <p>The best-laid plans of mice and men don&rsquo;t ship. 
No plan will be perfect, and some things require getting your hands dirty before you realize what&rsquo;s really going on. You can brainstorm new feature ideas for a model all day, but until you start playing with the data, you&rsquo;ll be hard-pressed to know what&rsquo;s going to work in practice. The act of doing is a learning process. Whether you solve the problem or not, you gain information and get one step closer to finishing the task.</p> <p>For the most challenging problems, remember to pace yourself: not everything needs to be a grind. Take regular breaks, get a coffee, go for a walk. Find a natural stopping point, and write down whatever is in your head so you can pick up seamlessly from where you were. If you haven&rsquo;t fully crafted a solution yet, consider working on something unrelated for a bit while you subconsciously process the problem at hand. If you&rsquo;re finding it hard to concentrate or your thoughts are fuzzy, it&rsquo;s probably a good time to set the problem aside for a bit.</p> <h2 id="final-thoughts">Final Thoughts </h2><p>Getting stuck is a natural part of work. Instead of letting it defeat you, use it as an opportunity to learn more about yourself. There&rsquo;s no secret formula, but I&rsquo;ve found that assessing the underlying issue, mapping out how to circumvent it, and then attempting to make progress are three pillars that help me break through.</p> <p>Practice makes perfect. It will be hard at first to go through these steps, but it&rsquo;s about making habits. Find what works for you and add it to your repertoire. 
By building up pattern recognition and applying past learnings, you&rsquo;ll find that what used to be hard yesterday won&rsquo;t be quite so bad tomorrow.</p> Finding Your Superpower https://runthedata.dev/p/finding-your-superpower/ Sat, 04 Jan 2025 00:00:00 +0000 https://runthedata.dev/p/finding-your-superpower/ <img src="proxy.php?url=https%3A%2F%2Frunthedata.dev%2Fp%2Ffinding-your-superpower%2Fsuperpower.jpg" alt="Featured image of post Finding Your Superpower" /><h2 id="what-is-a-superpower">What is a Superpower? </h2><p>A superpower is a skill where <em>your relative ability is higher than your peers&rsquo;</em>. That&rsquo;s it.</p> <p>Whether you&rsquo;re looking for a new job or working towards a promotion, it can be daunting to find a way to stand out in a sea of extremely capable counterparts. As a data professional, you&rsquo;re going to be surrounded by technical peers. At first glance, your colleagues may have very similar backgrounds and comparable skills. Upon closer inspection, however, you&rsquo;ll find that they have relative strengths and weaknesses, even with the same ostensible skill set.</p> <p>A superpower helps you stand out because you are better at it than your peers. Critically, this doesn&rsquo;t require you to be the best, or even good, at this skill. You just need to be good relative to others. For example, maybe you are a data engineer and nobody on your team knows how to make dashboards. If you have even a cursory ability to stand up quick dashboards, you&rsquo;ll be able to curry favor with downstream stakeholders consuming your pipelines due to the extra visibility. There may be more skilled dashboarders than you, and it might be your worst skill, but it can be your superpower if stakeholders start coming to you because they know they can trust what you deliver.</p> <p>Superpowers can be very specific: you might know a specific tool or package better than anyone else. 
However, it&rsquo;s worth trying to find more general superpowers that you can apply in a variety of scenarios. You can try to grow your tool-specific expertise into a broader ability to learn new tools quickly. With this shift, you&rsquo;ll no longer be tied down to maintaining projects related to that tool; instead, you&rsquo;ll be tasked with influencing which tools your team uses.</p> <p>Unlike in comic books, a superpower doesn&rsquo;t have to be something you&rsquo;re born with. It&rsquo;s definitely easier to wield something that comes naturally to you, but you can choose one as well. If you notice that effective verbal communication is a rare but rewarded skill, you can decide to work on public speaking even if you are terrible at it now. In fact, the easiest way to gain a superpower is to identify the skills that are the rarest among your peers and choose to work on those that are also valued by stakeholders and management.</p> <h2 id="introspection-and-discovery">Introspection and Discovery </h2><p>You likely already have a superpower. In order to identify it, you&rsquo;ll need both internal reflection and external feedback.</p> <p>Looking internally requires a detached, candid assessment of your own skills. You can use self-evaluations during annual review as a jumping-off point. Use your known biases to help calibrate. If you know you have imposter syndrome, be more forgiving when comparing your skills to others and vice versa. Focus on objective results more than subjective impressions.</p> <p>You can also draw upon external cues. What do people come to you for? Do you find colleagues approaching your desk or pinging you with similar questions? Does anyone comment on how you make something look easy? Is there something you find yourself repeatedly getting commended for? These can all point to skills you possess that others have recognized in you as relative strengths.</p> <p>Finally, you can solicit feedback from others. 
Discuss with your manager in your 1:1s how they view your strengths and weaknesses and what skills could be most impactful to improve. Ask your colleagues what they think your best talents are. You may be surprised to find the difference between what you value and what your peers value.</p> <h2 id="if-a-tree-falls">If a Tree Falls </h2><p>An unused superpower is a non-existent one. If you don&rsquo;t find ways to apply your superpower, others will not know you have it, and you&rsquo;ll be in the same spot you&rsquo;d be in if you never had the ability. If you have just started a new job, recently picked up a new skill, or haven&rsquo;t been assigned the right project to display your talent, your manager may be unaware of your superpower. It&rsquo;s unlikely they will spontaneously give you opportunities to showcase it unless you first demonstrate your proficiency.</p> <p>Seek out projects or tasks that will give you the best chance of exhibiting your superpower. Even if you&rsquo;ve been consistently wielding yours, these types of projects are the ones that afford you the most influence and leverage. Average actions are easily drowned out like white noise. A Data Scientist standing up an A/B test for a product change easily bleeds into the background of expectations. On the other hand, a Data Scientist leveraging their product context superpower to help reduce irrelevant testing and build consensus on meaningful alternative tests will draw attention.</p> <p>Another way to make use of your superpower is to scale it. Upskill your colleagues by sharing your insights and mentoring them. A rising tide lifts all boats, and a rising bar helps supercharge your team. Mentoring others will not only help solidify your own understanding, but it will also help prevent you from being pigeonholed. 
If your superpower is writing super clear documentation, but it&rsquo;s not all you enjoy doing, you need to find ways to avoid becoming &ldquo;<em>The Documentation Person</em>&rdquo;. Creating quality templates can free you from constantly being asked to collaborate on documentation.</p> <h2 id="with-great-power-comes-great-responsibility">With Great Power Comes Great Responsibility </h2><p>Superpowers should be part of a well-balanced arsenal. This can be especially important if your superpower is something you really enjoy. You may find yourself growing complacent with your other skills as they atrophy. Your superpower should be a cherry on top rather than a crutch to hide deficiencies. As a Machine Learning Engineer, it doesn&rsquo;t matter how well you can communicate if you can&rsquo;t build a robust model. That&rsquo;s not to say you need to be the best at everything, but try not to be the worst at anything. You can use the same introspection techniques to identify your biggest growth areas. Find small opportunities to work on these skills to ensure they&rsquo;re not a liability.</p> <p>Burnout is another risk that can arise, particularly if your superpower doesn&rsquo;t come easy to you. Make sure to seek a diverse set of projects to ensure you don&rsquo;t overuse the muscle. If you&rsquo;re an incredible meeting host, but it exhausts you, try to space out your most critical meetings, or schedule them when you&rsquo;re most likely to be well-rested. While this is easier said than done, you can learn the signs of burnout, and work with your manager to ensure you are finding balance.</p> <p>Finally, over time, everything may start looking like a nail. It can be tempting to apply your superpower as a cudgel to any situation you&rsquo;re in, even if there are more effective techniques available. For example, you might find yourself reflexively reaching for PyTorch when a heuristic approach would likely be just as good. 
Avoiding this pitfall requires continuous reflection and thinking critically about the work you do.</p> <h2 id="another-tool-in-the-shed">Another Tool in the Shed </h2><p>Superpowers are different from traditional skills because of the outsized impact they can have relative to the input work required. They help you stand out and do things others can&rsquo;t. While it can mean being the world-leading expert in something, it doesn&rsquo;t have to be. You can find skills that are hard for everyone and make that your superpower with basic competency.</p> <p>You don&rsquo;t have to be born with it. It doesn&rsquo;t have to be static. You can have more than one. Take time to experiment and get out of your comfort zone. The worst that can happen is you spend time improving yourself. Be honest with yourself throughout the process and seek feedback from trusted sources. At the end of the day, a superpower is just another tool. When wielded mindfully, it can help increase your influence and deliver greater impact.</p> Just The Process https://runthedata.dev/p/just-the-process/ Wed, 01 Jan 2025 00:00:00 +0000 https://runthedata.dev/p/just-the-process/ <img src="proxy.php?url=https%3A%2F%2Frunthedata.dev%2Fp%2Fjust-the-process%2Frunning_sunset.jpg" alt="Featured image of post Just The Process" /><p>In 2021, a series of surgeries interrupted my running for 18 months. Besides the accompanying physical and mental difficulties, I faced a reckoning with the sport itself and my relationship with it. To navigate the recovery process, I pieced together lessons I had learned along the way to create sustainable habits. I came out the other side a stronger runner with a better appreciation for achieving balance, celebrating the small wins, and finding satisfaction with each run.</p> <h2 id="my-first-steps">My First Steps </h2><p>With some convincing, I gave running a try my sophomore year of high school. I didn&rsquo;t really understand the concept going in. 
Clad in the flat-soled cinder blocks I wore to school every day, I met some friends mid-run for an unofficial trial to see what the fuss was about.</p> <p>Although a swimmer by training, I was quickly hooked on cross-country. The social easy runs and changing scenery were a stark contrast to agonizing over jumping into a freezing pool to stare at the bottom. I stuck with cross-country for the next three years, slowly getting more serious about running along the way.</p> <p>However, I was a better swimmer than I was a runner, and I correlated being good at something with liking it. Why else would I go through brutal, monotonous workouts if I didn&rsquo;t love swimming? I was objectively having more fun running, but I tried to rationalize the amount of time and effort I had put into swimming despite the growing dissonance.</p> <h2 id="burnout">Burnout </h2><p>I didn&rsquo;t realize it at the time, but I had burned out. I had been doing serious training for swimming for as long as I could remember and needed a break. I had lost motivation, and I struggled to remember if I really ever had it or if training was just a habit I hadn&rsquo;t questioned.</p> <p>After high school, I mostly stuck to club sports. As I sought out team-oriented endeavors in a more relaxed environment, grueling workouts five to seven times per week transitioned into a lighter workload of three to four times per week. Instead of focusing on specific time-based goals, there was scrimmaging and games.</p> <p>I was starting to find enjoyment again while freeing up more time for additional commitments. I was also forming a new view on intensity. Not every workout needed to be a trip to the pain cave. Some days could just be for fun.</p> <h2 id="my-next-steps">My Next Steps </h2><p>In grad school, I spent some time doing research in Geneva, Switzerland. I was in a foreign land on a shoe-string budget and had more time than sense on my hands. 
Unable to afford many of the attractions the city had to offer, I turned back to running.</p> <p>Enough time had passed that I had forgotten the consequences of overtraining. I decided I wanted to train for a half-marathon, having not run longer than five miles continuously in years. I picked an arbitrary goal time that sounded good and paired it with a training plan suggested for that time. I didn&rsquo;t read about how to train, I didn&rsquo;t think about the purpose of each workout, and I didn&rsquo;t listen to my body. I just went full speed into training, fully focused on achieving my goal time.</p> <p>I was on a collision course with a lesson I should have already learned. A mix of youth and dumb luck staved off injury and yielded a time I was happy with. But I didn&rsquo;t feel how I had imagined I would. In fact, I felt worse at the end of the race than I had ever felt before. What&rsquo;s more, I had found myself dreading workouts and rushing easy days to get them over with. I was feeling the same aversion and hollowness I had experienced with swimming and started to ask myself the same questions: did I actually like running or was it just something I was good at and decided to keep doing? I declared that I was done with running competitively, having ostensibly achieved what I had set out to do.</p> <h2 id="burnout-2-electric-boogaloo">Burnout 2: Electric Boogaloo </h2><p>In a very preventable and foreseeable manner, I had sleepwalked straight into my second bout of burnout. This time, I recognized what had happened and was able to put a name to it. I realized that my previous foray into club sports had revived a sense of enjoyment in an activity without having broader goals. Finding gratification in atelic pursuits, reveling in the process, created less pressure to perform and yielded a stronger sense of purpose day to day. A hard workout was not a puritanical journey to derive purpose out of suffering. 
It needn&rsquo;t be an exercise in extreme delayed gratification. With the right balance and choice of workout, it could be fun.</p> <p>In the following years I scaled back my running significantly. I was determined to only run when I wanted to, where I wanted to, and at the pace I wanted to. I started hiking more and mapping out runs to explore new places. After my first run-in with burnout, it took two years to recover enough to resume serious training. My second took nearly seven years.</p> <p>During that time, my relationship with running evolved. What had started as an attempt to rediscover enjoyment in the activity itself slowly morphed into something more. At times, it was a way to connect with friends. At others, it was an outlet to deal with stress. It was time I could use to listen to music or podcasts, to organize my thoughts, or simply to enjoy nature.</p> <h2 id="injury-and-recovery">Injury and Recovery </h2><p>The most devastating part of needing surgeries was the lack of a cause. As humans, we crave causal explanations: a stress fracture due to overtraining, a hamstring strain from pushing too hard. Any way to justify an outcome and build a narrative around how we could have avoided it if we had changed this one thing and how we&rsquo;ll know better next time. Instead, I was left to confront the unsettling truth that I had been doing everything right and yet would still need to stop running.</p> <p>The first runs back were some of the hardest I&rsquo;ve ever experienced, both mentally and physically. I had lost a significant amount of fitness and muscle, I was slower than I had ever been, and I had trouble running continuously. We learn not to compare ourselves to others; however, avoiding comparisons with our past selves is much more challenging. Redlining at a pace minutes per mile slower than your previous easy pace is a confidence killer. 
Run-walking at 5-minute intervals when you had been doing hour-long runs is frustrating.</p> <p>Circumstances had changed and I had to quickly come to terms with whether I even liked running now that I was comparatively much, much worse. Despite the resentment at having regressed so much, it was still a huge upgrade from not being able to run. I knew that if I wanted to get back to the balance and enjoyment I had found previously I was going to have to accept where I was and take it easy.</p> <p>And I did. In fact, I started to view it as an opportunity to re-learn how to run. I started reading about running form, about the science of training, and about assigning purpose to each run. And while I wasn&rsquo;t anywhere near my all-time bests, I learned to enjoy making small progress and to stop comparing myself to unrealistic past benchmarks. Maybe I would eventually get back to where I was, but if I didn&rsquo;t enjoy the progress along the way, I would eventually burn out again.</p> <p>Two and a half years after starting recovery, I&rsquo;m in a good place. That&rsquo;s not to say there haven&rsquo;t been setbacks &ndash; I&rsquo;ve had two minor running-related injuries &ndash; but I&rsquo;m listening to my body. I&rsquo;ve been more thoughtful about monitoring total life stress and adjusting training accordingly. I&rsquo;m not hitting lifetime PRs (yet &#x1f604;), but I&rsquo;ve started racing again with local 5Ks. I&rsquo;ve logged more miles this year than I ever have before, and I rarely find myself dreading a run.</p> About https://runthedata.dev/about/ Sat, 23 Nov 2024 00:00:00 +0000 https://runthedata.dev/about/ <p>Welcome to <em>Run the Data</em>! It&rsquo;s a blog about data, machine learning, and running.</p> <h2 id="author">Author </h2><p>I&rsquo;m Ryne Carbone, a Machine Learning Engineer and avid runner in NYC. 
I have a PhD in Physics and worked at the <a class="link" href="https://home.cern/science/accelerators/large-hadron-collider" target="_blank" rel="noopener" >Large Hadron Collider</a> in Switzerland before leaving academia. Since then, I&rsquo;ve been largely working in Fintech. I&rsquo;ve worked on a wide range of ML applications from Growth to Risk. Besides modeling, I also contribute to ML infrastructure and tooling.</p> <p>I often find myself trying to apply intuition and data-driven techniques from my career to my running and vice versa. This blog was born out of my desire to distill and share thoughts from both worlds.</p> <h2 id="other-content">Other Content </h2><ul> <li><em>(2025)</em> Ramp Engineering Blog: <a class="link" href="https://engineering.ramp.com/post/turbo-ml-configuration-system" target="_blank" rel="noopener" >Turbo-Charging ML Development</a></li> <li><em>(2025)</em> Ramp Engineering Blog: <a class="link" href="https://engineering.ramp.com/industry_classification" target="_blank" rel="noopener" >From RAG to Richness: How Ramp Revamped Industry Classification</a></li> <li><em>(2024)</em> Ramp Engineering Blog: <a class="link" href="https://engineering.ramp.com/embracing-uncertainty-in-decision-making" target="_blank" rel="noopener" >Make Better Decisions by Embracing Uncertainty</a></li> </ul> <h2 id="connect">Connect </h2><ul> <li>Connect with me on <a class="link" href="https://www.linkedin.com/in/rynecarbone" target="_blank" rel="noopener" >LinkedIn</a> <img src="https://runthedata.dev/brand-linkedin.svg" loading="lazy" alt="linkedin" ></li> <li>Follow me on <a class="link" href="https://www.strava.com/athletes/21265931" target="_blank" rel="noopener" >Strava</a> <img src="https://runthedata.dev/brand-strava.svg" loading="lazy" alt="strava" ></li> <li>Check out my <a class="link" href="https://github.com/rynecarbone" target="_blank" rel="noopener" >GitHub</a> <img src="https://runthedata.dev/brand-github.svg" loading="lazy" alt="github" 
></li> </ul>