<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Thomas Dohmke</title>
        <description>Random thoughts. Mostly about software development.</description>
        <link>https://ashtom.github.io/</link>
        <atom:link href="https://ashtom.github.io/feed.xml" rel="self" type="application/rss+xml"/>
        <pubDate>Sun, 03 Aug 2025 23:53:48 +0000</pubDate>
        <lastBuildDate>Sun, 03 Aug 2025 23:53:48 +0000</lastBuildDate>
        <generator>Jekyll v3.10.0</generator>
        
            <item>
                <title>Developers, Reinvented</title>
                <description>&lt;p&gt;A vital shift is underway in software development, one that redefines not only how we build, but also who we are as developers. In recent interviews we spoke with 22 developers who already use AI tools heavily in their workflow, and learned how they got there, how their craft has changed, and where they see things going.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;images/reinvented.png&quot; alt=&quot;An anime-style image showing a girl stacking blocks that say &amp;quot;Reinvent Yourself&amp;quot;&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;a-transformation-in-the-making&quot;&gt;A transformation in the making&lt;/h2&gt;

&lt;p&gt;For these developers at the forefront, making AI a core part of their work is not a distant, long-horizon future, but a change that is happening today. For many, the initial encounter with AI tools was met with a healthy dose of skepticism. “Pretty cool, but gimmicky” was a common refrain, which these developers now interpret as high initial expectations clashing with the &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321&quot;&gt;jagged frontier&lt;/a&gt; of AI’s unpredictable capabilities. That is the point where many write off AI tools as unhelpful, but those who keep experimenting reach pivotal “aha!” moments: real time savings, and a feel for what the tools can do and how to match them to their work.&lt;/p&gt;

&lt;p&gt;The developers who found success with AI tools have a &lt;strong&gt;strong underlying motivation to prepare for what they anticipate will be an overhaul of their profession.&lt;/strong&gt; To that end, they relentlessly experiment with various AI tools, even when the tools aren’t consistently helpful. “Either you have to embrace the AI, or you get out of your career,” one developer said.&lt;/p&gt;

&lt;p&gt;What does embracing AI look like for these developers? We believe it’s a phased evolution, fuelled by daily trial and error:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 1: AI Skeptic.&lt;/strong&gt; Dabbling with AI in small tasks and questions. Developers are primarily working with code completions and have low tolerance for iteration and errors. If they persevere, they shed their expectations for one-shot success.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 2: AI Explorer.&lt;/strong&gt; Using AI for debugging, boilerplate and snippets, developers start understanding AI’s limitations. They work with completions and chat, and copy-paste code from browser-based LLMs. With practice, they evolve to brainstorming more complex tasks, embrace iterative prompting, and realize that when getting poor results it’s better to start over, instead of getting diminishing returns from pushing forward.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stage 3: AI Collaborator.&lt;/strong&gt; Actively co-creating with AI, and building context engineering intuition. Developers use AI-enabled IDEs for multi-step tasks and multi-file changes and adopt habits like prompting for a plan first, curating agent rules, and strategically switching between tools and between models as they learn AI’s “thought process”. In this stage, developers also join discussions or internal demos to share effective prompts, use cases, and lessons learned.&lt;/p&gt;
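
&lt;p&gt;As a hypothetical illustration of the “curating agent rules” habit (this example is ours, not from any participant), such a rules file might contain:&lt;/p&gt;

```
# Hypothetical repository-level agent rules file (illustrative only)
- Always propose a plan and wait for approval before editing files.
- Follow the existing code style; do not reformat unrelated code.
- Run the test suite after every change and report any failures.
- Ask before adding new dependencies or changing public APIs.
```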

&lt;p&gt;&lt;strong&gt;Stage 4: AI Strategist.&lt;/strong&gt; Treating AI as a powerful partner for feature development, complex tasks, and large-scale refactorings. Developers build elaborate, multi-agent workflows with planning and coding models, increasing autonomy and parallelism as they go. Developers at this stage are confident and optimistic about the future.&lt;/p&gt;

&lt;p&gt;The developers at stage 4 unanimously declare their role has shifted. &lt;strong&gt;They now focus on the delegation and the verification of a task.&lt;/strong&gt; &lt;em&gt;Delegation&lt;/em&gt; is about setting up the agents for success with rich context and instructions, designing and refining prompts, reviewing AI’s plans and tradeoffs, and tweaking before proceeding. &lt;em&gt;Verification&lt;/em&gt; is all about tearing down the agent’s work – they review and validate that the AI-generated implementation fulfills the objectives and conventions it needs to. These developers moved from writing code to architecting and verifying the implementation work that is carried out by AI agents.&lt;/p&gt;

&lt;p&gt;You can probably guess what happened next. When we asked developers about the prospect of AI writing 90% of their code, they replied &lt;strong&gt;favorably.&lt;/strong&gt; Half of them believe a 90% AI-written code scenario is not only feasible but likely within 5 years, &lt;strong&gt;while the other half expect it within 2 years.&lt;/strong&gt; But, crucially, to them this future scenario did not feel like their value or identity would be diminished, but that it would be reinvented. Having experienced the skill and effort that goes into effectively managing the work of agents, it was now clear to them that this, rather than leading implementation, will be the value-add activity. “Maybe we become less code producers and more code enablers. My next title might be Creative Director of Code,” one participant concluded.&lt;/p&gt;

&lt;h2 id=&quot;optimism-realism-or-both&quot;&gt;Optimism? Realism? Or both?&lt;/h2&gt;

&lt;p&gt;We tend to see optimism and realism as opposing mindsets. But the developers we heard from had an intriguing blend: they were &lt;strong&gt;realistic optimists&lt;/strong&gt;. They see the shift, they don’t pretend it won’t change their job, but they also believe this is a chance to level up. One developer said “I think of myself as [a] mediocre engineer, and I feel this AI reset is giving me a chance to build skills that will bring me closer to excellence.”&lt;/p&gt;

&lt;p&gt;Let’s apply the realistic optimism lens for the rest of this section.&lt;/p&gt;

&lt;h3 id=&quot;job-outlook&quot;&gt;Job outlook&lt;/h3&gt;

&lt;p&gt;AI is increasingly automating many coding tasks, accelerating software development. As models and tools improve, more complex coding tasks are being automated under the orchestration of developers like the ones we interviewed. This is already a reality, not a future trend.&lt;/p&gt;

&lt;p&gt;If we continue the thought, some traditional coding roles will decrease or significantly evolve as the core focus shifts from writing code to delegating and verifying. At the same time, the U.S. Bureau of Labor Statistics projects that &lt;a href=&quot;https://www.bls.gov/opub/ted/2025/ai-impacts-in-bls-employment-projections.htm&quot;&gt;software developer jobs are expected to grow by 18% in the next decade&lt;/a&gt; – nearly five times the national average across occupations. &lt;strong&gt;They won’t be the same software developer jobs as we know them today,&lt;/strong&gt; but there is more reason to acknowledge the disruption and lean into adaptation than there is to despair.&lt;/p&gt;

&lt;p&gt;You know what else we noticed in the interviews? Developers rarely mentioned “time saved” as the core benefit of working in this new way with agents. They were all about increasing ambition. We believe that means we should &lt;strong&gt;update how we talk about (and measure) success&lt;/strong&gt; when using these tools: after the initial efficiency gains, the focus shifts to raising the ceiling of the work and outcomes we can accomplish, which is a very different way of interpreting tool investments. This helps explain the – perhaps unintuitive at first – observation that many of the developers we interviewed were paying for top-tier subscriptions. &lt;strong&gt;When you move from thinking about reducing effort to expanding scope, only the most advanced agentic capabilities will do.&lt;/strong&gt;&lt;/p&gt;

&lt;h3 id=&quot;essential-skills-old-and-new&quot;&gt;Essential skills, old and new&lt;/h3&gt;

&lt;p&gt;If we accept that the developer role is transforming, what are the skills that take center stage?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI fluency:&lt;/strong&gt; It will be critical to understand the capabilities and constraints of different AI tools, platforms, and models to adjust practices and workflows accordingly. This requires continuous learning-by-doing and a commitment to adaptability, given the “breakneck” speed of AI innovation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Delegation and agent orchestration:&lt;/strong&gt; Setting up agents for success will rely on context and prompt engineering, problem framing, breaking down tasks optimally, parallelizing work, and articulating success criteria and constraints. With higher AI fluency, developers will also evolve their judgement and strategies for when to synchronously collaborate with agents or offload tasks to them in the background. Don’t underestimate how critical communication skills are for delegation – it’s not “just” using natural language. Vague or one-liner instructions were never enough to empower human colleagues to successfully hit goals, and they will not work for AI agents either.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Human-AI collaboration:&lt;/strong&gt; When working synchronously with agents it will be important to have tight iterative feedback loops, set stopping points, instruct for self-critique, as well as encourage agents to ask questions. One of our interviewees prompted their agent to interview them as a way to build context into instructions.&lt;/p&gt;
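
&lt;p&gt;A minimal sketch of such a loop (hypothetical; the agent below is a stub standing in for a real model call) combines bounded rounds, a self-critique step, and an explicit stopping signal:&lt;/p&gt;

```python
# Hypothetical sketch of a tight human-AI feedback loop.
def agent(prompt):
    # Stub standing in for a real LLM call.
    if 'critique' in prompt:
        return 'no remaining concerns'
    return 'draft implementation of: ' + prompt

def collaborate(task, max_rounds=3):
    draft = agent(task)
    for _ in range(max_rounds):                  # stopping point: bounded rounds
        critique = agent('critique your last answer: ' + draft)
        if 'no remaining concerns' in critique:  # agent signals it is done
            break
        draft = agent('revise given this critique: ' + critique)
    return draft

result = collaborate('add input validation to the signup form')
```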

&lt;p&gt;&lt;strong&gt;Fundamentals:&lt;/strong&gt; Understanding and reasoning about code and systems remains critical. Despite AI’s code generation capabilities, developers need a deep understanding of programming basics, algorithms, data structures, and overall software systems. This foundational knowledge enables developers to critically comprehend complicated code, to evaluate the quality of AI’s output and how it got there. It’s critical to the “verification” part of the developers’ role when orchestrating agents.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Verification and quality control:&lt;/strong&gt; Since AI output requires scrutiny, the ability to rigorously review, test, and verify AI-generated code is paramount. Developers are already practicing this through manual code reviews, running tests, and ensuring adherence to standards. If anything, agentic workflows need an amplified version of this skill, with extended test coverage and thinking about quality and security more upstream.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Product understanding:&lt;/strong&gt; This is all about systems thinking, meaning looking at the product as a whole. It will push developers to adopt hybrid thinking that combines engineering, design, and product management. Here, defining requirements, prototyping and design, idea exploration, and understanding user needs will be key to make sure that tasks for agents are thought through from an outcome perspective, not just code output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architecture and systems design:&lt;/strong&gt; As AI handles more low-level coding, the importance of understanding overall system architecture, design patterns, and how different components interact becomes elevated. This is crucial for guiding AI and integrating its output effectively.&lt;/p&gt;

&lt;h3 id=&quot;implications-for-education&quot;&gt;Implications for education&lt;/h3&gt;

&lt;p&gt;From a realistic optimism perspective, the rise of AI in software development signals the &lt;strong&gt;need for computer science education to be reinvented as well.&lt;/strong&gt; Educators who adopt this mindset will acknowledge the scale of change while embracing the opportunity to prepare students for a broader, more strategic, and more creative role in shaping the digital world.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Students will rely on AI to write increasingly large portions of code.&lt;/strong&gt; Teaching in a way that evaluates rote syntax or memorization of APIs is becoming obsolete. Foundational coding remains a critical skill, but now as a way to understand systems, debug AI-generated code, and express ideas clearly to both humans and machines. Instead of “write this loop” we will need to shift to “understand what this code does, and what would break if you changed it.”&lt;/p&gt;

&lt;p&gt;LLMs and agents are already writing code in the real world, and ignoring them in the classroom leaves students underprepared. Integrating AI tools into education can enhance learning, by creating more time for deeper design, analysis, and creativity. Curricula must include AI-assisted coding, but focus on how to collaborate with AI: prompting, reviewing, editing, and validating output. In short, teach AI fluency.&lt;/p&gt;

&lt;p&gt;Many Computer Science (CS) programs still center around problems that AI can now solve competently. &lt;strong&gt;The future belongs to developers who can model systems, anticipate edge cases, and translate ambiguity into structure—skills that AI can’t automate.&lt;/strong&gt; We need to teach abstraction, decomposition, and specification not just as pre-coding steps, but as the new coding.&lt;/p&gt;

&lt;p&gt;The reinvention of the software developer role also opens doors for &lt;strong&gt;interdisciplinary thinking and diverse paths into software careers.&lt;/strong&gt; We can blend CS with design, ethics, systems thinking, and human-computer interaction and encourage students to see themselves not as coders, but as computational creators.&lt;/p&gt;

&lt;p&gt;That all influences assessment too. Traditional programming exams no longer reflect real-world skillsets, especially once AI can solve the problem faster than a human. Assessing students on their ability to frame problems, guide AI, critique solutions, and debug complex outputs is more meaningful, and future-proof. We want to create assignments that require AI collaboration: “Here’s what the AI wrote, what’s wrong with it?” or “Improve this spec so an AI can build what you intended.”&lt;/p&gt;

&lt;h2 id=&quot;take-this-with-you&quot;&gt;Take this with you&lt;/h2&gt;

&lt;p&gt;The software developer role is set on a path of significant change. &lt;strong&gt;Not everyone will want to make the change.&lt;/strong&gt; Managing agents to achieve outcomes may sound unfulfilling to many, although we argue that’s what developers have been doing at a lower level of abstraction: managing their computers via programming languages to achieve outcomes. Still, as humans we are often reluctant to change, and that’s okay.&lt;/p&gt;

&lt;p&gt;Developers worldwide are already moving from skepticism to confidence, reshaping their roles, practices, and mindsets as they partner with AI. For them, what started as fear of AI replacing them is switching to pragmatically embracing the ambitious reality of AI, &lt;strong&gt;viewing it as a growth opportunity.&lt;/strong&gt; As we build the tools of tomorrow we can usher developers through this reinvention of their role in ways that are intuitive, delightful, and cater to developers’ curiosity, keeping them fulfilled and happy during the transition. That makes us optimistic. Realistically. 😉&lt;/p&gt;
</description>
                <pubDate>Sun, 03 Aug 2025 00:00:00 +0000</pubDate>
                <link>https://ashtom.github.io/developers-reinvented</link>
                <guid isPermaLink="true">https://ashtom.github.io/developers-reinvented</guid>
                
                <category>Agents</category>
                
                
            </item>
        
            <item>
                <title>What makes autonomy work?</title>
                <description>&lt;p&gt;Imagine an AI agent reviewing your pull request – not just flagging issues, but proposing meaningful improvements, refactoring code to be more readable, more efficient, more secure, and even identifying follow up tasks before you do. That’s the future of agentic AI in software development. But how do we get there?&lt;/p&gt;

&lt;p&gt;The answer lies in a principle we already follow when working with people: trust must be earned before autonomy is given. You probably wouldn’t ask a new team member to make an important project decision on their first day. But as they prove their skills and judgment, you grant them more freedom – merging pull requests independently, suggesting architectural improvements, even identifying areas of technical debt before they become bottlenecks. AI should follow the same path. So we must ask: &lt;strong&gt;What makes autonomy work?&lt;/strong&gt; And what conditions must be in place for developers to trust AI agents the same way they end up trusting their most capable teammates?&lt;/p&gt;

&lt;h2 id=&quot;how-ai-earns-autonomy-in-a-development-team&quot;&gt;How AI earns autonomy in a development team&lt;/h2&gt;

&lt;p&gt;For developers to grant AI autonomy, two key dimensions of trust must be established: &lt;strong&gt;competence&lt;/strong&gt; – the ability to execute tasks successfully – and &lt;strong&gt;alignment&lt;/strong&gt; – the ability to understand the team’s goals and work in sync.&lt;/p&gt;

&lt;h3 id=&quot;competence-the-ability-to-take-action-successfully&quot;&gt;Competence: The ability to take action successfully&lt;/h3&gt;

&lt;p&gt;Developers won’t trust AI to act independently unless it demonstrates that it can:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Generate high-quality code that meets the team’s standards&lt;/li&gt;
  &lt;li&gt;Handle varied and complex development scenarios without excessive human correction&lt;/li&gt;
  &lt;li&gt;Learn from developer feedback and improve over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A key aspect of this is &lt;strong&gt;latent learning&lt;/strong&gt; – AI must absorb information from past interactions and apply it effectively when the situation calls for it. Imagine an AI-powered coding assistant that doesn’t just suggest syntax fixes but gradually learns an engineering team’s preferences, adapting to their coding style and proactively refactoring inefficient patterns.&lt;/p&gt;

&lt;p&gt;Competence doesn’t exist without &lt;strong&gt;reliability and repeatability&lt;/strong&gt;. A developer – human or AI – who occasionally produces great suggestions but is otherwise inconsistent will not earn autonomy. Developers need AI to:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Offer stable, predictable improvements rather than sporadic breakthroughs&lt;/li&gt;
  &lt;li&gt;Explain its reasoning clearly (e.g., why it suggests a certain code optimization)&lt;/li&gt;
  &lt;li&gt;Avoid making careless changes that introduce regressions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A useful real-world parallel is automated testing – a tool that wavers between false positives and missed bugs is not trusted, whereas one that consistently catches regressions earns a place in the CI/CD pipeline. AI must build a similar track record of dependability before teams allow it to act without review.&lt;/p&gt;

&lt;h3 id=&quot;alignment-the-ability-to-understand-team-goals-and-act-accordingly&quot;&gt;Alignment: The ability to understand team goals and act accordingly&lt;/h3&gt;

&lt;p&gt;Proving competence is only part of the equation. Even the most capable AI won’t be trusted if it works at odds with a team’s goals. That’s where alignment comes in – ensuring AI acts in ways that support the team’s bigger picture. Developers don’t just want AI to generate code; they need it to make decisions that support the project’s broader goals, including:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Adhering to team conventions and architectural principles&lt;/li&gt;
  &lt;li&gt;Identifying areas for improvement that align with current priorities&lt;/li&gt;
  &lt;li&gt;Helping manage technical debt rather than blindly optimizing for speed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key is &lt;strong&gt;aligned autonomy&lt;/strong&gt;. An AI agent that starts working without accounting for the most up-to-date team priorities can create more work than it saves. For example, if an AI agent detects a slow-running function and starts aggressively optimizing it, but the team is focused on bug fixes before a release, its actions could create unnecessary distractions. Without clear alignment, AI risks solving the wrong problems – or even spinning up new ones.&lt;/p&gt;

&lt;p&gt;This means AI must be given clear goals, and work within explicit guardrails and feedback loops – much like a new engineer learning a team’s expectations over time. AI should provide suggestions, receive feedback on those suggestions, and refine its decision-making process accordingly. The goal isn’t just accuracy. It’s meaningful, context-aware contributions.&lt;/p&gt;

&lt;h2 id=&quot;the-gradual-path-to-ai-autonomy-in-development&quot;&gt;The gradual path to AI autonomy in development&lt;/h2&gt;

&lt;p&gt;Just as junior developers or new teammates gain responsibility incrementally, AI should be introduced into the development workflow step by step. This ensures that trust is built through progressive autonomy, where AI takes on increasingly complex tasks as it proves itself. Key stages are:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;Assisting developers:&lt;/strong&gt; AI starts by offering suggestions in pull requests, highlighting potential issues, and helping refactor code under human supervision.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Executing small-scale tasks:&lt;/strong&gt; As confidence in AI grows, it might be allowed to auto-merge safe refactors or format code without manual review.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Identifying and prioritizing work:&lt;/strong&gt; AI transitions from reactive assistance to proactive contributions – detecting patterns of inefficiency, flagging outdated dependencies, or surfacing areas for refactoring.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Initiating tasks autonomously:&lt;/strong&gt; The ultimate stage of trust – AI proposes and executes improvements, like optimizing database queries based on observed performance patterns, without human prompting.&lt;/li&gt;
&lt;/ol&gt;
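
&lt;p&gt;One way to make this progression concrete (a hypothetical sketch; the thresholds are ours and purely illustrative) is to gate the agent’s allowed stage on its review track record:&lt;/p&gt;

```python
# Hypothetical gating of autonomy stages on an agent's track record.
# Thresholds are illustrative, not taken from any real tool or study.
def autonomy_stage(reviewed_changes, approval_rate):
    stage = 1  # default: suggestions only, everything human-reviewed
    if reviewed_changes >= 50 and approval_rate >= 0.95:
        stage = 2   # may auto-merge safe refactors and formatting
    if reviewed_changes >= 200 and approval_rate >= 0.97:
        stage = 3   # may proactively flag and prioritize work
    if reviewed_changes >= 1000 and approval_rate >= 0.99:
        stage = 4   # may initiate and execute improvements on its own
    return stage
```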

&lt;p&gt;A concrete example: At first, an AI assistant might only suggest reformatting code and catching syntax errors. Over time, as it demonstrates reliability, it could start suggesting small to medium-sized, non-controversial refactorings. Eventually, after proving alignment with team priorities, it might be trusted to make larger changes, such as optimizing functions for performance based on real-world usage patterns.&lt;/p&gt;

&lt;p&gt;Each of these steps must be earned, not assumed. Teams need mechanisms to evaluate AI’s decisions, catch mistakes, and provide feedback – whether through human-in-the-loop (HITL) review systems, AI self-reporting, or explainability features that allow developers to verify AI’s logic before it acts independently.&lt;/p&gt;

&lt;h2 id=&quot;developing-ais-capabilities-and-developers-role-in-the-process&quot;&gt;Developing AI’s capabilities, and developers’ role in the process&lt;/h2&gt;

&lt;p&gt;For AI to move from a reactive assistant to an autonomous collaborator, it must develop specific abilities that enable deeper integration with development teams. Likewise, human developers will need to adapt their workflows to guide AI’s evolution effectively.&lt;/p&gt;

&lt;h3 id=&quot;what-do-ai-agents-need-in-order-to-evolve-towards-greater-autonomy&quot;&gt;What do AI agents need in order to evolve towards greater autonomy?&lt;/h3&gt;

&lt;p&gt;To be trusted with higher levels of autonomy, we need to build certain abilities into AI agents:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Memory:&lt;/strong&gt; AI should retain context across interactions, learning from past decisions and refining its future recommendations.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Pattern Recognition:&lt;/strong&gt; AI should identify coding trends, architectural inefficiencies, and recurring issues, surfacing meaningful insights rather than isolated suggestions.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Flagging uncertainty:&lt;/strong&gt; AI must communicate its level of confidence in its decisions, highlighting when human validation is necessary.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Self-Reflection:&lt;/strong&gt; AI should periodically review its past actions, incorporating feedback and lessons learned into future tasks.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;how-can-developers-guide-ai-towards-autonomy&quot;&gt;How can developers guide AI towards autonomy?&lt;/h3&gt;

&lt;p&gt;AI won’t develop these capabilities in isolation – it requires human guidance. Developers play a crucial role in refining AI agents by:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Providing Clear Goals:&lt;/strong&gt; AI needs well-defined objectives to ensure its decisions align with team priorities and broader project goals. At the beginning this could mean decomposing tasks further to make them suitable for agents to solve reliably.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Offering Context:&lt;/strong&gt; AI needs exposure to team documentation, code history, and past decisions to make informed suggestions. At the beginning this could take the form of developers providing explicit instructions and comments for agents to account for.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Encouraging Inquiry:&lt;/strong&gt; Developers should prompt AI to ask clarifying questions and intervene when its assumptions deviate from reality.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Calibrating Behavior:&lt;/strong&gt; Through iterative interactions, developers can shape AI’s decision-making process, helping it adapt to team-specific needs.&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;Facilitating Reflection:&lt;/strong&gt; Encouraging AI to analyze its past performance and adjust accordingly will reinforce productive behaviors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If this sounds familiar it’s because these are some of the core &lt;a href=&quot;https://www.ncbi.nlm.nih.gov/books/NBK552762/&quot;&gt;mentorship behaviors&lt;/a&gt;&lt;sup id=&quot;fnref:1&quot; role=&quot;doc-noteref&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;. By fostering these abilities in AI and taking an active role in guiding its learning process, development teams can accelerate AI’s journey from a task-driven assistant to a proactive engineering partner.&lt;/p&gt;

&lt;h2 id=&quot;a-future-where-ai-suggests-executes-and-collaborates-with-your-team&quot;&gt;A future where AI suggests, executes, and collaborates with your team&lt;/h2&gt;

&lt;p&gt;The promise of AI-native software development is about amplifying – not eliminating – human decision-making and teams’ impact. AI agents that earn autonomy can handle repetitive tasks, make informed suggestions, and even anticipate future technical needs, allowing developers to focus on complex problem-solving and creative work, as well as accelerate high-impact work.&lt;/p&gt;

&lt;p&gt;But to get there, we must treat AI autonomy as a gradual, earned process, much like onboarding a new teammate. By establishing trust in competence and alignment with team goals, we can move beyond AI as a passive assistant and toward AI as an active engineering partner – one capable of identifying work, prioritizing improvements, and executing meaningful contributions.&lt;/p&gt;

&lt;p&gt;For teams exploring AI adoption, the key question is: what’s the next step in building trust with AI? Whether it’s refining how AI learns from your feedback or gradually increasing its scope, the path to autonomy starts with intentional, incremental steps.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot; role=&quot;doc-endnote&quot;&gt;
      &lt;p&gt;National Academies of Sciences, Engineering, and Medicine. (2019). Mentorship behaviors and education: How can effective mentorship develop? In M. L. Dahlberg &amp;amp; A. Byars-Winston (Eds.), The science of effective mentorship in STEMM (Chapter 5). National Academies Press. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
                <pubDate>Tue, 06 May 2025 09:30:00 +0000</pubDate>
                <link>https://ashtom.github.io/agent-autonomy</link>
                <guid isPermaLink="true">https://ashtom.github.io/agent-autonomy</guid>
                
                <category>Agents</category>
                
                
            </item>
        
            <item>
                <title>The Dawn of Showercoding</title>
                <description>&lt;p&gt;&lt;a href=&quot;https://x.com/karpathy/status/1886192184808149383&quot;&gt;Vibecoding&lt;/a&gt; has struck &lt;a href=&quot;https://x.com/ycombinator/status/1897301568736911530&quot;&gt;Silicon Valley&lt;/a&gt;. The premise is simple: You envision it. You say it or prompt it. The AI delivers it. Rinse. Repeat. In a short time, &lt;a href=&quot;https://x.com/slow_developer/status/1899430284350616025?utm_campaign=article_email&amp;amp;utm_content=article-14677&amp;amp;utm_medium=email&amp;amp;utm_source=sg&quot;&gt;90-100%&lt;/a&gt; of all code will be written by AI. That means apps, webpages, and features will all soon be built on the back of pure vibes.&lt;/p&gt;

&lt;p&gt;While vibecoding is the current craze, a few days ago I was struck by something even more… “profound”. I stumbled on a post by @affaanmustafa on X &lt;a href=&quot;https://x.com/affaanmustafa/status/1893792780582822350&quot;&gt;stating&lt;/a&gt; that he’d pushed his first “shower commit” – a quick fix on GitHub Mobile made from the shower. You’ve heard of “shower thoughts” or even “&lt;a href=&quot;https://en.wikipedia.org/wiki/Shower_beer&quot;&gt;shower beer&lt;/a&gt;”… but have you heard of “showercoding”?&lt;/p&gt;

&lt;p&gt;On its face, the idea of showercoding is quite funny. And Affaan didn’t exactly recommend the risky experience 😆 But he was on to something much bigger.&lt;/p&gt;

&lt;p&gt;Our most creative ideas are almost never born when we force ourselves to be creative. Years of anecdotal evidence confirm that &lt;a href=&quot;https://medium.com/@davidwolff218/why-showering-will-make-you-a-better-coder-de5630698ced&quot;&gt;showering can make you a better coder&lt;/a&gt;. Paul McCartney dreamed up “Yesterday” in his sleep and bolted to a piano to finish it before he forgot. Aaron Sorkin wrote parts of The Social Network on cocktail napkins. And famously, Aristotle did his best thinking while walking.&lt;/p&gt;

&lt;p&gt;As the mass deployment of agents makes natural language prompting a critical part of the coding experience, we are on the brink of integrating coding into the flow of our daily lives more naturally than ever before.&lt;/p&gt;

&lt;p&gt;The coding flow state is integrating with the flow state of our lives. It’s all becoming one flow.&lt;/p&gt;

&lt;p&gt;The result is a world in which we are able to vibecode, wherever we are – as AI agents deliver our creative consciousness into software.&lt;/p&gt;

&lt;p&gt;What does this do?&lt;/p&gt;

&lt;p&gt;Not only will the future developer not touch most code; they will be in a constant loop between human and machine, defined not by time zone or period of the day, but by pure creativity when it strikes. Iterating together with AI, you can get your concept started, or even get all the way to merging your PR, with simply the sound of your own voice. The flow of time is broken. There will be no more circadian rhythm to the global production of software.&lt;/p&gt;

&lt;p&gt;Commits are merged on the way home from soccer practice. PRs are reviewed en route to happy hour. Idea. Spoken. Commanded. Software created. Wherever, whenever.&lt;/p&gt;

&lt;p&gt;Imagine how many brilliant ideas have been lost until now because the developer was AFK, or didn’t have the skills to get started when inspiration first struck. Think of the possibilities untapped, the stones unturned. Now that is changing. We are approaching a union of human and machine consciousness that brings continuous ideas to fruition. A free-flowing supply of software that centers on human desires, innovations, and convenience. A form of coding that makes your life easier – because it fits, easily, into the rest of your life.&lt;/p&gt;

&lt;p&gt;The idea is not that you lose the skill to code, or that you stop understanding what you’ve started to create. In fact, it’s the opposite. This new paradigm allows you to level up to higher order thinking. To call the shots in the larger direction of your project. It’s a leg up in going from idea to mockup, from blinking cursor to green checks, whether you’re half asleep or multitasking or fully plugged in and in the flow.&lt;/p&gt;

&lt;p&gt;So yes, showercoding sounds a little weird. But what it unlocks is anything but. It’s creativity manifesting in the unexpected parts of your day. It’s taking action as soon as the lightbulb moment hits. Even if you happen to be in the shower.&lt;/p&gt;
</description>
                <pubDate>Thu, 13 Mar 2025 09:30:00 +0000</pubDate>
                <link>https://ashtom.github.io/showercoding</link>
                <guid isPermaLink="true">https://ashtom.github.io/showercoding</guid>
                
                <category>Agents</category>
                
                
            </item>
        
            <item>
                <title>AI Peer Programmer</title>
                <description>&lt;p&gt;The headline of Copilot’s &lt;a href=&quot;https://github.blog/news-insights/product-news/introducing-github-copilot-ai-pair-programmer/&quot;&gt;first ever introduction&lt;/a&gt; ended with “AI pair programmer”. It was a signal on what to expect from this first-of-a-kind product - it will pair with you as a programmer. And those developers that have paired with another developer in real life know that it is a highly human experience. It only works if expectations are aligned, if the two developers agree on how to share the one machine, one editor, one task between them.&lt;/p&gt;

&lt;p&gt;Now in 2025, the growing reasoning capabilities of large language models (LLMs), the ability of models to use tools like the terminal and browser, and their integration into the development environment are putting Copilot on the path to becoming the “AI peer programmer”. And as for any member of a developer team, we have four expectations for these agents to enable a productive work experience: they need to be predictable, steerable, tolerable, and verifiable.&lt;/p&gt;

&lt;h2 id=&quot;predictable&quot;&gt;Predictable&lt;/h2&gt;

&lt;p&gt;As developers become increasingly dependent on AI-based collaboration, it’s important that they can form a meaningful intuition around what their agents can and cannot do for them. Otherwise, if we simply pass the stochastic burden of LLMs onto the user, and expect them to develop their own expectations based on trial and error, then their success will likely be left to too much chance. In other words, before handing a task to an agent, the developer must have a high level of confidence that the agent can solve the task to a meaningful degree.&lt;/p&gt;

&lt;p&gt;This is why it’s important to design the user experience with constraints and discoverability in mind that can effectively guide the user towards success, and give them the means to iterate in a way that will result in higher value. If we give users too much of a “magical” experience, even if it’s initially fun, it will quickly become frustrating as the novelty wears off and the user feels their time is being wasted.&lt;/p&gt;

&lt;p&gt;&lt;ins&gt;&lt;em&gt;Examples from Copilot today&lt;/em&gt;&lt;/ins&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Due to the contextual nature of Copilot’s “ghost text”, many users can intuit that its suggestions are based on the preceding code in the file, and therefore, can predict (to some degree) when it will offer suggestions of higher or lower value.&lt;/li&gt;
  &lt;li&gt;Highlighting a span of code, and then asking Copilot Chat to explain it, makes it very clear what the AI will take into consideration when responding to you.&lt;/li&gt;
  &lt;li&gt;By providing a structured timeline of steps, with specific entry points from GitHub, Copilot Workspace indicates to users what it’s meant to be good at, and how to make forward progress.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;steerable&quot;&gt;Steerable&lt;/h2&gt;

&lt;p&gt;LLM-based suggestions are only as good as the context they’re given, which is why providing that context is commonly called “grounding”. It’s therefore expected that even when assistance is “correct”, it won’t always be “perfect”.&lt;/p&gt;

&lt;p&gt;That’s why it’s critical the end-user can “steer” a suggestion, in order to iterate it towards the exact solution they’re looking for, in a simple and lightweight way. If suggestions feel like a binary decision (accept/ignore), then the cost of it being “partially wrong” becomes too high.&lt;/p&gt;

&lt;p&gt;&lt;ins&gt;&lt;em&gt;Examples from Copilot today&lt;/em&gt;&lt;/ins&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;When a user accepts a Copilot suggestion, they can easily edit it inline. And therefore, the suggestion doesn’t need to be 100% accurate to be valuable.&lt;/li&gt;
  &lt;li&gt;When a user interacts with Copilot Chat, they can refine a response by asking follow-up questions. This “steerability” is what makes conversational interfaces compelling.&lt;/li&gt;
  &lt;li&gt;Copilot Workspace allows editing the task/spec/plan/implementation, and provides an undo stack to easily iterate and refine. Additionally, all generated code is editable, and you can open a terminal or Codespace if you need to make larger edits or make use of other tools.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;tolerable&quot;&gt;Tolerable&lt;/h2&gt;

&lt;p&gt;LLMs are non-deterministic, and therefore they can, and will, be wrong. As such, it’s important that the cost of an agent being wrong feels low, so that the end-user doesn’t perceive it as a distraction. In essence, the strengths of an AI assistant need to outweigh its inevitable weaknesses, and a critical part of that is a UX that makes it simple for the user to ignore unwanted or unhelpful suggestions while remaining in their flow.&lt;/p&gt;

&lt;p&gt;This is effectively an invariant for any passive AI assistance, e.g. unsolicited pull request bot comments, in order to prevent a “Clippy effect” and too much noise. And for interactive experiences, the solution is typically the previous property, i.e. steerability.&lt;/p&gt;

&lt;p&gt;&lt;ins&gt;&lt;em&gt;Examples from Copilot today&lt;/em&gt;&lt;/ins&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;“Ghost text” allows Copilot completions to be displayed inline, but with a UX that allows the developer to easily ignore them and keep typing.&lt;/li&gt;
  &lt;li&gt;Because Copilot Chat streams its responses so quickly, it’s acceptable when an initial answer is wrong; it feels lightweight to refine and iterate with it.&lt;/li&gt;
  &lt;li&gt;By having Copilot Workspace generate an initial spec/plan for an issue, it can act as a jumpstart or thought partner. Even if it gets something wrong, it’s still likely useful for progressing your task and thinking forward.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2 id=&quot;verifiable&quot;&gt;Verifiable&lt;/h2&gt;

&lt;p&gt;Despite the increasing ubiquity of AI-infused and AI-native development, it’s still nascent, in the sense that organizations are still deciding if, and for what, they can confidently use it. Because source code isn’t static, it’s insufficient for it to simply “look right”. It also needs to “be right”: behave as expected, adhere to best practices, be free of security issues, and so on.&lt;/p&gt;

&lt;p&gt;As a result, it’s important that a user can not only steer assistance towards a final solution, but also trust the solution enough to accept and commit it. Empowering this level of trust will become critical as we expect developers to use AI assistance for new, and increasingly complex, tasks.&lt;/p&gt;

&lt;p&gt;&lt;ins&gt;&lt;em&gt;Examples from Copilot today&lt;/em&gt;&lt;/ins&gt;:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;After accepting a Copilot suggestion, a developer can immediately see error squiggles in their editor, and run a linter or test suite to validate its correctness.&lt;/li&gt;
  &lt;li&gt;Copilot Chat displays citations to the references and external docs it used, as a means of “proving” to the user that it consulted the correct materials when making its suggestion.&lt;/li&gt;
  &lt;li&gt;Copilot Workspace provides an integrated terminal to validate code changes, as well as secure port forwarding to view a running web app. It also provides an integrated file browser which allows easily verifying the current and proposed spec.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;–&lt;/p&gt;

&lt;p&gt;Only those agents that can fulfill these four expectations will be delightful to use.&lt;br /&gt;
Only those agents will actually increase the joy of being a developer, letting us amplify our creativity while leaving behind the boilerplate that has slowed us down ever since a bug was taped into &lt;a href=&quot;https://americanhistory.si.edu/collections/object/nmah_334663&quot;&gt;this famous log book&lt;/a&gt;.&lt;br /&gt;
Only those agents will become true peer programmers in our teams.&lt;/p&gt;

</description>
                <pubDate>Sun, 16 Feb 2025 08:00:00 +0000</pubDate>
                <link>https://ashtom.github.io/ai-peer-programmer</link>
                <guid isPermaLink="true">https://ashtom.github.io/ai-peer-programmer</guid>
                
                <category>Agents</category>
                
                
            </item>
        
            <item>
                <title>Scenes from an Agentic Life</title>
                <description>&lt;p&gt;“When?” you ask. “When does it begin?”&lt;/p&gt;

&lt;p&gt;You’ll live this day soon, in just a few years’ time.&lt;/p&gt;

&lt;p&gt;—&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7:00 AM:&lt;/strong&gt; The alarm rings. You’ll go to hit snooze. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; will say, “No snooze today, sorry.” You set that command for your own good. You’re awake, unfortunately, and Bob Dylan’s “Like a Rolling Stone” is playing in the background. “Next,” you say. Too early for that.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7:07 AM:&lt;/strong&gt; Over coffee, you’ll open the day’s agenda. “Fix all the back-to-backs and shift meetings to tomorrow, please,” you’ll say. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; will compile a summary of your Slack messages and emails from colleagues across time zones – and you’ll approve a drafted response for each based on the progress made since your last message. You laugh at a video of a woman jamming about butter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7:45 AM:&lt;/strong&gt; You’ll be out walking your dog when you get an alert on your watch: DDoS attack detected. Shit. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; sends another notification right after: Attack can be mitigated by adjusting Varnish Cache config. You’ll review. Good to go. Accept. Shit! Another shit! – this one from your dog. You’ll pick it up, throw it in the trash bin. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; can’t help with that. Time to walk home.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8:10 AM:&lt;/strong&gt; You’ll miss the first ten minutes of the standup, but &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; will summarize the meeting so far in bullets capturing the gist. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; does this every day, but you notice it has self-improved its method from the day prior. You’ll have a new feature request, and you decide to start on it together after &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; provides a changelog of the past 24 hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;8:37 AM:&lt;/strong&gt; You’ll send your coworker a voice note with ideas for the new feature. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; will be listening to the conversation and will get started on spec and mock-ups for both of you to approve based on the transcript without you asking. Looks great, you’ll tell &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; to go to work!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9:42 AM:&lt;/strong&gt; Based on the spec you’ve iterated on, you’ll ask &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; to start writing the new feature. This will require changing dozens of files in your codebase that &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; highlights for you with a few keystrokes. You’ll prompt &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; to draft the changes while you review new Slacks and emails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;10:10 AM:&lt;/strong&gt; You’ll take a look at the code &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; generated. Mostly right, but you see a few places you’ll need to correct. “Damn, I’ve gotta prompt some code, don’t I?” you say to yourself. You’ll still be team Python, duh. Time for deep work. You and &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt;, yin and yang in the flow state. Coding. Revising. Factoring and refactoring across multiple files. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; writes tests, documentation and optimizes for efficiency. Hey now, this IS what dreams are made of.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;11:30 AM:&lt;/strong&gt; Time for a break! You’ll tell &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; to book a table somewhere fancy and no farther than 5 kilometers from your house for date night tonight. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; gets to work and quickly presents some options; you pick your partner’s favorite place. You’ll drink another shot of espresso and catch up on your social feed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;12:32 PM:&lt;/strong&gt; Back to work. You’ll have an idea in your head to improve the feature that you just can’t seem to get out in code or in words. So, you go grab the back of a party invitation that you certainly aren’t going to attend and sketch the idea on paper. You take a picture with your phone, and &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; automatically analyzes, runs, and implements the code. Ok. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; is cooking.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1:25 PM:&lt;/strong&gt; &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; wraps up another set of commits to the PR and you’ll review them. You’ll want to run the app, so you’ll check out the code locally and ask &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; to install dependencies and fix a problem in your dev environment. The last commit will be yours, but &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; quickly updates the PR summary. Tests and reviews pass. LFG.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2:01 PM:&lt;/strong&gt; After deploying, a few bug reports appear. You’ll paste them into a new issue and ask &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; for a fix. Rinse and repeat. Things have been flowing well today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4:10 PM:&lt;/strong&gt; What’s a hard day’s work without a little showing off? You’ll ask &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; to create, record, and send a demo of the feature, just to see what your colleagues think of it; or of &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt;? Like a Rolling Stone.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5:23 PM:&lt;/strong&gt; The work day is winding down. You’ll have about one hour. You’re tired of all your computer games, and you recently re-watched The Fellowship of the Ring. Hm. You’ll pick up your phone and ask &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; for a playable prototype of a new hobby project: a Lord of the Rings-inspired game that you want to publish open source. Balrog v. Gandalf face-off. Yep, you’re still a nerd.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5:25 PM:&lt;/strong&gt; Ding. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; notifies you: The game is ready. Pretty good. You begin your face-off against the Balrog. You. Shall. Not. Pass! Seconds later, you lose the first round to &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6:30 PM:&lt;/strong&gt; You’ll still be iterating on that LOTR prototype when the autonomous car arrives to pick you and your partner up at the door. &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; will bring you to dinner and you’ll order a vegan pasta dish.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7:37 PM:&lt;/strong&gt; The group next to you is talking loudly about their new vertical mushroom farming startup. This is San Francisco; some things never change. You jump into the conversation and propose to use &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; for monitoring and fine-tuning the growth process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;9:00 PM:&lt;/strong&gt; The car is there to bring you home. But…it’s a beautiful night in September, so you’ll decide to walk instead and talk about good times. No more &lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt; today.&lt;/p&gt;

&lt;p&gt;—&lt;/p&gt;

&lt;p&gt;This is just an average day. One you’ll soon forget, but one that seems no less profound when glancing through tomorrow.&lt;/p&gt;

&lt;p&gt;You’ll still be a developer. But one day soon, you’ll live this new developer experience, re-programmed by you and your orchestra of agents, the many “&lt;strong&gt;&lt;em&gt;It&lt;/em&gt;&lt;/strong&gt;s”.&lt;/p&gt;

&lt;p&gt;And in all this, you’ll live a new human experience, too. Let the symphony begin.&lt;/p&gt;

</description>
                <pubDate>Fri, 07 Feb 2025 08:00:00 +0000</pubDate>
                <link>https://ashtom.github.io/agentic-life</link>
                <guid isPermaLink="true">https://ashtom.github.io/agentic-life</guid>
                
                <category>Agents</category>
                
                
            </item>
        
            <item>
                <title>Developer Odyssey</title>
                <description>&lt;p&gt;In today’s world, we often talk about artificial intelligence as the next frontier, with artificial general intelligence (AGI) even seen as the final frontier. But maybe it’s more apt to see it as the natural evolution in a series of progressive shifts. The craft of software engineering has been through multiple waves of transformation, each shaped by its technological, cultural, and economical contexts. The next wave did not simply replace the previous; they gained inertia from one another, expanding the capacities of those using computers – and lowering the barrier of entry of who could program machines themselves.&lt;/p&gt;

&lt;p&gt;In the mainframe era, computing resources were scarce and tightly centralized. A small group of specialists wrote low-level instructions to optimize compute and memory usage. With the arrival of the personal computer, more affordable hardware combined with more accessible operating systems and programming languages sparked a wave of democratization. Coding became a growing phenomenon in schools and computer clubs. Garages turned into offices. The personal computer fostered a wave of creativity and entrepreneurship by allowing users to transform into developers.&lt;/p&gt;

&lt;p&gt;Then the internet era changed everything. Suddenly, software could connect to services and people around the globe. Software distribution no longer required physical media, physical boxes, and physical stores. Web development took center stage, and new web technologies like HTML and JavaScript made building applications accessible to an even broader swath of programmers. Open source emerged as an indispensable force, letting developers build on the collective knowledge of a worldwide community instead of starting from scratch. The internet turned coding into a social activity, accelerating innovation by making open source the backbone of modern software.&lt;/p&gt;

&lt;p&gt;Finally, cloud computing put it all together: Large amounts of PC hardware, running open-source software, put into data centers around the world, connected to the public internet. Always on, always available. The pay-as-you-go model empowered startups to grow without needing massive capital expenditures, and enterprises could offload routine maintenance. Developers found themselves grappling with architecture in new ways: full-stack engineering, microservices, continuous integration, and 24/7 on-call schedules became standard. The cloud made it possible for small and large teams to deliver software with global scale and rapid delivery.&lt;/p&gt;

&lt;p&gt;Today, we stand on the cusp of another major pivot: the rise of artificial intelligence allows anyone to create any digital content with one or more prompts, written in natural language. And developers are once again at the forefront, using AI for code generation, code review, security and compliance, and production monitoring. Much like each preceding step, AI won’t eliminate developers – it will empower them to tackle new, more complex challenges. Through machine learning models that generate or suggest code, developers can accelerate mundane tasks and experiment faster. AI agents can sift through documentation, identify bugs, and even propose architectural improvements. This emerging paradigm lets developers think more strategically about the user experience, performance, and reliability, rather than obsessing over syntax and rote implementation. Developers will increasingly build in AI-native editors and environments – where you can’t build without building with AI.&lt;/p&gt;

&lt;p&gt;In this new AI-native future, the developer’s role will shift much as it did in previous waves, climbing the abstraction ladder by replacing the complexities of the underlying layer with technology that allows them to manage even more complex systems. We will see a future where code is co-written by humans and AI agents. The human sets the overarching goal, determines constraints, ensures ethical considerations, and divides the work into small chunks that can be handled by the state-of-the-art model, while the AI agent takes on the grunt work of writing, testing, and refining large swaths of code. Debugging sessions are becoming more like dialogues between human and machine, instead of hours-long hunts for elusive errors. The interplay between humans and AI agents will become central to how software evolves, introducing a dynamic feedback loop that extends beyond the code itself. The developer, in essence, becomes the conductor of an AI-empowered orchestra.&lt;/p&gt;

&lt;p&gt;Critics worry that AI-driven coding might render human developers obsolete. However, history suggests that every major leap in abstraction – from assembly languages to high-level frameworks, from optimizing for a single computer to thousands of virtual machines, from one release every quarter to hundreds per day – simply allowed people to accomplish more. The anxiety is understandable, but time and again, developers have discovered how to channel the new capabilities into entire domains of innovation that didn’t exist before. They have always used automation to make their life easier. Batch scripts, bots, workflows, or pipelines. Now we call them AI agents, but the idea remains the same: Define a goal, let the compute processes do the work, sometimes over hours, then validate the output. Rinse and repeat.&lt;/p&gt;

&lt;p&gt;AI code generation, models, and agents won’t automatically produce perfect solutions; human oversight, expertise, and intuition are still vital. In many ways, the hype around AI is a reflection of our anticipation for how drastically new tools can transform what’s possible. With every technology wave, from mainframes to personal computers, the internet, open source, and the cloud, developers have expanded their horizons. They took on new forms of intricacy that they had to learn and master in order to reach the wave’s full potential. AI stands as a continuation of this arc – a technology that allows programmers to do more with less. It doesn’t negate the necessity of human creativity and human craft; AI amplifies them by handling repetitive tasks at lightning speed.&lt;/p&gt;

&lt;p&gt;Ambient computing and artificial intelligence are the natural evolution, the stream of consciousness in the developer odyssey. Which leaves us with one final quest(ion): what new vistas will developers explore, what novel arenas will humans build once they are unburdened from the complexity of today’s software development processes?&lt;/p&gt;
</description>
                <pubDate>Sun, 12 Jan 2025 08:00:00 +0000</pubDate>
                <link>https://ashtom.github.io/developer-odyssey</link>
                <guid isPermaLink="true">https://ashtom.github.io/developer-odyssey</guid>
                
                <category>Agents</category>
                
                
            </item>
        
    </channel>
</rss>