Kevin Kasaei
https://www.kasaei.com (Substack feed, updated Wed, 22 Apr 2026 18:50:43 GMT)

Goodbye Coding!
https://www.kasaei.com/p/goodbye-coding
Fri, 27 Feb 2026 01:01:46 GMT

I wrote my first line of code in 1995 on a Commodore 64. Commodore BASIC. No IDE, no syntax highlighting — just a blinking cursor after READY. and a kid figuring out that 10 PRINT "HELLO" followed by RUN was the closest thing to magic. Over the three decades since, I’ve shipped enterprise systems, wrestled MongoDB clusters into submission, architected microservices, and debugged code at 2am because a deployment went sideways.

Last month I realised something unsettling: I hadn’t opened my code editor in three weeks. Not because I was lazy. Because I didn’t need to.

I was still shipping features. Still fixing bugs. Still building products. But the craft I’d spent thirty years mastering — the act of translating intent into syntax — was no longer the bottleneck. It wasn’t even the job anymore.

Coding, as we knew it, is dead. And I say that as someone who loved it.

This Substack is reader-supported. To receive new posts and support my work, consider becoming a free or paid subscriber.


What Actually Changed

Let me be precise about what I mean, because “AI writes code now” is a lazy take that misses the point.

Twelve months ago, AI could autocomplete functions and generate boilerplate. Useful, sure. A faster pair of hands. But you still needed to think in code, structure the architecture, wire the systems together, and babysit every output.

That’s not where we are anymore.

Today I describe what I want built — in plain English, with business context — and an agent goes away, reads my codebase, writes the implementation, runs the tests, and opens a pull request. Sometimes multiple agents do this in parallel across different parts of the same project.

The shift wasn’t incremental. It was a phase change. We went from “AI helps you code faster” to “AI codes, and you supervise.”

If you’re a CTO still thinking about AI as a productivity tool for your engineering team, you’re already behind.


The Two Tools That Broke the Dam

Two products crystallised this shift for me: OpenAI’s Codex and Anthropic’s Claude Code. They represent different philosophies but arrive at the same conclusion — the developer’s role is fundamentally changing.

Codex: The Parallel Workforce

OpenAI’s Codex is now a full cloud-based software engineering agent. You point it at a GitHub repo, describe what you want, and it spins up its own sandboxed environment to do the work. Multiple tasks run in parallel. It reads your codebase, understands your conventions, writes features, fixes bugs, and proposes pull requests.

The latest models — GPT-5.3-Codex — can sustain work across millions of tokens. OpenAI showed it building complete, multi-level games autonomously over the course of days, iterating on its own output without human intervention. The Codex desktop app is essentially a command centre where you supervise a team of AI agents, each working on isolated branches of your code.

This isn’t a copilot sitting in your editor. It’s a junior engineering team that works 24/7, doesn’t get tired, and scales horizontally on demand.

What made me sit up was the introduction of Skills and Automations. Skills let Codex go beyond code generation into prototyping, documentation, and code understanding — aligned with your team’s standards. Automations mean Codex can work unprompted: triaging issues, monitoring alerts, handling CI/CD. The agent doesn’t wait for you to ask. It picks up work.

Claude Code: The Senior Engineer in Your Terminal

Claude Code takes a different approach. It lives in your terminal. It understands your codebase. It reasons about architecture.

Where Codex feels like managing a team, Claude Code feels like working alongside a very senior engineer who happens to have perfect recall of every file in your project.

Claude Code was released a year ago as a command-line tool and quickly became what many consider the best AI coding assistant available. Enterprise adoption grew 5.5x in its first few months. Microsoft, Google, and — ironically — even OpenAI employees were using it. That’s not marketing spin. That’s a signal about where actual engineering work is getting done.

What sets Claude Code apart is depth. Anthropic’s Opus 4.6 model recently wrote a working C compiler in Rust from scratch — one capable of compiling the Linux kernel. Sixteen agents collaborating, costing $20,000, producing something that would have taken a team of specialists months. The compiler isn’t optimised, but it works. That’s the part that matters.

Then there’s Claude Code Security, launched weeks ago. It doesn’t just scan for known vulnerability patterns. It reasons about your codebase like a human security researcher — tracing data flows, understanding component interactions, finding bugs that had gone undetected for decades in open-source libraries. It surfaced more than 500 previously unknown vulnerabilities in its initial runs.

The trajectory is clear: Claude Code is evolving from “a tool that writes code” to “a tool that does engineering.”


What the CTO Role Looks Like Now

Here’s what I’ve noticed changing in my own day-to-day.

I think in systems, not syntax. My value is in deciding what to build, why, and how the pieces fit together. The implementation is increasingly delegated. When I describe a feature to Claude Code or spin up a Codex task, I’m operating at the architecture and product level. The code is a byproduct.

I review more than I write. Pull requests from AI agents are a daily occurrence. My job is to evaluate whether the approach is right, whether edge cases are handled, whether the solution fits the broader system. I’m a code reviewer, not a code writer. This is a fundamentally different skill.

Speed compounds differently. When you can run five agents in parallel — one building a feature, one writing tests, one fixing a bug, one updating documentation, one handling a migration — your throughput doesn’t increase linearly. It explodes. A task that would have taken my small team a week now takes an afternoon of supervision.
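The fan-out itself is mundane plumbing. A minimal sketch of what "five agents in parallel" looks like from the supervisor's seat — assuming a hypothetical `run_agent` dispatcher that stands in for whatever CLI or API call actually starts an agent run (it is not a real library function):

```python
# Hypothetical sketch of supervising several agent tasks in parallel.
# `run_agent` is a placeholder for the real dispatch mechanism
# (a CLI invocation, an API request, etc.).
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # In practice this would kick off an agent run on its own branch
    # and block until the agent opens a pull request.
    return f"PR opened for: {task}"

tasks = [
    "build the export feature",
    "write tests for the billing module",
    "fix the pagination bug",
    "update the API docs",
    "run the schema migration",
]

# Each task runs concurrently; the human's job is reviewing the PRs.
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(run_agent, tasks))

for r in results:
    print(r)
```

The point of the sketch is the shape, not the code: the expensive serial resource is now reviewer attention, not typing speed.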

The hiring calculus has flipped. I used to hire for “can this person write clean code?” Now I hire for “can this person specify clearly what needs to be built, evaluate whether AI output is correct, and architect systems that are agent-friendly?” That’s a very different profile. Strong communicators with deep domain knowledge are suddenly more valuable than fast typists with encyclopaedic framework knowledge.


This Isn’t Theory. I’m Living It Across Three Companies.

I don’t say any of this from the sidelines. I’m a self-funded founder running three companies simultaneously from Sydney, and every one of them is being shaped by this shift.

SearchFit: Building an AI Product With AI

SearchFit.ai is an AI search visibility platform — it helps brands understand and optimise how they appear across AI-powered search engines like ChatGPT, Perplexity, and Google’s AI Overviews. The old SEO playbook is dying. When answers come from AI, not blue links, visibility means something completely different. SearchFit tracks that new reality.

Here’s the thing: I’m building an AI product almost entirely with AI.

Twenty-five per cent of the SearchFit codebase was written by Claude Code and Codex agents. Feature development that would have required a team of three or four engineers is handled by me directing agents — describing what I need, reviewing the pull requests, and iterating on the architecture. Database schema design, API endpoints, Shopify App Store integrations, pricing logic, documentation — agents handle the implementation. I handle the product decisions.

This is what “goodbye coding” looks like in practice. With a very small technical team we are shipping a SaaS product at the pace of a funded startup, because the agents have replaced the team I would have needed to hire. The economics of building software have fundamentally changed. You don’t need a seed round to build a real product anymore. You need domain expertise, architectural judgement, and the ability to direct AI agents effectively.

Capitaly: Agentic Sales and Pipeline Building

Capitaly.ai is an AI-powered capital raising platform for founders. It connects founders with investors, helps them prepare for raises, and builds the relationships that lead to funded rounds.

What’s changed the game here is agentic pipeline building. Traditional sales and BD meant a human manually researching leads, crafting outreach, following up, updating CRMs, and nurturing relationships across dozens of touchpoints. It was labour-intensive and didn’t scale — especially for a bootstrapped company.

Now, AI agents handle the heavy lifting of the pipeline. Researching investor fit. Personalising outreach at scale. Monitoring signals that indicate when a fund is actively deploying. Following up with context-aware messages that feel human because the agent understands the full history of the relationship.

The human role — my role — is strategy. Which investor segments to target. What the value proposition is for each audience. When to push and when to nurture. The agents execute the playbook. I write the playbook.

This is the pattern I see emerging across every B2B company: agentic sales isn’t about replacing salespeople. It’s about giving one person the pipeline capacity of a team of ten. The founders who figure this out first will have an absurd advantage in 2026.


What This Means If You’re a Developer

I’m not going to sugarcoat this. If your entire value proposition is “I write code,” you’re in trouble.

But here’s the thing most doom-and-gloom takes miss: the demand for software hasn’t decreased. It’s exploded. Every business wants custom tools, automations, internal platforms, and AI-powered products. The constraint was never ideas — it was implementation capacity. AI just removed the constraint.

What’s valuable now:

Understanding the problem space. AI can write any code you describe. The hard part is describing the right thing. Product thinking, user empathy, domain expertise — these are the new premium skills.

Architectural judgement. AI agents can build components, but someone needs to decide how those components fit together, what the data model looks like, where to draw service boundaries, and what trade-offs to accept. This requires experience that no model currently has.

Taste. This sounds vague but it’s not. Knowing when a solution is over-engineered, when an abstraction is premature, when the simple approach is the right one — that’s taste. AI generates options. Humans choose wisely.

Verification and trust. Every line of AI-generated code needs to be reviewed. Security implications need to be understood. Edge cases need to be caught. The more code AI writes, the more critical review becomes.
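Not all of that review has to be manual. One pattern worth sketching is a mechanical gate in front of the human reviewer: an agent's PR only earns human attention after the test suite and other checks pass. A minimal illustration — the specific check commands below are placeholders, not a prescribed toolchain:

```python
# Hypothetical pre-review gate: run each check command in sequence and
# only hand the diff to a human if every one exits cleanly.
import subprocess
import sys

def pre_review_gate(commands: list[list[str]]) -> bool:
    """Return True only if every check command exits with status 0."""
    for cmd in commands:
        if subprocess.run(cmd).returncode != 0:
            return False  # bounce the PR back to the agent
    return True

# Placeholder checks; in a real setup these might be a test runner,
# a linter, and a security scanner.
checks = [
    [sys.executable, "-c", "print('tests would run here')"],
]
print("ready for human review" if pre_review_gate(checks) else "back to the agent")
```

The gate doesn't replace judgement; it just ensures the scarce resource — a human reading the diff — is spent only on changes that already pass the mechanical bar.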


The Uncomfortable Truth About “Vibe Coding”

Something interesting happened over the recent holiday period. Claude Code went viral with non-programmers. People with zero coding experience were building apps, launching tools, shipping products. They called it “vibe coding” — just describe what you want and let the AI figure it out.

Part of me loves this. The democratisation of software creation is genuinely exciting. People who couldn’t participate before now can.

But part of me worries. Because vibe-coded apps are like houses built without architects. They might look fine. They might even work. But when the load increases, when edge cases surface, when security matters — the foundations crack.

The gap between “it works on my machine” and “it works in production at scale” is still enormous. And that gap is where engineering expertise lives. AI hasn’t eliminated that gap. If anything, by making it trivially easy to create software, it’s made the gap more dangerous.


Where This Goes Next

I think we’re about 12 months away from a world where:

  • Most routine software engineering tasks are fully automated

  • AI agents handle the complete development lifecycle — from issue triage to deployment to monitoring

  • The “10x engineer” isn’t someone who codes ten times faster, but someone who can effectively supervise ten AI agents simultaneously

  • Non-technical founders can build and ship MVPs without writing a line of code (this is already happening)

  • Engineering teams shrink in headcount but grow in output by an order of magnitude

Anthropic just announced Claude Cowork — essentially Claude Code for non-developers. Private plugin marketplaces. Deep integrations with Google Drive, Gmail, DocuSign, and dozens of other enterprise tools. The direction is unmistakable: AI agents that don’t just write code, but do knowledge work end-to-end.

OpenAI’s Codex app is heading the same way — from coding tool to autonomous software development platform.


PADISO: Running Agentic Workloads for the Enterprise

The third company in the portfolio is PADISO.ai — an AI and automation consultancy. While SearchFit and Capitaly are products I’m building, PADISO is where I see what the market actually needs right now. And what the market needs is help running agentic workloads.

Most companies aren’t struggling with “should we use AI?” anymore. That debate is over. They’re struggling with “how do we actually deploy AI agents that do real work, reliably, at scale, without breaking everything?”

That’s the gap PADISO fills. We design and deploy agentic workflows for clients — everything from automated revenue tracking across booking platforms to Power BI dashboards that update themselves, from email automation pipelines to full process orchestration using tools like n8n, Claude Code, and custom agent architectures.

What I’ve learned running PADISO is that the hard part isn’t the AI. The hard part is the integration. Real businesses have messy data, legacy systems, workflows that evolved organically over decades, and teams that don’t think in terms of prompts and agents. The value isn’t in showing a client that Claude can write code. It’s in wiring an agent into their actual operational reality — their booking systems, their CRMs, their reporting stack — and making it work Monday through Friday without human intervention.

This is where the “goodbye coding” thesis gets real for enterprises. The companies that will win aren’t the ones that hire more engineers. They’re the ones that learn to orchestrate agentic workloads across their existing infrastructure. The CTO of 2026 isn’t managing a team of developers. They’re managing a fleet of agents, with a small team of humans who understand both the business domain and the AI capabilities well enough to keep everything running.

PADISO generates around $20K a month in revenue — profitably, without VC funding — and that revenue funds the product bets like SearchFit and Capitaly. It’s the consultancy-funded model: use services revenue to bankroll products. But increasingly, the consulting work itself is being done with AI agents. I’m using AI to build the AI consultancy that funds the AI products. If that sentence doesn’t capture the moment we’re in, I don’t know what does.


The Real Goodbye

I’m not saying goodbye to building things. I’m saying goodbye to the specific act of sitting in an editor, holding the entire context of a system in my head, and translating logic into syntax character by character.

That was beautiful work. It was craft. From Commodore BASIC to TypeScript, from line numbers to microservices — every era taught me something. I’ll miss the flow state, the satisfaction of an elegant solution, the quiet pride of a clean diff.

But I won’t miss the repetition. The boilerplate. The context-switching between thinking about what to build and figuring out how to express it. The yak-shaving.

What I do now is closer to what I always wanted to do: think about hard problems, make consequential decisions, and build things that matter. The code is just a detail.

Goodbye coding. Hello engineering.


I’m Kevin — a serial entrepreneur, CTO-turned-CEO running three companies from Sydney. I write about the intersection of technical leadership and AI-first building. If this resonated, subscribe for more.


]]>
DeepSeek vs OpenAI: A New Era of AI Innovation and What CTOs Can Learn
https://www.kasaei.com/p/deepseek-vs-openai-a-new-era-of-ai
Mon, 27 Jan 2025 20:03:28 GMT

In the rapidly evolving landscape of artificial intelligence, a new player has emerged to challenge the dominance of industry giants. DeepSeek, a Chinese AI research lab, has recently unveiled its open-source AI model, DeepSeek-R1, which is making waves in the tech world. This development not only signals a shift in the global AI landscape but also offers valuable lessons for Chief Technology Officers (CTOs) looking to drive innovation within their organizations.


The Rise of DeepSeek-R1

DeepSeek-R1, released in January 2025, has quickly garnered attention for its impressive capabilities in areas such as mathematical reasoning, code generation, and cost efficiency. What sets DeepSeek-R1 apart is its unique training methodology, which relies heavily on reinforcement learning techniques without the need for supervised fine-tuning as an initial step. This approach has allowed the model to develop advanced reasoning behaviors naturally, rivaling and in some cases surpassing the capabilities of OpenAI's latest offering, the O1 model.

Key Features of DeepSeek-R1

1. Advanced Reasoning: DeepSeek-R1 excels in multi-step logical and mathematical tasks, making it a powerful tool for complex problem-solving.

2. Open-Source Nature: Available under the MIT license, DeepSeek-R1 is accessible for both academic and enterprise use, fostering innovation and collaboration within the AI community.

3. Cost-Effectiveness: In perhaps its most striking feature, DeepSeek-R1 is estimated to cost only about 2% of what users would spend on OpenAI's O1 model. This dramatic reduction makes advanced AI reasoning accessible to a far broader audience.

4. Multilingual Capabilities: DeepSeek-R1 demonstrates strong performance in non-English contexts, particularly for Asian languages, giving it an edge in global markets.

OpenAI's O1: The Established Contender

OpenAI's O1 model, released in December 2024, represents a significant leap forward in reasoning and problem-solving capabilities. It employs an internal chain-of-thought mechanism to enhance accuracy and logical coherence, making it particularly adept at tasks requiring deep reasoning.

O1's Strengths

1. Scientific Reasoning: O1 excels in annotating data and generating mathematical proofs.

2. Mathematics: The model ranks among the top 500 US students in the American Invitational Mathematics Examination (AIME).

3. Coding: O1 demonstrates proficiency in code generation and debugging, ranking in the 89th percentile on Codeforces.

4. Data Analysis: It shows strong capabilities in analyzing large datasets and generating SQL queries for financial applications.

DeepSeek-R1 vs OpenAI O1: A Comparative Analysis

While both models showcase impressive capabilities, they each have their unique strengths and potential applications. In direct comparisons:

- Reasoning: DeepSeek-R1 surpasses all previous state-of-the-art models, though it falls slightly short of O1 on the ARC AGI benchmark.

- Mathematics: Both models perform exceptionally well, with O1 maintaining a slight edge.

- Coding: Initial impressions suggest DeepSeek-R1 is competitive with O1, with its significantly lower cost making it a more practical choice for many applications.

- Creative Writing: DeepSeek-R1 shines in this area, evoking the same excitement as early ChatGPT versions. It's more expressive and notably creative compared to O1.

- Cost-Efficiency: DeepSeek-R1's operational costs are significantly lower than O1, making it a more accessible option for many organizations.

Implications for CTOs and Innovation

The emergence of DeepSeek-R1 as a formidable competitor to OpenAI's offerings holds several important lessons for CTOs looking to drive innovation within their organizations:

1. Embrace Open-Source Solutions

DeepSeek's open-source approach demonstrates the power of collaborative innovation. CTOs should consider how open-source technologies can be leveraged to accelerate development and reduce costs within their organizations.

2. Foster a Culture of Experimentation

DeepSeek's success in developing R1 through novel training methods highlights the importance of encouraging experimentation. CTOs should create an environment where safe experimentation, creativity, and calculated risk-taking are encouraged and rewarded.

3. Prioritize Cost-Effective Solutions

The significant cost advantage of DeepSeek-R1 over O1 underscores the importance of seeking out cost-effective solutions. CTOs should continuously evaluate new technologies that can deliver comparable or superior results at lower costs.

4. Drive Continuous Learning

The rapid advancements in AI underscore the need for continuous learning and upskilling. CTOs should foster a culture of continuous learning within their teams to keep pace with emerging technologies.

5. Lead with Vision

The success of both DeepSeek and OpenAI in pushing the boundaries of AI capabilities demonstrates the importance of visionary leadership. CTOs must stay ahead of the tech curve, anticipating market trends and identifying opportunities for growth.

6. Collaborate Across Boundaries

The global nature of AI development, as exemplified by DeepSeek's emergence from China, highlights the importance of cross-border collaboration. CTOs should seek out partnerships and collaborations that can bring fresh perspectives and accelerate innovation.

Conclusion

The competition between DeepSeek and OpenAI represents a new chapter in AI innovation, one that offers exciting possibilities for organizations across industries. For CTOs, this development serves as a reminder of the importance of staying agile, embracing open innovation, and continuously seeking out new technologies that can drive business value.

As we move forward, the role of the CTO will continue to evolve, becoming increasingly central to organizational success. By fostering a culture of innovation, embracing emerging technologies, and leading with vision, CTOs can position their organizations at the forefront of the AI revolution, ready to harness the power of models like DeepSeek-R1 and O1 to drive growth and create value in the years to come.

The future of AI is here, and it's more accessible and powerful than ever before. The question for CTOs is not whether to embrace these new technologies, but how to do so in a way that best serves their organization's unique needs and goals. As DeepSeek and OpenAI continue to push the boundaries of what's possible, the opportunities for innovation are limitless. It's up to today's CTOs to seize these opportunities and lead their organizations into a future where AI is not just a tool, but a fundamental driver of business success.


Thank you for reading! If you found this article insightful, I encourage you to take the next step in your AI journey.

🤝 I'm Kevin Kasaei, Principal Consultant at PADISO. We help businesses cut through the AI noise and implement solutions that actually move the needle. No theoretical frameworks - just practical, results-driven AI strategies tailored to your business goals.

Want the same results for your business?

🎯 I'm offering free 30-minute consultations to discuss practical applications for your specific situation.

Book a call with me | Let's connect on LinkedIn

📫 P.S. Subscribe to my newsletter where I share weekly insights about AI implementation case studies and practical frameworks. No fluff, just results.

Don't miss out on the AI revolution - let's turn these insights into action for your business today!

Keyvan Kasaei is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

]]>
The End of Software Engineering (As We Know It)
https://www.kasaei.com/p/the-end-of-software-engineering-as
Sun, 19 Jan 2025 10:12:37 GMT

The drumbeat is getting louder: AI is coming for software engineering jobs. With each new breakthrough – from GitHub Copilot to Claude to GPT-4 to Google's Gemini – the question grows more urgent: Is this the end of software engineering as we know it? After spending months researching this transformation and speaking with dozens of engineers and tech leaders, I've come to a conclusion that might surprise you: Yes, it is the end – but not in the way most people think.

The Great Misconception


The popular narrative goes something like this: AI models are getting so good at writing code that they'll soon replace human programmers entirely. After all, if an AI can pass coding interviews, complete programming assignments, and generate working code from natural language descriptions, what's left for human engineers to do?

This view fundamentally misunderstands both the nature of software engineering and the trajectory of AI development. The truth is more nuanced and, ultimately, more interesting.

What's Really Ending

What we're witnessing isn't the death of software engineering – it's the death of software engineering as we've known it for the past several decades. Here's what's actually ending:

1. The Era of Manual Implementation

The days of manually writing every line of code are indeed coming to an end. Just as calculators freed mathematicians from manual arithmetic, AI is freeing engineers from the mechanical aspects of coding. But this isn't a loss – it's a liberation.

Think about it: How much of your time as a developer is spent on truly creative problem-solving versus implementing well-understood patterns? How often are you copying and pasting from Stack Overflow or rewriting boilerplate code? These mechanical aspects of coding are precisely what AI excels at automating.

2. The Traditional Career Ladder

The conventional career progression from junior developer to senior engineer to architect is being disrupted. The skills that traditionally took years to develop – like memorizing syntax or understanding common design patterns – can now be augmented or replaced by AI tools. This means the career ladder is being compressed, but it's also creating new rungs at the top.

3. The Isolation of Development

The era of the lone programmer, headphones on, coding in isolation, is ending. As AI tools take over more of the implementation details, software engineering is becoming more collaborative, more creative, and more focused on human-to-human interaction and problem-solving.

What's Being Born

As these aspects of traditional software engineering fade away, new paradigms are emerging. Here's what the future looks like:

1. The Rise of the AI-Native Engineer

Just as we have cloud-native engineers today, we're seeing the emergence of AI-native engineers. These professionals don't just use AI tools – they think in terms of AI-first architectures and solutions. They understand:

- How to effectively prompt and direct AI systems

- When to use AI versus traditional approaches

- How to architect systems that can evolve with AI capabilities

- How to ensure safety and reliability in AI-augmented systems

2. The Emergence of Meta-Engineering

Instead of writing all the code themselves, future engineers will focus on meta-engineering: designing systems and processes that AI can implement. This includes:

- Creating robust architectures that AI can work within

- Designing safety constraints and validation systems

- Developing evaluation criteria for AI-generated code

- Orchestrating complex systems of AI agents

3. The Evolution of Problem Solving

The focus is shifting from "how to implement" to "what to implement." Engineers will spend more time on:

- Understanding user needs and business requirements

- Designing system architectures and interfaces

- Ensuring security, scalability, and reliability

- Making high-level technical decisions that AI can't make

The New Skill Stack

To thrive in this new era, software engineers need a different set of skills. Here's what the new stack looks like:

1. Prompt Engineering and AI Interaction

- Understanding how to effectively communicate with AI systems

- Knowing how to debug and improve AI outputs

- Being able to combine multiple AI tools effectively

2. System Design and Architecture

- Designing systems that can leverage AI capabilities

- Understanding the limitations and trade-offs of AI tools

- Creating robust and maintainable AI-augmented systems

3. Human Skills and Business Acumen

- Communicating effectively with stakeholders

- Understanding business requirements and constraints

- Managing teams of humans and AI agents

- Making strategic technical decisions

4. Safety and Reliability Engineering

- Ensuring AI-generated code is secure and reliable

- Developing testing and validation frameworks

- Creating safeguards and fallback systems

The Winners and Losers

This transformation will create both winners and losers in the industry. Here's how it breaks down:

Winners Will Be:

- Engineers who embrace AI as a powerful tool rather than fighting it

- Those who focus on high-level problem solving and system design

- Professionals who develop strong communication and collaboration skills

- Engineers who understand both technology and business needs

Losers Will Be:

- Those who resist learning to work with AI tools

- Engineers who only know how to implement without understanding why

- Developers who can't explain their decisions or collaborate effectively

- Those who see coding as mere syntax rather than problem-solving

What This Means for Different Groups

For Current Software Engineers:

- Start learning to work with AI tools now

- Focus on developing high-level design and architecture skills

- Build your communication and collaboration abilities

- Understand the business context of your work

For Students and Aspiring Engineers:

- Learn the fundamentals, but don't obsess over memorizing syntax

- Focus on problem-solving and system design

- Develop strong mathematical and logical thinking skills

- Start working with AI tools early

For Companies:

- Invest in AI tools and training

- Redesign development processes to leverage AI

- Focus on hiring engineers with strong system design and communication skills

- Create frameworks for evaluating AI-generated code

The Timeline

This transformation won't happen overnight, but it's moving faster than many realize. Here's my prediction for the next few years:

2024-2025:

- AI coding assistants become standard tools

- Basic implementation tasks are largely automated

- Companies start restructuring development teams

2026-2027:

- AI systems can handle most routine coding tasks

- New roles emerge for AI-native engineers

- Traditional junior developer roles decrease

2028-2030:

- AI handles the majority of code implementation

- Software engineering focuses on system design and oversight

- New educational and career paths emerge

How to Prepare

Whether you're a current engineer or aspiring to enter the field, here's how to prepare for this new era:

1. Embrace AI Tools

- Start using GitHub Copilot or similar tools

- Learn prompt engineering

- Experiment with different AI coding assistants

2. Develop Meta-Skills

- Study system design and architecture

- Learn about AI systems and their limitations

- Focus on problem-solving and critical thinking

- Build communication and collaboration skills

3. Stay Adaptable

- Keep learning new tools and technologies

- Focus on understanding principles rather than specific implementations

- Build a strong network in the tech community

The Future is Bright

Despite the apocalyptic headlines, the future of software engineering is actually quite bright. We're not seeing the end of the field – we're seeing its evolution into something more powerful and interesting. The key is to embrace this change and prepare for it.

Just as the calculator didn't eliminate mathematicians but rather elevated the field to focus on more complex problems, AI won't eliminate software engineers. Instead, it will free us from the mundane aspects of coding and allow us to focus on more challenging and creative aspects of software development.

The end of software engineering as we know it isn't a tragedy – it's an opportunity. Those who embrace this change and adapt their skills accordingly will find themselves at the forefront of one of the most exciting transformations in the history of technology.

Are you ready for it?


Thank you for reading! If you found this article insightful, I encourage you to take the next step in your AI journey.

🤝 I'm Kevin Kasaei, Principal Consultant at PADISO. We help businesses cut through the AI noise and implement solutions that actually move the needle. No theoretical frameworks - just practical, results-driven AI strategies tailored to your business goals.

Want the same results for your business?

🎯 I'm offering free 30-minute consultations to discuss practical applications for your specific situation.

Book a call with me | Let's connect on LinkedIn

📫 P.S. Subscribe to my newsletter where I share weekly insights about AI implementation case studies and practical frameworks. No fluff, just results.

Don't miss out on the AI revolution - let's turn these insights into action for your business today!


]]>
<![CDATA[Building Effective AI Agents]]>https://www.kasaei.com/p/building-effective-ai-agentshttps://www.kasaei.com/p/building-effective-ai-agentsMon, 13 Jan 2025 08:52:57 GMT

Building an AI Agent

In the rapidly evolving landscape of artificial intelligence, the concept of AI agents has gained significant traction. These sophisticated systems, capable of performing complex tasks with varying degrees of autonomy, are reshaping how we approach problem-solving across industries. A recent research post from Anthropic, dated December 20, 2024, provides valuable insights into building effective AI agents. Let's dive deep into their findings and explore the implications for developers and businesses alike.

Understanding AI Agents

Before we delve into the intricacies of building effective agents, it's crucial to understand what we mean by "agents" in the context of AI.

Defining Agents

Anthropic's research distinguishes between two types of agentic systems:

  1. Workflows: These are systems where large language models (LLMs) and tools are orchestrated through predefined code paths.

  2. Agents: These systems allow LLMs to dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.

This distinction is fundamental to understanding the different approaches to building AI systems and choosing the right solution for specific use cases.

When to Use Agents

A key takeaway from Anthropic's research is the importance of finding the simplest solution possible. They advise against unnecessarily complex implementations, recommending that developers only increase complexity when needed.

Consider the following factors when deciding whether to use agents:

  • Task Complexity: For well-defined tasks with predictable steps, workflows might suffice. Agents are better suited for tasks requiring flexibility and model-driven decision-making.

  • Performance vs. Cost: Agentic systems often trade latency and cost for better task performance. Evaluate whether this tradeoff makes sense for your specific application.

  • Scale: Agents can be particularly valuable when dealing with tasks that need to be scaled up efficiently.

Building Blocks of Effective Agents

Anthropic's research outlines several key components and patterns that form the foundation of effective AI agents. Let's explore these building blocks in detail.

The Augmented LLM

At the core of agentic systems lies the augmented LLM. This foundational building block enhances large language models with capabilities such as:

  • Retrieval

  • Tool usage

  • Memory

These augmentations allow the LLM to actively generate search queries, select appropriate tools, and determine what information to retain.

Key Implementation Considerations:

  1. Tailor augmentations to your specific use case.

  2. Ensure a well-documented, easy-to-use interface for your LLM.

  3. Consider using Anthropic's Model Context Protocol for integrating with third-party tools.
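
The augmented-LLM idea can be sketched in a few lines. This is a minimal illustration, not Anthropic's implementation: `call_llm` is a hypothetical stub standing in for a real model API, and the tool-call format is invented for the example; the point is the shape of the loop (model call, tool dispatch, memory).

```python
# Sketch of an "augmented LLM": a model call wrapped with tool access
# and a simple memory. `call_llm` is a stand-in for a real model API.

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call your model provider here.
    if "TOOL RESULT:" in prompt:
        return "It is 72F in Sydney."
    return "TOOL: get_weather(Sydney)"

TOOLS = {"get_weather": lambda city: f"72F in {city}"}
memory: list[str] = []

def augmented_llm(user_input: str) -> str:
    memory.append(user_input)
    reply = call_llm(user_input)
    # If the model asked for a tool, run it and feed the result back.
    if reply.startswith("TOOL:"):
        name, arg = reply[5:].strip().rstrip(")").split("(")
        result = TOOLS[name.strip()](arg)
        reply = call_llm(f"TOOL RESULT: {result}\n{user_input}")
    memory.append(reply)
    return reply
```

In a real system the tool registry, memory store, and retrieval step would each be swappable components behind a documented interface, which is exactly what the well-documented-interface consideration above is about.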

Workflow Patterns

Anthropic's research identifies several effective workflow patterns that developers can leverage when building AI agents. Let's examine each of these patterns:

1. Prompt Chaining

This workflow breaks down a task into a sequence of steps, with each LLM call processing the output of the previous one.

When to use: Ideal for tasks that can be easily decomposed into fixed subtasks. It trades latency for higher accuracy by making each LLM call an easier task.

Examples:

  • Generating marketing copy, then translating it into a different language

  • Writing an outline, checking criteria, then writing a document based on the outline
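
The chaining pattern reduces to a loop that threads one call's output into the next. A minimal sketch, with `call_llm` as a hypothetical stub (it just tags its prompt so the chain is visible) in place of a real model API:

```python
# Prompt chaining: each step's output becomes the next step's input.
# `call_llm` is a stub; a real version would call your model API.

def call_llm(prompt: str) -> str:
    return f"[step:{prompt}]"

def chain(task: str, steps: list[str]) -> str:
    output = task
    for instruction in steps:
        # Each call gets the previous output plus one focused instruction.
        output = call_llm(f"{instruction}\n---\n{output}")
    return output

result = chain("Spring sale copy", ["Draft marketing copy", "Translate to French"])
```

Between steps you can also insert programmatic "gates" (e.g. check the outline meets length criteria before writing the document), which is where this pattern earns its accuracy gains.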

2. Routing

This pattern involves classifying an input and directing it to a specialized followup task.

When to use: Effective for complex tasks with distinct categories that are better handled separately, and where classification can be done accurately.

Examples:

  • Directing different types of customer service queries to appropriate processes

  • Routing easy questions to smaller models and hard questions to more capable models
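
Routing is a classify-then-dispatch step. In the sketch below the classifier is a keyword stub standing in for an LLM call, and the handler names are invented for illustration:

```python
# Routing: classify the input first, then hand it to a specialized handler.
# A real system would use an LLM classifier; a keyword stub stands in here.

def classify(query: str) -> str:
    q = query.lower()
    if "refund" in q:
        return "billing"
    if "crash" in q or "error" in q:
        return "technical"
    return "general"

HANDLERS = {
    "billing": lambda q: f"[billing team] {q}",
    "technical": lambda q: f"[tech support] {q}",
    "general": lambda q: f"[general inbox] {q}",
}

def route(query: str) -> str:
    return HANDLERS[classify(query)](query)
```

The same structure supports the cost-routing example: the handler table could map "easy" to a small model and "hard" to a more capable one.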

3. Parallelization

This workflow involves running LLMs simultaneously on a task and aggregating their outputs programmatically. It has two main variations:

  • Sectioning: Breaking a task into independent subtasks run in parallel

  • Voting: Running the same task multiple times to get diverse outputs

When to use: Effective when subtasks can be parallelized for speed, or when multiple perspectives are needed for higher confidence results.

Examples:

  • Implementing guardrails with separate models for processing queries and screening content

  • Reviewing code for vulnerabilities using multiple prompts
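
The voting variant can be sketched as running the same check concurrently and taking the majority verdict. The "reviewers" here are keyword stubs standing in for LLM calls with different prompts:

```python
# Parallelization (voting variant): run the same review several times and
# aggregate by majority. Stub reviewers stand in for real LLM calls.

from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def make_reviewer(pattern: str):
    # Each stub reviewer flags code containing its pet pattern.
    def review(code: str) -> str:
        return "unsafe" if pattern in code else "safe"
    return review

reviewers = [make_reviewer("eval("), make_reviewer("exec("), make_reviewer("os.system")]

def vote(code: str) -> str:
    with ThreadPoolExecutor() as pool:
        verdicts = list(pool.map(lambda r: r(code), reviewers))
    return Counter(verdicts).most_common(1)[0][0]
```

The sectioning variant is the same skeleton with *different* subtasks mapped across the pool instead of the same one repeated.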

4. Orchestrator-Workers

In this workflow, a central LLM dynamically breaks down tasks, delegates them to worker LLMs, and synthesizes their results.

When to use: Well-suited for complex tasks where subtasks can't be predicted in advance.

Examples:

  • Coding products that make complex changes to multiple files

  • Search tasks involving gathering and analyzing information from multiple sources

5. Evaluator-Optimizer

This pattern involves one LLM generating a response while another provides evaluation and feedback in a loop.

When to use: Effective when there are clear evaluation criteria and iterative refinement provides measurable value.

Examples:

  • Literary translation requiring nuanced improvements

  • Complex search tasks needing multiple rounds of searching and analysis

Autonomous Agents

As LLMs mature in key capabilities, truly autonomous agents are emerging in production environments. These agents can understand complex inputs, engage in reasoning and planning, use tools reliably, and recover from errors.

Key Characteristics of Autonomous Agents:

  1. Begin work with a command or interactive discussion with the user

  2. Plan and operate independently

  3. Gain "ground truth" from the environment at each step

  4. Pause for human feedback at checkpoints or when encountering blockers

  5. Often terminate upon task completion, but may include stopping conditions
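
The characteristics above boil down to a loop: act, observe ground truth from the environment, stop on completion or a budget. A deliberately tiny sketch, with a counter standing in for a real workspace (files, shell, browser):

```python
# Autonomous agent loop: act, observe the environment, repeat until the
# task is done or a step budget is hit. The "environment" is a stub
# counter standing in for a real workspace.

def run_agent(goal: int, max_steps: int = 10) -> tuple[int, int]:
    state = 0
    for step in range(1, max_steps + 1):
        # "Plan and act": the stub policy nudges state toward the goal.
        state += 1
        # "Ground truth": observe the environment after each action,
        # rather than trusting the agent's own belief about progress.
        if state == goal:            # stopping condition: task complete
            return state, step
    return state, max_steps          # stopping condition: budget spent
```

The step budget is the sketch's version of the safeguards Anthropic emphasizes: an autonomous loop without a stopping condition is the main operational risk of this pattern.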

When to Use Autonomous Agents:

Autonomous agents are ideal for open-ended problems where:

  • It's difficult to predict the required number of steps

  • You can't hardcode a fixed path

  • You have some level of trust in the LLM's decision-making

  • You need to scale tasks in trusted environments

Examples of Autonomous Agents in Action:

  1. A coding agent resolving SWE-bench tasks, involving edits to many files based on a task description

  2. Anthropic's "computer use" reference implementation, where Claude uses a computer to accomplish tasks

Best Practices for Building Effective Agents

Based on Anthropic's research, here are some key best practices to keep in mind when building AI agents:

1. Start Simple and Iterate

Begin with simple prompts and optimize them through comprehensive evaluation. Only add multi-step agentic systems when simpler solutions fall short.

2. Focus on Transparency

Explicitly show the agent's planning steps to maintain transparency in its decision-making process.

3. Design Clear Agent-Computer Interfaces (ACI)

Invest time in crafting thorough tool documentation and testing. This is crucial for ensuring that your agent can effectively interact with its environment.

4. Use Frameworks Judiciously

While frameworks like LangGraph, Amazon Bedrock's AI Agent framework, Rivet, and Vellum can simplify implementation, be cautious of added complexity. Understand the underlying code and be prepared to reduce abstraction layers as you move to production.

5. Implement Proper Safeguards

For autonomous agents, extensive testing in sandboxed environments and appropriate guardrails are essential to mitigate risks associated with their autonomy.

Real-World Applications of AI Agents

Anthropic's research highlights two particularly promising applications for AI agents that demonstrate their practical value:

1. Customer Support

AI agents in customer support combine chatbot interfaces with enhanced capabilities through tool integration. This application is well-suited for agents because:

  • Support interactions naturally follow a conversation flow while requiring access to external information and actions

  • Tools can be integrated to pull customer data, order history, and knowledge base articles

  • Actions like issuing refunds or updating tickets can be handled programmatically

  • Success can be clearly measured through user-defined resolutions

2. Coding Agents

The software development space has shown remarkable potential for AI agents, evolving from code completion to autonomous problem-solving. Agents are particularly effective in this domain because:

  • Code solutions are verifiable through automated tests

  • Agents can iterate on solutions using test results as feedback

  • The problem space is well-defined and structured

  • Output quality can be measured objectively

The Future of AI Agents

As we look to the future, it's clear that AI agents will play an increasingly important role in various industries. However, the key to success lies not in building the most sophisticated system, but in creating the right system for specific needs.

By following the principles and patterns outlined in Anthropic's research, developers can create agents that are not only powerful but also reliable, maintainable, and trusted by their users. As the field continues to evolve, we can expect to see even more innovative applications of AI agents, pushing the boundaries of what's possible in artificial intelligence.

In conclusion, the journey to building effective AI agents is one of careful consideration, iterative improvement, and a deep understanding of the task at hand. By leveraging the insights from Anthropic's research and staying attuned to the evolving landscape of AI, developers and businesses can harness the true potential of AI agents to solve complex problems and drive innovation across industries.



]]>