<![CDATA[Opply Tech Insights]]>https://tech.opply.com/https://tech.opply.com/favicon.pngOpply Tech Insightshttps://tech.opply.com/Ghost 6.22Thu, 19 Mar 2026 13:01:58 GMT60<![CDATA[Why Product Teams Should Prioritise Pain, Not Features]]>

Product teams receive a constant stream of feature requests. Dashboards, exports, notifications, integrations. Each one makes sense in isolation. A customer asks for something specific, the team sees a clear gap, and the natural instinct is to prioritise it.

Over time, these requests accumulate. Roadmaps begin to fill with features

]]>
https://tech.opply.com/why-product-teams-should-prioritise-pain-not-features/69b97f2d08b5340001a31143Wed, 18 Mar 2026 14:55:58 GMTWhy Product Teams Should Prioritise Pain, Not Features

Product teams receive a constant stream of feature requests. Dashboards, exports, notifications, integrations. Each one makes sense in isolation. A customer asks for something specific, the team sees a clear gap, and the natural instinct is to prioritise it.

Over time, these requests accumulate. Roadmaps begin to fill with features that feel justified, even necessary. And yet, despite all the activity, something often feels off. The product improves incrementally, but the underlying friction does not disappear.

That is because feature requests are rarely the problem. More often, they are proposed solutions.

A request for an export is rarely about a CSV. It is usually a sign that the product is failing to provide a specific answer clearly enough. A request for a notification often reflects uncertainty in a workflow. A dashboard request can signal that critical information is scattered across too many places. Each request points somewhere real, but not in a way that is immediately obvious. Taking requests at face value is one of the fastest ways to build a product that looks busy without getting meaningfully better.

If teams prioritise requests literally, they end up building a product shaped by the last few conversations rather than the underlying issues those conversations are pointing to. The result is a series of local optimisations instead of a coherent product.

The real job is to understand the pain behind the request.

That only becomes visible when you step back from individual inputs and look at them collectively. A single request is just a data point. But when similar signals begin to appear across customer conversations, operational feedback, support tickets, and product usage, patterns start to emerge. In practice, most product work is not about choosing between features. It is about recognising that multiple requests are symptoms of the same structural issue.

Making that shift requires more than collecting feedback. It requires turning scattered signals into a coherent understanding of the problem space. Product discovery, at its best, is not about gathering inputs, but about making sense of them.

At times, the process feels less like product management and more like detective work. A comment from a support ticket here, a complaint in a customer call there, a workaround mentioned by an operations team member. Individually, these fragments do not say much. But when enough of them accumulate, a clearer picture begins to form.

Collecting signals is only the start. Good product teams build a case from them.

They ask what sits behind the request, whether the same frustration appears elsewhere, and whether the requested feature is actually the best way to remove it. That is the difference between responding to demand and understanding the system. Once the problem is clear, the solution space opens up. The same pain point can be addressed in many ways: a change in design, a shift in data structure, automation, or the removal of complexity altogether.

Starting from a feature request limits this exploration because the solution is already implied. Starting from a well-understood pain point creates room to choose the approach that actually works.

This has become even more important as development has become faster and more flexible. With modern workflows and AI-assisted engineering, teams can prototype quickly, test ideas early, and iterate in short cycles. Building is no longer the main constraint.

The constraint is clarity.

The faster a team can move, the more expensive it becomes to be wrong. You do not just build the wrong thing once; you can now build the wrong thing repeatedly, very quickly. When fast-moving teams misunderstand the problem, they pay for it in rework, redirection, and avoidable complexity.

That is why clear context matters so much. When engineers work with tools that allow rapid exploration, the quality of the output depends heavily on the quality of the context they start from. Well-framed problems lead to better solutions.

In a world where building software is increasingly fast, the teams that succeed will not be the ones who ship the most features. They will be the ones who consistently identify the problems that are actually worth solving.

]]>
<![CDATA[When You Can Build Anything, Choosing What to Build Is Everything]]>

The hardest part of building software has changed.

It used to be the building itself — the months of development, the cost of getting it wrong, the weight of every commitment. When shipping a feature took a quarter, less than ideal prioritisation was easy to hide. Long cycles gave you

]]>
https://tech.opply.com/when-you-can-build-anything-choosing-what-to-build-is-everything/69b146d471f8230001f9c3cbWed, 11 Mar 2026 12:11:02 GMT

The hardest part of building software has changed.

It used to be the building itself — the months of development, the cost of getting it wrong, the weight of every commitment. When shipping a feature took a quarter, less-than-ideal prioritisation was easy to hide. Long cycles gave you cover — by the time something landed, the context had shifted enough that nobody stopped to ask whether it was the right thing to build in the first place.

AI has compressed that. We can prototype in hours and ship in days. The cost of building has dropped dramatically. But that speed has created a new problem: when you can build anything quickly, choosing what to build becomes the highest-leverage decision your team makes.

Most teams haven't adjusted to this. They're still spending 90% of their energy on execution and 10% on direction. We think that ratio needs to flip. And that shift isn't just a product management problem — it changes what it means to be an engineer.


Engineers need to be close to the problem

When building was slow, it made sense to separate the people who decided what to build from the people who built it. There was a long pipeline — research, specs, handoff, execution — and engineers sat at the end of it. That model breaks down when the cycle time drops from months to days. You can't afford the latency of passing context through layers of documentation and meetings.

At Opply, our engineers sit alongside the ops team. The ops team uses the product daily and sits closest to customers. They experience the same friction, hear about problems in real time, and understand the context behind a complaint in a way that a ticket never captures.

This isn't an accident. It's a deliberate structural choice. When an engineer hears about a problem directly from the person experiencing it — whether that's the ops team, a support ticket, or a customer conversation — they understand it differently than when they read a brief. They understand the why behind the pain, not just the what. And that understanding changes the solution they build.


The solution matters as much as the decision

Prioritisation gets a lot of attention — and rightly so. But even perfect prioritisation falls flat if the engineering response isn't right.

What we mean by that: when a problem is identified, the way you solve it matters enormously. Two teams given the same problem will build very different things depending on how they think.

Our approach is to ask a few questions before we write any code:

What's the simplest thing that actually solves this? AI makes it tempting to over-build. When you can generate a sophisticated solution in a day, it's easy to skip past the version that's three lines of config and a database change. We try to resist that. The goal is to solve the problem, not to demonstrate what we're capable of.

Does this solution move the product forward or just patch a hole? Some problems need a quick fix and nothing more. But the best engineering work solves today's pain in a way that also builds toward the product's long-term direction. That means understanding the vision well enough to design toward it, even when you're working on something small.

Are we solving what the customer asked for, or what they actually need? These are often different. Being close to the problem — through the ops team, through direct conversations — gives us the context to tell the difference. A customer might ask for an export button when what they really need is a view that eliminates the need to export at all.


Speed changes the engineering mindset

When building was expensive, engineers were trained to be cautious. Plan extensively, spec thoroughly, build once. That mindset made sense when a wrong decision cost months.

Now that we can build, test, and iterate in days, the calculus is different. The cost of trying something and learning from it is often lower than the cost of planning it perfectly upfront. This doesn't mean we move recklessly — it means we treat shipping as a way of learning, not just a way of delivering.

For engineers, this is a genuine shift in how you work. You spend less time interpreting specs and more time talking directly to the people affected by the problem. You're not waiting for a spec to be handed to you — you're part of the discussion about what the problem actually is. You prototype fast, put it in front of real users quickly, and iterate based on what you see.

This is what we mean when we talk about product engineering. It's not a rebrand of the same job. It's a fundamentally different relationship between the engineer and the problem.


Communication is the whole game

None of this works without communication. Not status updates — actual, ongoing, two-way communication between the people building the product and the people closest to customers.

Our engineers sit alongside the ops team, so context flows naturally rather than being filtered through handoff documents. Stakeholders understand why a solution was built the way it was, not just what was shipped. Everyone can challenge an approach before it's locked in.

The biggest waste in product development isn't building slowly. It's building confidently in the wrong direction because the people designing the solution were too far from the people experiencing the problem.


AI has given every team the ability to build anything, fast. That sounds like a superpower. But speed without closeness to the problem just means you build the wrong thing quicker — and now you can do it ten times before anyone notices.

The question worth asking isn't "how fast can we ship?" It's "how close are the people building to the people hurting?" Because in a world where building is cheap, understanding is the competitive advantage.

]]>
<![CDATA[Old Rules Still Apply: Traditional Engineering Practices Matter More in the Age of AI Coding]]>For most of my career as a software engineer, I’ve tried to follow the practices that make engineering sustainable: writing tests, keeping changes small, reviewing carefully, documenting intent.

Recently, while reflecting on how my way of writing code has changed with the introduction of AI coding tools, I

]]>
https://tech.opply.com/old-rules-still-apply-traditional-engineering-practices-matter-more-in-the-age-of-ai-coding/69a853cee5eda400016b854dWed, 04 Mar 2026 17:04:11 GMT

For most of my career as a software engineer, I’ve tried to follow the practices that make engineering sustainable: writing tests, keeping changes small, reviewing carefully, documenting intent.

Recently, while reflecting on how my way of writing code has changed with the introduction of AI coding tools, I realised something interesting: many of the practices that guided good engineering for years have become even more important.

AI hasn’t replaced them. It has amplified their importance.

Another shift I’ve noticed concerns documentation. For a long time, there was a strong idea in engineering that good code should explain itself. A well-written function, with clear naming and structure, shouldn’t need additional explanation. Comments were sometimes treated as a code smell, something that appeared when the code itself wasn’t clear enough.

There is still a lot of truth in that principle. Clear code remains essential. But documentation is starting to play a new role.

It is no longer written only for humans. It is increasingly written for machines.

Well-structured comments, clear docstrings, and explicit descriptions of behaviour help AI tools understand the intent of the code they are modifying or generating. What used to be optional context for human readers has become a powerful signal for AI assistants trying to reason about a codebase.
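As a sketch of what that looks like in practice, consider a docstring written to carry intent rather than restate the code. The function and its domain below are invented for illustration, but the kind of context it records is exactly what an AI assistant can pick up on:

```python
def normalise_order_quantity(raw_quantity: str) -> float:
    """Convert a supplier-entered quantity string into kilograms.

    Intent: suppliers enter quantities as free text ("500 kg", "0.5 t",
    "500"). Downstream pricing assumes kilograms, so this function is the
    single place where unit conversion is allowed to happen.

    Edge cases an assistant should preserve when modifying this code:
    - A bare number is assumed to already be in kilograms.
    - Tonnes ("t") convert at 1 t = 1000 kg.
    - Unparseable input raises ValueError rather than guessing.
    """
    text = raw_quantity.strip().lower()
    if text.endswith("kg"):
        return float(text[:-2])
    if text.endswith("t"):
        return float(text[:-1]) * 1000
    return float(text)
```

The docstring does double duty: it documents the behaviour for the next human reader, and it constrains what a code assistant will consider a valid change.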

This made me realise something: the old engineering rules aren’t becoming obsolete in the age of AI-assisted development.

If anything, they are becoming load-bearing.


Let's take a practical example: a developer uses Cursor to generate a data processing function. The code is clean, well structured, properly typed, and even has sensible variable names. It passes a quick review and gets merged.

Two weeks later: a production incident.

The AI-generated code handled the happy path beautifully but silently corrupted data on edge cases no one thought to check. No tests caught it because none were written. No reviewer caught it because the code looked right.

This story is becoming common. And it reveals something important about working with AI coding assistants: they’re force multipliers. They amplify whatever practices you already have.

Good discipline plus AI equals exceptional velocity.
No discipline plus AI equals a faster path to technical debt.

The old rules aren’t obsolete. They’re load-bearing.

Several practices that have kept codebases healthy for decades, such as Test-Driven Development, incremental delivery, and documentation-first approaches, aren’t relics of a slower era. They’re the guardrails that make AI-assisted development actually work.

Test-Driven Development: Define Correctness First

Tools like GitHub Copilot and Claude Code generate plausible code that often handles the obvious cases while quietly missing edge cases. The code compiles. It runs. It even looks elegant. But it wasn’t written with your specific constraints in mind.

TDD changes the order of the conversation.

You define correctness before asking AI to generate anything. The tests become a contract the AI must fulfil, not an afterthought you hope will catch bugs.

When you write tests first, you can let AI iterate on implementations until they pass, using tools like Claude Code that can run tests and refine their output automatically. But this only works if the tests exist.

Without TDD, you’re trusting AI’s judgement about what “working” means. With TDD, you’re telling it.
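As a minimal sketch of that order of work, the contract comes first and the implementation exists only to satisfy it. The slugify function and its cases are invented for illustration, with bare asserts standing in for a test framework:

```python
import re
import unicodedata

# 1. The contract, written before any implementation is generated.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"
    assert slugify("Crème brûlée!") == "creme-brulee"  # edge case: accents
    assert slugify("") == ""                           # edge case: empty input

# 2. An implementation (human- or AI-written), iterated until the contract passes.
def slugify(text: str) -> str:
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-zA-Z0-9]+", "-", text).strip("-").lower()

test_slugify()
```

The edge cases in the contract are the part AI would most likely skip if left to decide on its own what "working" means.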

Incremental Delivery: Keep Changes Reviewable

AI makes it tempting to generate entire features at once. Why write a function when you can prompt for the whole module? Why build incrementally when the AI can scaffold everything in one shot?

Today, many engineers also rely on AI to review code. These tools can catch obvious issues quickly and help analyse large changes faster than a human reviewer could. But even with AI assistance, large AI-generated change sets remain difficult to evaluate with confidence.

Your eyes glaze over by the third file. The code looks reasonable. The AI review passes. You approve it. Bugs ship.

Small increments keep each contribution, whether written by AI or reviewed with AI, understandable and reversible.

Generate a function, verify it works, commit. Generate the next piece, verify, commit. The feedback loop stays tight. AI tools can analyse changes more reliably, and human reviewers can still apply judgement where it matters. You catch drift early, before it compounds into a production incident buried under 2,000 lines of generated code you never really understood.

The velocity gain from AI isn’t about generating more code faster. It’s about maintaining quality while moving faster. Incremental delivery is how you do that.

Documentation-Driven Development: Clear Intent, Better Output

Writing documentation before code forces you to articulate what you’re building.

What problem does this solve? What are the inputs and outputs? What should happen when things go wrong?

This is exactly what AI needs: clear intent.

Vague prompts produce vague code. “Build a user service” gets you something generic. A well-written spec, with defined behaviour, edge cases, and constraints, becomes the prompt.

The documentation is the thinking; the code is merely the output.

Engineers who document first give AI better instructions and catch design flaws before any code is generated. They spend less time wrestling with AI output that missed the point, because they made the point clear from the start.
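A hypothetical mini-spec of this kind (the endpoint, limits, and behaviour below are invented for illustration) already reads like a prompt:

```markdown
## export_csv(report_id)

- Input: an existing report ID; unknown IDs raise NotFound rather than
  returning an empty file.
- Output: UTF-8 CSV with a header row; column order matches the
  on-screen table.
- Edge case: reports over 100,000 rows are rejected with an error that
  suggests applying filters before exporting.
```

Handed to an AI assistant, each bullet becomes a constraint to satisfy rather than a judgement call to guess at.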

Traditional Engineering Practices Matter More in the Age of AI Coding

Several of these practices share something fundamental: they’re about defining intent before generating code.

  • Tests define correctness.
  • Small increments define scope.
  • Documentation defines purpose.

AI is extraordinarily powerful at generation. But it has no intent of its own. It doesn’t know what you’re trying to build, what trade-offs matter in your context, or what “done” looks like for your team.

These, and other "traditional" engineering practices, supply what AI lacks.

The engineers who thrive in this new era won’t be those who abandon discipline for speed. They’ll be the ones who recognise that AI makes discipline more valuable, not less.

The old rules still apply; they just have a new reason to exist.

]]>
<![CDATA[Fluent, Not Native: Agentic Tools and Cross-Team Contribution]]>

The ability to improve a design occurs primarily at the interfaces. This is also the prime location for screwing it up.

- Akin’s Laws of Spacecraft Design #15 (Shea’s Law)

Full stack engineers are supposed to be generalists, but at some point in the last few

]]>
https://tech.opply.com/fluent-not-native-agentic-tools-and-cross-team-contribution/699f24d3bc051e000144dff2Wed, 25 Feb 2026 17:27:43 GMT

The ability to improve a design occurs primarily at the interfaces. This is also the prime location for screwing it up.

- Akin’s Laws of Spacecraft Design #15 (Shea’s Law)

Full stack engineers are supposed to be generalists, but at some point in the last few years "full stack" has quietly expanded to include territory that used to belong to dedicated data teams, leaving dashboards, reporting queries, data models, and BI tooling increasingly within the product engineer's remit.

The data team still owns the warehouse and the pipelines. But product engineers regularly need to make changes that touch both sides: adding a metric, building a dashboard for a feature they've just shipped, debugging why a report shows unexpected values. The traditional options are either to learn the data stack properly (time consuming, especially in a rapidly evolving product) or file a ticket and wait (slow, especially on a lean team). Agentic tools offer a third path: assisted contribution that lets engineers work productively in data-adjacent territory without fully context-switching into a new discipline.

The goal isn't fluency indistinguishable from a native speaker. It's being conversational enough to get things done.

The friction at the boundary

A concrete scenario: an engineer ships a feature and needs to add tracking to an existing dashboard. The data's already flowing into the warehouse. Conceptually, the change is straightforward.

But the dashboard is defined in YAML via a sync process. There are conventions for how cards wire up to filters, entity ID formats to follow, parameter mappings to configure correctly. The SQL needs to follow patterns specific to the reporting schema. The engineer could learn all of this, but it’s tooling they’re not going to be using on a daily basis in an environment that’s changing rapidly. It’s a lot of time investment for an uncertain and intermittent reward.
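To make the friction concrete, a card definition in such a setup might look something like the following. The schema here is entirely hypothetical, since the real format depends on the BI tool and sync process in use:

```yaml
# Hypothetical dashboard card -- every name and field here is illustrative.
- id: card_weekly_signups          # entity ID must follow the sync tool's format
  title: Weekly signups
  query: reporting.weekly_signups  # SQL model living in the reporting schema
  filters:
    - parameter: date_range        # must map onto the dashboard-level filter
      column: signup_week
```

None of this is conceptually hard; it is convention that has to be known, which is exactly the gap described here.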

The alternative is to hand it to the data team. But they have their own priorities, and now there's coordination overhead: explaining the requirement, waiting for availability, reviewing something written by someone who doesn't have full context on the feature that motivated the change and isn’t in direct contact with the stakeholders who drive the requirement.

Neither option is particularly appealing. The first burns cognitive load on learning systems you'll rarely touch. The second introduces latency and the inevitable friction of handoffs.

This isn't an argument for reckless autonomy. Sometimes the right answer genuinely is to hand it off. If the change is architecturally significant or touches something fragile, you want the domain expert involved from the start. But plenty of contributions get stuck not because they're genuinely complex, but because the contributor lacks familiarity with the tooling and conventions of an adjacent domain. That's a different problem, and one where agentic tools can help.

Agentic tools as translation layers

The useful framing here is translation rather than replacement. Agentic tools can't be a substitute for the data engineer's expertise. They're bridging the gap between "I know what I want to achieve" and "I know how to express that in this system's idioms."

Dashboard contributions are a clear example. The product engineer understands the business logic and the underlying data, having the in-depth understanding that only comes from having built the feature in the first place. What they don't have memorised is the YAML schema, the naming conventions, or how filter mappings need to be structured. An agentic tool can scaffold that translation while the engineer focuses on the semantics of what they're trying to display.

Reporting queries follow a similar pattern. Writing SQL against a well-modelled warehouse isn't conceptually difficult, but knowing which tables to join, what the naming conventions are, and where the edge cases live takes time to absorb. Pattern-matching against existing queries, surfacing relevant schema information, suggesting approaches based on similar reports—these are tasks where agentic tools can meaningfully accelerate the work.

Data model exploration is often the real bottleneck. Understanding what's actually in the warehouse—what fields exist, how they relate, what's been deprecated—typically requires either reading documentation (if it exists and is current) or interrupting someone on the data team for orientation. Agentic tools can compress that exploration significantly.

None of this works by magic. There are prerequisites.

The existing codebase needs reasonable conventions and patterns to match against. If the current SQL is an inconsistent mess, the tool will confidently reproduce that inconsistency. Structure in -> structure out, or the other thing.

There need to be explicit boundaries on what's appropriate for assisted contribution versus what needs data team involvement from the start. There are a lot of use cases where slow and steady really does win the race.

As with all tool-assisted development, the review process is absolutely key. The data team's role shifts from "do all the work" to "review contributions and catch domain-specific errors." This is a different skill, and arguably a better use of their time, but it requires a lot of focus, and detailed review in volume becomes a specialist skill in its own right. If review becomes a bottleneck, you've just moved the ticketing process a couple of stages down the pipeline.

Insoluble problems

Agentic tools can help with expressing intent in unfamiliar systems. They're considerably less useful for forming that intent in the first place. Here another specialist skillset emerges: choosing which metrics actually matter to the business, understanding why the data model is structured as it is, debugging subtle issues where values seem wrong for non-obvious reasons. These all require domain knowledge and continuous stakeholder interaction that the tool can't provide.

Architectural decisions about the data stack remain firmly in data team territory. The questions of whether to restructure a fact table, how to handle slowly changing dimensions, or when to materialise a view aren't things you want someone to stumble through with AI assistance. The goal is unblocking routine contributions, not dissolving the boundary between teams entirely.

Where does this leave us?

Agentic tools are genuinely useful when they reduce friction for contributions that would otherwise be blocked by knowledge gaps in adjacent domains. They're a wonderful facilitator for cross-team collaboration, not a magic bullet that eliminates the need for expertise.

The interesting question isn't whether these tools replace specialists, but what effect they have on how specialists spend their time. Ideally, more time is spent on architectural thinking, detailed review, and knowledge transfer. These are areas of work that exert more leverage on the engineering division's overall output.

]]>
<![CDATA[What is an AI agent, Really?]]>Nowadays, everywhere you look, people are talking about AI agents.

They’re on social media, in the news, in product demos, startup pitches, and everyday conversations. Agents are creating their own social media accounts, pushing code to repositories, managing inboxes, and even starting drama online.

Some describe them as

]]>
https://tech.opply.com/what-is-an-ai-agent-really/699580effaf1b500015cf480Wed, 18 Feb 2026 12:53:19 GMT

Nowadays, everywhere you look, people are talking about AI agents.

They’re on social media, in the news, in product demos, startup pitches, and everyday conversations. Agents are creating their own social media accounts, pushing code to repositories, managing inboxes, and even starting drama online.

Some describe them as personal assistants or junior employees. Others believe they will replace entire teams. And many dismiss them as just another hype cycle.

Amid all these claims and viral threads, a simple question comes up:

What is an AI agent, really?

What an AI Agent is NOT

Before defining what an AI agent is, it helps to clarify what it is not. There are two common misconceptions:

Agents vs Chatbots: A chatbot is typically powered by a large language model (LLM). It interacts with you through conversation. You ask a question, it generates a response. Support bots on websites, help desk assistants, even advanced conversational AI tools all fall into this category.

This confusion is understandable. Most agents are built on top of LLMs.

A chatbot can:

  • Answer questions
  • Explain concepts
  • Brainstorm ideas
  • Help you plan

But it stays inside the conversation. It cannot go and perform tasks for you. An agent can.

Agents vs Workflows: Agents are often associated with automation, but automation itself is not new. We have had workflows for years.

A workflow follows predefined steps:
Input → Step A → Step B → Step C → Output

You decide the path in advance. The system executes exactly what you designed. If you know precisely what should happen and in what order, a workflow is perfect.
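In code, a workflow is just a fixed pipeline. The steps below are arbitrary, but the shape is the point: the path never varies with the input.

```python
# A workflow: the path is decided in advance; the system only executes it.
def word_count_workflow(raw_text: str) -> int:
    cleaned = raw_text.strip().lower()  # Step A: normalise
    words = cleaned.split()             # Step B: tokenise
    return len(words)                   # Step C: count

word_count_workflow("  Hello Agent World  ")  # always A, then B, then C
```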

But that is not always the case. Agents operate differently.

You define a goal, not every step. The agent determines which tools to use, in what order, whether more information is needed, and how to adapt as context changes.

The Anatomy of an AI Agent

Now that we’ve clarified what an agent is not, what actually makes something an agent?

To understand it clearly, we can break it down into three core components:

Decision-making: This is the heart of an agent. Instead of following fixed steps, an agent continuously asks:

  • What is the current state?
  • What is the goal?
  • What is the best next action?

It operates in a loop:
Observe → Decide → Act → Re-evaluate.

With each cycle, it updates its understanding and adjusts its path toward the goal.

Tools: Tools are what give agents real-world impact. Without tools, you have a conversational interface. With tools, the agent can take actions in the real world through external systems.

These tools can be simple:

  • Reading documents
  • Querying a database

Or more complex:

  • Browsing the web
  • Publishing content
  • Updating project boards
  • Pushing code

Modern systems are increasingly standardizing this connection layer. Concepts like Model Context Protocol (MCP) servers allow agents to connect to everyday tools such as Notion, Miro, GitHub, Slack, and others.

The more tools an agent can access, the broader the range of tasks it can handle.

Memory and Context: Memory allows an agent to build context over time instead of starting from scratch at every step.

An agent can:

  • Retrieve information
  • Store intermediate results
  • Compare new inputs with previous context
  • Refine decisions as new data appears

Imagine you say:

“Look at my emails and identify the most important task. Then complete it.”

The agent might:

  1. Retrieve emails
  2. Extract potential tasks
  3. Check your product board for milestones
  4. Compare urgency and impact
  5. Decide which task matters most
  6. Realize additional context is needed
  7. Pull more information
  8. Adjust its plan

Each new piece of information influences the next decision.

That ongoing loop of gathering context, deciding, and adjusting is what makes it an agent.
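That loop can be sketched in a few lines of Python. Everything below is a toy: the decide function is a rule-based stub standing in for an LLM call, and the tools are trivial, but the observe-decide-act-re-evaluate shape is the part that matters:

```python
def run_agent(goal, tools, decide, max_steps=10):
    """Repeatedly choose and execute a tool until the goal is judged met."""
    state = {"goal": goal, "observations": []}
    for _ in range(max_steps):
        action = decide(state)       # decide: in a real agent, an LLM call
        if action is None:           # the model judges the goal satisfied
            return state
        name, args = action
        result = tools[name](*args)  # act: touch the world through a tool
        state["observations"].append((name, result))  # feeds the next decision
    return state

# Toy tools and a rule-based stand-in for the model's decision step.
tools = {
    "fetch_emails": lambda: ["Invoice overdue", "Team lunch?"],
    "flag_urgent": lambda subject: f"flagged: {subject}",
}

def decide(state):
    seen = [name for name, _ in state["observations"]]
    if "fetch_emails" not in seen:
        return ("fetch_emails", ())
    if "flag_urgent" not in seen:
        emails = dict(state["observations"])["fetch_emails"]
        return ("flag_urgent", (emails[0],))
    return None  # goal reached: inbox fetched and top item flagged

final = run_agent("triage my inbox", tools, decide)
```

Swap the stub for a model call and the lambdas for MCP-connected tools, and the structure is the same: the path through the tools is chosen at run time, not designed in advance.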

To sum it up:

An AI agent is a system that autonomously pursues a goal by deciding what actions to take, using tools to interact with external systems, and adapting based on new information.

Looking Ahead

So where does this leave us?

Models are improving. Tool ecosystems are becoming more standardized. As these layers mature, agents are becoming more practical in real-world settings.

You can now use agents to:

  • Automate repetitive knowledge work
  • Handle inbox triage
  • Draft and push code
  • Consolidate information from project boards, documentation, and databases

It is still early. Most systems are heavily supervised and constrained. But the opportunities are growing, and I am genuinely excited to see what the next year brings and what people build.

]]>
<![CDATA[Data (Documentation) Modelling with AI]]>Data modelling is often treated as a specialist discipline. It requires SQL proficiency, familiarity with a transformation framework, and an understanding of conventions that are rarely fully documented. As engineering organisations grow, this lack of shared context can limit who is able to contribute to the data layer.

Connecting an

]]>
https://tech.opply.com/data-documentation-modelling-with-ai/698b506991097d00014e9e13Wed, 11 Feb 2026 14:21:24 GMT

Data modelling is often treated as a specialist discipline. It requires SQL proficiency, familiarity with a transformation framework, and an understanding of conventions that are rarely fully documented. As engineering organisations grow, this lack of shared context can limit who is able to contribute to the data layer.

Connecting an AI assistant, such as Claude Code, directly to a data warehouse can expand access, but the real enabler is documentation. Clear, practical documentation is what makes broader contribution possible.


The Problem: Data Modelling as a Bottleneck


In many organisations, the data team is small while the engineering organisation continues to grow. Engineers are expected to deliver end-to-end solutions, yet data modelling still becomes a handoff to a specialist team. Instead of producing the final data models themselves, engineers often stop at the application layer and pass the rest on.

This is rarely a capability issue. Most engineers can write SQL and reason about data. What's missing is project-specific context. dbt architectures come with rules for naming, testing, and composition that are not obvious unless you already know them. Business logic is embedded in SQL and Jinja macros, often in subtle ways.

Without that context, engineers hesitate. Even when they could implement the change, uncertainty about expectations turns progress into a ticket and a wait.


Documentation That Explains Intent


AI assistants can query warehouses, read dbt models, and generate SQL. Without guidance, they tend to produce code that runs but does not fit the project: wrong layers, inconsistent naming, or incorrect assumptions about how fields relate.

Improving this does not require a better model. It requires documentation that explains what things are supposed to mean.

Schema files should be treated as the source of truth for meaning, not just structure. Column descriptions should explain the business logic behind a field rather than restating its name. When intent is written down clearly, both humans and AI can understand a model without digging through the SQL.

Good descriptions let the AI explain and extend models accurately. Poor descriptions force it to guess.
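As a sketch of what intent-level documentation can look like, here is a hypothetical dbt schema file fragment (the model and column names are invented) where descriptions carry business meaning rather than restating the column name:

```yaml
models:
  - name: fct_orders
    description: One row per confirmed order. Draft and cancelled orders are excluded upstream.
    columns:
      - name: order_value_gbp
        # Intent, not structure: what the number means and how it was derived.
        description: >
          Total order value in GBP after supplier discounts, converted at the
          exchange rate on the order confirmation date. Excludes shipping.
      - name: is_repeat_buyer
        description: >
          True when the buyer had at least one earlier confirmed order with
          any supplier. Derived in staging, not stored in the source system.
```

A reader, human or AI, can answer "what does this field mean?" from the description alone, without opening the SQL.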


Using CLAUDE.md for Project Context


Column-level documentation helps with individual models, but working effectively in a dbt project requires understanding how the whole thing is organised. Project instruction files such as CLAUDE.md can provide that context.

A root CLAUDE.md can describe global conventions: how to run dbt, what each layer is for, naming rules, and testing expectations. This gives engineers a clear picture of how the project is structured before they open a single SQL file.

Additional CLAUDE.md files placed in subdirectories can explain local patterns. For example:
- A file in a directory where all models come from the same source can explain the intricacies of that source to the AI.
- Another directory might require column names in a specific format, e.g. exports to third parties. That context can be provided only when it is needed, while building in this directory.

This structure matters because context is picked up as someone moves through the project. High-level guidance explains the overall shape, while local files explain the rules that apply in a specific area.
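For illustration, a root CLAUDE.md might look like the following. The conventions shown are hypothetical, not a prescription:

```markdown
# Project guide for AI assistants

## Running dbt
- Build a model and its children with: dbt build --select <model>+

## Layers
- staging/: one model per source table; rename and cast only, no joins
- intermediate/: reusable business logic shared by marts
- marts/: business-facing models; every column documented and tested

## Conventions
- Staging models are named stg_<source>__<entity>
- Every mart column needs a description, and a not_null test where the grain allows
```

Subdirectory files follow the same shape but only describe the rules that apply locally.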


Documentation as a Practical Feedback Cycle


When AI is used directly against the data warehouse, gaps in documentation show up immediately. Missing or unclear explanations lead to vague or incorrect answers. Each mistake points to an assumption that was never written down.

Fixing those gaps improves the next interaction. Over time, more edge cases get documented, conventions become explicit, and less knowledge lives only in someone's head.

Documentation stops being something you write once and forget. It becomes something the team relies on and keeps up to date because the cost of getting it wrong is obvious.

Conclusion


The problem with contributing to a data warehouse isn't skill. It's not knowing the rules of the project.

Engineers can write SQL and understand data models, but without clear conventions and written intent, it's hard to work confidently. Good documentation, combined with well-placed project context files, makes those rules visible.

AI can then guide engineers toward the correct patterns instead of guessing. The rules are written down rather than passed around informally, and contributing to the data layer is no longer limited to a small group of specialists.

]]>
<![CDATA[Feeling Comfortable with AI-Generated Code]]>It is often said that human beings are distrustful by nature. We distrust people we do not know, the media, advertisements, and anything else that has not earned our trust. But if we had to highlight something that makes us most distrustful, it would be technological advances in artificial intelligence

]]>
https://tech.opply.com/feeling-comfortable-with-ai-generated-code/69835d4b3b78d100011a9d1dWed, 04 Feb 2026 17:27:02 GMT

It is often said that human beings are distrustful by nature. We distrust people we do not know, the media, advertisements, and anything else that has not earned our trust. But if we had to highlight something that makes us most distrustful, it would be technological advances in artificial intelligence and not having complete control over the actions we are taking.

Autopilot has been used in aeroplanes for around a century and has been proven to work very well, but no one would dare to board a plane without pilots. The latest developments in autonomous driving are yielding incredible results, yet only a small percentage of drivers dare to let the car take over. Why are people so distrustful? Because they are reluctant to give absolute control to artificial intelligence.

If we find it difficult to trust systems engineered in such detail, how can we not be wary of AI-generated code, when we have seen code that would take us days to write appear in just a few minutes? It is normal to be wary; we are pre-programmed to be so. But instead of letting that mistrust penalise us by preventing us from using the technology, let's use it to our advantage.


Being wary of code generated by AI is a very positive thing, because it pushes you to make a good plan and to review every one of the agent's implementations in detail to make sure the job is done right. If you blindly trust AI, you risk it going haywire because, although AI is an incredible tool, it also makes mistakes.

"Okay, I see the potential of AI and I’m ready to use it even if I don’t fully trust it yet, but now comes the million-dollar question: how can I feel comfortable doing that?" Although it sounds like a simple question, it is quite complex. Feeling comfortable is often equated with trusting the tool, but trust is not gained overnight; it takes time. There are, however, techniques we can use to feel more confident that the generated code is what we want.

How to Feel More Comfortable Using AI

The first thing you can do is make a detailed plan. The more initial instructions you give the AI, the fewer decisions it will have to make, the less imagination it will need to use to create the code, and consequently, the less likely it is that the AI will hallucinate. If you guide the AI agent on things like which components it should reuse or what structure it should have, the agent will generally follow your instructions, making it more likely that the expected result will satisfy you. In addition to that, since you already know what to expect, it will be easier for you to detect hallucinations.

Another technique is to pin down the expected result through testing first. You can tell the AI agent to create the tests with the results you expect before it writes the functions; the tests themselves will then warn you if the AI has hallucinated. If, on the other hand, the tests pass for all the edge cases you have thought of, you will feel much more comfortable knowing that the function does what it is supposed to do.
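As a small illustration of this test-first approach, you might spell out the expected behaviour before asking the agent for the implementation. The function and its cases here are invented for the example:

```python
# Spec-first: these edge cases are decided up front, then the agent is asked
# for an implementation that satisfies them. Hallucinated behaviour fails fast.
def normalise_sku(raw: str) -> str:
    """The implementation the agent is asked to produce."""
    return raw.strip().upper().replace(" ", "-")

def test_normalise_sku():
    assert normalise_sku("  ab 12 ") == "AB-12"   # whitespace trimmed, spaces dashed
    assert normalise_sku("ab-12") == "AB-12"      # already-dashed input unchanged
    assert normalise_sku("") == ""                # empty input stays empty
```

If the generated function passes tests you wrote yourself, your confidence rests on your own expectations rather than on trust in the model.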

Sometimes we forget that AI agents do not only create code for us; we can also have a conversation with them and clarify our doubts. If, while checking the generated code, I see something I do not understand, I ask directly: "Why did you do that? Why did you not do it in this other way I had in mind?" And the truth is that the answers are very reassuring.

The AI agent produces outputs that can be reasoned about and challenged. You may or may not agree with them, but it has its reasons. If the explanation does not reassure you, you can always tell it to implement things the way you had them in your head, though your approach may be less efficient than the AI agent's proposal.

Another tip for gaining confidence in the robustness of the code is to use a different LLM to review the code generated by your AI agent. A particular LLM may tend to do things in a way that others do not, and comparing results gives you a perspective that your manual review might have missed.

From Verification to Confidence 

If, even after reading these tips, you still feel uncomfortable with AI-generated code, keep one important factor in mind: the code can always be verified by a human. As the agent generates the code, you see the results and can verify them, just as when you prepare to push the code. Depending on your company's coding standards, that human verification may continue with peer reviews by colleagues before merging. In the end, even if the code is generated by an AI agent, people have the final say.

Code that has been approved after a thorough review is not inherently more likely to fail than human-written code; had you written it manually, you could also have made mistakes that went undetected.

Feeling comfortable with AI-generated code is complicated because it is often confused with trust. As I explained at the beginning, we are distrustful by nature, but real comfort doesn’t always come from trust, but from being able to reason about, verify, and control the output. 

That comfort with agentic coding does not appear overnight. It takes time, and everyone moves at their own pace. Some will take longer than others, but sooner or later we reach a point where confidence and trust naturally emerge. I hope these small tips help you get there a little sooner.

]]>
<![CDATA[From Coders to Orchestrators: Why We’ve Re-Engineered Engineering at Opply]]>The industry consensus is shifting fast: the days of developers spending 80% of their time manually typing syntax are numbered. Even Ryan Dahl, the creator of Node.js, recently said “the era of humans writing code is over”. At Opply, we aren’t just watching this happen

]]>
https://tech.opply.com/from-coders-to-orchestrators-why-weve-re-engineered-engineering-at-opply/6979ff859183b7000158856dWed, 28 Jan 2026 12:41:57 GMT

The industry consensus is shifting fast: the days of developers spending 80% of their time manually typing syntax are numbered. Even Ryan Dahl, the creator of Node.js, recently said “the era of humans writing code is over”. At Opply, we aren’t just watching this happen - we’ve rewritten our playbook.

We believe that a "first-mover advantage" in AI-native engineering isn't just about speed; it's about a compounding advantage in how you solve business problems, and in fact it goes way beyond engineering. If you ignore these tools, you’ll be left behind - not in years, but in months.

The Three Phases of Evolution

Looking back, our journey with AI tools (and it will be very similar for many other companies) has moved through three distinct eras:

  • Phase 1 (2023): The Autocomplete Era. Using IDEs as co-pilots for inline tab-completion.
  • Phase 2 (2024): The Chat Era. Using IDE plugins to prompt for snippets and small functions.
  • Phase 3 (2025/2026): The Agentic Era. This is where we are now. Using agentic terminals like Claude Code to orchestrate multi-file changes and autonomous test loops.

In Phase 3, manual coding is no longer the bottleneck. As Andrej Karpathy (founding member of OpenAI) recently noted, many engineers have rapidly flipped from 80% manual coding to 80% agent-driven orchestration. We are now essentially "programming in English," operating over software in large "code actions" rather than line-by-line edits.

Hiring for "Phase 3" Developers

Because the work has changed, our hiring process had to change. We’ve replaced the traditional coding assessment with the Opply Product Engineering Challenge. We explicitly look for engineers who use AI tools like Claude Code or Cursor not just occasionally, but as their core workflow. We want "Product Engineers" - a shift from siloed roles to builders who can ideate, design, and execute simultaneously.

Our live challenge focuses on:

  • The Stakeholder Interview: Candidates must interview us to understand the "why" before they touch the "how".
  • Architectural Reasoning: We watch them build a working prototype live. We don't care how fast they type; we care how they articulate tradeoffs and UX decisions while letting AI handle the implementation. 
  • Verification Loops: Spotting when LLMs misunderstand intent is crucial. We look for the discipline to read the diffs and ensure business logic remains intact while the mechanical work is automated.

The "Allocation Economy" and Agentic Realism

Guillermo Rauch (CEO of Vercel) describes this as the "Allocation Economy." The manual labor is automated; the developer’s job is now the allocation of "compute". At Opply, we hire people who are "agentic by nature," mastering Systems Thinking and Parallel Orchestration.

However, we are well aware of the tradeoffs that come with the "Agentic Era." As Karpathy warns, these agents are essentially "slightly sloppy, hasty junior devs". They have incredible stamina - they never get tired or demoralized - but they can overcomplicate abstractions, fail to push back on bad ideas and can easily misunderstand intent if not properly instructed. 

A common concern is that high-speed AI generation will lead to a "Slopacolypse": a world of bloated, brittle code that "looks" right but is conceptually hollow. We mitigate this through:

  • Observability-Driven Development (ODD): AI-generated code must be validated in production contexts. 
  • The Boris Cherny Model: Running multiple agents simultaneously - one for testing, one for debugging, one for documentation - to create a system of checks and balances.
  • Verification Skills: We recognize that while manual coding skills may "atrophy," our ability to discriminate (read and review code) must become our strongest asset.

The Cost of Velocity

Yes, it’s an investment. AI tools increase the total cost of an engineer by roughly 15%. But when you realize your team can build certain features with higher quality at 5–10x the velocity compared to a year ago, that cost becomes negligible. The "cost of refactoring" has been slashed by agentic tools; we can now restructure entire modules in minutes, not weeks.

Conclusion: Engineering as a Creative Space

We are moving away from engineering as a "laborious IT task" and toward it being a purely creative space.

At our stage, the risk isn't making mistakes while experimenting with autonomous agents. The real risk is "process debt": slow processes and an inability to move and iterate quickly enough. For the "code artist," AI may be a struggle, but for the empathic and well-versed builder, it is the ultimate partner. And the latter, people with a profound business understanding who immediately spot when things go in the wrong direction, are the people we search for.

]]>
<![CDATA[Why We Hire Product Engineers, Not Software Engineers]]>We don't have software engineers at Opply.

Not because we don't write software — we write plenty — but because the title describes the wrong thing. It puts the emphasis on the output rather than the outcome.

Job titles shape expectations. Expectations shape behaviour. Call someone

]]>
https://tech.opply.com/why-we-hire-product-engineers-not-software-engineers/6970ac20c16ad3000164c975Wed, 21 Jan 2026 14:32:23 GMT

We don't have software engineers at Opply.

Not because we don't write software — we write plenty — but because the title describes the wrong thing. It puts the emphasis on the output rather than the outcome.

Job titles shape expectations. Expectations shape behaviour. Call someone a software engineer and they'll optimise for software. Call them a product engineer and they'll optimise for the product. That difference matters more than it sounds.

The distinction

Software engineers ask "what should I build?" Product engineers ask "what problem are we solving?"

Software engineers measure progress in pull requests. Product engineers measure progress in user impact.

Software engineers are done when the code ships. Product engineers are done when it’s in the hands of real people and solves their problem.

This isn't about skill or seniority. Plenty of brilliant engineers write excellent code and never ask why they're writing it. They take pride in clean architecture, elegant solutions, and well-tested modules. And those things matter. But they're not enough.

The best code in the world is worthless if it solves the wrong problem. A beautiful system that nobody uses is just an expensive art project. Product engineers understand this instinctively. They treat the code as a means to an end, not the end itself.

Why this matters at Opply

We're a startup. We don't have the luxury of a relay race where product defines, design refines, and engineering builds to spec. That model is slow and breeds learned helplessness. Nobody owns the outcome because everyone owns a small piece of it.

We reject that model entirely.

We need people who hold the whole thing. Engineers who talk to customers, challenge requirements, and care whether the feature actually moved the metric. Engineers who feel uncomfortable shipping something they don't believe in — and speak up before it's too late.

Product engineers at Opply aren't waiting for permission to care about the product. They're expected to. That expectation changes everything — how people prepare for meetings, how they review code, how they think about their week.

What we look for

When we interview, we're not just testing whether someone can code. We're looking for signals that they think beyond the ticket.

Do they ask about the user? When given a problem, do they interrogate the assumptions or jump straight to implementation? Have they ever killed a feature they were building because they realised it wasn't the right solution? Do they talk about projects in terms of what shipped, or in terms of what changed?

We listen for curiosity about the "why." Some engineers light up when you give them context about the business problem. Others just want the requirements finalised so they can start building. Both can be effective. Only one fits how we work.

We also look for discomfort with ambiguity paired with willingness to operate in it anyway. Product engineers don't wait for perfect specs. They clarify what they can, make reasonable calls on the rest, and stay close enough to the outcome to course-correct, iterate and solve the right problem.

The industry is shifting

The best engineers we meet already think this way. They're frustrated by environments that treat them as ticket machines. They want context, ownership, and the ability to influence what gets built — not just how.

This isn't a new idea. Good companies have always valued engineers who think holistically. But the explicit distinction between "software engineer" and "product engineer" is becoming more common, and we think that's a good thing. Language matters. When you name something, you can hire for it, develop it, and expect it.

We think the term "software engineer" will start to feel dated. Not because software stops mattering, but because the job is no longer just about the software. It's about the product, the user, and the problem worth solving.

That's who we hire. That's who we are.

]]>
<![CDATA[Fast, Real, and Production-Ready: Prototyping at Opply]]>Recently, I have been reflecting on how the idea of prototyping is often interpreted in software. Too frequently, it is treated as an MVP: an intentionally imperfect product that resembles the real thing, but is never meant to truly be it. Something built quickly, used briefly, and ultimately thrown away.

]]>
https://tech.opply.com/fast-real-and-production-ready-prototyping-at-opply/69678c2cba37540001824116Wed, 14 Jan 2026 16:14:03 GMT

Recently, I have been reflecting on how the idea of prototyping is often interpreted in software. Too frequently, it is treated as an MVP: an intentionally imperfect product that resembles the real thing, but is never meant to truly be it. Something built quickly, used briefly, and ultimately thrown away.

That is not how I understand prototyping.

To better articulate what prototyping should mean, and to describe how we approach it at Opply, I started looking outside the usual software narratives. That search led me to the world of racing car engineering, where prototypes are not disposable sketches, but fully engineered machines designed to run, be tested, and evolve. That comparison is the lens through which I now think about product engineering prototyping at Opply.

When a racing team builds a prototype car, they don’t start with something fragile or disposable.
They build a real car: fully engineered, safe to drive at speed, and capable of running on the track today. What’s missing isn’t quality or discipline, but final optimisation, long-term tuning, and mass-production constraints.

That is how we think about prototyping at Opply.


Not a Demo, Not a Sketch, But a Real Machine

In motorsport, a prototype is not a proof of concept. It is a working vehicle, built by experts, following strict engineering rules, designed to be pushed to its limits so the team can learn fast.

Our software prototypes follow the same principle.

They are not throwaway MVPs or temporary demos. They are real systems, built inside the real codebase, following the same architectural rules, conventions, and standards as any production feature.

A prototype at Opply:

  • Runs in real environments
  • Follows the repository’s architecture and domain boundaries
  • Includes tests, logging, and guardrails
  • Is safe to evolve, not something we plan to discard

The goal is learning through reality, not simulation.


What We Deliberately Leave Open

Just like a racing prototype, an Opply prototype is intentionally unbounded in certain areas.

We defer:

  • Hard scalability constraints
  • Performance optimisation
  • Final infrastructure sizing
  • Long-term operational tuning

We do not defer:

  • Code quality
  • Correctness
  • Maintainability
  • Engineering discipline

This balance allows us to move fast without creating hidden debt. The system may not be ready for mass scale, but it is always ready to be trusted.


Why We Don’t Build Throwaway MVPs

In software, MVPs are often designed to be temporary. In practice, they rarely are.
What starts as “just an experiment” frequently becomes production.

By treating prototypes as production-ready from the start, we avoid that trap.

Our prototypes are designed to:

  • Evolve naturally into long-lived systems
  • Be hardened and scaled when the value is proven
  • Preserve clarity and ownership from day one

There is no switch from “hack mode” to “real engineering” at Opply. It is always real engineering.


Prototyping as a Team Discipline

This approach only works because it is shared.

Prototyping at Opply is not about individual speed or clever shortcuts. It is about collective discipline, trust in the codebase, and confidence that even our earliest iterations reflect the standards we stand for.

Like a racing prototype, what we build early may change: components will be upgraded, constraints will be added, and performance will be refined, but the foundation is solid from the first lap.

At Opply, prototyping is not the opposite of production.
It is simply production, allowed to move faster before the final limits are set.

]]>
<![CDATA[Why Responsible AI Matters More Than Ever]]>Imagine you have deployed an AI agent in your team that approves payments, deals with invoices and makes financial decisions. One day, it makes a call that results in a significant monetary loss. Who takes responsibility?

Is it the developer who built the agent?
The team that deployed it?
The

]]>
https://tech.opply.com/why-responsible-ai-matters-more-than-ever/69247b6b56beaf0001ed364bMon, 24 Nov 2025 16:16:08 GMT

Imagine you have deployed an AI agent in your team that approves payments, deals with invoices and makes financial decisions. One day, it makes a call that results in a significant monetary loss. Who takes responsibility?

Is it the developer who built the agent?
The team that deployed it?
The user who trusted it?
Or the agent itself?

These kinds of dilemmas are becoming increasingly common as companies adopt autonomous or semi-autonomous AI systems. They highlight an important truth: AI agents need to be responsible by design, and responsibility requires reliability, transparency, and human oversight.

To understand how to address these dilemmas, we first need to define what responsibility looks like in an AI agent.

What Does a Responsible AI Agent Mean?

A responsible agent is one built to uphold user trust. This involves:

  • Transparency: users should understand agent decisions at a high level
  • Accountability: it must be clear who is responsible for the outcomes the agent affects
  • Predictability: the agent should behave consistently and avoid unexpected actions
  • Oversight: humans should be part of the loop, especially in sensitive contexts

Without responsibility built in, even high-performing systems can undermine trust.

Current Efforts in Interpretability

Responsibility starts with understanding how agents make decisions, which brings us to interpretability. At the core of most AI agents are large language models, and these models are still very much black boxes. We do not yet have a clear picture of how they form internal concepts, how they combine those concepts, or what specific pathways lead to a given decision.

Anthropic is one of the groups pushing hardest to change this. Their recent work focuses on mapping internal representations inside language models. The goal is to understand how different concepts are stored, how they interact, and how complex reasoning patterns emerge from billions of parameters. In their latest research, they show early progress in identifying clusters of neurons that encode specific ideas and tracking how these ideas transform as the model processes language.

The progress is impressive, but it is still the beginning. These models contain enormous and highly entangled structures, and only a small fraction of them can currently be interpreted with any confidence. Even with advanced techniques, most of the model remains opaque.

This means we cannot rely on interpretability research alone to guarantee responsible behavior in real-world systems.

Why We Need Our Own Safeguards

Interpretability is helpful, but practical responsibility comes from designing safeguards directly into AI workflows.

For decision-making agents, this includes:

  • Human-in-the-loop checks, especially for high-impact decisions
  • Clear escalation paths, where risky predictions are flagged to humans
  • Structured approval flows, ensuring that AI suggestions remain suggestions
  • Operational monitoring, so deviations from expected behavior are caught quickly

These safeguards ensure that humans remain the final decision makers in critical situations. They also align with how many businesses already operate, making the integration natural.
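As an illustration of the first three safeguards, here is a minimal sketch of an approval flow for the payment agent from the opening example. The threshold, field names, and queue are all hypothetical:

```python
from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 1_000.0  # hypothetical: anything above this needs a human

@dataclass
class ReviewQueue:
    """Escalation path: items a human must approve before anything happens."""
    pending: list = field(default_factory=list)

def handle_payment(payment: dict, queue: ReviewQueue) -> str:
    """Auto-approve only low-impact payments; flag the rest for human review."""
    if payment["amount"] > APPROVAL_THRESHOLD or payment.get("flagged"):
        queue.pending.append(payment)  # the agent's call stays a suggestion
        return "escalated"
    return "approved"

queue = ReviewQueue()
status_small = handle_payment({"amount": 50.0}, queue)      # auto-approved
status_large = handle_payment({"amount": 25_000.0}, queue)  # sent to a human
```

The point is not the specific threshold but the shape: the agent can act alone only inside an explicitly bounded space, and everything else is routed to a person.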

Safeguards are not just best practice. In many cases, they are becoming legal requirements. In several regions, certain automated decisions must legally involve a human. For example, regulations in areas like finance, HR, and consumer rights place limits on fully automated decisions that could significantly affect individuals or companies.

The regulatory landscape is expanding quickly. New governance frameworks are emerging across the EU, US, and beyond that require:

  • auditability
  • documentation of AI decision pathways
  • human review for impactful automated decisions

The message behind these rules is clear: responsibility cannot be delegated to an agent.

Looking Ahead: Building Responsible AI at Opply

Responsibility in AI is not just a technical challenge; it’s a strategic commitment. Interpretability research will continue to advance, and new governance structures will emerge, but businesses today need systems that are safe, transparent, and easy to trust.

At Opply, we are committed to building AI tools that help SME food and beverage brands move faster while keeping humans firmly in control. As the field evolves, we will continue to design our agents with responsibility, transparency, and trust at the center.

]]>
<![CDATA[Building Better Prompts: A Practical Guide to Prompt Engineering]]>Prompt engineering has become part of daily work for many people with the rise of modern LLMs like ChatGPT, Claude, Gemini, DeepSeek and others. In simple terms, it is the practice of structuring and phrasing instructions for AI systems so that they can perform tasks more accurately. Whether you’

]]>
https://tech.opply.com/building-better-prompts-a-practical-guide-to-prompt-engineering/69244829e4e1c70001814110Mon, 24 Nov 2025 11:58:31 GMT

Prompt engineering has become part of daily work for many people with the rise of modern LLMs like ChatGPT, Claude, Gemini, DeepSeek and others. In simple terms, it is the practice of structuring and phrasing instructions for AI systems so that they can perform tasks more accurately. Whether you’re querying a model for quick tasks, creating content, or building AI-powered tools, the way you phrase a prompt has a major impact on the quality of the output. Even small adjustments can improve reliability, reduce hallucinations, and make the results much easier to use in production.

At Opply, as we develop agents to automate supply chain processes, we’ve identified several practical techniques we rely on to get stable, accurate and context-aware outputs from the models. Below are five tips that have helped us the most.

1. Give the model an identity

One of the most effective prompt engineering techniques is assigning the model a clear identity. Instead of giving instructions in a generic way, you frame the model with a defined role so it adopts the right perspective. This helps it understand how it should think, not just what it should do. Some examples include:

Domain specialist

You are a food and beverage domain specialist who understands ingredient standards.

Tone or style control

You are a concise technical writer who explains concepts in simple language.

Safety-focused identity

You are an AI agent, not a human expert. Provide safe, general information and do not make assumptions.

There are many other identities you can use to guide behaviour, such as giving the agent a company-specific role, defining its reasoning style, or narrowing its focus to a single task. These identities help anchor the agent’s behaviour, align responses with the role you expect, and often improve accuracy and consistency.

2. Define the input and output structure

LLM models perform best when the expected format is crystal clear. Provide a structured input format, and define the exact output format you want. JSON works especially well because it is easy to parse in production code.

Example:

The input will be provided in the following JSON format:
{
  "product_name": "string",
  "description": "string"
}

Return a valid JSON object with the exact structure below. Do not add or remove fields:
{
  "ingredients": ["string"],
  "allergens": ["string"],
  "category": "string"
}

Think of it as giving the model a form to fill in rather than asking for a free-form answer. This reduces ambiguity and makes the results more reliable.
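Because the output contract is fixed, it can also be enforced in code. A minimal sketch, using the field names from the example above (the helper itself is hypothetical):

```python
import json

# The exact fields the prompt promised the model would return.
EXPECTED_FIELDS = {"ingredients", "allergens", "category"}

def parse_model_output(raw: str) -> dict:
    """Fail loudly when a reply breaks the contract, instead of passing it on."""
    data = json.loads(raw)  # raises ValueError on invalid JSON
    if set(data) != EXPECTED_FIELDS:
        mismatch = sorted(set(data) ^ EXPECTED_FIELDS)
        raise ValueError(f"schema mismatch: {mismatch}")
    return data

reply = '{"ingredients": ["oats", "honey"], "allergens": ["gluten"], "category": "snack"}'
parsed = parse_model_output(reply)
```

A strict parser like this turns a vague "the model sometimes returns odd output" into a concrete, loggable error at the boundary.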

3. Provide fallback responses

Whenever you ask the model to extract, classify or detect something, always define what it should do when there is nothing to return. Without a fallback, the model may try to invent information to complete the task. By giving it a default response, you prevent this from happening and keep the output predictable.

Example:

If no ingredients are found in the input, return "none".
If the field cannot be determined, use an empty list or an empty string instead of guessing.

Clear fallback rules reduce hallucinations and make your downstream pipelines easier to maintain.
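Even with fallback rules in the prompt, it is worth enforcing the same defaults in code as a safety net. The field names and defaults below are illustrative, mirroring the ingredient-extraction example.

```python
# Enforce the fallback rules downstream as well: any missing or null
# field is replaced with its safe default, so the output shape is
# always predictable regardless of what the model returned.

FALLBACKS = {"ingredients": [], "allergens": [], "category": ""}

def apply_fallbacks(data: dict) -> dict:
    """Fill missing or null fields with their declared defaults."""
    return {
        key: (data[key] if data.get(key) is not None else default)
        for key, default in FALLBACKS.items()
    }

cleaned = apply_fallbacks({"ingredients": ["salt"], "category": None})
```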

4. Be specific and give examples

Specificity is one of the strongest tools you have. Clear rules and unambiguous instructions help the model understand exactly what you expect, which reduces variation in the output. This is especially important for tasks that are unusual or multi-step, or that require domain knowledge the model might not naturally apply.

Whenever possible, include examples. These can be concrete input-output pairs, or lighter scenario-based hints such as:

If two categories seem possible, choose the more specific one.
If the description contains multiple products, only analyse the first one.

Examples act as behavioural reference points and give the model patterns to follow. Even a single example can significantly improve consistency, reduce confusion and help the model generalise correctly across edge cases.
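Input-output examples are often embedded directly in the prompt text ("few-shot" prompting). One simple way to assemble such a prompt is sketched below; the template and example pair are illustrative.

```python
# Build a prompt that shows the model worked examples before the real
# input. Each example is an (input, output) pair rendered in the same
# format the model is expected to continue.

EXAMPLES = [
    (
        '{"product_name": "Oat Drink", "description": "oats, water, salt"}',
        '{"ingredients": ["oats", "water", "salt"]}',
    ),
]

def build_prompt(task: str, examples: list[tuple[str, str]], user_input: str) -> str:
    """Render task, worked examples, then the real input to complete."""
    parts = [task]
    for shown_input, shown_output in examples:
        parts.append(f"Input: {shown_input}\nOutput: {shown_output}")
    parts.append(f"Input: {user_input}\nOutput:")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Extract the ingredients as JSON.",
    EXAMPLES,
    '{"product_name": "Almond Bar", "description": "almonds, honey"}',
)
```

Ending the prompt at `Output:` nudges the model to continue in exactly the pattern the examples established.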

5. Avoid overloading the prompt

Building on the previous tip, it is helpful to provide examples and clear rules, but there is a limit. If your prompt contains too many conditions, exceptions or branching cases, the model may struggle to follow them. Natural language instructions become harder to interpret when several layers of logic are stacked together. If a task involves multiple branches or complex decision paths, it is more effective to break it into separate steps or even separate prompts. This keeps each instruction focused, reduces errors and helps the model follow the intended logic more reliably.

Example:
Instead of writing one long prompt like:

Extract ingredients, classify the product category, check for allergens, and if the text contains multiple products pick only the first one, and if there are no allergens return none, and if the category is unclear ask for clarification.

Break it into smaller, clearer steps:

  1. Prompt 1: Extract the ingredients.
  2. Prompt 2: Based on the extracted ingredients, classify the product category.
  3. Prompt 3: Extract allergens and return an empty list if none are found.

Each prompt then has a single purpose, which makes the entire chain far more stable and predictable.
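The three steps above can be wired together as a small chain, each step feeding the next. In this sketch `call_model` is a stand-in stub for a real LLM client that returns canned JSON; the chaining logic is the point, not the stub.

```python
import json

def call_model(step: str, payload: str) -> str:
    """Placeholder for a real LLM call; returns a canned reply per step."""
    canned = {
        "extract": '["oats", "water"]',
        "classify": '"beverage"',
        "allergens": '["gluten"]',
    }
    return canned[step]

def run_chain(description: str) -> dict:
    """Run the three focused prompts in sequence, passing results forward."""
    ingredients = json.loads(call_model("extract", description))
    category = json.loads(call_model("classify", json.dumps(ingredients)))
    allergens = json.loads(call_model("allergens", json.dumps(ingredients)))
    return {
        "ingredients": ingredients,
        "category": category,
        "allergens": allergens,
    }

result = run_chain("Oat drink: oats, water.")
```

Because each step has one job, a failure is easy to localise: you can log and retry a single prompt instead of re-running one tangled mega-prompt.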

Conclusion and Final Tip

These tips will help you craft stronger prompts, but keep in mind that prompts almost never work perfectly on the first try. Iteration and testing are part of the process. Write a version, see how the model behaves, adjust it and try again. Over time, this cycle will help you build a library of reliable prompting patterns that consistently produce high quality results.

What prompting patterns have worked best for you? Let us know and we might explore them in a future post!

]]>
<![CDATA[Pragmatic Engineering in High-Velocity Startups]]>In fast-moving startups, engineering happens under pressure: ambitious roadmaps, evolving priorities, limited resources, and a constant need to deliver value quickly. Under these conditions, one mindset consistently outperforms all others:

Pragmatism.

At Opply, pragmatic engineering is not about cutting corners or lowering standards. It’s about focusing relentlessly on

]]>
https://tech.opply.com/pragmatic-engineering-in-high-velocity-startups/691f06e519a8de0001aadc68Thu, 20 Nov 2025 12:21:23 GMT

In fast-moving startups, engineering happens under pressure: ambitious roadmaps, evolving priorities, limited resources, and a constant need to deliver value quickly. Under these conditions, one mindset consistently outperforms all others:

Pragmatism.

At Opply, pragmatic engineering is not about cutting corners or lowering standards. It’s about focusing relentlessly on outcomes, embracing iteration, and making smart trade-offs that move the business forward. The goal is building the right thing, at the right time, in the simplest effective way.

Drawing from our internal principles, this post outlines how pragmatic engineering shapes how we work, deliver, and grow.


1. Start With Outcomes, Not Output

Code, dashboards, and models only matter if they produce meaningful change. That means we begin every piece of work by asking:

  • What problem are we solving?
  • How does this help the business today?
  • Can we deliver value with half the scope?

Impact comes first. Everything else is optional.


2. MVP Is the First Real Step

In a high-velocity environment, the Minimum Viable Product is the smallest thing that solves the use case, even if imperfect or manually assisted.

A good MVP at Opply is:

  • Functional enough to be used meaningfully
  • Easy to demo and discuss
  • A catalyst for real feedback

Quick iteration beats delayed perfection, every time.


3. Share Progress Early and Often

Waiting until something is “polished” is a luxury most startups don’t have. Clarity emerges through feedback, not isolation.

That’s why we favour:

  • Quick async updates
  • Short demos
  • Rough prototypes
  • Early walkthroughs

Making work visible early helps avoid misalignment and ensures we maximise learning per unit of effort.


4. Own the Outcome, Not Just the Task

Ownership isn’t about ticking boxes; it’s about understanding why something matters and ensuring it delivers the intended impact.

Pragmatic ownership means:

  • Challenging bloated scope
  • Thinking in terms of milestones, not micro-tasks
  • Asking hard questions about value, effort, and timing
  • Ensuring the final outcome actually solves the problem

When engineers own outcomes, alignment becomes natural and surprises disappear.


5. Bias Toward Action

Perfect plans rarely survive contact with reality. Instead, we prioritise:

Start small → Ship → Learn → Refine.

Small steps forward teach more than elaborate plans that never materialise. Momentum unlocks blockers, surfaces risks early, and keeps the organisation adaptable.

We permit ourselves to fail fast, and learn even faster.


6. Design for Change, Not Permanence

In a scaling startup, nearly everything you build today will evolve tomorrow. That’s a feature, not a bug.

Pragmatic engineering acknowledges that:

  • Code will be rewritten
  • Solutions will be replaced
  • What we learn today reshapes tomorrow

Shortcuts are acceptable when intentional and documented. Flexibility is a competitive advantage, especially in unexplored problem spaces.


7. Feedback Makes the System Better

Processes, systems, and culture are living entities. They improve through reflection.

We encourage a habit of:

  • Asking what worked
  • Exploring what didn’t
  • Understanding why
  • Iterating on both product and practice

Continuous improvement keeps us grounded, aligned, and resilient as we scale.


Final Thought

Pragmatic engineering is a mindset:
Outcome-driven. Fast-learning. Flexible. Humble. Curious.

It’s about choosing progress over polish, momentum over rigidity, and clarity over complexity. When teams embrace this approach, they unlock the ability to build exceptional products with speed, confidence, and purpose.

High-velocity environments don’t reward perfection; they reward teams who learn, adapt, and deliver value continuously.

And that’s exactly the kind of engineering culture we strive to build at Opply.

]]>
<![CDATA[We Stopped Taking Tickets, We Started Owning Products.]]>At Opply, we’ve always had great engineers. Talented, fast, committed. But when I joined, something was missing. Not skill, not dedication, but ownership.

We were brilliant executors. Hand us a ticket, and we would deliver. Hand us a bug, and we would fix it. Features got built, tickets

]]>
https://tech.opply.com/the-end-to-end-impact-of-full-stack-product-engineers/691f02ae19a8de0001aadc5eThu, 20 Nov 2025 12:04:26 GMT

At Opply, we’ve always had great engineers. Talented, fast, committed. But when I joined, something was missing. Not skill, not dedication, but ownership.

We were brilliant executors. Hand us a ticket, and we would deliver. Hand us a bug, and we would fix it. Features got built, tickets got closed. From the outside, it looked like progress. But here’s the truth: we weren’t in control of the game we were playing.

The way we worked felt like a production line. Product wrote the specs, engineering built the features, QA tested them. Each team did their part, then passed the baton to the next.

And with every handoff, a little accountability slipped away. Issues that could have been caught in discovery or design only surfaced when QA tested them at the very end. For engineers, the scoreboard was simple: no bugs = good job. Whether the work actually moved a business metric? That was someone else’s problem.

Yes, the machine kept running. But it ran slowly, reactively, and without a clear sense of purpose. We weren’t building products: we were building parts.


The Shift: From Executors to Product Engineers

The breakthrough wasn’t a new tool or process. It was a change in identity. We stopped thinking of ourselves as “software engineers” in the narrow sense, as people who just build what is asked. Instead, we became "product engineers": owners of problems, architects of solutions, accountable for the outcome.

This mindset shift was huge. It meant that when a business challenge landed on our desks, our first question wasn’t “What’s the spec?” but “What’s the real goal here?” and “What’s the smartest way to achieve it?”

We started getting involved early in product discovery. We asked questions. We challenged assumptions. We brought new ideas to the table. By the time something reached development, we didn’t just understand the “what”, we understood the “why.”


What Product Engineers Own

We still collaborate with product management to decide which problems to tackle first. But when it comes to solving those problems, the ownership is fully with the engineers:

  • We decide which solution to pursue
  • We define what success looks like
  • We design and build the technical implementation
  • We test it at every level, from unit to end-to-end
  • We track our own KPIs to measure impact

It’s no longer about “Did we ship?” It’s about “Did it work?”


Telling the Story

When a milestone ships, it’s no longer product managers walking stakeholders through a demo. It’s the engineers.

We don’t just click through a feature. We explain why it matters, how we approached it, the trade-offs we made, and the results we expect. It’s our work, and we own the narrative as much as the code.


The Results

The change has been dramatic: we are faster, because we build MVPs, get them into users’ hands, and iterate without waiting for the perfect plan.
We are building with higher quality, because understanding the “why” leads to smarter technical decisions. And our work now connects directly to the company’s goals: every metric we track ties to a business outcome.

But maybe the most important shift is in the way engineers feel. They’re not ticket-takers anymore. They’re problem-solvers, innovators, and genuine stakeholders in Opply’s success.


What We Learned

Ownership isn’t something you declare in a meeting; it’s something you practise, every day. You have to give people the trust, context, and space to take it on. And when you do, the difference is night and day.

We’ve moved from building features to building products. From taking tickets to taking responsibility. From software engineers… to product engineers.

]]>
<![CDATA[Why a Tech Blog?]]>

Opply is entering an important new phase, one where our technology, our pace of innovation, and our team's work deserve to be seen. This blog is the first step in asserting our readiness for a public-facing tech presence and sharing the story behind what we are building.

Why

]]>
https://tech.opply.com/why-a-tech-blog/691abeab9f80ae0008ef5b2dMon, 17 Nov 2025 06:20:27 GMTWhy a Tech Blog?

Opply is entering an important new phase, one where our technology, our pace of innovation, and our team's work deserve to be seen. This blog is the first step in asserting our readiness for a public-facing tech presence and sharing the story behind what we are building.

Why a blog now?

Because our brand is ready.
We have matured our platform, expanded our engineering capabilities, and reached a point where the work happening inside Opply is valuable not only internally but also to the wider world. Publishing our thinking helps us define who we are as a tech organisation, not just a product.

It is also a chance to highlight the beliefs that guide the AI systems powering Opply: how we design them, what principles matter to us, and how we approach building responsible, transparent and scalable technology.

What to expect

Over the next days and weeks, this space will fill with posts that reflect how we build:

  • The principles behind the AI systems at Opply
  • Pragmatic engineering stories from the trenches
  • End to end product delivery insights
  • Learnings from our rapid growth and internal transformation

Some posts will be polished, some intentionally simple. The point is to start sharing consistently and transparently, and iterate from there.

A space for the team to take pride in our work

We also want this blog to be a home for our people.
Opply engineers, data scientists and researchers do exceptional work every day. By writing publicly about what we build and how we think, we are giving our team a place to take pride in their craft and celebrate the impact they create.

This is the beginning of our public engineering story, a story that will evolve with the company, post by post.

]]>