Soimplement — Clarity Drives Performance
https://soimplement.com — Sun, 08 Mar 2026 22:56:05 +0000

The Hidden Cost of Organisational Fragmentation https://soimplement.com/executive-analysis/the-hidden-cost-of-organisational-fragmentation/ Thu, 05 Mar 2026 13:30:19 +0000

In today’s complex business landscape, organisations grapple with inefficiencies that erode profitability and stifle growth. Structural fragmentation, often misunderstood as mere departmental silos, represents a deeper malaise: the absence of a unified decision-making framework. This article explores the insidious costs of such fragmentation, drawing on expert insights to illuminate pathways to greater coherence and efficiency. Targeted at C-level executives, it provides a strategic lens to diagnose and address these hidden burdens.

Understanding Structural Fragmentation

Structural fragmentation occurs when an organisation lacks a cohesive model for decision-making, leading to dispersed authority and inconsistent processes. Unlike traditional silos, which isolate functions, fragmentation permeates the entire entity, manifesting as ad-hoc decisions without alignment to overarching goals. This results in a patchwork of initiatives where teams operate in isolation, often duplicating efforts or pursuing conflicting objectives.

Research highlights that fragmentation arises from diverse sources. For instance, in interorganisational projects, “shared understanding refers to a state when team members have a common interpretation of the tasks to be completed, the approaches required to complete these tasks, as well as their intended outcomes and deliverables.” Yet, “interpersonal, technical, and contextual sources of fragmentation in understandings can emerge over time,” impeding unified action. Executives must recognise that this is not merely operational discord but a structural flaw that undermines strategic execution.

The Cost Escalation from Lack of Operational Transparency

Without transparency in operations, costs balloon invisibly. Fragmented structures obscure workflows, hiding inefficiencies like redundant processes or unmonitored expenditures. Leaders may believe systems are optimised, but opacity masks realities such as overstaffing or wasteful resource allocation.

Expert analysis reveals that “learning about impacts is the necessary first step toward reducing those impacts,” yet opacity deters investment in visibility. In supply chains, for example, “transparency can have unintended consequences,” as firms avoid scrutiny to evade penalties, perpetuating hidden costs. For C-suite leaders, this translates to eroded margins—estimates suggest operational opacity can inflate costs by 15-20% through undetected leaks in efficiency and accountability.

Reporting Versus Control: A Critical Distinction

Many organisations conflate reporting with control, assuming dashboards and metrics equate to governance. However, reporting merely documents outcomes; control demands proactive intervention and alignment. In fragmented setups, reports often lag, failing to enable real-time adjustments.

This mismatch is evident in management literature: reporting provides historical data, but “control requires structure, not just awareness of outcomes.” True control integrates decision rights with performance insights, whereas fragmented reporting leads to reactive management. Executives should prioritise systems where reporting informs dynamic control, avoiding the illusion of oversight that masks underlying chaos.

Indicators of Diminishing Controllability

Loss of organisational control manifests through telltale signs. Decision latency—the delay between identifying an issue and acting—signals fragmented authority, where approvals bottleneck progress. Shadow processes emerge as unofficial workarounds, bypassing formal systems and fostering inconsistency. Duplicate tooling arises when teams adopt redundant technologies, inflating IT costs and complicating integration.

These indicators compound: “decision latency is a structural outcome of how organizations are designed,” leading to sluggish responses. In one analysis, “matrix organizational structure created ambiguous accountability and duplicated efforts,” highlighting how fragmentation breeds inefficiency. C-level leaders must monitor these metrics to preempt escalation into broader dysfunction.
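Decision latency also lends itself to simple measurement. As an illustrative sketch — the field names, dates and 14-day threshold below are invented for illustration, not drawn from the research cited — latency can be tracked as the gap between when an issue is identified and when a decision is actioned:

```python
from datetime import datetime

def decision_latency_days(identified: str, actioned: str) -> float:
    """Days elapsed between issue identification and action taken."""
    fmt = "%Y-%m-%d"
    delta = datetime.strptime(actioned, fmt) - datetime.strptime(identified, fmt)
    return delta.days

def flag_bottlenecks(decisions, threshold_days=14):
    """Return decisions whose latency exceeds a (hypothetical) threshold."""
    return [
        d["id"] for d in decisions
        if decision_latency_days(d["identified"], d["actioned"]) > threshold_days
    ]

# Invented sample data: two decisions, one acted on promptly, one stalled
decisions = [
    {"id": "pricing-review", "identified": "2026-01-05", "actioned": "2026-01-09"},
    {"id": "tooling-consolidation", "identified": "2026-01-05", "actioned": "2026-02-20"},
]
print(flag_bottlenecks(decisions))  # ['tooling-consolidation']
```

Trending this number over time gives leaders an early, quantitative signal of fragmenting authority before it escalates into broader dysfunction.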

Why Transformations Often Bypass Structural Issues

Despite ambitious transformation initiatives, most fail to tackle root structural problems. Leaders frequently redesign org charts or implement new tools without addressing fragmentation’s core: incoherent decision models. This superficial approach yields short-term gains but sustains underlying chaos.

Insights reveal common pitfalls: “designing an organization based on the people results in compartmentalized processes… reducing overall efficiency.” Moreover, “drastic staffing cuts or process changes can result in reduced employee morale… and an overall distraction from the mission.” Transformations falter because they prioritise symptoms over structure, leaving decision-making fragmented and costs unchecked.

Identifying Chaos Through Clarity Audit

The Clarity Audit offers a systematic method to pinpoint fragmentation’s origins. By auditing strategy, execution, and measurement, it uncovers disconnects like misaligned activities or random acts draining resources.

As described, “the Clarity Audit for Smarter Growth identifies where those disconnects and random acts live — and shows you exactly how to fix them.” It involves “tailored stakeholder interviews to uncover disconnects, inefficiencies, and friction points,” fostering alignment. For executives, this tool transforms chaos into clarity, enabling sustainable growth by resolving structural fragmentation at its source.

In conclusion, organisational fragmentation imposes hidden costs that demand C-level intervention. By embracing transparency, distinguishing reporting from control, and leveraging tools like the Clarity Audit, leaders can forge cohesive structures that drive efficiency and value.

References and Quotes

  1. Henderson, J. (2001). Decision making in the fragmented organisation: A utility perspective. https://www.researchgate.net/publication/242175715_Decision_making_in_the_fragmented_organisation_A_utility_perspective
  2. McCarthy, S. et al. (2021). Shared and fragmented understandings in interorganizational IT project teams. https://www.sciencedirect.com/science/article/abs/pii/S0263786321000788 Quote: “Interpersonal, technical, and contextual sources of fragmentation in understandings can emerge over time.”
  3. Fredrickson, J.W. (1986). The Strategic Decision Process and Organizational Structure. https://journals.aom.org/doi/10.5465/AMR.1986.4283101
  4. Kalkanci, B. & Plambeck, E. (2020). The Costs and Benefits of Supply Chain Transparency. https://www.gsb.stanford.edu/insights/costs-benefits-supply-chain-transparency Quote: “Transparency can have unintended consequences… disclosure mandates can backfire.”
  5. Verburg, R.M. et al. (2017). The Role of Organizational Control Systems. https://pmc.ncbi.nlm.nih.gov/articles/PMC5834078
  6. Ingason, A. (2025). Decision Latency Is the Hidden Bottleneck. https://medium.com/@aingason/decision-latency-is-the-hidden-bottleneck-killing-modern-organizations-fe21141c297d
  7. ScottMadden. (n.d.). 7 Reasons Why Organization Structures Fail. https://www.scottmadden.com/insight/7-reasons-why-organization-structures-fail Quote: “Drastic staffing cuts or process changes can result in reduced employee morale…”
  8. Patterson, L. (n.d.). Clarity Audit for Smarter Growth. https://laurapatterson.co/clarity-audit-smarter-growth Quote: “The Clarity Audit… identifies where those disconnects and random acts live.”
The cloud cost trap https://soimplement.com/architecture-delivery/the-cloud-cost-trap/ Wed, 04 Mar 2026 23:42:53 +0000

The cloud cost trap

As a diligent investor in software firms, you want your portfolio companies to be lean, mean, growth machines. You certainly don’t want hundreds of thousands of dollars filling your cloud provider’s coffers when they could be going to your bottom line.

So, beware of the cloud cost trap. You and your executive teams may not be getting the cloud efficiencies you bargained for. Here are three reasons why – and five ways to fix it.

Cloud platforms promise unparalleled speed and ease of application development and deployment. AWS, GCP and Azure offer a dazzling array of services designed to make life easy for their customers’ engineers – whether developers, DevOps or data scientists.

It’s easy to fall into the trap of thinking that, just because you’re using a scalable platform and their latest tools, resource efficiency is a given. This can be a long way from the truth.

To explain why, consider one of the recent innovations in cloud computing to go mainstream: “containerisation”. A container is a package of application code and all its operating dependencies that can run quickly and reliably across different computing environments.

Containers offer a range of benefits: portability between different platforms and clouds, faster software-delivery cycles, easier implementation of modern, microservices-based software architectures and more efficient use of system resources.

This last point is true in theory. Efficiency gains arise because containerised microservices are more resource-efficient than monolithic applications run on physical or virtual servers: they start and stop more quickly, and they can be packed more densely and flexibly on their host hardware.

And in practice? That depends entirely on how effectively containers are deployed.

The problem

Containers require tools to manage and orchestrate where and when they run within and across servers — or in the case of cloud, virtual server instances.

However, this creates another layer of abstraction between DevOps and the underlying hardware, which ultimately determines operational cost — and it is contributing to an increasingly common DevOps problem: infrastructure utilisation is no longer being effectively managed.

Looking deeper, there are three causes of this that Soimplement sees in organisations in varying measures:

  1. Process gap: Resource scheduling carried out by container orchestration is based on the developer’s original guesstimates of their application’s workload — applied as limits — with no follow-up to check whether these estimates were accurate. Since developers’ estimates will be conservative — who doesn’t want their code to run smoothly? — the absence of a workload-validation cycle introduces inefficiency from the start.
  2. Capability gap: The decoupling of DevOps from physical-hardware planning has created a capability gap. Container orchestration tools often make it harder to trace cause and effect. Engineers who have grown up with cloud computing have never hit hardware limitations or had to delve into the detail of resource tuning. As a result, they can find it harder to identify and interpret causes of low utilisation — or even mistake low utilisation for a goal, not a problem!
  3. Accountability gap: Maximising development productivity and speed of deployment are now ingrained in the DevOps psyche. Velocity rules — often overshadowing a latent need or opportunity to realise operational efficiencies.
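The process gap above — estimates applied as limits with no follow-up — can be closed with a basic workload-validation cycle. A minimal sketch, with hypothetical workload names and usage figures (in millicores), that flags containers using less than half of their requested CPU:

```python
def utilisation(requested_mcpu: int, used_mcpu: float) -> float:
    """Observed CPU usage as a fraction of the requested allocation."""
    return used_mcpu / requested_mcpu

# Invented example data: requested limits vs observed average usage
workloads = {
    "checkout-api": {"requested_mcpu": 2000, "used_mcpu": 350.0},
    "search-index": {"requested_mcpu": 1000, "used_mcpu": 820.0},
}

# Flag workloads whose original estimates turned out conservative by 2x or more
overprovisioned = [
    name for name, w in workloads.items()
    if utilisation(w["requested_mcpu"], w["used_mcpu"]) < 0.5
]
print(overprovisioned)  # ['checkout-api']
```

In practice the observed figures would come from the orchestration platform’s metrics, and the flagged workloads would feed back into revised limits — the follow-up step the process gap describes as missing.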

The impact

The financial implications can be substantial. 

It is not uncommon for us to find companies running cloud-container infrastructure that is over-provisioned by a factor of two – sometimes equating to hundreds of thousands, or even millions, of dollars per year. Wasted money that could, no doubt, be allocated to far better value-enhancing causes, not least your EBITDA.

The answer

So, how can this be avoided? Here are five actions software firms can take to address this:

  1. Ensure business priorities are aligned: if infrastructure cost matters, then ensure corresponding objectives are cascaded to the DevOps team. This should not necessarily be cost-down targets — and certainly not in isolation. Also consider including provision of workload-resource optimisation analysis and recommendations as a deliverable.
  2. Ensure resource-utilisation analysis is part of day-to-day operations: When building new applications, this will need to be run more frequently than with mature code bases where workloads are more predictable. Like many DevOps tasks, with the right tools and thinking, this can be run automatically and with minimal additional effort. 
  3. Address skills deficits: Assess your DevOps engineers’ ability to carry out these tasks and, if necessary, address any skills gaps.
  4. Apply technologies appropriately: While containerisation has benefits and advantages in many scenarios, it is not the best approach in all. Avoid blanket approaches to deployment. For example, in complex environments the best architecture may be a mix of containerised microservices, instance workloads and serverless approaches.
  5. Get help: If you don’t feel that you have the expertise in-house, seek external support. Like many cloud technologies, container orchestration is relatively new and can be difficult to deploy well — certainly for complex applications — without experience and specialist skills.

Ask these simple questions

If you are non-technical and this is starting to feel complicated, fear not. 

Cloud is all about matching compute power to demand. If you’re running more than a handful of virtual servers, there are few reasons why you should not be achieving 50%, or better, 75% utilisation of what you pay for.

Ask your engineering executives these questions:

  • Is resource utilisation measured? If so, is it above or below 75%?
  • Is cost efficiency part of your DevOps’ functional and personal objectives?
  • How is this put into effect?
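As a back-of-envelope illustration of the utilisation question — all figures below are invented — fleet-level utilisation is simply consumed capacity divided by paid-for capacity:

```python
def fleet_utilisation(instances) -> float:
    """Fraction of paid-for vCPU capacity actually consumed across a fleet."""
    paid = sum(i["paid_vcpus"] for i in instances)
    used = sum(i["avg_used_vcpus"] for i in instances)
    return used / paid

# Hypothetical fleet of three virtual server instances
fleet = [
    {"paid_vcpus": 16, "avg_used_vcpus": 5.0},
    {"paid_vcpus": 8,  "avg_used_vcpus": 6.5},
    {"paid_vcpus": 32, "avg_used_vcpus": 9.0},
]

u = fleet_utilisation(fleet)
print(f"{u:.0%}")            # 37%
print(u >= 0.75, u >= 0.50)  # False False
```

A fleet like this one, running at 37% of what is paid for, would fall well short of both the 50% and 75% benchmarks above.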

Cost matters

Deploying applications to the cloud has never been easier. Doing it well — efficiently, securely, reliably and at scale — requires solid engineering disciplines.

It is all too easy for the sophistication of today’s cloud services to obfuscate the basic economics and allow costs to balloon out of control.

Cash is king. Be sure you’re not wasting it.

 


Investor protection https://soimplement.com/governance-risk/investor-protection/ Wed, 04 Mar 2026 23:42:05 +0000

Investor protection

For many investors in software start-ups, engineering is the least understood function and yet one of the biggest areas of executional risk. This need not be the case. 

Here are six areas to probe for an early-warning of poor health.

Ease of application development and distribution is creating more competition, changing consumer expectations and shrinking market windows in which software firms can build sustainable growth. To win, they must be able to adapt and innovate faster than ever. No longer can firms survive with poor technology.

In an ideal world, a start-up’s path is a series of accretive iterations — nimble, well-timed tacks pushing the business steadily forward through competitive headwinds.

The reality is more often a scramble — a rush to launch, to find its market, to satisfy initial adopters, to scale. Each change in direction or priority leaves scars on the firm’s code base and infrastructure, progressively sapping its ability to adapt, grow and compete.

 

Download a detailed analysis of symptoms and implications of engineering distress

Without the right design choices and operational disciplines, application and infrastructure can quickly come to resemble the very antithesis of agility and velocity: ossified, costly to operate, unscalable, unreliable or insecure.

Hidden from view, the impact invariably only surfaces in times of crisis or major change — when trying to pivot or move to a new market, reduce cash burn, scale up rapidly or deal with key staff departures. Suddenly product reconfiguration becomes a mountain to climb; geographic expansion demands major effort and capital; services fail under load; costs can’t be easily reined in…

Investment risk

Unsurprisingly, the relative opacity of engineering can be a concern for investors in early-stage companies in particular.

Despite the promise of rapid, easy application creation offered by modern development practices and cloud platforms, there are many ways in which a firm’s application technology can take a wrong turn. For example:

  • Architectural decisions that fail to reflect the dynamics and dependencies of underlying cloud infrastructure, or indiscriminately apply concepts or tools without considering what’s best for the business
  • Cloud-technology choices that lead to the wrong tools for the job — be they core compute, data or network services, or orchestration, provisioning, monitoring and developer tools
  • Monitoring that fails to provide the requisite granularity and actionable insights to protect and grow the business
  • Development operations that lack capabilities, controls and automation to deliver reliably and at pace
  • Security design that either fails to provide the requisite protection or is unnecessarily constraining service efficiency and scalability.

Ask the right questions

For investors from a non-technical background, it doesn’t have to be a matter of crossed fingers or “gut feel”. Signs of distress can be spotted if you know what to look for. Here are six areas to explore: 

1. Rate of development

  • General productivity: Seemingly simple enhancements involve major changes and take weeks not days to complete
  • Fixing high-severity service issues — which invariably involve non-support staff — takes several days
  • Customer onboarding repeatedly consumes development resource

2. Infrastructure cost, utilisation and control

  • Minimum infrastructure or “baseline” cost is a significant proportion of total business outgoings, and cost per user bears no relation to the business model
  • Utilisation of paid-for cloud services is below 50%
  • No-one can tell you accurately what the infrastructure cost was yesterday, last week or last month

3. Service reliability and management

  • Service performance is consistently below customer expectations and/or unscheduled outages add up to more than 60 minutes per week
  • Resolution of issues regularly involves a significant proportion — 25% or more — of non-support staff
  • Handling of demand spikes relies on manual rather than automated capacity-management processes

4. Expansion friction

Launching into new regions requires weeks rather than days or even hours of engineering effort.

5. Vanity technology choices

Novel cloud technologies or services are employed without critical consideration of business impact.

6. Application/platform monitoring gaps

  • Diagnosis of service failures takes days or hours rather than minutes
  • Information-security teams are either overwhelmed or underwhelmed with data
  • User behaviour on the application is not tracked

Any of these items can point to critical shortcomings or inefficiencies in application and infrastructure design, or operational processes.

Note that there is no reference here to types of technology or their relative merits. Instead it’s focussed on probing business outcomes.
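The probes above can be reduced to a simple scorecard. A sketch using the thresholds quoted in areas 2–4 (the metric names and sample values are hypothetical; the thresholds come from the text):

```python
# Each check returns True when the metric indicates engineering distress
THRESHOLD_CHECKS = {
    "cloud_utilisation_pct":     lambda v: v < 50,  # area 2: utilisation below 50%
    "weekly_outage_minutes":     lambda v: v > 60,  # area 3: >60 min outages/week
    "region_launch_effort_days": lambda v: v > 7,   # area 4: weeks rather than days
}

def distress_flags(metrics: dict) -> list:
    """Return the probe areas whose thresholds indicate distress."""
    return [name for name, check in THRESHOLD_CHECKS.items() if check(metrics[name])]

# Invented example: a company with low utilisation and slow regional expansion
metrics = {
    "cloud_utilisation_pct": 42,
    "weekly_outage_minutes": 35,
    "region_launch_effort_days": 21,
}
print(distress_flags(metrics))  # ['cloud_utilisation_pct', 'region_launch_effort_days']
```

The point is not the code but the discipline: each probe is a yes/no question a non-technical investor can put to an engineering executive and track over time.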

Call the coach

Software is now a high-performance discipline where the margins between winning and losing are getting finer. Serious athletes would not think twice about calling a coach if they wanted to up their game. Business is no different.

And of course, there is no getting away from the fact that software and cloud engineering is complex and fast moving. Investment in a health check by external experts will identify areas of weakness and enable timely remediation. There are natural investment or strategic events that would prompt this. For example, funding rounds or significant business-led initiatives such as a market pivot. 

They don’t need to be long, costly or contentious. For an early-stage company, a relatively comprehensive analysis of architecture, systems and processes could be carried out in a few days. Presented and executed correctly, it should provide an opportunity for engineering teams to take stock, learn and grow. 

Improve the odds

Over 90% of software start-ups fail. For those that make it from start-up to scale-up, the odds of success are still stacked against them. Most will have to change course at least once before finding their product-market fit. Those that manage to climb the greasy pole will have taken their applications and platforms from MVPs to production to scaled operations — and kept current as underlying technologies evolve.

Engineering is just one part of what makes a business tick. But its centrality to success has grown significantly as buyers have become more tech savvy and competition has increased.

By asking the right questions at the right times and instigating regular health checks from appropriately skilled experts, this critical executional risk can be mitigated for the benefit of all stakeholders — customers, employees, management, and investors.


Democratising cardholder-not-present payment security https://soimplement.com/case-studies/democratising-cardholder-not-present-payment-security/ Wed, 04 Mar 2026 23:41:38 +0000


Democratising cardholder-not-present payment security

When a new application was developed to help retailers protect their customers from the multi-billion-dollar problem of payment-card fraud, the team was tasked with transforming it from an expensive, complex on-premise solution into a global cloud application that could be adopted by large and small businesses alike.

The result was an award-winning AWS-hosted service that was the first of its kind to achieve PCI DSS Level-1 accreditation — one that would debunk the myth that security and application flexibility and scalability don’t mix.

Challenge

Keeping customers’ payment information secure is a fundamental business requirement. Unsurprisingly, the payment-card industry’s data security standard, PCI DSS, requires sellers taking payments over the telephone to implement significant measures to protect their consumers.

When a new secure method of taking telephone payments emerged that enabled customers to use their keypad to communicate card numbers during a call without exposing them to the seller’s agent, it was a game changer. Contact centres would no longer need to place payment-taking agents in secure, monitored areas away from other operations. Small merchants could take payments without exposing their customers to real or perceived risk. And card users could feel safe that they were not exposed to potential fraud. 

However, there was one problem: it was only available as an on-premise installation requiring substantial time and cost to install. The challenge was how to replicate the solution as a SaaS service that would remove the barrier to adoption for large and small merchants alike.

Approach

For maximum flexibility and scalability, the obvious answer was to build a multi-tenant service on AWS. However, this created a further technical challenge: how to secure a shared infrastructure to the level demanded by PCI DSS for this kind of application.

Received wisdom was that security to the level required would inevitably restrict application flexibility and scalability — many questioned whether it was even possible on a multi-tenant public cloud. The team set out to prove the doubters wrong.

Working closely with a security architect and QSAs (qualified security assessors) from Amazon, the team developed a three-pronged approach to system security: 

  • Perpetual image ‘hardening’: An infrastructure-build process was developed that ensured any instance brought into service was built from the virtual equivalent of bare metal to a prescribed, automated security-hardened specification;
  • Security-centric continuous-delivery pipeline: A CD software-release process with security controls, measures and auditability intrinsic to it — in short a rigorous application of what is now referred to as “devsecops”;
  • Comprehensive event monitoring: Using Splunk, the team was not only able to actively monitor and manage platform health, but also to monitor its estate to detect and prevent threats and optimise incident response.
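To illustrate the second prong — this is a generic sketch, not the team’s actual pipeline, and every check name is invented — a devsecops release gate simply refuses to promote an image until every hardening check has passed:

```python
def release_gate(image_checks: dict) -> bool:
    """Approve a release only if all hardening checks pass; list failures for audit."""
    failures = [name for name, passed in image_checks.items() if not passed]
    if failures:
        print(f"blocked: {failures}")
        return False
    print("release approved")
    return True

# Hypothetical hardening checks run against a candidate image
checks = {
    "built_from_base_image": True,   # rebuilt from the virtual equivalent of bare metal
    "no_default_credentials": True,
    "audit_logging_enabled": False,  # a failing check blocks the release
}
release_gate(checks)  # blocked: ['audit_logging_enabled']
```

Making the gate, its checks and its audit trail intrinsic to the delivery pipeline — rather than a manual sign-off — is the essence of the devsecops approach described above.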

Through a process of creative problem solving and by putting information security at the heart of the systems-and-operational design process, the team created a PCI-DSS-compliant application without compromising the intrinsic flexibility and scalability advantages of cloud. 

Furthermore, by harnessing AWS’s seamless geographic reach, the team was able to launch the service in any region in a matter of hours not days.

 

Sector

Fintech

Solutions

AWS — EC2, RDS, S3, SQS
Splunk
IaC/automation
Devsecops
Java

 

Impact

The award-winning application was the first of its kind to achieve PCI DSS Level-1 accreditation.

By removing the need for on-premise infrastructure, it enabled organisations of any size in any location to increase protection of their customers’ most sensitive data — enabling business to flow more freely, and creating a stronger bond of trust between buyer and seller.


Making a social-network platform fit for growth https://soimplement.com/case-studies/making-a-social-network-platform-fit-for-growth/ Wed, 04 Mar 2026 23:41:01 +0000

Making a social-network platform fit for growth

A private social-network platform was hampered by infrastructure performance, scalability, security, reliability and cost issues that were liable to destroy any prospect of it achieving the business’s substantial potential.

Called in to help, our team was able to quickly stabilise the situation and optimise the infrastructure and reconfigure engineering operations to put the business back on a path to growth.

Challenge

When a new CTO joined the company — a provider of private social networks whose platform enables organisations to create customised digital communities — it quickly became clear to him that they would be unable to execute their strategy unless action was taken to address their AWS infrastructure. The platform was struggling on multiple counts:

  • Reliability — the platform was prone to frequent service outages;
  • Scalability — the platform was struggling to cope during peak workloads, to the point where, even during planned events, it was becoming unusable;
  • Cost — platform costs were exceptionally high — unsustainable if they were to make a success of the planned mass-market business model;
  • Analytics — only limited monitoring and business intelligence existed. This not only affected platform management, but, at an application level, there was no visibility of metrics that would enable it to deliver product-led growth;
  • Security — steps being taken to protect user data were significantly below where they should be.

Recognising he needed specialist expertise, he called upon our resources to help.

Approach

Working with the CTO and the CEO, a three-stage plan of action to optimise the infrastructure and operations was agreed:

  • Identify quick wins — i.e. immediate opportunities to stabilise platform performance to protect user experience and, secondarily, reduce platform costs;
  • Deeper-dive analysis and recommendations — a comprehensive assessment of infrastructure fit against business need and priorities, root-cause analysis of performance deficiencies and design-optimisation recommendations;
  • Implementation acceleration — work with the client’s development team to accelerate implementation of the planned infrastructure changes, establish operational best practices, including deployment of platform monitoring and customer-behaviour analytics.

 

Phase 1: Identify quick wins

The team quickly ascertained that a significant cause of poor service reliability was operations-related. By creating greater separation of duties across the technology team and introducing more effective change-control processes, many of the problems disappeared. In addition, event logging and monitoring using Splunk was set up so that, when issues did arise, they could be diagnosed faster.

In parallel, the team analysed infrastructure cost effectiveness. Workloads, platform-service usage and billing were analysed. As a result, services were rationalised and ‘right-sized’, leading to a 75% cost saving within 12 weeks of project commencement — worth $400k per year.

Security was also strengthened through the implementation of a range of improved identity and access-management measures.

Phase 2: Deeper-dive analysis and recommendations

The company’s product line had evolved over time into three separate application stacks, which created significant inefficiencies and inflexibilities across engineering and commercial operations. A plan was therefore agreed to consolidate them into one multi-tenant platform based on a microservices architecture that could scale with the business’s planned growth.

An important part of the architectural review was selecting appropriate cloud applications and services to use. In such a rapidly evolving technology landscape, it was important to choose wisely, assessing not just the immediate capability of a given product, but longevity from a support and skill-pool perspective.

Areas the team considered in depth included container orchestration, IaC (infrastructure-as-code), platform monitoring and application intelligence. After evaluating different options, it was decided to use microservices on AWS’s Kubernetes implementation (EKS), Terraform for IaC, and a combination of Splunk and Signal FX for infrastructure and security-event management, and application observation and tracing.

Phase 3: Implementation acceleration

Work moved onto infrastructure rebuild, and setting up of new devsecops practices.

Pent-up demand for the service meant that speed of execution was vital. Infrastructure refactoring proceeded hand-in-hand with software engineering, which was tasked with transforming the code to a more robust microservices-based stack.

Although the software team was relatively large and experienced, it was spread across one in-house and two outsourced teams based in the UK, India, Belarus, Switzerland and the USA. To maximise momentum across such a dispersed team, a cross-functional leadership ‘pod’ was established. This ensured that the entire team had a clear understanding of the vision and that software and infrastructure development could be tightly coordinated — underpinned by a standardisation of working processes.

With so many moving parts — upwards of a thousand elements across microservices, security groups, IAM roles and AWS services — enforcing common terminology was a small but critical task: it avoided confusion and established unambiguous documentation and auditing of processes.
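
A convention like this is easy to enforce mechanically. As a minimal sketch — the pattern and the example names below are purely illustrative, not the team’s actual standard — a short script run in CI can flag any resource whose name breaks the agreed format:

```python
import re

# Illustrative naming convention: <env>-<name>[-<suffix>...], all lower case.
# e.g. "prod-feed-svc", "staging-auth-sg". Hypothetical, not the real standard.
NAME_PATTERN = re.compile(r"^(dev|staging|prod)-[a-z][a-z0-9]*(-[a-z0-9]+)*$")

def invalid_names(names):
    """Return the resource names that break the convention."""
    return [n for n in names if not NAME_PATTERN.match(n)]

resources = ["prod-feed-svc", "staging-auth-sg", "ProdPaymentsRole", "dev-media-s3"]
print(invalid_names(resources))  # ['ProdPaymentsRole']
```

Run as a pipeline step, a check like this turns a terminology agreement into something that cannot silently drift.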

Security played a key part in the team’s methodology — in cloud-component choices, platform design and operations. For example, a devsecops approach ensured that CI/CD (continuous-integration, continuous-delivery) pipelines had appropriate security measures and auditability built in. Full rollout of Splunk, combined with SignalFx, supported not only infrastructure monitoring but also threat detection, prevention and incident response — SignalFx providing the traceability and observability that is essential in a complex microservices environment.
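
To make the auditability idea concrete, here is a minimal sketch of how a pipeline step might forward a structured audit event to Splunk’s HTTP Event Collector (HEC). The endpoint URL, token and event fields are placeholders; a real pipeline would load them from secured configuration:

```python
import json
import time
import urllib.request

# Placeholder HEC endpoint and token — substitute real, secured values.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_payload(event: dict, sourcetype: str = "cicd:audit") -> dict:
    """Wrap a structured audit event in Splunk HEC's event envelope."""
    return {"time": time.time(), "sourcetype": sourcetype, "event": event}

def send_security_event(event: dict) -> None:
    """POST the event to Splunk HEC; raises on non-2xx responses."""
    req = urllib.request.Request(
        HEC_URL,
        data=json.dumps(build_hec_payload(event)).encode("utf-8"),
        headers={
            "Authorization": f"Splunk {HEC_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)

# e.g. audit a deployment step (requires a reachable Splunk instance):
# send_security_event({"action": "deploy", "service": "auth-svc", "actor": "ci-bot"})
```

Because every pipeline action lands in the same index as infrastructure and security events, deployments, threats and incidents can be correlated in one place.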

Leveraging Splunk to enable product-led growth

The company’s business model put its product — private social networks — at the heart of its go-to-market strategy: product experience drives user acquisition and usage, which in turn drives monetization through premium-feature adoption and ad revenues. A cornerstone of this “product-led growth” strategy was effective customer-behaviour analytics that could provide decision support for prioritising feature development and optimising the customer journey and advertising.

As Splunk was rolled out to support platform management, it quickly became clear to the commercial team how it could support their goals too. Dashboards and reports were set up to show who was using a given network, and how and when different user types engaged with it.
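
The underlying aggregation is simple. As an illustrative sketch — the event records and field names below are invented, not the company’s actual schema — counting events per user type and hour of day yields the ‘who’ and ‘when’ that a dashboard panel would chart:

```python
from collections import Counter
from datetime import datetime

# Invented event records — in practice these were drawn from Splunk-indexed logs.
events = [
    {"user_type": "creator", "action": "post",  "ts": "2026-03-01T09:15:00"},
    {"user_type": "member",  "action": "view",  "ts": "2026-03-01T09:40:00"},
    {"user_type": "member",  "action": "view",  "ts": "2026-03-01T21:05:00"},
    {"user_type": "creator", "action": "reply", "ts": "2026-03-01T21:30:00"},
]

def usage_by_type_and_hour(events):
    """Count events per (user type, hour of day): the 'who' and 'when'."""
    return Counter(
        (e["user_type"], datetime.fromisoformat(e["ts"]).hour) for e in events
    )

print(usage_by_type_and_hour(events))
```

In Splunk itself this would be a saved search feeding a dashboard rather than a script, but the shape of the analysis is the same.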


Sector

Enterprise, consumer

Solutions

Technology security in line with ISO27001 and Cyber Essentials Plus
Splunk
SignalFx
Kubernetes
Terraform
Microservices
AWS EKS, EC2, RDS, S3, SQS, Cognito, Lambda

Impact

With platform reliability and performance stabilised, the infrastructure properly autoscaling, security-hardened and costing 75% less to run — and a clear line of sight into user behaviour — the business was primed for growth and fit for a bright future.

Want to accelerate?

]]>
Building a global PCI-DSS Level-1 accredited platform https://soimplement.com/case-studies/building-a-global-pci-dss-level-1-accredited-platform/ Wed, 04 Mar 2026 23:40:21 +0000 https://soimplement.com/?p=952

Building a global PCI-DSS Level-1 accredited platform

When a new application was developed to help retailers protect their customers from the multi-billion-dollar problem of payment-card fraud, the team was tasked with transforming it from an expensive, complex on-premise solution into a global cloud application that could be adopted by large and small businesses alike.

The result was an award-winning AWS-hosted service that was the first of its kind to achieve PCI DSS Level-1 accreditation — one that would debunk the myth that security and application flexibility and scalability don’t mix.

Challenge

Keeping customers’ payment information secure is a fundamental business requirement. Unsurprisingly, the payment-card industry’s data-security standard, PCI DSS, requires sellers taking payments over the telephone to implement significant measures to protect their consumers.

When a new secure method of taking telephone payments emerged that enabled customers to use their keypad to communicate card numbers during a call without exposing them to the seller’s agent, it was a game changer. Contact centres would no longer need to place payment-taking agents in secure, monitored areas away from other operations. Small merchants could take payments without exposing their customers to real or perceived risk. And card users could feel safe that they were not exposed to potential fraud. 

However, there was one problem: it was only available as an on-premise installation requiring substantial time and cost to install. The challenge was how to replicate the solution as a SaaS service that would remove this barrier to adoption for large and small merchants alike.

Approach

For maximum flexibility and scalability, the obvious answer was to build a multi-tenant service on AWS. However, this created a further technical challenge: how to secure a shared infrastructure to the level demanded by PCI-DSS for this kind of application.

Received wisdom was that security to the level required would inevitably restrict application flexibility and scalability — many questioned whether it was even possible on a multi-tenant public cloud. The team set out to prove the doubters wrong.


Working closely with a security architect and QSAs (qualified security assessors) from Amazon, the team developed a three-pronged approach to system security: 

  • Perpetual image ‘hardening’: An infrastructure-build process was developed that ensured any instance brought into service was built from the virtual equivalent of bare metal to a prescribed, automated security-hardened specification;
  • Security-centric continuous-delivery pipeline: A CD software-release process with security controls, measures and auditability intrinsic to it — in short a rigorous application of what is now referred to as “devsecops”;
  • Comprehensive event monitoring: Using Splunk, the team was not only able to actively monitor and manage platform health, but was also able to monitor its estate to detect and prevent threats and optimize incident response.
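
As an illustration of the second prong, a release gate in the continuous-delivery pipeline can refuse to ship when a security scan reports findings at or above an agreed severity. This is a hedged sketch — the severity model and findings below are invented, and a real pipeline would read them from its scanner’s report rather than inline data:

```python
# Hypothetical severity model — real pipelines would take this from their
# scanner's configuration.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def release_allowed(findings, block_at="high"):
    """Gate a CD release: fail if any finding is at or above the threshold."""
    threshold = SEVERITY_RANK[block_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

scan = [
    {"id": "CVE-2026-0001", "severity": "medium"},
    {"id": "CVE-2026-0002", "severity": "critical"},
]
print(release_allowed(scan))  # False
```

Making the gate a plain, versioned function keeps the control itself auditable — the same property PCI-DSS assessors look for in the release process.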

Through a process of creative problem solving and by putting information security at the heart of the systems-and-operational design process, the team created a PCI-DSS-compliant application without compromising the intrinsic flexibility and scalability advantages of cloud. 

Furthermore, by harnessing AWS’s seamless geographic reach, the team was able to launch the service in any region in a matter of hours, not days.


Sector

Fintech

Solutions

AWS — EC2, RDS, S3, SQS
Splunk
IaC/automation
Devsecops
Java


Impact

The award-winning application was the first of its kind to achieve PCI DSS Level-1 accreditation.

By removing the need for on-premise infrastructure, it enabled organisations of any size in any location to increase protection of their customers’ most sensitive data — enabling business to flow more freely, and creating a stronger bond of trust between buyer and seller.

Want to accelerate?

]]>
Outdated sales engines offer software start-ups a disruption opportunity in enterprise markets https://soimplement.com/executive-analysis/outdated-sales-engines-offer-software-start-ups-a-disruption-opportunity-in-enterprise-markets/ Wed, 04 Mar 2026 23:28:24 +0000 https://soimplement.com/?p=927
The sales engines of many if not most enterprise-software firms are predicated on a decades-old model, leaving them open to attack from new entrants that take a product-led growth strategy.

Outdated sales engines offer software start-ups a disruption opportunity in enterprise markets

In a fascinating interview on the Start-up Scaleup Game Plan podcast, Hussein Kanji, co-founder of London-based VC Hoxton Ventures, lamented the inability of European enterprise-software start-ups to achieve “hyperscale” growth — held back by poor “sales engines” when compared to their North American counterparts.

Kanji, named by the UK’s Daily Telegraph as “Europe’s most influential technology investor”, is well placed to judge. He has peered under the hood of many firms on both sides of the Atlantic.

This is not just a European start-up issue. Many enterprise software markets are dominated by incumbents running outdated, inefficient sales engines that put a drag on growth and expose them to attack by new entrants.

Time for change

There is a shift happening in how businesses, large and small, adopt and buy applications. Rooted in the consumerization of software, it is changing the way software firms go to market.

Called “product-led growth” (PLG), the approach takes a bottom-up, user-centric route to customer acquisition. In contrast to traditional top-down sales, PLG aims to put applications in the hands of users as early as possible and then draw them onto higher-value, higher-price packages.

While the principle is not new, its applicability to complex product and sales scenarios is. Early proponents like Dropbox have been joined by Splunk (enterprise security-information and event management), Atlassian (software-development tools), and Slack (collaboration software), to name a few.

PLG offers start-ups the potential to disrupt markets where incumbents are hanging on to old-school sales engines — and deliver the hypergrowth VCs like Hoxton expect.

Misfiring sales engines

Revenue operations at most enterprise-software firms are still based on a model conceived twenty years ago when the advent of the internet moved power from seller to buyer.

In fact, many resemble a digital mutation of twentieth-century outbound selling: spray the internet and subscribers with tired informational content; generate “marketing-qualified leads” based on the first whiff of online engagement; send in sales to dial for victory.

Achieving results using the same formula is getting tougher. Harder to get heard above the cacophony of information everyone is pumping out, and harder to connect with decision-makers — ‘warm’ leads or not.

Part of the reason is what Andrew Chen called “the Law of Shitty Clickthroughs”: over time, all marketing strategies result in declining clickthrough rates.

But there is a more fundamental issue at play: buyers and users have become more ‘tech-savvy’ and self-reliant. From discovery to deployment, they increasingly expect a frictionless, hassle-free experience.

Selecting, installing and using new applications is part of everyday life. Half of all knowledge workers have grown up in the digital world. They appreciate the power of technology to improve their working lives. And they are more inclined to try solutions for themselves.

They are also less patient than ever. Disqualification looms large if you can’t speak to their needs in a few words and then demonstrate immediately how you address them.

Development innovations are accelerating the shift

Cloud computing may have started an application revolution, but innovations in UX technology and practices, and code-free application integration, are propelling it forward.

Improvements in UI design are making application navigation easier. Customer-journey mapping, and role- and context-based UI rendering, are enabling application functionality to be aligned to user need and ability at any given point in time. Plus, business configuration is more likely to be an incremental in-flight task than a pre-use big bang.

Application-usage analysis, powered by applications like Amplitude or Splunk, enables timely intervention by virtual or human helpers to ensure users get the most from their application, adopt more sophisticated use — and ultimately increase retention.
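
As a sketch of how such an intervention trigger might work — the funnel steps and user data below are invented for illustration — one can map each user stalled at a journey step to the next action worth prompting:

```python
# Invented onboarding funnel and per-user progress, of the kind that
# usage-analytics tooling surfaces.
FUNNEL = ["signed_up", "created_profile", "invited_member", "first_post"]

def next_nudge(progress, at_step="created_profile"):
    """Map each user stalled at `at_step` to the next funnel step to prompt."""
    nxt = FUNNEL[FUNNEL.index(at_step) + 1]
    return {user: nxt for user, step in progress.items() if step == at_step}

progress = {
    "alice": "first_post",
    "bob": "created_profile",
    "carol": "created_profile",
    "dave": "signed_up",
}
print(next_nudge(progress))  # {'bob': 'invited_member', 'carol': 'invited_member'}
```

The output of a rule like this might feed an in-app prompt, an email, or a human success manager — the timely intervention the paragraph above describes.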

Sales 3.0

If you can build an application that is easy for users to adopt — ergonomically, technically and economically — you change the dynamic of the sale. You reduce friction, forge a stronger relationship earlier, open up the possibility of viral growth, and lower customer-acquisition costs (CAC).

Zoom zoom

Last year I observed at close quarters an enterprise win for Zoom with one of the world’s largest banks. The circumstances surrounding the deal are revealing.

Some two years earlier, a competitor had been selected to be the bank’s collaboration tool of choice. It was a classic top-down sale that leveraged the vendor’s existing relationship built over many years, supplying a range of products.

Unfortunately, deployment across the firm’s sprawling organisation of over 200,000 employees in 50-plus countries was a challenge — not just technically, but in getting buy-in from its vast user base. For the bank’s implementation team, as well as the vendor, the project was a change-management nightmare.

Then, seemingly out of nowhere, in swooped Zoom’s enterprise-sales team. Although the specifics of the deal were unclear, it is safe to assume that, by the time the Zoom account director met the Queen (or King) Bee in the C-suite, Zoom already had thousands of the bank’s workers across the hive using and advocating the product.

The Zoom example demonstrates how PLG’s bottom-up sales approach not only complements top-down enterprise sales models, but also reduces risk, CAC and sales-cycle time. It also shows how PLG offers strategic advantage, enabling new entrants to outflank established players and move more easily between small, mid and large-enterprise sectors.

Putting product in your sales engine

“Product-led” can sound like an oxymoron to executives schooled in enterprise sales. But in a self-service world, it’s the product that does the selling, at least initially.

Complexity of the customer use case or product sophistication should rarely be a barrier to PLG. Most customer use cases can be broken down into discrete tasks against which accretive functionality can be applied.

But this can only be done if you have an effective software and product platform on which to build.

Putting social-network provider on path to hypergrowth

When a software firm providing private social networks for brands and influencers wanted to put its foot on the gas, it discovered a problem.

Having proven the concept and secured pilot customers, it became clear that their first-generation platform could not deliver their vision — a self-service model for individuals and organisations to set up and manage their own monetizable social network.

Technology-transformation consultants, Soimplement, were called in to put the business back on track. Within a range of actions undertaken to upgrade product and engineering operations, the following were particularly pertinent to enabling the company’s product-led growth:

  • Consolidating discrete products into a single, multi-tenant platform on which new features could be rapidly developed and rendered in different service permutations
  • Introducing customer-behaviour analytics to optimise the end-user experience, and provide feedback for feature development and packaging
  • Creating cross-functional ‘growth teams’, comprising marketing, product and engineering disciplines, tasked specifically to optimise customer journeys


With platform reliability and performance stabilised, the infrastructure properly autoscaling, security-hardened and costing 75% less to run — and a clear line of sight into user behaviour — the company was primed and ready to fly.

Why are firms slow to make the change?

PLG is not suitable for every software application. If you’re selling pure back-office ‘plumbing’ as opposed to applications with end users, it may not be applicable. But often the biggest hurdle is mindset.

Here are some common objections and misconceptions:


Dogma

Wrong thinking:
  • (1) It only works for simple products
  • (2) Buyers won’t appreciate our full value unless Sales are involved

Right thinking:
  • PLG is driving rapid growth for highly sophisticated applications like Splunk, so why not us?
  • We can break down our rich functionality into bite-size packages
  • We can create complementary tools for prospects using the same platform
  • Our enterprise sales team can lubricate the buyer journey, not block it — especially where the ultimate buyer is not a user

Myopia

Wrong thinking:
  • No-one competes in this way in our market

Right thinking:
  • We can create a platform to win more business, protect against new entrants, and attack adjacent markets more easily

Paranoia

Wrong thinking:
  • We don’t want to divulge the strengths/weaknesses* of our product to competitors by making it readily available (* delete as appropriate)

Right thinking:
  • We have a great UI and functionality; the faster we secure the market the better — OR — we have an awful UI; let’s use a PLG model to prioritise development of new design and functionality

Fear

Wrong thinking:
  • Moving to PLG involves major change and risk

Right thinking:
  • We can mitigate the risk by taking an MVP approach and experimenting

Avarice

Wrong thinking:
  • (1) Free or low-priced entry options will undermine full-product pricing
  • (2) Publishing our (high) enterprise pricing will put off buyers

Right thinking:
  • Publishing pricing is a trade-off between velocity and revenue. But we can devise different packages — including keeping high-end packages POA. And we can experiment.
  • How can we create an entry-level product? Should we be charging for it at all?

The long view

Many enterprise software firms still operate a go-to-market model that was conceived decades ago, for an era where the inexperience of buyers and immaturity of applications required sales intervention.

As a result, product markets worldwide are ripe for disruption by new entrants — or incumbents — that take a product-led growth strategy, one that recognises the pivotal role product design can play in enabling early, easy product adoption.

In 1955, the acclaimed industrial designer Henry Dreyfuss wrote in his book “Designing for People”:

“When the point of contact between the product and the people becomes a point of friction, then the [designer] has failed”

The same thing could be said of sales processes.

If you enjoyed this article you may like this

Investor protection

For many investors in software start-ups, engineering is the least understood function and yet one of the biggest areas of executional risk. This need not be the case. 

Here are six areas to probe for an early-warning of poor health.

Want to accelerate?

]]>
Slowing the sales revolving door https://soimplement.com/executive-analysis/slowing-the-sales-revolving-door/ Wed, 04 Mar 2026 23:19:48 +0000 https://soimplement.com/?p=921

Slowing down the sales revolving door

Think before you look for another CRO; growth is now a team sport — one where product leads not follows.

Enterprise-software sales is a revolving door that only ever seems to spin faster. 

According to the Bridge Group, the average tenure of a Chief Revenue Officer is now barely two years. For junior sales staff it’s even worse, with SDRs — sales development representatives — lasting just 1.5 years. In 2010, 44% of respondents reported an average sales tenure of more than three years; today, just 8% report that kind of longevity.

Sales live and die by their numbers. Soccer strikers aren’t measured on how many shots they get on target — it’s goals alone that count. But businesses fail to grow for all sorts of reasons, many outside the CRO’s control. Equally, employees whose compensation is skewed heavily towards results are less likely to stick around if they can’t achieve their potential — not least because future employers will see earnings as proof of performance.

Yet, by any measure, a two-year tenure smacks of a systemic problem — especially when you factor in the time and cost of hiring and onboarding. Yes, life is getting harder in enterprise software — competition is fiercer. But there is also a structural shift happening.

Buyer behaviours are changing, yet the sales models of most enterprise-software companies are failing to keep up. The go-to-market practices that CEOs and CROs know and trust are becoming less and less effective.

The problem is not so much that firms overplay the importance of the CRO in driving growth; rather, they fail to recognise the increasingly pivotal role that product strategy plays in the client-acquisition process.

Life ain’t what it used to be 

Selling into large enterprises the traditional way — essentially from the top down — is getting tougher: harder to engage with decision makers and influencers at the tip of the pyramid; protracted sales cycles; expanding decision-making groups. These problems are here to stay.

But that’s only half the story. Buyers and users are becoming more ‘tech-savvy’ and self-reliant. Increasingly, from discovery to deployment, they expect a frictionless, hassle-free experience, and they naturally gravitate to vendors that deliver it — self-service demos, free trials or entry options, transparent pricing models, progressive adoption journeys, and UIs that design away complexity to offer simplicity and style. This is the essence of “product-led growth”.

And yet, most enterprise-software vendors continue to pursue a siloed operating model: engineering builds, marketing creates demand for sales, and sales sell. Locked in a form of product myopia that sees product only as fixing the buyer’s end state, they continue to rely on client-acquisition processes that have barely changed since the advent of the internet.

Night and day 

One of the delights of working in sales is the immediate feedback. You find out pretty quickly what works and what doesn’t.

I caught up with an old friend, Leo. An experienced, battle-hardened enterprise-sales executive, Leo recently moved to Datadog, a flourishing data-analytics company and arch-exponent of PLG. I was intrigued to know what it was like at the coal face: how did Datadog’s product-led approach compare with traditional enterprise sales engines?

“It’s like night and day, mate!” (Leo lives in Utah but is as English as Bow Bells.)

“There are so many ways I can grow an account,” he enthused. “No barriers to how tactical I can be. I can grow a multi-million-dollar business starting from a $500 per-year entry point. The trick is knowing where to focus my time — what fires to light.”

Datadog’s buyer journey starts with a free trial. “Having a robust product that users can try for free is a great way to build internal advocates — one of our e-commerce customers was experiencing significant downtime and they had no idea why. They used a free trial and solved it in 15 minutes!”

When attacking his target accounts, Leo’s focus is still on the buyers and key influencers that will write the big cheques; but his ability to build momentum and leverage user adoption and advocacy, catalyses that senior-stakeholder engagement.

Sage advice 

Ask any seasoned enterprise sales executive for the secrets of selling into enterprises and you’ll hear comments like: 

  • Seek out tactical wins before strategic ones 
  • Treat large accounts like markets 
  • Find a bridgehead, attack, secure and expand 
  • The earlier you can build influence before they send out RFPs the better 

Wise advice that holds true today. In fact, it is precisely what a product-led growth model delivers.

But this doesn’t apply to me 

It would be easy to dismiss Datadog and their ilk as “different” — that your product simply doesn’t lend itself to ‘carving up’ or ‘dumbing down’ to create similar user journeys. Perhaps this is the case. But ask yourself: 

  • What can you do to reduce friction in your online-buyer interactions? 
  • How could you leverage your product or derivatives of it, to build enduring brand engagement early on? 
  • What are competitors in the small and mid-market space doing? 

To think of PLG purely as ‘freemium’ would be to miss the point. Buyers expect a frictionless experience. Even if no-one in your market operates that way today, someone will soon.

Sales symbiosis 

To maximise velocity and win enterprise markets now requires more than just lobbing the ball to the CRO and waiting for her or him to put it in the net. It needs a symbiotic approach between sales, marketing and engineering that capitalises on the shift in buyer behaviour. For example:

  • Engineering: Product strategy feeds from go-to-market (GTM) strategy. Product design feeds from user and buyer journeys, and the ability to flex them 
  • Marketing has a new target: user sign-ups. They work with engineering to optimise journeys. They provide sales with PQLs — product-qualified leads. 
  • Sales collaborate to optimise GTM strategy and user journeys. Develop sector and account strategies and focus effort — sales and marketing — to achieve maximum penetration, velocity and value.

Growth depends on effective product strategy more than ever. Strategy development and execution also require tighter collaboration between commercial and engineering functions than is typically the case in enterprise software. It’s no surprise that forward-looking businesses like Datadog operate cross-departmental “growth” teams within their functional hierarchies.

Slowing down the revolving door 

To steal a quote from Graham Hawkins’ excellent book, “The Future of the Sales Profession”: sales has undergone more change in the last decade than in the previous 130 years.

  • Twenty years ago, the internet liberated and empowered buyers; 
  • Ten years ago, cloud computing started an application revolution which radically changed buyers’ behaviours and expectations; 
  • Today buyers are more tech savvy than ever and are again forcing change in how organisations connect, market and sell their software products. 

What does this mean for the CRO revolving door? Probably nothing in the short-to-medium term. CRO will remain a high-expectation, high-stakes role.

But one thing is for sure: the winners will be those that understand these changes and can respond. Tech-savvy buyers demand tech-savvy CROs.

But tech-savvy CROs need sales-savvy technology and product chiefs, and integrated growth teams beneath them.

If you enjoyed this article you may like this

Investor protection

For many investors in software start-ups, engineering is the least understood function and yet one of the biggest areas of executional risk. This need not be the case. 

Here are six areas to probe for an early-warning of poor health.

Want to accelerate?

]]>