<![CDATA[Cloudnative.ly - Medium]]> https://cloudnative.ly?source=rss----cc8419b0e3e8---4 https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png Cloudnative.ly - Medium https://cloudnative.ly?source=rss----cc8419b0e3e8---4 Medium Tue, 17 Mar 2026 11:50:00 GMT <![CDATA[Human Connection Is Essential]]> https://cloudnative.ly/human-connection-is-essential-4d6b1113e0c0?source=rss----cc8419b0e3e8---4 https://medium.com/p/4d6b1113e0c0 Tue, 09 Dec 2025 11:24:48 GMT 2025-12-09T11:24:47.389Z In December 2023, an 88-year-old Italian pensioner named Caterina received a water bill for €15,339 instead of the €65 she actually owed.

The automated billing system had misread her meter, claiming she’d used enough water to fill an Olympic swimming pool. The shock of seeing this impossible sum and watching her life savings vanish in an automatic withdrawal triggered a heart attack.

After more than a month in intensive care, she passed away on Christmas Eve.

The water company later admitted their error and returned the money. But by then, it was too late.

This tragedy reveals what happens when we remove human oversight from systems that affect vulnerable people. An automated meter reading, an algorithmic bill calculation, an automatic bank withdrawal: each step executed flawlessly by machines, yet no one asked the obvious question: “Does it make sense for an elderly woman living alone to use 4 million litres of water in two months?”

A human reviewer would have caught this immediately. A person would have recognised the impossibility, investigated the discrepancy, and made a phone call. Instead, the system worked exactly as programmed, efficiently processing an error that cost a life.
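The kind of check a human reviewer would have applied can be automated too, as a guard rather than a replacement. The sketch below is purely illustrative (the function name, thresholds, and the idea of an escalation label are assumptions, not the utility’s real system): a reading wildly out of line with the customer’s own history is routed to a person before any money moves.

```python
# Hypothetical sketch of the plausibility guard the billing pipeline lacked.
# Names and the max_ratio threshold are illustrative assumptions.

def review_reading(current_litres: float, historical_avg_litres: float,
                   max_ratio: float = 5.0) -> str:
    """Flag a meter reading for human review when it is wildly out of line
    with the customer's own history, instead of billing it automatically."""
    if historical_avg_litres > 0 and current_litres / historical_avg_litres > max_ratio:
        return "escalate_to_human"  # a person investigates before any withdrawal
    return "auto_bill"

# A customer averaging ~10,000 litres suddenly "using" 4 million:
print(review_reading(4_000_000, 10_000))  # escalate_to_human
print(review_reading(9_500, 10_000))      # auto_bill
```

The point of the sketch is that the escalation path exists precisely so that the efficient default does not run unsupervised on the cases that matter most.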

Empathy Should Never Be Outsourced

Consider the last time you received genuinely meaningful feedback. Was it valuable because of the information it contained, or because of who delivered it and how they understood your context?

When a teammate takes time to thoughtfully review your work, they’re not just checking for errors and compliance. They’re considering your growth trajectory, remembering previous conversations about your goals, and tailoring their feedback to your specific learning style. They notice when you’re struggling and adjust their approach accordingly.

An AI can simulate empathetic language. It can generate phrases that sound caring. But empathy isn’t about the right words. It’s about genuine understanding born from shared experience and mutual investment in each other’s success.

Keep Human Presence in Feedback, Care, and Conflict

There’s a difference between correction and growth. AI can identify errors and suggest improvements. Only a human mentor can help you understand why you keep making the same mistakes and work with you to build better mental models and habits.

Real feedback is relational. It requires trust built over time, an understanding of someone’s aspirations, and the ability to deliver hard truths with genuine care. When we outsource feedback to AI, we reduce it to information transfer, stripping away the relationship that makes feedback transformative.

When a critical system fails or a deadline crisis hits, AI can help diagnose problems and suggest solutions. But when that same colleague is burning out from too many emergencies, they need human support. They need someone to notice the exhaustion in their voice during meetings, to suggest workload redistribution, and to check in with genuine concern.

Care isn’t efficient. It’s supposed to be inefficient. The inefficiency (the time taken, the attention given) is precisely what communicates value and builds psychological safety.

Conflict is uncomfortable, which makes it tempting to delegate to an “objective” AI mediator. But conflict, handled with care, is one of our most powerful tools for deepening understanding and strengthening teams.

When two colleagues disagree on a strategic approach, the resolution isn’t just about finding the optimal solution. It’s about understanding different perspectives, acknowledging expertise, negotiating trade-offs, and building shared ownership. These human negotiations create buy-in and trust that no AI-mediated decision could achieve.

Use AI to Create Time for People, Not Instead of People

The most profound promise of AI isn’t that it can replace human interaction. It is that it can free us from tasks that prevent meaningful human interaction.

How many hours do we lose to repetitive reports, data entry, scheduling, and administrative tasks? These activities don’t just consume time. They consume energy and attention that could be directed toward collaboration, innovation, and relationship-building.

When AI handles these mechanical tasks, it doesn’t eliminate the human element. It liberates it. Instead of spending an hour formatting spreadsheets, you can spend that hour mentoring a junior colleague. Instead of writing another status update, you can have a real conversation about project challenges.

The equation isn’t “AI + Less Human Time = Same Output.” It’s “AI + Same Human Time = Deeper Human Connection + Better Output.”

Consider performance reviews. AI can instantly flag metrics, identify patterns, and generate initial assessments. This doesn’t mean we need fewer human reviewers. It means human reviewers can focus on what matters: understanding context, discussing career aspirations, recognising potential, and providing meaningful guidance for growth.

The key is intentionality. When we delegate tasks to AI, we must be deliberate about how we reinvest that saved time. The default tendency is to simply take on more work, to increase velocity without increasing connection. This requires conscious resistance.

The Competitive Advantage of Connection

Organisations that understand this paradox will thrive. While others race to automate every interaction, they’ll build teams where:
• People feel seen and valued, not interchangeable with AI agents
• Innovation emerges from trust and psychological safety, not just computational power
• Retention improves because employees experience genuine investment in their growth
• Complex problems get solved through collective intelligence augmented by, not replaced with, artificial intelligence

A Framework for Decisions

When considering whether to use AI for a task, ask:
1. Does this interaction build a relationship? If yes, keep it human.
2. Does this require understanding context beyond the immediate situation? If yes, keep it human.
3. Would automating this free up time for more meaningful human interaction? If yes, automate it.
4. Is human presence the value being delivered? If yes, preserve it.
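The four questions above apply in order: any “keep it human” answer wins before automation is even considered. As a minimal sketch (the parameter names and return labels are assumptions made for illustration, not part of the original framework):

```python
# Illustrative only: the four framework questions as a triage function.
# Parameter names and return labels are assumptions for this sketch.

def triage(builds_relationship: bool, needs_broader_context: bool,
           frees_time_for_people: bool, presence_is_the_value: bool) -> str:
    """Apply the framework in order: any 'keep it human' answer
    takes precedence over automation."""
    if builds_relationship or needs_broader_context or presence_is_the_value:
        return "keep_human"
    if frees_time_for_people:
        return "automate"
    return "judgment_call"  # none of the questions settle it; decide deliberately

print(triage(True, False, True, False))   # keep_human
print(triage(False, False, True, False))  # automate
```

Note the asymmetry in the ordering: a task only reaches the “automate” branch after every human-centred question has been answered no.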

Closing Reflection

The future of work isn’t human versus AI. It’s human with AI, deliberately and thoughtfully integrated. Every tool we adopt, every process we automate, every efficiency we gain should be measured against a simple standard: Does this enhance our capacity for human connection, or diminish it?

As we work with AI assistants, automate reviews, and generate content with language models, let’s remember that our ultimate product isn’t just what we deliver. It’s the team that creates it, the relationships that sustain it, and the human creativity that imagines what it could become.

The machines are here to handle the repetitive. We’re here for everything else that matters.


Human Connection Is Essential was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Human Judgment Is Paramount]]> https://cloudnative.ly/human-judgment-is-paramount-3c57b929290e?source=rss----cc8419b0e3e8---4 https://medium.com/p/3c57b929290e Mon, 08 Dec 2025 15:21:10 GMT 2025-12-08T15:21:08.832Z I’d been working toward a promotion for six months. The kind of process that defines your year… high stakes, visible, and full of pressure.

I took on extra projects. I coached others. I spent weekends polishing the small things no one asked for, but everyone noticed.

When the time came to write my self-assessment, I treated it seriously. Not as a box to tick, but as a moment to reflect honestly: where I’d grown, where I’d failed, where I needed to stretch next.

It wasn’t performative. It was vulnerable. I cared deeply because this outcome mattered to me.

When my feedback came back, I could tell immediately that no one had written it.

The tone was sterile. The phrasing too neat. Phrases like “continues to leverage leadership skills” and “has demonstrated strong collaboration and alignment with goals” … all the linguistic fingerprints of AI-generated text.

There was no story. No reflection. No sign that anyone had seen me. It was like being hugged by a PowerPoint deck.

And that’s when this principle hit me in the chest:

Never let a tool own the final say on matters that affect people, outcomes, or meaning.

Because this wasn’t a spreadsheet or a sprint retrospective. It was a moment of recognition. A human-to-human judgment call about my growth, potential, and readiness.

And judgment requires presence.

Where we lose something when we automate care

AI can do a lot of things faster and, sometimes, more accurately than people.
But there’s a line between what can be automated and what should be.

When we cross that line, when we automate moments that depend on care, empathy, or ethical discernment, we don’t just save time. We hollow out meaning.

A promotion review, a performance conversation, a decision about a teammate’s future. Those are spaces where ethics and connection live. They’re also the places where AI is most persuasive and most dangerous, because it can sound right without being right.

That’s why this principle exists:

The higher the stakes, the more essential human ethics become.

We have to protect those moments because when trust is at stake, automation isn’t efficiency, it’s erosion.

What happens when we replace judgment with probability

AI doesn’t understand context. It predicts it. AI doesn’t care whether you get promoted. It cares about matching a pattern. So when we use AI for work that involves fairness, emotion, or moral weight, it can only approximate what those things sound like, not what they mean.

That’s why scrutiny precedes trust.

In technical work, we’re used to this idea. We write tests. We verify code. We measure twice before merging to main.

But in communication, we forget that scrutiny is still our job.

When AI drafts something that sounds professional, we stop asking, “Is this true?” and settle for, “Does this read well?”

There’s no bad intent. It’s just misplaced trust.

Automation has made it too easy to skip the very friction that keeps our work ethical.

Where human judgment still matters most

There’s a reason the pillar Human Judgment Is Paramount is connected to the practice We Keep Empathy Human. They aren’t about resisting technology. They’re about resisting dehumanisation.

We keep empathy human.

  • Care and connection can’t be outsourced.
  • Never use AI to deliver feedback, care, or emotional presence.
  • Choose human tone and judgment in communication, even when AI drafts.
  • Use AI for prep; do difficult conversations synchronously.

Empathy isn’t a soft skill. It’s the foundation of trust in teams, leadership, and culture. You can’t automate that and expect the same results.

When we replace the messy human process of giving thoughtful feedback with a “polished” AI summary, we don’t just lose warmth. We lose learning.

Feedback isn’t data transfer; it’s dialogue. It’s the difference between hearing “You met expectations” and hearing “You’ve outgrown this role. Here’s how we help you stretch further.”

That difference is judgment.

Where AI belongs in the process

AI isn’t the enemy here. It’s the assistant or the stagehand, not the speaker. There are good ways to use it:

We create space for people.

  • Admin goes to AI; presence stays with us.
  • Use AI to offload logistics and routine tasks.
  • Use AI to create time for human collaboration, growth, and connection.
  • Preserve space for judgment, story, and empathy.
  • Increase oversight when decisions touch fairness, ethics, or impact on people.

Let AI help you summarise notes, prepare discussion points, organise thoughts, but when it comes to the human parts of the work, show up yourself.

That’s the deal we make when we choose to lead, mentor, or evaluate: we give our attention. AI can speed up the mechanics, but we still owe people the meaning.

The human layer is the quality layer

As we build more automation, it’s tempting to think of “the human in the loop” as a safeguard, or a final QA check before shipping. But that’s not judgment; that’s compliance.

Judgment isn’t a gate. It’s the act of deciding what matters before the system ever runs. The future of AI-assisted work isn’t about replacing human evaluation. It’s about raising its bar: ensuring that as AI handles more routine work, people focus on more meaningful tasks with clarity and care.

Because the truth is, the quality of our output now depends on the quality of our judgment.

Closing Reflection

AI will keep learning to sound more human. That’s inevitable. What isn’t inevitable is whether we let it make us less human in return.

Because care, fairness, and empathy don’t scale. They’re chosen. And when the stakes are highest, judgment isn’t what slows us down. It’s what makes us worth trusting.

Next up: Human connection is essential.


Human Judgment Is Paramount was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Don’t Panic: A Human’s Guide to AI Productivity.]]> https://cloudnative.ly/dont-panic-a-human-s-guide-to-ai-productivity-bc5871e42cf6?source=rss----cc8419b0e3e8---4 https://medium.com/p/bc5871e42cf6 Sat, 06 Dec 2025 08:02:15 GMT 2025-12-06T08:02:15.434Z Contributed by: Shane Harger and Andrea Laforgia

AI is everywhere right now. It promises speed, creativity, and efficiency. It also threatens to overwhelm, create noise, and bring confusion. The truth is simpler: AI doesn’t decide how we work — we do.

This is not a manual for machines. It’s a field guide for people.

A way to keep our judgment, connection, and craft at the centre while still making the most of what AI can offer.

We made this because we needed it ourselves.
Maybe you do too.

This series will explore the values and principles behind a human-first approach to AI in work:

  • Why human judgment must always remain paramount
  • How accountability cannot be transferred to a tool
  • Why clarity matters more than sheer volume
  • And how mastery, intent, and connection can shape AI into an accelerator, not a replacement.

Each post will take one value at a time, share the principles beneath it, and give practical defaults for teams. By the end, you’ll have a working playbook: a field guide you can pick up when AI feels overwhelming or when you want to use it with purpose.

AI won’t replace the best of our practices. It will amplify them or even erode them, depending on how we use it.

This series is about choosing amplification wisely.

Next up: Human judgment is paramount.


Don’t Panic: A Human’s Guide to AI Productivity. was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[The Apprentice Team]]> https://cloudnative.ly/the-apprentice-team-cf55117bacba?source=rss----cc8419b0e3e8---4 https://medium.com/p/cf55117bacba Thu, 09 Oct 2025 09:05:06 GMT 2025-10-09T09:05:03.816Z

During my career, I have spent a lot of time working with early-career software engineers (university graduates, self-taught engineers, boot camp graduates and engineers on apprenticeship schemes). I have recently been thinking about how to onboard a new group of early-career software engineers into teams and organisations.

Many companies will hire a cohort of Early Career Software Engineers a few times a year and then spread the cohort amongst many teams in 1s or 2s around their businesses.

I have seen several teams find it very difficult to onboard and coach Early Career Software Engineers. This could be because of constant delivery pressure, a lack of adequate time set aside for coaching, or a shortage of experienced coaches; or it could simply be that no one in the team is interested in taking on the teaching.

This is where the Apprentice Team comes in.

Instead of taking on a cohort of Early Career Software Engineers and splitting them amongst various teams, you create one or more new teams (7 ± 2 people) made up of the entire cohort, assigning each team an experienced engineer with teaching and coaching experience whose job is to turn them into a fully functioning team. The coach should not be responsible for the team's line management.

At Armakuni we believe that a team should use specific practices and ways of working to become a high-functioning unit of delivery that can create the best experience for their customers.

Such as:

  • Test Driven Development (TDD)
  • Continuous Integration / Continuous Deployment (CI/CD)
  • Pair / Ensemble Programming
  • Product Thinking

The coach will tutor the team in all the skills they need to work on any feature the company requires. To achieve this, the apprenticeship team will go through several phases:

Phase 1: Cohort On-boarding

Months 1–2

The coach will support the team through the company's standard onboarding process and help them set up laptops. For most Early Career Software Engineers, this would be their first time working in the corporate world, so the onboarding may also need to cover things like:

  • Meeting Etiquette
  • Business Tooling — Teams, Slack, Email

Once the onboarding is complete, the coach will start running some introductory training sessions covering topics like:

  • Using Version Control in a Team
  • Agile
  • Product Thinking
  • DevOps Culture

and introducing the team to the product they will be working on.

As very few businesses are going to want an Apprentice Team responsible for an entire product (if they do, then WooHoo! What an opportunity), the apprentice team should be aligned with an existing product team and pull work from them. The apprentice team and the existing team can share the non-programming members of the team (Product Owner, Agile Coach and Designers), who support them in running agile ceremonies, understanding feature requirements, and ensuring their work looks awesome and is user-friendly.

The apprenticeship team shares non-engineering team members with an existing product team

Phase 2: Ensemble Programming (Months 3–6)

Once Phase 1: Cohort On-boarding is complete, the Apprentice Team will start the work that has been identified for them (this should be done with the coach, Product Owner, Agile Coach and team lead of the product team). Working as an ensemble, the team is likely to start with small bug fixes and simple UI or non-functional backend changes (e.g. logging improvements or refactoring). As the group’s confidence grows, they will pick up more complex bugs and small features.

The apprenticeship team takes work from an existing product team

During this phase, the Learning / Delivery ratio will be 60% / 40%.

Learning Time

Learning time will be focused on group learning with a mix of formal training provided by the coach or other experts from around the business.

  • API interfaces
  • Software Architecture
  • Technical Debt & Code Quality
  • Secure by Design
  • Containerisation & Orchestration
  • Automated Testing
    - Behaviour Driven Development (BDD)
    - Test Driven Development (TDD)
  • Observability
  • Continuous Integration, Continuous Delivery & Continuous Deployment

The learning time would also include a simulated group project, allowing the group to get concrete practice of the training topics and exposure to all the technology they will be using on their production project.

Learning Plans

We will also introduce some individual learning into the plan, allowing each engineer to start exploring their own interests.

I like to use the Learning Plan Miro Template we use at Armakuni. It encourages you to look at what you want to be, analyse your Strengths, Weaknesses (reframed as Growth Edges), Opportunities and Threats, design a learning plan, and develop a set of personal objectives and key results (OKRs).

Fortnightly Progress Check-ins

The Coach will meet with the core Supporting Team for Progress Check-ins once a fortnight.

During these meetings, the coach will provide data on the progress of teams, their abilities and deliveries.

The coach will also provide data on the progress of individuals, identifying possible growth areas and working on plans with the group.

These meetings will provide time for the core team to adjust the Learning / Delivery ratio and to decide when to move the group to two smaller ensembles.

Phase 3: Ensemble Splits in Two (Months 6–12)

At the start of phase 3, the cohort will split into two smaller ensembles.

Each ensemble will take on its tickets from the shared backlog.

Each day, the coach will split their time evenly between both teams, allowing each team time to work without the coach as well. This will allow the apprentices to build confidence in their own skills and in their team’s ability.

The apprenticeship team split into two ensembles, both supported by the Engineering Coach

Learning / Delivery Ratio Shift

Towards the end of phase 2, the skills within the team should be such that the amount of time spent learning can be reduced to a 50% / 50% ratio.

Phase 4: Introduction of Pairing (Months 12–24)

The Apprentice Team will start to introduce pairing into their repertoire, enabling them to practise a different way of working and helping them to gain confidence in their ability outside of an ensemble programming session.

The pairs will receive support from the coach for around a quarter of the day at a time. This can be rebalanced if a particular pair gets stuck on their work.

During phase 4 the team should be taking more responsibility for areas of the product, working with Product owners and Agile Coaches to prioritise, plan and deliver their own work. This shift should enable them to start gaining a greater understanding of product engineering and their role in delivering value to the customer.

Learning Time

Towards the end of phase 4, the skills within the team should be such that the amount of time spent learning can be reduced to a 20% / 80% ratio. This should match the learning time allotted to all engineers in the company.

The coach will continue to run structured learning sessions for the team but these will be shorter than some of the earlier sessions. They would start using Samman Coaching at this stage.

The coach will also start to focus on individual learning so each engineer can deepen their knowledge relating to their individual specialisms, helping them to identify what they would like to focus on and creating their own learning path.

Phase 5: Graduation & Continuous Improvement

At this point, the Early Career Software Engineers would no longer be apprentices but fully fledged engineers, capable of working in any strong, long-lived team and of taking on any work.

The Coach will work with each engineer to update individual learning plans, helping the engineers to discover their unknown unknowns and expanding their knowledge in their specialism.

At this time, the existing product team and the apprenticeship team should merge and then split into two mixed product teams, creating two teams made up of engineers with differing levels of experience.


The Apprentice Team was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Adoption of AI does not translate to more profits]]> https://cloudnative.ly/adoption-of-ai-does-not-translate-to-more-profits-6b62841103d4?source=rss----cc8419b0e3e8---4 https://medium.com/p/6b62841103d4 Mon, 06 Oct 2025 16:02:40 GMT 2025-10-06T16:02:36.076Z
Executives trying to figure out whether using AI in their companies translated to higher profits.

The artificial intelligence revolution was supposed to transform business almost overnight. CEOs authorised investments, companies hired talent, and organisations launched ambitious AI pilots. But despite the frenzy of adoption, the promised returns remain frustratingly out of reach for most businesses.

According to Boston Consulting Group’s landmark 2024 research, 74% of companies struggle to achieve and scale value from AI. Even more sobering: only 4% are creating substantial value from their AI initiatives. Meanwhile, adoption rates continue to climb: 78% of organisations now use AI in at least one business function, up from 55% just a year earlier, according to McKinsey’s 2025 State of AI report. These numbers came from a pool of companies that both consulting groups surveyed between 2024 and early 2025.

This disconnect between adoption and value creation isn’t a technology problem. It’s a strategy problem.

The Real Cost of Getting It Wrong

The consequences of rushing into AI without clear objectives are measurable and painful. Take Taco Bell’s experience with AI-powered drive-through ordering. The fast-food chain rolled out voice AI systems across more than 500 locations, confident that the technology would improve order accuracy and cut wait times. Instead, the system made mistakes, frustrated customers, and proved easily manipulated, leading to viral moments like a customer ordering 18,000 water cups to bypass the AI and reach a human server. The company has since paused its rollout to reconsider its approach.

Taco Bell isn’t alone in discovering that AI deployment without proper testing and preparation can backfire spectacularly. Research from MIT Sloan examining U.S. manufacturing firms revealed what researchers call the “productivity paradox”. AI adoption frequently leads to a measurable decline in performance that follows a “J-curve” trajectory (a trendline that shows an initial loss immediately followed by a dramatic gain).

The numbers are stark. Organisations that adopted AI for business functions saw an average productivity drop of 1.33 percentage points. When researchers corrected for selection bias, accounting for the fact that companies expecting higher returns are more likely to be early adopters, the short-run negative impact ballooned to around 60 percentage points.

This isn’t just about growing pains. The decline points to a fundamental misalignment between new digital tools and existing operational processes. Companies are deploying AI systems for predictive maintenance, quality control, and demand forecasting without first investing in the complementary infrastructure these tools require: robust data pipelines, comprehensive staff training, and redesigned workflows.

Without these foundational elements in place, even the most sophisticated AI creates new bottlenecks rather than eliminating old ones. Older, established firms struggle most with this transition. Their decades of ingrained routines, layered hierarchies, and legacy systems prove difficult to unwind. MIT’s research found that older firms actually saw declines in structured management practices after adopting AI. And that alone accounted for nearly one-third of their productivity losses.

The Gap Between Promise and Reality

The optimism surrounding AI remains undeterred despite these challenges. Thomson Reuters’ 2025 Future of Professionals Report found that survey respondents predict AI will save professionals 5 hours weekly within the next year, representing an average annual value of $19,000 per person. Similarly, 87% of executives expect revenue growth from generative AI within three years, with about half projecting increases exceeding 5%.

These are predictions, not realised gains. The gap between expectation and outcome continues to widen.

BCG’s follow-up research in April 2025 identified why most companies fail where leaders succeed. The unsuccessful majority are making three critical mistakes: they aim too low with small-scale productivity initiatives, they spread their efforts too thin by placing too many AI bets, and they neglect workforce development. Struggling companies pursue an average of 6.1 AI use cases simultaneously, compared to just 3.5 for leading organisations. Yet those leaders anticipate generating 2.1 times greater ROI from their more focused approach.

Perhaps most tellingly, less than one-third of companies have upskilled even one-quarter of their workforce to use AI effectively. And most organisations don’t track financial KPIs for their AI initiatives at all, making it impossible to determine whether their investments are paying off.

AI Is Just a Tool; You Need to Know What You’re Building

The fundamental problem is that companies are treating AI as a solution looking for a problem rather than as a tool to achieve clearly defined objectives. This gets the entire process backwards.

Before any organisation selects an AI tool, it must answer three essential questions: What specific business problem are we trying to solve? What measurable outcome defines success? How will we know if this AI implementation is working?

Without clear answers, businesses are essentially buying a hammer and then wandering around looking for nails. The result is wasted investment, disrupted workflows, and demoralised teams who see yet another initiative that promises transformation but delivers confusion.

Leading companies take a radically different approach. They focus on core business processes and support functions with clearly articulated goals: reshape this specific process, improve productivity in that function by a measurable amount, and create this particular new revenue stream. BCG found that successful AI leaders allocate more than 80% of their AI investments to reshaping key functions and inventing new offerings, not dabbling in dozens of disconnected pilots.

These leaders also recognise that AI is remarkably diverse. “AI” isn’t a single technology but a constellation of tools: machine learning models, natural language processing, computer vision, predictive analytics, and generative AI, among others. Each serves a different purpose and requires a different implementation strategy. Selecting the right tool demands understanding both what you’re trying to accomplish and what each technology actually does.

Test Before You Scale

Given AI’s newness and the complexity of integrating it into existing operations, organisations must resist the urge to implement at scale immediately. The path to AI value runs through rigorous testing and data gathering.

Start with one to three high-value, relatively easy-to-implement initiatives. MIT’s research showed that despite early productivity losses, manufacturing firms that adopted AI eventually outperformed non-adopting peers, but only after a period of adjustment during which companies fine-tuned processes, scaled digital tools strategically, and learned from the data their systems generated.

This adjustment period is not optional. It’s where organisations discover the gap between how they thought their processes worked and how they actually work. It’s where they identify which complementary investments in data infrastructure, workflow redesign, or staff training are necessary for AI to deliver value. And it’s where they gather the operational and financial data needed to make informed decisions about scaling.

The companies succeeding with AI follow what BCG calls the “10–20–70 principle”: they dedicate 10% of their efforts to algorithms, 20% to data and technology, and 70% to people, processes, and cultural transformation. This distribution reflects a crucial insight: winning with AI is as much a sociological challenge as a technological one.
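As a quick worked example of what the 10–20–70 split means in practice (the budget figure is illustrative, not from BCG’s research), a hypothetical $5M AI investment would break down as:

```python
# A worked example of the 10-20-70 split; the budget figure is illustrative.

def split_10_20_70(total_budget: float) -> dict:
    """Allocate an AI investment across the three effort areas
    named by BCG's 10-20-70 principle."""
    return {
        "algorithms": round(total_budget * 0.10, 2),
        "data_and_technology": round(total_budget * 0.20, 2),
        "people_process_culture": round(total_budget * 0.70, 2),
    }

print(split_10_20_70(5_000_000))
# {'algorithms': 500000.0, 'data_and_technology': 1000000.0, 'people_process_culture': 3500000.0}
```

The striking part of the allocation is that the technology itself gets the smallest share; most of the spend goes to the humans who have to change how they work.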

Testing reveals which workflows need reimagining. It identifies which team members need which skills. It exposes assumptions about how work gets done that turn out to be wrong. All of this learning must happen before committing to full-scale implementation, because the cost of getting it wrong at scale is exponentially higher than the cost of careful experimentation.

The Path Forward

The AI revolution is real, but it won’t happen on the timeline or in the manner that early hype suggested. The technology itself is powerful and transformative. The barrier to value isn’t the tool but how organisations are trying to use it.

Companies must shift from an adoption-first mentality to a strategy-first approach. That means defining clear business objectives before evaluating tools. It means recognising that different AI technologies solve different problems and choosing accordingly. It means investing in the people, processes, and infrastructure that allow AI to function effectively within existing operations.

Most importantly, it means treating AI implementation as a learning process that requires testing, measurement, and iteration before scaling. The 26% of companies successfully generating value from AI aren’t lucky; they’re disciplined. They know what they’re trying to accomplish, they’ve selected tools matched to those objectives, they’ve tested rigorously, and they’ve built the organisational capabilities to execute effectively.

For the 74% still struggling, the message is clear: slow down to speed up. The race isn’t to deploy AI fastest but to deploy it effectively. That requires knowing what you’re building before you pick up the tools.

Sources:


Adoption of AI does not translate to more profits was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Stop Buying AI, Start Building AI Readiness]]> https://cloudnative.ly/stop-buying-ai-start-building-ai-readiness-0b5c667caea9?source=rss----cc8419b0e3e8---4 https://medium.com/p/0b5c667caea9 Fri, 03 Oct 2025 11:18:23 GMT 2025-10-03T11:27:30.097Z
Photo by Omar Lopez-Rincon on Unsplash

While most companies put huge focus on which AI tools to buy or which vendors to partner with, the smart money is on a different question: how quickly can your organisation experiment, iterate, and deploy AI-driven features? The real competitive advantage doesn’t necessarily come from having the best AI — it comes from having the organisational capability to use AI effectively.

The AI Reality Check

Walk into any corporate boardroom these days and you’ll hear the same “AI” buzzwords being thrown around like confetti. But here’s the uncomfortable truth: most executives using these terms couldn’t define them. With Gartner forecasting worldwide generative AI spending to hit $644 billion in 2025, a staggering 76.4% increase from 2024, misunderstandings are about to get very expensive.

The problem isn’t just semantic. When companies don’t understand what they’re buying, they make terrible decisions. They purchase “AI solutions” that are actually glorified spreadsheet macros. They implement chatbots powered by basic rule systems and call it machine learning. They spend millions on technology that doesn’t match their actual needs, then wonder why their “AI transformation” feels more like expensive AI theatre.

The AI Fantasy

In businesses across the world, AI appears to be something approaching mystical status. Executives treat it like a magic wand that will solve decades of operational inefficiencies, competitive pressures, and strategic challenges, and it’s leading to some spectacular failures.

The typical executive AI fantasy goes something like this: implement some AI tools, automate complex decision-making, and achieve human-level insights across all business functions, resulting in improved customer satisfaction across the board with no uplift in skills or additional headcount. The reality is messier, more expensive, and far more limited. Gartner predicts that 30% of generative AI projects will be abandoned after proof of concept by the end of 2025, citing poor data quality, inadequate risk controls, escalating costs, or unclear business value.

Marketing departments have amplified this delusion, promoting AI-powered solutions as revolutionary tools that will fundamentally transform work. Every software vendor now claims their product uses AI, regardless of whether it actually does anything intelligent. The result is an environment where the “AI” label gets applied to everything from basic automation to sophisticated machine learning, making it nearly impossible for buyers to understand what they’re actually purchasing.

Business leaders often struggle to distinguish between different types of AI systems and their appropriate applications. An application powered by an LLM might excel at certain assistive business activities but may struggle if used as an autonomous delegate. Understanding these distinctions isn’t just technical knowledge; it’s essential for making informed decisions about AI investment and avoiding expensive mistakes.

The majority of companies that will succeed with AI aren’t those blindly buying the most advanced models or partnering with the hottest vendors. They’re the ones building realistic assessments of what AI can and cannot accomplish, implementing appropriate safeguards, maintaining healthy skepticism about vendor promises and most of all investing in AI readiness.

The Governance Nightmare

Legal and compliance teams are having nightmares. The rapid adoption of AI technologies has created a governance crisis that most organisations are woefully unprepared to handle.

AI governance isn’t just about having policies. It’s about building oversight mechanisms for systems that can exhibit unpredictable behaviours, generate biased outputs, and make decisions that affect real people’s lives. Traditional risk management approaches, designed for predictable software systems, can be inadequate for AI technologies.

The challenge is compounded by regulatory uncertainty. The EU’s AI Act, various U.S. state initiatives, and emerging frameworks worldwide reflect different approaches to balancing innovation with the protection of individual rights. Organisations must navigate this while building internal capabilities to manage AI risks and pivot accordingly.

Key governance concerns include data privacy and security, algorithmic bias and fairness, transparency and explainability, accountability for AI-driven decisions, and compliance with rapidly evolving regulations. Companies need clear guidelines for AI use, including who can access these tools, how they should be applied, and what safeguards must be in place to prevent misuse. Companies also need safe spaces and time for experimentation with AI tools, to learn their strengths and weaknesses.

Gartner research shows that 45% of organisations with high AI maturity keep AI projects operational for at least three years, suggesting that building trust and proper governance fundamentally drives successful adoption. Organisations that invest in governance frameworks upfront are more likely to achieve sustained value from their AI investments.

The Hallucination Problem

AI systems, particularly LLMs, can be very convincing at presenting incorrect information. They can generate information that sounds authoritative but is completely fabricated — a phenomenon researchers refer to as “hallucination.” According to Deloitte, 77% of businesses are concerned about AI hallucinations, and they should be.

These aren’t occasional errors or edge cases. LLMs can confidently present false information, cite non-existent sources, and make claims that sound plausible but are factually incorrect. A Stanford study found that when asked legal questions, LLMs hallucinated at least 75% of the time. That’s not a bug; it’s a fundamental characteristic of how these systems work.

Unlike traditional information sources, which can be traced to specific authors or publications, some AI-generated content currently lacks clear provenance. The systems don’t distinguish between verified facts and patterns they’ve learned from potentially unreliable sources in their training data. They can’t always verify information against real-world sources in real-time, and they don’t understand the difference between truth and statistical likelihood.

This creates serious implications for any organisation using AI-generated content for decision-making, public communication, or applications where accuracy matters. Companies must implement robust fact-checking processes, including human oversight, cross-referencing with authoritative sources, and technical solutions that help verify AI outputs. We are certainly seeing improvements in this area from models such as Claude Opus 4.1, which can reference and cite sources.

The responsibility for accuracy cannot be delegated to AI systems themselves. Organisations must develop workflows that combine AI capabilities with human judgment and verification processes. This isn’t just about avoiding embarrassment — it’s about preventing decisions based on fabricated information that could have serious business consequences.

AI Readiness: The Real Competitive Edge

While most companies are trapped in a procurement mindset, endlessly debating vendor selection and feature comparisons, the winners are asking an entirely different question: how fast can we ship, test, and improve AI-powered features? Forget having the shiniest AI model. The real game is building an organisation that can actually execute with whatever AI tools exist, adapt when better ones emerge, and learn from failures faster than competitors can even launch their first pilot program.

The pace of AI development means the window for competitive advantage from any single AI feature is shrinking fast. What matters isn’t having the perfect AI solution, but having the organisational agility to continuously deploy, test, and improve AI-powered features. This requires robust engineering practices, automated deployment pipelines, adequate data models and cultural readiness for rapid iteration.

Companies like Armakuni demonstrate this approach through their focus on building high-performance engineering capabilities and accelerated delivery practices. Their emphasis on automation, streamlined processes, and constant feedback allows teams to innovate efficiently while staying aligned with customer needs. This model recognises that AI success depends on organisational agility alongside access to AI models.

Strategic AI implementation requires frameworks that help organisations evaluate and prioritise AI projects based on real value rather than technological novelty. Effective approaches prioritise real-world impact, ethical practices, and scalability, helping organisations build systems that are practical and future-ready rather than impressive in demos.

The companies that succeed with AI will be those that have already mastered the fundamentals of modern software delivery: continuous integration and deployment, comprehensive monitoring and observability, automated testing, infrastructure as code, and rapid feedback loops. These capabilities become even more critical when dealing with AI systems that can exhibit unpredictable behaviours and require careful monitoring in production.

AI readiness also extends to cultural and organisational factors. Teams need comfort with experimentation and failure, as AI projects often require multiple iterations to achieve desired outcomes. Organisations must develop new competencies around data management, model governance, and cross-functional collaboration between technical teams and business stakeholders.

Unlike traditional software features that remain relatively stable once deployed, AI systems require ongoing attention, retraining, and optimisation. Organisations must build capabilities for continuous model monitoring, performance evaluation, and improvement. This means establishing feedback loops that can quickly identify when AI systems are underperforming and rapidly implement corrections.
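A minimal sketch of such a feedback loop, assuming a hypothetical rolling quality score and an agreed alert threshold (the class, metric, and numbers are illustrative, not from any specific monitoring library):

```python
from collections import deque

class ModelMonitor:
    """Tracks a rolling quality metric and flags degradation early.

    The metric, window size, and threshold are illustrative assumptions;
    a real system would wire this to production telemetry.
    """

    def __init__(self, threshold: float, window: int = 100):
        self.threshold = threshold
        self.scores = deque(maxlen=window)  # rolling window of recent scores

    def record(self, score: float) -> None:
        self.scores.append(score)

    def needs_attention(self) -> bool:
        # Flag when the rolling average drops below the agreed threshold.
        if not self.scores:
            return False
        return sum(self.scores) / len(self.scores) < self.threshold


monitor = ModelMonitor(threshold=0.9, window=5)
for s in [0.95, 0.94, 0.93, 0.80, 0.70]:  # quality degrading over time
    monitor.record(s)
print(monitor.needs_attention())  # True: the rolling average has slipped below 0.9
```

The point is not the specific check but that the loop exists at all: something records outcomes continuously and raises a flag early, rather than waiting for users to notice the model has drifted.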

Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027, due to escalating costs, unclear business value, or inadequate risk controls. The organisations that avoid this fate will be those that build foundational capabilities for effective AI implementation rather than just chasing the latest technological trends.

Deloitte’s State of AI report gives some really interesting further insight into these trends.

Beyond the Hype

The future of AI in business won’t be determined by those who adopt it fastest or spend the most money. It will be shaped by organisations that build realistic assessments of AI capabilities, implement appropriate safeguards, maintain healthy skepticism about vendor promises, and develop the organisational agility to iterate quickly and learn from failures.

The AI revolution is real, but it’s not the magic transformation that marketing departments promise. It’s a collection of powerful but limited tools that require careful implementation, conscious oversight, and realistic expectations. Companies that understand this will build sustainable competitive advantages. Those that don’t will join the growing pile of abandoned AI projects that looked great in PowerPoint presentations but failed to deliver real value.

The choice isn’t whether to use AI — that ship has sailed. The choice is whether to use it intelligently, with full awareness of its capabilities and limitations, or to stumble blindly forward, hoping that expensive technology will solve problems that require human judgment, organisational change, and strategic thinking.


Stop Buying AI, Start Building AI Readiness was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[How to run a Cognitive Load Mapping session]]> https://cloudnative.ly/how-to-run-a-cognitive-load-mapping-session-5a97fa42a2b6?source=rss----cc8419b0e3e8---4 https://medium.com/p/5a97fa42a2b6 Tue, 05 Aug 2025 09:56:18 GMT 2025-08-05T10:00:16.244Z

If your team struggles to complete trivial tasks or takes longer to deliver some types of work than expected, they could be suffering from high cognitive load. How can you be sure? And what can you do to help them? This post will help you answer both those questions and many more.

My personal heuristic for this is to listen out for specific phrases in 1-to-1s, standups, retros, and all your other team ceremonies, things like:

  • “It’s too complex…”
  • “This is tricky…”
  • “X keeps interrupting me.”
  • “It’s hard because I keep being distracted by…”

Do these phrases sound familiar? If so, a lightweight exercise we’ve devised called “Cognitive Load Mapping” can help individuals and teams pinpoint where to invest their effort.

Cognitive load mapping is one of my favourite ways to categorise and appropriately tackle the different sorts of work to maximise your team’s ability to get stuff done.

To understand Cognitive Load Mapping, we must first understand its constituent parts.

Types of cognitive load

These concepts come from the context of education, and they all relate to how hard it is to learn. There are three types of cognitive load: Extraneous, Intrinsic, and Germane. But first: what even is cognitive load?

Cognitive load

Cognitive load is how hard it is to do any specific task. It is that tongue-sticking-out feeling you get when you’re focusing on threading the eye of a needle or the difficulty of the 46 times table (n.b. if people were supposed to do maths, we wouldn’t have made computers).

The reason you get that feeling is all about the information architecture of the human brain. When you save something to that massive hard drive on top of your neck, the brain works like this: alleged experiences come through the senses into your brain (little epistemology joke there for you) and into your working memory, which is then written back into long-term memory. We call this learning. You’re doing it right now.

This is all well and good, but brains both read and write; they’re not read-only. To load a memory from long-term memory, it must be transitioned into working memory, where it is used in concert with current sensory experiences. It’s then written back to your brain combined with those new experiences. I find this intensely creepy because it means those much-reviewed memories are probably furthest away from their original experience.

This is why these concepts are important even in a business context. We constantly learn because that is how brains work: they read, combine new experiences, and then write those back into our brains.

Extraneous

However, there is something I am not mentioning here much like my junior school science teacher who taught me about the states of matter. What about plasma Mr. Trilby? WHAT ABOUT PLASMA?! I don’t want to list all the states of matter but for the record, I think that Bose–Einstein condensates are pretty cool.

There are things that can affect how difficult all of this feels and how successful you are in utilising your working memory. For example, if you were reading an article on cognitive load mapping and you kept being told terrible physics jokes instead. (Cool, as in it occurs near absolute zero, and cool means like… oh, never mind).

That feeling you got just now of whiplash and distraction - that’s Extraneous Cognitive Load.

Let’s break it down. Extraneous Cognitive Load is the easiest to define. It’s poor question design for a task, or the person over the road in the VW Polo who keeps beeping in traffic while you are trying to code. It’s the bits you can cut away from a problem to simplify it.

That type of cognitive load is Extraneous, and there are two more types: Germane and Intrinsic. Combined, they make up the Working Memory Load.

Intrinsic

Next up, we have the Intrinsic. This is the part related to the task itself, or the problem you’re trying to solve. For example, it is the difficulty of adding two numbers together, or of a mandatory-training question with two multiple-choice answers versus five.

Try it out; here is a list of 5 words. Look at them, then minimise this window and see how many you remember.

  • chrome
  • missable
  • dot
  • resignation
  • move

Now try it with 10

  • transport
  • real
  • dice
  • gyro
  • son
  • cup
  • unbridled
  • dice
  • leap
  • indemnity

That’s me increasing the intrinsic complexity of a task. Notice how I chose 5 items, then 10. This is foreshadowing.

Germane

Finally, we have Germane. Germane cognitive load is the schema relating to the current problem at hand.

If I tell you that a trick to remembering words is to create a story linking them all together, I have given you a schema that Germane cognitive load can use. Humans are great at remembering stories. Binding words together is called “chunking”, and it can turn two words into one in your brain. That’s another schema your brain might use to make this task easier.

What’s more, the human brain can typically hold around seven, plus or minus two, items in working memory. Now, this is more of a guide than an actual rule (really it’s about the duration of the words and your chunking skills), but it’s a good rule of thumb. I told you it was foreshadowing.

Here’s that list again. Try out those techniques to remember this list.

  • transport
  • real
  • dice
  • gyro
  • son
  • cup
  • unbridled
  • dice
  • leap
  • indemnity

Repetition can help increase the amount of Germane cognitive load too.

So Germane cognitive load is the tips and tricks you can use to make things easier for yourself. It’s the brain equivalent to knowing that the 10 times table is the number with a 0 on the end, or remembering the order of the planets is “My very easy method just speeds up naming planets”, or if you are an AI, that rulers are key symptoms of skin cancer.

These tricks aren’t always right, but are very fast for your brain.

Application in Teams

Teams don’t have one giant brain, but they are what we work with most in organisations. So, to apply this to a team, we need to look at generalities in the types of work being done.

There are structural things that can increase the likelihood of Germane cognitive load over intrinsic. For example, in a conversation with Sweller, one of my co-workers asked about the effect of cross-functional teams with experience in many areas. Sweller suggested that teams with this structure may be more able to transfer knowledge of schema between one another.

However, to start with, we need to examine a team’s activities and determine where they fall in each of the previously mentioned categories.

Running cognitive load mapping

Before you start, you want to frame your thinking in terms of the goal that your team is trying to achieve (and it’s never bad to find opportunities to remind yourself and everyone else of it).

I’ll run through this for a chef so you can see how this works.

Identification

The first step is to work out what you do, what disrupts you, and things that are “hard”. You can do this individually or as a group. I would recommend trying a first run alone, as I found that I needed a few tries to really understand the difference between the types of cognitive load. Alternatively, you could use 1–2–4-all with your team to pull together everything your team collectively does.

A good way to get a sense of these things is to keep an hourly journal, where you record hour-by-hour what you have done during the day. These don’t need to be shared with anyone, so no one needs to know precisely how much time you spend on TPS reports.

Another good place to look is your old retrospective boards. You can pull out the most common feedback about negative and positive things. Finally, a quick look through your calendar can jog your memory.

Sorting

Once you have the pile of “stuff” that you do, the next step is to sort them. Split them into three columns. Look at each thing and decide if it’s primarily Extraneous, Intrinsic, or Germane.

My rule of thumb is this: Does it primarily elicit positive feelings of ease and completion? Well, that’s probably Germane. Is it hard but important? Well, that’s Intrinsic. Could it be cut away or replaced, and does it elicit negative feelings? Well, that is probably Extraneous.
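As a rough illustration, that rule of thumb can be written as a tiny decision helper. The three questions come straight from the heuristic above; the function name and structure are just a sketch, not a formal classifier:

```python
def classify_load(feels_positive: bool, important: bool, removable: bool) -> str:
    """Sort an activity into a cognitive load column, per the rule of thumb.

    Ease and completion -> Germane; cuttable/replaceable -> Extraneous;
    hard but important -> Intrinsic.
    """
    if feels_positive:
        return "Germane"
    if removable:
        return "Extraneous"
    if important:
        return "Intrinsic"
    return "Unsorted"  # ambiguous items deserve a team discussion


print(classify_load(feels_positive=False, important=True, removable=False))  # Intrinsic
```

In a real session the answers are fuzzy and contested, which is exactly the point: the value is in the conversation the questions provoke, not in the mechanical sorting.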

Experimentation

Once you’ve identified Germane, Intrinsic and Extraneous cognitive load, you can start to make improvements. The best way to do this is to use a practice from lean product design, the experiment. Firstly, select one of the items to work on. Dot voting is how I recommend doing this, but you could also use the cost of delay to rank the importance of each activity if you are alone.

You want to maximise the Germane, minimise the Extraneous, and simplify the Intrinsic. So, for each of these categories, we have a way to improve the situation.

You can simplify the intrinsic through training courses and building the schema to enable germane cognitive load. Other examples include deliberate practice and peer-to-peer learning.

You can optimise the extraneous by either reducing or eliminating it. Sometimes this means no more TPS reports; other times it might mean moving to a quieter meeting space or moving meetings to a single day in the week.

The Germane is where you look for success. To maximise the Germane, look at what it does well, then see if you can apply what makes it a success to other activities that you do. Alternatively, find ways to simply do more of it. If the continuous delivery pipeline made deployments easier, add it to more projects.

Once I have some ideas in this section, I like to plot them onto an experiment sheet. I come up with a “Learning Goal” describing what we want to achieve with this experiment, our assumptions about it, the metrics we will record, what will indicate failure or success, and an early-stop mechanism to help us fail fast and move on.

I’ve attached an example of the template I use for this, made by another AK-er, Benedict Steele.

It’s also important to note down the constraints of the experiment, your plan for monitoring it, the duration, and how you will share it with others.
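To make the shape of such a sheet concrete, here is one possible way to capture those fields in code. The field names follow the article; the class itself and the example values are a hypothetical sketch, not Benedict Steele’s actual template:

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentSheet:
    """One improvement experiment from a cognitive load mapping session."""
    learning_goal: str                           # what we want to find out
    assumptions: list = field(default_factory=list)
    metrics: list = field(default_factory=list)  # what we will record
    success_signal: str = ""                     # what indicates success
    failure_signal: str = ""                     # what indicates failure
    early_stop: str = ""                         # when to abandon and move on
    constraints: list = field(default_factory=list)
    duration: str = ""                           # e.g. "4 weeks"

sheet = ExperimentSheet(
    learning_goal="Do meeting-free mornings reduce extraneous load?",
    assumptions=["Interruptions are the main source of extraneous load"],
    metrics=["self-reported focus score", "cycle time"],
    success_signal="Focus score improves for 3 consecutive weeks",
    failure_signal="No change after 4 weeks",
    early_stop="Stop if urgent incidents force meetings back in",
    duration="4 weeks",
)
print(sheet.learning_goal)
```

Whether you keep this on a whiteboard, a template, or in a ticket matters far less than having every field filled in before the experiment starts.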

Conclusion

And that’s it! You’ve run your cognitive load mapping session. I find this a really useful tool to ideate on how to reduce the cognitive load on a team. I have used a template I created for this example, but you can just as easily do it with a whiteboard.

How can I take this further?

A great place to start is the work of J. Sweller.


How to run a Cognitive Load Mapping session was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Everyone is missing the point of teams]]> https://cloudnative.ly/everyone-is-missing-the-point-of-teams-8d6c473dd0c5?source=rss----cc8419b0e3e8---4 https://medium.com/p/8d6c473dd0c5 Fri, 19 Jul 2024 11:31:14 GMT 2024-07-19T11:31:14.633Z This simple definition can help make teams high-performing
Three people from the Armakuni team wearing silly party hats at an Agile conference
Billie, Darren, and Zenon as the Agile Scotland “How to have a conversation” Team

The language inside the technology industry has a habit of suffering from semantic drift. Words lose their meanings over time; DevOps stops being about bringing together Dev and Ops and instead becomes what we previously called Ops, TDD starts meaning there are (probably) tests, and what we call Agile begins to mean Waterfall.

Even seemingly ubiquitous language drifts over time, even words so familiar that they almost become wallpaper, such as “Team.”

Try it out, and ask someone nearby for their definition of a Team. Was it the same as yours?

One of the most common stumbling blocks within large organisations is the need to understand the purpose and meaning of this most foundational concept.

If you, dear reader, aspire to lead a high-performing organisation, misunderstanding these words is one of the most common problems and has disastrous consequences.

One of the most well-known effects is Melvin Conway’s law: “organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.”. For those unfamiliar with this law, Eric Raymond brilliantly rephrased it: “If you have four groups working on a compiler, you’ll get a four-pass compiler.”

If you get this wrong, not only will your delivery be chaotic, mirroring the communication in your organisation, but its implementation will be chaotic, leading to bugs and poor user experience.

This is mirrored in my own experiences. I have seen “teams” ranging from individuals to hundreds of people, working on common goals or not, representative of the workflows in the organisation or not, grouped by product or technology or seemingly randomly. Often, the root cause of many organisations’ problems comes down to a poor understanding of what a team is and how to make it effective.

So, what is a team? If everyone has a slightly different definition, can we define it more clearly to maximise its usefulness?

The most basic definition of a team is simply an abstract grouping of people. This definition is easy to understand, but why do we form these groupings? They are so universal that there must be a reason behind it.

Let’s think about what these groupings are doing in the software industry; it might be some design activities, coding, or ensuring a product complies with some law. Work. This definition holds even if we move away from the software industry: Imagine a team of farmers, a team of people in a doctor’s surgery, or even a team of track and field athletes competing in a relay.

This is mirrored in the dictionary definitions. This is from the Cambridge Dictionary

A number of people or animals who do something together as a group

“Teams exist to do work” seems obvious, but it tells us something important: Teams form to maximise the work a group can do, at least in a business setting.

Let’s rephrase our definition: A team is an abstract group of people who combine to maximise the work done.

A 2019 study titled “Collaboration Drives Individual Productivity” supported this by examining two “planetary-scale collaborative systems,” GitHub and Wikipedia. It found that team “productivity of smaller groups scales super-linearly with group size,” but that you stop gaining performance after a certain point, when these gains become “saturated.” This result mirrors a 1913 study by Maximilien Ringelmann, an agricultural engineer (the link is to the paper, but you might be better off with the Wikipedia page).

Thanks to “Collaboration Drives Individual Productivity”, we know that the performance gains from collaboration stop once the team size reaches between 10 and 15 people, and that the marginal gain from adding someone new peaks at a team size of around six.
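That shape, super-linear growth for small teams that flattens out as the team grows, can be sketched as a toy curve. The parameters below are illustrative assumptions chosen to reproduce the shape, not fitted values from the cited study:

```python
import math

def team_output(n: int, scale: float = 100.0, c: float = 10.0, beta: float = 1.5) -> float:
    """Toy model of team output vs team size.

    For small n this grows roughly like n**beta (super-linear, since
    beta > 1); for large n it saturates towards `scale`.
    """
    return scale * (1 - math.exp(-((n / c) ** beta)))

# Super-linear region: doubling a 2-person team more than doubles output...
print(team_output(4) / team_output(2))   # > 2
# ...but gains saturate: 30 people barely outperform 15.
print(team_output(30) / team_output(15))  # close to 1
```

Plotting this curve for your own teams is obviously not possible, but the qualitative lesson holds: past a certain size, adding people buys you coordination overhead rather than output.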

Let’s rephrase our definition. A team is an abstract group of people who come together to maximise the work done with a maximum of 15 people, but optimally around 6.

So why does this effect happen, why does performance drop off, and how can we get the most out of the people on our team?

Plants have a natural height limit. If they grew any taller, their own weight would become too much for their supporting structures to handle, causing them to collapse.

You have seen this effect in action. Within organisations, it looks like a mysterious slowdown in delivery as the company grows its hierarchy like a plant reaching for sunlight. The more complicated the organisation’s communication becomes, the more roles you need to add to handle it, until delivery grinds to a near halt.

Donelson Forsyth, a social psychologist, found that the factors affecting this in teams were “identifiability” (the ability to link work back to an individual), common “goals” to reduce coordination needs, and increasing involvement by connecting the local goals of a grouping to a broader meaning.

Daniel Pink restated this in his hit book “Drive: The Surprising Truth About What Motivates Us”, finding that, once money reaches a level that covers basic needs, motivation is primarily driven by:

  • Purpose — The desire to do something that has meaning and is important; the most critical of the three.
  • Autonomy — A desire to be self-directed; it increases engagement over compliance.
  • Mastery — The urge to get better skilled

Let’s rephrase our definition again.

“A team is an abstract group of people who come together to maximise the work done with a maximum of 15 people, but optimally around 6, where team members can maximise mastery, purpose and autonomy.”

It’s clunky but more helpful. However, it needs something added. We have a clear direction for the team size, but how can we help people achieve the latter part?

Well, “Purpose” from Daniel Pink’s book comes with a practical constraint. A team only has a purpose if it can produce something meaningful. A team of developers that only ever writes fragments of a system (a web designer without an HTTP server, for example) can never have a purpose, because the work itself has no purpose.

To achieve something with a purpose, we need a team with enough functions baked into it and enough different capabilities to create something functional and meaningful: a “cross-functional” team.

John Sweller is, perhaps, the most famous educational psychologist working in the field of Cognitive Load Theory. Recently, my colleague, Andrew, asked him about the effect of cross-functional teams on cognitive load:

“Engineers do have a problem with economics and vice-versa which is why putting them together can work. The engineer does not have to learn economics and vice versa. Learning economics or engineering puts more of a strain on working memory than using someone else’s cognitive system!”

Of course, this effect isn’t possible if the person whose brain you want to use is unavailable at the moment because they are on an entirely different team.

Cross-functional teams are doubly effective when you bring in Eliyahu Goldratt’s work on the Theory of Constraints. His work shows that a system is constrained to move at the pace of its slowest component (the bottleneck). If a cross-functional team can deliver something by itself, you have reduced the number of elements within the system and, with it, the complexity of identifying and optimising the bottleneck.
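Goldratt’s bottleneck logic can be made concrete with a toy calculation. This is a minimal sketch, and the stage names and rates are invented for illustration: the throughput of a sequential pipeline is simply the rate of its slowest stage, and speeding up any other stage changes nothing.

```python
# Hypothetical delivery pipeline: work flows through stages in sequence.
# The whole system's throughput is capped by its slowest stage (the bottleneck).
stages = {"design": 5.0, "develop": 2.0, "test": 7.0}  # items per day (invented)

bottleneck = min(stages, key=stages.get)
throughput = stages[bottleneck]
print(bottleneck, throughput)  # develop 2.0

# Speeding up a non-bottleneck stage does not change system throughput:
stages["test"] = 20.0
print(min(stages.values()))  # still 2.0
```

A cross-functional team shrinks the dictionary above to the stages it owns, which is exactly what makes its bottleneck easy to find and fix.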

In addition, it also minimises the amount of work in progress. If a designer designs something, that design is unfinished until a developer has coded it up and a tester has tested it. If somebody stops this imaginary project tomorrow, a cross-functional team limits the in-progress work to what is local to that team, rather than spreading it across a broader system. That lets you be nimble, or rather “agile”, in the face of a changing business environment, because the cost of changing course is low.

Let’s take one last look at that definition.

“A team is an abstract group of people who come together to maximise the work done with a maximum of 15 people, but optimally around 6, where team members can maximise mastery, purpose and autonomy through cross-functional teams with a common goal.”

If you want a high-performing organisation, you need to cultivate teams in cross-functional groupings of between 6 and 15 people who share a common goal and can achieve that goal.


Everyone is missing the point of teams was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Why organisations misunderstand things]]> https://cloudnative.ly/why-organisations-misunderstand-things-edee3873f5ec?source=rss----cc8419b0e3e8---4 https://medium.com/p/edee3873f5ec Thu, 28 Sep 2023 08:37:22 GMT 2023-09-28T08:37:22.339Z

In August this year, McKinsey released an article called “Yes, you can measure software developer productivity”. It drew a strong reaction from people saying that McKinsey had completely missed the point of recording metrics: why we record some metrics and not others.

And I broadly agree with those commentators critical of these metrics (Dave Farley’s analysis is particularly good, and there’s one from our very own Andrew Gibson finding the good in the metrics). However, what has happened here is not limited to McKinsey, or even to metrics.

I have not met a developer without a story where they have joined a team that claimed to be doing agile but instead were doing something else, usually waterfall. This is another example of the same problem, an organisation missing the deeper reasoning.

So how does this happen?

In 1985, Edgar Schein published a book called “Organisational Culture and Leadership: A dynamic view”. In it, he hoped to explain how organisations decide what is and what isn’t a priority issue for them to fix. It provides a great model for understanding why organisations misunderstand.

He proposed a model of three layers from most visible to least. On the outside, we have Artefacts; below this, we have Values, and below that, we have Assumptions.

Artefacts are the things you see in your organisation day to day. This might be tickets in Jira, or your daily standup, or even the tests in your codebase. These are the easiest things to see around your office and are often tangible or visible to the casual observer.

Below this we have Values. These are harder to experience; they are the things that the creators of the artefacts value. A great example of people considering this and writing it down is the Agile Manifesto.

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Manifesto for Agile Software Development

By the way: think back to your experience of that organisation that missed the point of agile. Did they hold these values?

Under all of these are the fundamental assumptions needed to make these values work. These are even less visible than the values. In the case of the Agile Manifesto, the assumption is that developing software is as much about discovering the requirements as implementing them and that working in small iterative chunks will produce better results than other methods.

When McKinsey wrote this article, they fell into the trap that Schein’s model exposes. They looked at the fantastic work of Amy C. Edmondson and Nicole Forsgren researching how organisations create effectively, and saw only the Artefacts, the metric frameworks, not the Values or the Assumptions below them.

It isn’t that McKinsey is stupid, or malicious; it’s that they only saw the most visible part of the work. So when they came to build on those metrics, adding their own, they were unable to iterate on the thinking beneath them. I think that’s a very natural misstep to make.

My reading of the values underlying these metrics is that, while metrics are important, they are merely the outward manifestation of a deeper value: using scientific research to identify, in aggregate, the best indicators of organisational success.

However, even Amy Edmondson and Nicole Forsgren are making assumptions. One is that organisations, when exposed to metrics, will react to them in a logical way; another is that scientific research can discover what those metrics should be.

I suspect that both of these are true, but they are very much assumptions.

So, next time you are learning something new, try to dig a little deeper and discover the values and assumptions behind it. And if someone is struggling to pick up what you are trying to teach them, make sure you have signposted your assumptions and values as best you can: it’ll help.

I will end this on a tip: Think of an artefact that you use. If you were to ban yourself from using it, what could you do to give yourself the same effect? Now ban that too, and come up with another. Do this enough times and the common thread between them is what you value.

You can use the same approach for assumptions. List all the situations in which you don’t think the value applies; the common thread is your assumptions.

I’d love to hear about your experiences of discovering things you valued that you didn’t expect.


Why organisations misunderstand things was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>
<![CDATA[Beginning Team Topologies]]> https://cloudnative.ly/beginning-team-topologies-764c796dc202?source=rss----cc8419b0e3e8---4 https://medium.com/p/764c796dc202 Tue, 29 Aug 2023 09:18:46 GMT 2023-08-29T09:33:28.813Z A table football table taken from one of the goal mouths

Team Topologies is an increasingly popular topic in organisations looking to improve delivery speed. It is fascinating; it has that perfect combination of ideas to hook me: practical relevance to my daily life, and solid scientific backing I can dive into.

When I find something interesting, I write. I love to get more people excited about things I am excited about. However, when writing this article, I struggled to articulate the experiences of those who are coming to these ideas for the first time.

So I did what any AK-er would do in this situation: I reached out to someone with a different perspective. Victoria has been helping a number of organisations get their first taste of Team Topologies. She bridges the gap between being an expert and knowing what her attendees experience, so she can replay their experiences of coming to Team Topologies for the first time.

In this article, I want to report her experiences and what attendees of her workshops are telling her. I want to use them to tackle those first problems you experience as you dip your toes into the Team Topologies world.

So why are people turning to Team Topologies?

According to Victoria, what she hears most often from workshop attendees is that people turn to Team Topologies because their organisations are struggling to get teams to deliver.

“Organisations are created for a reason,” Victoria says, “they have a mission, and they bring in the best people they possibly can to help them to deliver on that mission”. However, as time passes, something is lost, and the team, “the powerhouse of delivering on that mission”, becomes hampered by layers of bureaucracy that made sense in previous iterations of the organisation but now only split the team’s focus.

This problem has many names: the frozen middle, the build trap, a lack of agility, a lack of DevOps or a lack of Domain Driven Design (DDD) practices. What can be frustrating with this is that we have been finding solutions to these problems for years, and we have a lot of tools to help, but very few that work beyond just one team (without excessive bullshit, I am looking at you SAFe and Scrum).

Organisations turn to Team Topologies to solve this problem. As Victoria puts it, “the key thing about Team Topologies for me is unravelling that knot and refocusing organisations on the importance of setting up the teams for success”.

What are the core components?

Team Topologies brings together a lot of thinking from the Agile and DDD Communities. In particular, it combines the ideas of a Team Topology (or type), Cognitive Load Theory as applied to teams, and Conway’s Law into one usable and, perhaps more importantly, explainable package.

Book cover of Team Topologies by Matthew Skelton and Manuel Pais

If you are familiar with these communities, much of it will already be familiar. However, for those who are not, I’ll quickly introduce them.

Conway’s law is probably the easiest of these to grasp. In 1967 Conway said, “Any organisation that designs a system (defined broadly) will produce a design whose structure is a copy of the organisation’s communication structure.” My favourite rephrasing is by Eric S. Raymond, who wrote The Cathedral and the Bazaar: “If you have four groups working on a compiler, you’ll get a 4-pass compiler.”

Conway’s law is a foundation of Team Topologies and is useful to remind us that how organisations design their teams impacts their output, so let’s use that to our advantage.

But I thought this was called Team Topologies, so what are these topologies? There are 5 (or 3, depending on how you look at it). Let’s go over the primary 3:

  • Stream Aligned Teams — These teams align themselves with the flow of work, or their value stream. They take ownership of the process from conception to delivery to their users.
  • Enabling Teams — These teams work with other teams, with one important feature: they facilitate capability uplift rather than doing the work for them, and their engagement with the other team has a near-term end date.
  • Complicated Subsystem Teams — These teams work with another team but are a specialised group of experts to simplify a complicated problem, reducing the cognitive load of a Stream Aligned Team. Usually chock full of brilliant domain experts.

And the two that are not quite direct team structures

  • Undefined topology — An absence of any defined topology, not a desirable state in Team Topologies, but most people’s starting point.
  • Platform Group — This grouping of other team types allows the topologies to be fractal, something we commonly see in resilient structures in nature. A Platform Group is a set of teams that delivers a service, accessed in a non-blocking way, to accelerate the flow of other teams; for example, a database provisioning team. The critical thing to know is that these teams should be able to compete with what public offerings provide, which means being customer-centric and understanding their users’ needs. Think of it as a box that a group of other teams fit in.

These come with some defined effective communication styles.

  • Facilitation: almost exclusively provided by Enabling Teams — a time delimited interaction that uplifts the capability of another team.
  • X-as-a-Service: most frequently provided by Platform Groups or Complicated Subsystem Teams and consumed by Stream Aligned Teams — the provision of a service in a non-blocking way.
  • Collaboration: most frequently between two Stream Aligned Teams, or between a Complicated Subsystem Team/Platform Group and a Stream Aligned team at the inception of an X-as-a-Service — collaboration is costly, so it should be time delimited with defined outcomes.

These patterns limit team cognitive load by reducing handoffs, and they also let teams say, “Talk to us like this, so you don’t disrupt us”.

Team Cognitive Load is also a fascinating idea; it builds on the work of Australian psychologist John Sweller but, instead of applying it to the individual, applies it to the team.

Cognitive load theory, as applied to individuals, works like this: when a person is solving a problem, their brain is under differing sorts of pressure. These types of load are Intrinsic, Germane, and Extraneous.

Think back to the first time you did algebra. Was it hard? Over time you build mental models that let you solve such problems fast and with less effort. The load inherent in a task, which shrinks as those mental models grow, is the “Intrinsic” cognitive load.

Ever tried to focus while in a noisy office? Pressures from the environment are called “Extraneous” cognitive load.

Finally, there is the effort devoted to the problem actually in front of you. This is the “Germane” cognitive load. For example, adding 7543565 + 654334 uses the same skills as adding 3 + 3, but demands more of this effort.

So let’s apply that to a team.

  • We want to “simplify the intrinsic”. This isn’t about solving easier problems; it’s about building the mental models in our team members that lower each individual’s intrinsic load. That might mean solving a type of problem more often, adding an expert to the team, or doing some training.
  • We want to minimise the extraneous by removing environmental noise for the team (such as bureaucracy).
  • Finally, we want to maximise the work that is germane to the problem.

Put all these concepts together, and you have Team Topologies, a set of patterns you can apply to your organisation.

But trying this out isn’t all smooth sailing.

When first introduced to these ideas, it’s tempting to take whatever your team currently does and label it with one of these names.

Victoria told me about the confusion from attendees at her workshops, a common question is, “So how do we fit into this model that you’re trying to teach us?”.

An essential thing to say here is that a team won’t just magically fit into these topologies; you need to evolve it intentionally. As Victoria puts it, “When we’re asking a team to define themselves as a platform group or a stream aligned team or an enabling team, frequently the answer is, ‘but we do a bit of all of that’…it makes a lot of sense to us, but we can’t put our square peg into this round hole.”

To solve this, Victoria’s advice is, “Take some of those core concepts and make small changes. Proving their value at a small scale is a good place to start, and then start rolling it out.”

Some practical resources

This article has aimed to answer some of the questions that commonly come up when people first encounter Team Topologies, based on Victoria’s and her attendees’ experiences, to make getting started a bit easier.

If you (like me) find this fascinating and would like to know more, I recommend the following:

I would love to hear about your experiences. You can get in touch via the Team Topologies page on the Armakuni site, or just drop me a message on LinkedIn. I love feedback, and learning about the perspectives of people who share my passions.


Beginning Team Topologies was originally published in Cloudnative.ly on Medium, where people are continuing the conversation by highlighting and responding to this story.

]]>