<h1 id="critical-thinking-in-the-age-of-ai">Critical Thinking in the Age of AI</h1>
<p><em>By <a href="https://capgemini.github.io/authors#author-stuart-williams">Stuart Williams</a>, 17 November 2025, published on <a href="https://capgemini.github.io/">Capgemini Software Engineering</a></em></p>
<h2 id="what-is-critical-thinking">What Is Critical Thinking?</h2>
<p>Critical thinking emerged as a formal area of study from multiple traditions. The most influential modern definition comes from the 1990 Facione Delphi study <sup id="fnref:Facione" role="doc-noteref"><a href="#fn:Facione" class="footnote" rel="footnote">1</a></sup>, in which expert philosophers and educators reached consensus on what constitutes critical thinking - no small feat in philosophy. Their definition, since widely adopted in education and professional development, reads:</p>
<blockquote>
<p>We understand critical thinking to be purposeful, self-regulatory judgment which results in interpretation, analysis, evaluation, and inference, as well as explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations upon which that judgment is based.</p>
</blockquote>
<p>That might sound abstract, but it maps surprisingly well to what we already do in software development. Consider how the skills involved in critical thinking align with common software engineering tasks:</p>
<h3 id="core-cognitive-skills">Core cognitive skills</h3>
<ul>
<li><strong>Interpretation</strong> - understanding what code, requirements, or error messages actually mean</li>
<li><strong>Analysis</strong> - examining the structure of arguments in RFCs, identifying assumptions in designs</li>
<li><strong>Evaluation</strong> - assessing whether evidence supports claims, whether benchmarks are meaningful</li>
<li><strong>Inference</strong> - reasoning about consequences, exploring what could go wrong</li>
<li><strong>Explanation</strong> - articulating why we made particular decisions</li>
<li><strong>Self-regulation</strong> - monitoring our own reasoning, catching our own biases</li>
</ul>
<p>The Delphi study also identified the habits of mind that distinguish people who use these skills consistently - inquisitiveness, systematicity, truth-seeking, open-mindedness, and judiciousness - collectively referred to as <em>disposition</em>. We’ve all seen its importance during code reviews: technical skill isn’t enough if someone becomes defensive or stops genuinely engaging with alternative perspectives.</p>
<p>The work also established standards for good critical thinking: clarity, accuracy, precision, relevance, depth, breadth, logic, and significance. These are not arbitrary criteria - they’re the qualities that distinguish robust reasoning from reasoning that merely sounds convincing - and we’ll see how important this is as we explore how AI changes what critical thinking means in practice.</p>
<h2 id="the-changing-nature-of-thinking-with-ai">The Changing Nature of Thinking With AI</h2>
<p>For decades, software development involved humans reasoning about problems and then expressing those solutions through code; critical thinking should be happening <em>before</em> and <em>during</em> the act of coding. AI hasn’t eliminated that, isn’t replacing human judgment, and isn’t removing the need for critical thinking. Instead, it’s changing <em>where</em> and <em>how</em> critical thinking is applied.</p>
<p>Consider what happens when we work with an AI assistant on a coding task. The AI can generate plausible code remarkably quickly. This creates a new cognitive demand: we need to evaluate code we didn’t write, based on a mental model we may not have fully formed yet. The traditional workflow of “think deeply, then implement” becomes “sketch intent, evaluate output, refine understanding.”</p>
<p>Peter Naur’s 1985 essay “Programming as Theory Building” <sup id="fnref:Naur" role="doc-noteref"><a href="#fn:Naur" class="footnote" rel="footnote">2</a></sup> argued that programming is fundamentally about developers building and maintaining a theory of how the program relates to the problem domain. The program itself is just an expression of that theory. When AI generates code, we face a question: whose theory is being built? Or rather, <em>is</em> a theory being built?</p>
<p>The answer matters because theory - in Naur’s sense - is what enables us to modify programs appropriately, explain why things are the way they are, and respond to new requirements sensibly. Without theory, we’re left with code we don’t fully understand, making changes through trial and error.</p>
<p>This suggests that critical thinking in the AI era requires conscious attention to theory building in ways it perhaps didn’t before. When we wrote every line ourselves, theory building was almost unavoidable. When AI writes substantial portions, we need to deliberately construct understanding of what was generated and why it works - we have an increased focus on learning and evaluation.</p>
<p>There’s also a shift in what constitutes “good” critical thinking. Previously, questioning assumptions and biases was primarily about catching our own errors, or our colleagues’. Now it includes questioning whether the AI has understood our intent, whether it’s applied patterns appropriately for our context, and whether plausible-sounding explanations actually correspond to what the code does.</p>
<p>The redistribution of cognitive work makes it worth examining how critical thinking applies in each domain where AI is now active.</p>
<h2 id="critical-thinking-in-software-and-product-development">Critical Thinking in Software and Product Development</h2>
<p>Software development has always demanded critical thinking, even if we didn’t always call it that. The field emerged from mathematics and engineering, disciplines where rigorous reasoning is foundational. But software occupies an unusual position: it’s both a formal system (governed by logic and mathematics) and a social artefact (built by humans, for humans, within organisations).</p>
<p>This dual nature means critical thinking in software development operates on multiple levels simultaneously.</p>
<h3 id="at-the-code-level">At the code level</h3>
<p>When we’re debugging, we’re interpreting symptoms, analysing causes, evaluating hypotheses, and inferring what change will fix the issue. The self-regulation component shows up when we catch ourselves assuming “it must be the database” because the database caused problems last time - and we force ourselves to actually gather evidence.</p>
<p>When reviewing code, we’re evaluating whether the implementation matches stated intent, whether edge cases are handled, whether the approach is appropriate for the context. The dispositional elements matter enormously here: is our review genuinely truth-seeking, or are we trying to demonstrate cleverness?</p>
<h3 id="at-the-architectural-level">At the architectural level</h3>
<p>Every architectural decision involves inference about future consequences - how will this choice constrain us later? Should this be synchronous or asynchronous? Monolith or microservices? What failure modes does it introduce? The quality of these inferences depends directly on the depth and breadth of our analysis.</p>
<p>Fred Brooks observed in “No Silver Bullet” <sup id="fnref:Brooks" role="doc-noteref"><a href="#fn:Brooks" class="footnote" rel="footnote">3</a></sup> that there’s an essential complexity in software - complexity that derives from the problem domain itself, not from our tools. Critical thinking helps distinguish <em>essential</em> from <em>accidental</em> complexity. When we’re evaluating a proposed solution, are we assessing whether it addresses the actual problem, or whether it’s solving a simpler problem the team finds more tractable?</p>
<h3 id="at-the-product-level">At the product level</h3>
<p>Product development introduces radical uncertainty. We’re making decisions with incomplete information, under time pressure, with imperfect understanding of user needs. Critical thinking here includes meta-reasoning: “how confident should we be in this decision?”, “what would change our mind?”, “what’s the cheapest way to test our assumptions?”.</p>
<p>As we’re now reasoning about user needs, market dynamics, strategic positioning, the evaluation standards include not just “does this work?” but “does this matter?” and “is this the right thing to build?”</p>
<h3 id="at-the-organisational-level">At the organisational level</h3>
<p>Amy Edmondson’s research on psychological safety <sup id="fnref:Edmondson" role="doc-noteref"><a href="#fn:Edmondson" class="footnote" rel="footnote">4</a></sup> reveals something important: critical thinking isn’t just an individual cognitive skill. Teams need environments where people can question assumptions, admit uncertainty, and challenge ideas without fear. A technically brilliant team that can’t question each other’s reasoning will make worse decisions than a less skilled team with strong critical thinking culture.</p>
<p>This matters because software is built by teams, not individuals. Our ability to think critically is bounded by our ability to think <em>together</em> critically. Code review, pair programming, design discussions are all <em>collaborative</em> critical thinking exercises.</p>
<h3 id="ai-enters-the-fray">AI Enters The Fray</h3>
<p>Traditional software development assumed humans did the reasoning and computers did the executing. AI blurs this boundary. When an AI suggests an implementation, we’re evaluating not just correctness but appropriateness. When it generates tests, we’re assessing whether they actually validate what matters.</p>
<p>The critical thinking demand hasn’t decreased - if anything, it’s intensified. But the focus has shifted from “am I solving this correctly?” to “has this been solved correctly, and does it solve the right thing?”</p>
<p>Having examined how critical thinking operates in software development, we can now examine the practice of prompting an AI.</p>
<h2 id="critical-thinking-about-prompting">Critical Thinking <em>About</em> Prompting</h2>
<p>Prompting has emerged as a new skill, and like any new skill it’s accumulating a body of practices, advice, and received wisdom. Some of this is evidence-based, much of it isn’t, and some of the wilder claims are being shared on social media to boost engagement statistics. Critical thinking about prompting means evaluating these practices with the same rigour we’d apply to any technical claim.</p>
<h3 id="the-problem-with-evidence">The Problem With Evidence</h3>
<p>Prompt engineering is remarkably young. The techniques that work today weren’t possible three years ago because the models didn’t exist. This creates a methodological challenge: by the time we could rigorously study a technique, the models have often changed. New models are released every few months, each with different capabilities and behaviours.</p>
<p>Consider “chain-of-thought” prompting - asking the model to think step-by-step. Wei et al.’s 2022 research <sup id="fnref:Wei" role="doc-noteref"><a href="#fn:Wei" class="footnote" rel="footnote">5</a></sup> showed significant improvements on reasoning tasks, providing solid evidence for the technique. Yet we’ll also encounter advice like “always use polite language” or “threaten the model with consequences” - claims that spread through social media without controlled evaluation.</p>
<p>Critical thinking here means asking: what’s the actual evidence? A viral tweet isn’t evidence. An anecdote isn’t evidence. Even a compelling demonstration with one example isn’t evidence - it’s an existence proof that something <em>can</em> work, not that it <em>does</em> work reliably.</p>
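<p>To make the chain-of-thought technique above concrete, here is a minimal sketch in Python of how such an instruction might be composed into a prompt. This is illustrative only: the wording of the instruction is one of many reasonable variants, and sending the prompt to an actual model is out of scope.</p>

```python
def with_chain_of_thought(task: str) -> str:
    """Wrap a task in a step-by-step reasoning instruction (after Wei et al., 2022)."""
    return (
        f"{task}\n\n"
        "Think through this step by step, showing your reasoning "
        "before giving the final answer."
    )

baseline = "Does this function handle an empty input list correctly?"
prompt = with_chain_of_thought(baseline)
print(prompt)
```

<p>The point is that the technique is a mechanical, testable transformation of the prompt - which is exactly what makes it amenable to the controlled evaluation discussed above.</p>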
<h3 id="cargo-cult-prompting">Cargo cult prompting</h3>
<p>You’ve likely seen prompts that contain elaborate rituals: specific phrasings, particular structures, magical incantations. Some of these have theoretical grounding. Others persist because someone successful uses them, so they must work - classic cargo cult thinking.</p>
<p>The Anthropic documentation on prompting offers guidance based on their understanding of how their models work. That’s different from a random Medium article about “10 ChatGPT tricks experts use to make $100k per week”. The former comes from people who built the system. The latter comes from someone who found something that may have only worked once.</p>
<p>Folk wisdom isn’t necessarily wrong - practitioners sometimes discover effective techniques before researchers can validate them - though we still need to evaluate the source, understand the claimed mechanism, and test whether it actually improves outcomes for our use case.</p>
<h3 id="understanding-the-mechanism">Understanding the mechanism</h3>
<p>When someone claims a technique works, ask why it would work. If the explanation is “the AI likes it when we’re polite” - that’s anthropomorphisation, not mechanism. If the explanation is “providing examples helps the model identify the pattern we want” - that’s a testable claim about how pattern matching and few-shot learning operate.</p>
<p>The best prompt engineering advice explains <em>why</em> something works in terms of how language models actually function. This lets us reason about when to apply it and when it’s irrelevant.</p>
<h3 id="few-shot-learning-example-of-examples">Few-shot learning: an example of examples</h3>
<p>Research <sup id="fnref:Snell" role="doc-noteref"><a href="#fn:Snell" class="footnote" rel="footnote">6</a></sup> clearly shows that providing examples (few-shot prompting) improves performance on many tasks. But critical thinking pushes further: how many examples? What makes a good example? Does order matter? Does diversity of examples matter more than quantity?</p>
<p>These are answerable questions, but they require moving beyond “use examples because someone said so” to “use examples because they help the model identify the relevant pattern, and I understand what pattern I’m trying to indicate.”</p>
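<p>A few-shot prompt is, mechanically, just a concatenation of demonstrations followed by the new input. The sketch below shows one plausible layout; the <code>Input:</code>/<code>Output:</code> labels and the docstring-writing task are illustrative choices, not a prescribed format, and questions like ordering and example count remain ours to test.</p>

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt: labelled demonstrations, then the new input.

    The examples demonstrate the input -> output pattern we want the
    model to continue for the final, unanswered input.
    """
    parts = [f"Input: {source}\nOutput: {target}" for source, target in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

examples = [
    ("getUserById", "Fetches a single user record by its identifier."),
    ("deleteSession", "Removes an active session, logging the user out."),
]
prompt = build_few_shot_prompt(examples, "refreshAuthToken")
```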
<h3 id="evaluating-claims-of-effectiveness">Evaluating claims of effectiveness</h3>
<p>When we read that a technique “dramatically improves” results, critical thinking asks: improves according to what metric? On what tasks? Compared to what baseline? “This prompt is better” is meaningless without specification of better <em>for what</em> and better <em>how</em>.</p>
<p>You can’t apply critical thinking standards like precision and accuracy to vague claims. If someone says “this prompt structure gets better code,” we need to know: better by what measure? Fewer bugs? More maintainable? Faster? Closer to requirements?</p>
<h3 id="challenges-with-replication">Challenges with replication</h3>
<p>Software engineering has a replication crisis of its own - Shepperd et al.’s 2014 analysis <sup id="fnref:Sheppard" role="doc-noteref"><a href="#fn:Sheppard" class="footnote" rel="footnote">7</a></sup> found that many claimed improvements in software practices don’t replicate. Prompt engineering faces the same risk, amplified by rapid model changes.</p>
<p>When we find advice about prompting, consider: could we replicate this? Is the claim specific enough to test? Are the conditions clear enough that we’d know if it applies to our situation?</p>
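<p>Making a claim replicable mostly means writing it down as an evaluation. The sketch below shows the smallest possible harness for comparing prompt templates against labelled cases; <code>fake_model</code> is a deterministic stand-in for a real model call, so the numbers here demonstrate the harness, not any actual technique.</p>

```python
def evaluate_prompt(render_prompt, model, cases):
    """Score a prompt template against labelled test cases.

    render_prompt: turns a case input into a full prompt string
    model: any callable from prompt string -> answer string
    cases: list of (input, expected_answer) pairs
    Returns the fraction of cases answered correctly.
    """
    correct = sum(
        1 for text, expected in cases
        if model(render_prompt(text)).strip() == expected
    )
    return correct / len(cases)

# Deterministic stub standing in for a real model call.
def fake_model(prompt: str) -> str:
    return "positive" if "great" in prompt else "negative"

cases = [("great work", "positive"), ("broken build", "negative")]
score = evaluate_prompt(lambda t: f"Classify sentiment: {t}", fake_model, cases)
```

<p>With this shape, “this prompt is better” becomes “this template scores higher than that one, on these cases, by this metric” - a claim someone else can rerun.</p>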
<h3 id="summary">Summary</h3>
<p>Critical thinking about prompting means treating it as an engineering discipline, not a mystical art. It means evaluating techniques based on evidence, understanding mechanisms not just recipes, and being willing to test rather than assume.</p>
<p>This meta-level critical thinking - about the practice itself - sets the foundation for the next question: how do we embed critical thinking <em>within</em> our prompts?</p>
<h2 id="critical-thinking-in-prompting">Critical Thinking <em>In</em> Prompting</h2>
<p>If critical thinking <em>about</em> prompting is the meta-level question of which techniques work, critical thinking <em>in</em> prompting is the practice-level question: how do we structure prompts to elicit reasoning that meets critical thinking standards?</p>
<p>The distinction matters because AI can produce two very different kinds of output: answers that sound authoritative, and answers that result from careful reasoning. The difference often lies in how we prompt.</p>
<h3 id="requesting-explicit-reasoning">Requesting explicit reasoning</h3>
<p>The simplest technique is to ask for it. “Explain your reasoning” or “think step-by-step” prompts the model to make its inference process visible. Explicit reasoning serves multiple purposes: it lets us evaluate the reasoning, it often improves the quality of the final answer, and it helps us identify where the reasoning might be flawed. Many models have a ‘thinking mode’ that can be activated to perform this kind of reasoning. The benefit extends beyond accuracy to auditability - we can see where the reasoning goes wrong.</p>
<h3 id="prompting-for-analysis">Prompting for analysis</h3>
<p>Rather than asking “what should we do?”, structure prompts to request analysis first: “What are the key factors in this decision? What are the trade-offs between approaches? What assumptions underlie each option?”</p>
<p>Such structured prompts mirror how we’d approach a problem ourselves - decompose it, examine components, identify relationships. When we prompt for analysis explicitly, we’re more likely to get output that’s genuinely analytical rather than pattern-matched from training data.</p>
<h3 id="requesting-evaluation">Requesting evaluation</h3>
<p>You can prompt AI to evaluate claims, evidence, or even its own outputs. “What are the weaknesses in this argument?” or “What evidence would contradict this conclusion?” or “What are the failure modes of this approach?”</p>
<p>Evaluation prompts are particularly valuable in code review scenarios. Rather than asking AI to “review this code,” we might ask: “What edge cases might this code not handle? What assumptions does this implementation make? What would break if requirements change in these specific ways?”</p>
<h3 id="prompting-for-alternative-perspectives">Prompting for alternative perspectives</h3>
<p>Critical thinking requires considering multiple viewpoints. You can explicitly request this: “What are three different ways to approach this problem?” or “What would a security engineer’s concerns be? What would a performance engineer prioritise?”</p>
<p>Requesting multiple perspectives is especially useful when we’re working in a domain where we have gaps in expertise. By prompting for different perspectives, we’re creating a synthetic version of diverse team input.</p>
<h3 id="structured-reasoning-frameworks">Structured reasoning frameworks</h3>
<p>Rather than free-form responses, we can request specific reasoning structures. For instance:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Analyse this architectural decision using these criteria:
- Performance implications
- Maintainability concerns
- Failure modes
- Scaling characteristics
- Team expertise requirements
</code></pre></div></div>
<p>Such frameworks ensure the analysis is comprehensive rather than focusing only on salient features.</p>
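<p>One way to keep such a framework consistent is to render it programmatically, so the criteria list lives in one place and no criterion is silently dropped between prompts. A minimal sketch, using the criteria from the example above:</p>

```python
CRITERIA = [
    "Performance implications",
    "Maintainability concerns",
    "Failure modes",
    "Scaling characteristics",
    "Team expertise requirements",
]

def framework_prompt(decision: str, criteria=CRITERIA) -> str:
    """Render a fixed analysis framework around a specific decision."""
    bullet_list = "\n".join(f"- {c}" for c in criteria)
    return (
        "Analyse this architectural decision using these criteria:\n"
        f"{bullet_list}\n\nDecision: {decision}"
    )

print(framework_prompt("Move session storage from Redis to Postgres"))
```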
<h3 id="prompting-for-assumption-identification">Prompting for assumption identification</h3>
<p>One of the most valuable applications: “What assumptions does this solution make?” or “What would need to be true for this approach to work?”</p>
<p>When designing systems, unexamined assumptions are often where things go wrong. By explicitly prompting for assumption identification, we’re creating a forcing function for critical examination.</p>
<h3 id="self-evaluation-prompts">Self-evaluation prompts</h3>
<p>You can ask AI to evaluate its own outputs: “Review my previous response. What questions should I ask to verify this is correct? What could be wrong with this analysis?”</p>
<p>Self-evaluation is particularly useful in iterative problem-solving. After getting an initial response, asking for self-evaluation often surfaces issues or limitations that weren’t apparent in the first pass.</p>
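<p>In a chat-style workflow, a self-evaluation pass is just a third turn appended to the conversation. The sketch below builds the message list we might send; the review wording is one plausible phrasing, and the model call itself is deliberately out of scope.</p>

```python
def self_review_turns(question: str, draft_answer: str) -> list[str]:
    """Build a three-turn exchange: question, draft answer, self-review request.

    The final turn asks the model to critique its own previous response.
    """
    return [
        question,
        draft_answer,
        "Review your previous response. What questions should I ask to "
        "verify it is correct? What could be wrong with this analysis?",
    ]

turns = self_review_turns(
    "Why is the nightly build failing?",
    "The dependency cache is stale; clearing it should fix the build.",
)
```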
<h3 id="requesting-evidence-and-sources">Requesting evidence and sources</h3>
<p>When AI makes claims, prompt for the basis: “What evidence supports this recommendation? What research or established practices inform this approach?”</p>
<p>While this won’t always yield perfect citations, it shifts the response from assertion to argumentation - from “do this” to “here’s why this makes sense given these principles.”</p>
<h3 id="comparative-analysis">Comparative analysis</h3>
<p>Rather than asking for a single solution, request comparison: “Compare these three approaches across relevant dimensions. What are the strengths and weaknesses of each?”</p>
<p>Comparison forces evaluation rather than mere generation. It’s the difference between “give me a solution” and “help me reason about which solution fits my context.”</p>
<h3 id="the-limitations">The limitations</h3>
<p>It’s important to recognise what this doesn’t do. Prompting for critical thinking doesn’t guarantee the AI has actually engaged in reasoning - it may be pattern-matching what critical thinking looks like. The output may appear rigorous without being rigorous.</p>
<p>Critical thinking about prompting and critical thinking in prompting need to work together for precisely this reason. We prompt for structured reasoning, then apply our own critical thinking to evaluate whether that reasoning is sound.</p>
<h3 id="practical-application">Practical application</h3>
<p>In practice, embedding critical thinking in prompts means moving from:</p>
<table>
<thead>
<tr>
<th>Baseline Prompt</th>
<th>Enhanced Critical Thinking Prompt</th>
</tr>
</thead>
<tbody>
<tr>
<td>Write a function to parse JSON</td>
<td>Write a function to parse JSON. <br />What edge cases should it handle? What errors might occur? <br />How should it fail gracefully?</td>
</tr>
<tr>
<td>Is this a good architecture?</td>
<td>Evaluate this architecture against these criteria: [list].<br /> What are the trade-offs? <br />What assumptions does it make?</td>
</tr>
<tr>
<td>Fix this bug</td>
<td>Analyse this bug. What’s the root cause? <br />What are three possible fixes? <br />What are the implications of each?</td>
</tr>
</tbody>
</table>
<p>The prompts become longer, more structured, and more explicit about the reasoning process we’re requesting. But the output becomes more valuable because it’s not just an answer - it’s reasoning we can examine, question, and build upon.</p>
<h2 id="what-remains-human">What Remains Human</h2>
<p>We started by defining critical thinking: purposeful, self-regulatory judgment involving interpretation, analysis, evaluation, and inference. We’ve examined how AI redistributes this cognitive work across software development and prompting practices. Now we can ask: what does this mean for human cognition when AI is present?</p>
<h3 id="theory-building">Theory-building</h3>
<p>Naur argued that a programmer’s value lies not in the code they produce but in the theory they hold - their understanding of how the program relates to the problem domain. Theory enables appropriate modification, explains design choices, and guides responses to new requirements.</p>
<p>When AI generates code, this theory doesn’t automatically transfer. The code might work, but we don’t necessarily understand why it works, what alternatives were considered, or how it should change when requirements shift. Theory building becomes a more conscious, deliberate practice - we can’t rely on writing code to build theory, so we actively construct understanding of what was generated through critical evaluation.</p>
<h3 id="curation">Curation</h3>
<p>Previously, the hard work was generating solutions - thinking through the problem, designing an approach, implementing it carefully. Evaluation was important but often easier than generation, so the cognitive load was front-loaded.</p>
<p>AI inverts this relationship: generation becomes fast while evaluation becomes the bottleneck, requiring us to judge not just correctness but appropriateness, maintainability, and contextual fit. We’re assessing whether the generated solution reflects good thinking about the problem, forming judgments about work we didn’t create, often rapidly and while our mental model is still forming. The critical thinking skills - interpretation, analysis, evaluation - operate more continuously than when we controlled the pace of generation.</p>
<h3 id="human-judgement">Human Judgement</h3>
<p>AI analyses patterns across millions of codebases and suggests implementations based on statistical regularities. But it doesn’t hold theory or understand our specific context - the organisational constraints, planned system evolution, or tacit team knowledge about what matters.</p>
<p>Human critical thinking remains essential precisely here: evaluating contextual fit, judging trade-offs against specific constraints, and determining whether solutions address actual problems rather than surface manifestations. The dispositional elements from the Delphi study - inquisitiveness, systematicity, judiciousness, truth-seeking - become more important as plausible-sounding AI suggestions demand the disposition to question, probe for weaknesses, and consider alternatives.</p>
<h3 id="collaboration">Collaboration</h3>
<p>Psychological safety research found that critical thinking isn’t purely individual cognition - it’s collective. Teams need environments where members can question each other’s reasoning, including decisions about accepting AI-generated solutions. The quality of team reasoning may depend on whether people feel comfortable admitting “I don’t fully understand this AI-generated code” rather than feigning comprehension they lack.</p>
<h3 id="whats-automated-what-isnt">What’s automated, what isn’t</h3>
<p>AI automates pattern matching and statistical synthesis at scale - recognising “this looks like that” and generating outputs following familiar patterns, known from its training data. What it doesn’t automate is judgment about pattern appropriateness. It lacks goals, constraints, or understanding of consequences beyond immediate output. It doesn’t hold theory or care whether solutions fit our context.</p>
<p>These require human critical thinking: interpreting complex, ambiguous situations; evaluating against implicit or contextual criteria; and self-regulation - catching biases, questioning assumptions, recognising when we’re accepting something because it <em>sounds</em> good rather than because it <em>is</em> good.</p>
<h3 id="the-development-of-expertise">The Development of Expertise</h3>
<p>If expertise develops through practice (the prevailing theory), and AI now handles much of what used to be practice, how will people develop expertise? This question echoes concerns that emerged with calculators and IDEs - tools that automated cognitive work. The answer has generally been that people learn different things at different levels of abstraction. Yet it reinforces that critical thinking becomes more important as AI handles routine implementation, leaving humans to excel at the non-routine: judgment calls, context-dependent decisions, and recognising when standard patterns don’t apply.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Working with AI doesn’t reduce the need for critical thinking; it redistributes where that thinking is needed and demands it with greater intensity and discipline. Less in generation, more in evaluation; less in implementation, more in judgment about appropriateness; less in writing code, more in building and maintaining theory about systems.</p>
<p>The standards remain constant: clarity, accuracy, precision, relevance, depth, breadth, logic, significance. The dispositional elements remain essential: inquisitiveness, systematicity, judiciousness, truth-seeking, open-mindedness. What changes is the context - we’re thinking critically not just about problems and solutions, but about AI outputs, prompting practices, and whether understanding is being built or merely code generated.</p>
<p>Perhaps the core insight is this: <strong>AI is powerful not because it replaces human thinking, but because it creates conditions where high-quality human thinking becomes more valuable</strong>. The question isn’t whether to think critically - it’s whether we’re deliberate about doing so in this new context.</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:Facione" role="doc-endnote">
<p><a href="https://eric.ed.gov/?id=ED315423">Facione, P. A. (1990). Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction. The Delphi Report.</a> <a href="#fnref:Facione" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:Naur" role="doc-endnote">
<p><a href="https://gwern.net/doc/cs/algorithm/1985-naur.pdf">Naur, P. (1985). Programming as Theory Building. Microprocessing and Microcomputers, 14(11), 365-371.</a> <a href="#fnref:Naur" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:Brooks" role="doc-endnote">
<p><a href="https://ieeexplore.ieee.org/document/1663532">Brooks, F. P. (1987). No Silver Bullet: Essence and Accidents of Software Engineering. Computer, 20(4), 10-19.</a> <a href="#fnref:Brooks" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:Edmondson" role="doc-endnote">
<p><a href="https://www.hbs.edu/faculty/Pages/item.aspx?num=2959">Edmondson, A. C. (1999). Psychological Safety and Learning Behavior in Work Teams. Administrative Science Quarterly, 44(2), 350-383.</a> <a href="#fnref:Edmondson" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:Wei" role="doc-endnote">
<p><a href="https://arxiv.org/abs/2201.11903">Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., … & Zhou, D. (2022). Chain of Thought Prompting Elicits Reasoning in Large Language Models. </a> <a href="#fnref:Wei" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:Snell" role="doc-endnote">
<p><a href="https://arxiv.org/abs/1703.05175">Snell, J., Swersky, K., & Zemel, R. (2017). Prototypical Networks for Few-shot Learning.</a> <a href="#fnref:Snell" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:Sheppard" role="doc-endnote">
<p><a href="https://www.semanticscholar.org/paper/Evidence-Based-Software-Engineering-and-Systematic-Kitchenham-Budgen/266aa9741c6559af0c6dcee2e1947ced0385b4bd">Shepperd, M., Kitchenham, B., Budgen, D., Brereton, P., & Mair, C. (2014). A systematic review of evidence for software engineering practice. Information and Software Technology, 56(11), 1230-1246.</a> <a href="#fnref:Sheppard" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
<p><a href="https://capgemini.github.io/engineering/critical-thinking-age-of-ai/">Critical Thinking in the Age of AI</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on November 17, 2025.</p>
<h1 id="how-to-ask-for-help">How to Ask for Help</h1>
<p><em>By <a href="https://capgemini.github.io/authors#author-malcolm-young">Malcolm Young</a>, 4 July 2025</em></p>
<p>So you have a difficult problem to solve, and you’ve decided to ask for help with it. Congratulations! Recognising you’ve got a problem is the first step to solving it. Nobody knows how to solve every problem, so <a href="https://capgemini.github.io/development/its-sometimes-clever-to-admit/">you shouldn’t feel bad about needing to ask for help</a>. The next step, which a lot of people find difficult, is to do the asking in a way that helps people to help you. This article aims to help you with that.</p>
<p>A lot of what I want to say here is already covered in Eric Raymond’s classic essay <a href="http://www.catb.org/~esr/faqs/smart-questions.html">How To Ask Questions The Smart Way</a>, and I’d recommend reading it. Having said that, it was written in 2001, with open source hackers in mind, and isn’t the easiest read (not least because of the aggressive tone, which hints at some of the <a href="https://en.wikipedia.org/wiki/Eric_S._Raymond">troubling aspects of the author’s worldview</a>), so I want to write a newer, kinder version, with some of my own ideas added.</p>
<h2 id="whats-your-problem">What’s your problem?</h2>
<p>Before anyone can solve a problem, they need to know what the problem is. If you want people to help you, help them by providing a clear and concise explanation of what it is that you need help with.</p>
<p>Being able to explain things clearly is one of the most important skills a developer needs in my view, both when dealing with clients, and when <a href="https://capgemini.github.io/development/how-we-work/">helping teammates to learn</a>. When I interview developers, one of the things I want to know is whether they can explain concepts clearly and without assuming prior knowledge.</p>
<h2 id="what-have-you-done-for-me-lately">What have you done for me lately?</h2>
<p>Most people generally want to be helpful, but <a href="https://contextkeeper.io/blog/the-real-cost-of-an-interruption-and-context-switching/">context switching is a big problem for anyone who needs to solve complex problems</a>, so when somebody is trying to help you, they’re already giving you a big piece of their attention and energy - don’t make it harder than it needs to be to help you.</p>
<p>Hopefully you haven’t just given up at the first hurdle and asked for help before at least trying a few different things, searching, or asking your AI tool of choice. Before other people can help you, you should do some work for yourself. If all you’re saying is “please help me with this”, you may as well be saying “please do my job for me”.</p>
<h2 id="before-you-ask-the-question">Before you ask the question</h2>
<p>I can’t say this any better than <a href="https://www.xkcd.com/627/">Randall Munroe of XKCD</a>, so I’ll share his advice here:</p>
<p><img src="https://imgs.xkcd.com/comics/tech_support_cheat_sheet.png" alt="flowchart of how to solve computer problems" /></p>
<p>This article is basically a more advanced version of that flowchart for software engineers.</p>
<h2 id="why-is-it-important-to-ask-in-the-right-way">Why is it important to ask in the right way?</h2>
<p>When I was at university I worked on the student union stage crew, doing sound and light for gigs and discos. We were volunteers, but as I’ve written elsewhere, <a href="https://red-route.org/articles/close-enough-side-project-how-know-when-things-are-good-enough">being amateurs didn’t diminish our levels of professionalism</a>. We were trained and supported by a full time technician.</p>
<p>At the end of the night, if any of the equipment wasn’t working, we had to report it to the technician, so that he could work on it the next day. One of his pet peeves was if someone reported a problem, but didn’t give him any more details beyond saying “it’s not working”. He was a busy man, and the more information we could give him, the less time he would have to spend figuring out what the problem was, and the more time he could spend on fixing it. The people reporting the problem already knew what the problem was, so for us to spend a few moments explaining it to him would save him from having to diagnose it.</p>
<p>How is it not working? For example, with a disco light: does it completely fail to turn on, or does it flicker intermittently? Is it dimmer than it should be?</p>
<p>He was happy to help, but like Eric Raymond and his fellow hackers, if we were wasting his time, he would call us out for it publicly, and his criticism would sting.</p>
<p>The lessons I learned there have stayed with me for a long time, and some of them are very relevant to software development. In particular, there is a lot of value in <a href="https://capgemini.github.io/testing/effective-bug-reports/">clear and effective bug reports</a>, which can save developers a lot of time. When a developer needs to ask for help, you could think of it as a bug report on work in progress, and a thoughtful question will save time for the person helping.</p>
<h2 id="where-should-you-ask">Where should you ask?</h2>
<p>Your first instinct might be to ask your team lead. After all, they’re more experienced, they’re familiar with the code, and it’s part of their job to help you. A quick message to them and they’ll surely solve your problem in no time. The trouble is that they’re busy, and they’re not the only person on the team who can help.</p>
<p>You might feel shy about asking for help in your team’s chat channels, but please don’t ask for help in private messages. It’s almost always better to <a href="https://capgemini.github.io/culture/ask-in-the-channel/">ask in the channel</a> - that way you increase the chances of somebody answering, and the rest of the team will benefit from the answer.</p>
<p>If it’s a problem with open source code, ask in the relevant support forums for that project. For more general queries, Stack Overflow and its siblings might be more appropriate.</p>
<p>Wherever you ask, it’s always best to search before asking - don’t waste people’s time by asking a question that has already been answered.</p>
<h2 id="rwtfm">R(&W)TFM</h2>
<p>You have read the documentation, haven’t you? If not, then please do. The people who seem to know more than you aren’t magical gurus with every answer - they’ve just spent time learning things. If you’ve read the documentation, and you don’t understand how to do something basic on your project, it’s probably a sign that the documentation needs to be improved - once you’ve found an answer, apply the <a href="https://snappify.com/blog/boy-scout-rule">boy scout rule</a> to your documentation and share what you’ve learned.</p>
<h2 id="a-picture-isnt-always-worth-a-thousand-words">A picture isn’t always worth a thousand words</h2>
<p>If you have an error message, don’t screenshot it - copy and paste the text. That makes it a lot easier to search for, either online or in the source code.</p>
<p>Similarly, a picture of some code is hard to read and understand, and chat software like Teams or Slack is a poor medium for sharing it. If you’ve got some partly working code, or even just an idea, and you need a few pointers, please don’t dump the code into a chat. Create a pull request with your work in progress, and make it clear that it isn’t ready to be merged - most code repositories have a way to <a href="https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/changing-the-stage-of-a-pull-request">mark a pull request as draft</a> or <a href="https://support.atlassian.com/bitbucket-cloud/docs/review-code-in-a-pull-request/#Create-tasks">create tasks that will prevent it from being merged</a>.</p>
<h2 id="what-are-you-trying-to-achieve">What are you trying to achieve?</h2>
<p>It’s useful to know the context of your question. Sometimes it might not be the right question to ask - there may be a better way to approach the problem. The context can help other people to understand the actual problem you’re trying to solve, not just the problem that you’ve got stuck on.
If you’re halfway down the road, you might need to think about whether it’s the right path - there may be a completely different approach that you haven’t thought of.
Perhaps you’re finding things difficult because you’re not <a href="https://capgemini.github.io/drupal/module-already/">working with the grain</a> - maybe the problem can be avoided altogether.</p>
<h2 id="isolate-the-problem">Isolate the problem</h2>
<p>When trying to solve a problem, it’s very easy to get distracted by other things. Is it possible to create a minimal test case that demonstrates the problem? For example, if it’s a front end problem, can you create a <a href="https://codepen.io/">CodePen</a> that has only the elements you need to demonstrate it?</p>
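<p>The same idea applies beyond the front end: cut the failing behaviour down to the smallest self-contained snippet that still shows it. A toy sketch in Python (the names and data are invented for illustration):</p>

```python
def reproduce():
    # Minimal reproduction: no framework, no database, no config -- just
    # the smallest input that still triggers the surprising behaviour.
    record = {"name": "Ada"}
    return record.get("email").lower()  # raises AttributeError: 'NoneType' object has no attribute 'lower'
```

<p>A few lines like this are far easier for a helper to reason about than a whole failing feature branch.</p>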
<p>This is also a good idea for debugging generally: Be scientific. Experiment with what might fix the problem by changing one thing at a time and seeing what happens. Take notes about the experiment so that you know what made a difference, and which variables were important.</p>
<h2 id="help-yourself">Help yourself</h2>
<p>If you take the time to formulate a good question, often you find that you’re able to answer the question yourself. This is the basis of <a href="https://rubberduckdebugging.com/">rubber duck debugging</a> - the act of preparing the question helps you to understand the problem more clearly, and that can often help you to solve it without needing someone else to help. I’ve lost count of the number of times I’ve got part way through asking for help and answered the question myself, leaving the other person with nothing to do beyond accepting my thanks for their time.</p>
<p>If you’ve taken the time to do all that, and you haven’t solved the problem, then by all means ask for help, but remember that other people don’t owe you their time. Whether they’re strangers on the internet or colleagues on your team, be respectful of the time that they are giving you, and help them to help you by asking good questions. If you ask a good question, you’re much more likely to get a good answer.</p>
<p>I hope that this is useful, but if it isn’t clear, please ask for help (in the right way).</p>
<p><a href="https://capgemini.github.io/development/how-to-ask-for-help/">How To Ask For Help</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on July 04, 2025.</p>https://capgemini.github.io/engineering/standard-model-forces-of-software-engineering2025-06-11T00:00:00+01:002025-06-11T00:00:00+01:00Stuart Williamshttps://capgemini.github.io/authors#author-stuart-williams
<p>Just as the universe is governed by a handful of fundamental
forces (gravity, electromagnetism, and the strong and weak nuclear
forces), software engineering, too, is shaped by irreducible forces.
These forces, drawn from the deep structure of our discipline, define
what is possible, what is difficult, and why certain patterns of success
and failure recur.</p>
<p>Viewing software engineering through this lens allows us to see
not just a collection of best practices, but a coherent framework
grounded in the very nature of software itself.</p>
<h2 id="synthesis-a-standard-model-of-software-engineering">Synthesis: A Standard Model of Software Engineering</h2>
<p>These three forces can be viewed as an interdependent foundation for
software engineering:</p>
<ul>
<li><strong>Organised Complexity</strong> creates the challenge.</li>
<li><strong>Cognitive Theory</strong> makes human understanding the central resource
and bottleneck.</li>
<li><strong>Mathematical Limits</strong> define the boundaries within which all
software must operate.</li>
</ul>
<p>All established principles and recurring failure patterns in software
engineering can be traced to the interplay of these forces. Just as the
Standard Model in physics explains the behaviour of matter and energy,
this “Standard Model” of software engineering explains why certain
practices succeed and why the same mistakes recur across generations.</p>
<h2 id="1-the-force-of-organised-complexity">1: The Force of Organised Complexity</h2>
<h3 id="summary">Summary</h3>
<p>Like gravity, which acts everywhere and shapes the structure of the
universe, the force of organised complexity is ever-present in software.
It binds components together and makes their interactions non-trivial.
This force explains why software systems are more than the sum of their
parts, why small changes can have unpredictable effects, and why
complexity can never be fully eliminated, only managed. Just as gravity
cannot be “turned off,” complexity is an inescapable property of
software systems.</p>
<p><strong>Key Insights:</strong></p>
<ul>
<li>Complexity is fundamental, not a sign of poor design.</li>
<li>Small changes can ripple unpredictably.</li>
<li>Abstraction and modularity help but cannot eliminate complexity.</li>
</ul>
<p>Imagine trying to explain how a city works by describing the properties
of every building, road, and person individually - you’d miss the
crucial patterns: how its inhabitants wear paths, operate machines and
vehicles, and connect with and influence each other.</p>
<p>Software is similar: it’s not just a collection of instructions that
computers follow, but an intricate web of interconnected parts in which
a change to one small thing can ripple through the entire system in
unexpected ways. Unlike a physical machine where you can isolate and fix
a broken gear, software problems often emerge from the complex
interactions between parts that each work perfectly on their own.</p>
<p>When software seems to break mysteriously, or when adding a “simple”
feature takes months, or when fixing one bug creates three new ones,
it’s not incompetence. It’s because software engineers are managing
thousands or millions of precise interactions that all must work
together perfectly, and human minds simply aren’t equipped to hold all
these connections at once.</p>
<h3 id="theoretical-foundation">Theoretical Foundation</h3>
<p>Software represents a unique form of organised complexity that cannot be
decomposed into simpler elements without losing essential properties.
This principle derives from several theoretical foundations:</p>
<p><strong>Systems Theory</strong>: <a href="https://en.wikipedia.org/wiki/Herbert_A._Simon">Herbert Simon’s</a> architecture of complexity
demonstrates that hierarchical systems exhibit emergent properties that
are not present in their components. Software systems exemplify this
principle - the behaviour of a complete application cannot be predicted
solely from understanding individual functions or modules.</p>
<p><strong>Complexity Science</strong>: <a href="https://en.wikipedia.org/wiki/Warren_Weaver">Warren Weaver’s</a> distinction between
disorganised complexity (amenable to statistical methods) and organised
complexity (requiring new analytical approaches) places software firmly
in the latter category. Unlike physical systems governed by statistical
mechanics, software exhibits precise, deterministic complexity where
every detail matters.</p>
<p><strong>Essential vs Accidental Complexity</strong>: <a href="https://en.wikipedia.org/wiki/Fred_Brooks">Fred Brooks</a> identified that
software possesses irreducible essential complexity inherent to the
problem domain. No methodology, tool, or abstraction can eliminate this
fundamental intricacy - it can only be managed, not simplified away.</p>
<h3 id="implications">Implications</h3>
<p>This Fundamental Force establishes that:</p>
<ul>
<li>Abstraction layers inevitably leak, as Joel Spolsky observed, because
complexity cannot be fully hidden</li>
<li>Modularity provides boundaries but cannot eliminate interdependencies</li>
<li>Testing cannot achieve complete coverage due to combinatorial
explosion</li>
<li>Documentation can never fully capture system behaviour</li>
</ul>
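<p>The combinatorial point is easy to make concrete: every independent boolean condition doubles the number of paths through a program, so exhaustive testing quickly becomes infeasible. A small Python sketch:</p>

```python
from itertools import product

# Each independent boolean condition doubles the number of execution paths.
def path_count(conditions):
    return 2 ** conditions

# 30 independent conditions already give over a billion combinations:
assert path_count(30) == 1_073_741_824

# Even exhaustively enumerating 20 conditions is a million test cases:
cases = list(product([False, True], repeat=20))
assert len(cases) == 1_048_576
```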
<h2 id="2-the-force-of-cognitive-theory">2: The Force of Cognitive Theory</h2>
<h3 id="summary-1">Summary</h3>
<p>Comparable to electromagnetism, which governs interactions at both the
atomic and macroscopic scale, the cognitive force in software is about
the transfer and maintenance of understanding. It’s the “field” that
connects human minds to code and to each other. This force explains why
code is not the software itself, but a representation of shared mental
models. It governs knowledge transfer, team dynamics, and the
persistence (or loss) of understanding over time.</p>
<h4 id="key-insights">Key Insights:</h4>
<ul>
<li>Software exists as a theory in human minds and teams.</li>
<li>Code is an imperfect externalisation of this theory.</li>
<li>Knowledge transfer is about reconstructing mental models, not just
reading documentation.</li>
</ul>
<h3 id="explanation">Explanation</h3>
<p>When you look at code, you might think you’re seeing the software
itself - but you’re looking at something more like sheet music. Just as
sheet music isn’t the music but rather instructions for creating music,
code is really instructions that reflect a programmer’s understanding
of how to solve a problem, and those instructions are translated through
various processes into more detailed sequences of instructions that
execute on the machine. The most real representation of the software
lives in the development team’s mental model of what the system is
supposed to do, what it actually does, why it does it, and how all the
pieces fit together. We try to represent our collective mental model
using software architecture, references to design patterns, various
types of diagrams and the typed code – which is just an attempt to
capture this understanding in a form that computers can execute.</p>
<p>This is why a team can inherit “perfectly good code” from another team
and still struggle to work with it - they’ve received the sheet music
but not the understanding of how to perform it. It’s why programmers
spend so much time in meetings drawing diagrams and discussing
approaches rather than typing, and why losing key team members can
cripple a project even when all the code remains. The challenge isn’t
writing code; it’s building and sharing the mental theory of what the
software should do and how it should work. When developers say they need
to “understand the codebase,” they’re not memorising the code -
they’re reconstructing the original thinking behind it.</p>
<h3 id="theoretical-foundation-1">Theoretical Foundation</h3>
<p>Software exists primarily as a theory in human cognitive systems - both
individual minds and the collective understanding distributed across
teams - with code serving as an imperfect externalisation of this
knowledge.</p>
<p>This perspective draws from multiple theoretical traditions:</p>
<p><strong>Cognitive Science</strong>: <a href="https://en.wikipedia.org/wiki/Peter_Naur">Peter Naur’s</a> 1985 paper “Programming as Theory
Building” establishes that programming consists of building and
maintaining a theory about how computational processes solve real-world
problems. The program text merely reflects this theory but cannot fully
contain it. The theory (of a piece of software) often exists not in a
single mind but distributed across team members who each hold partial
understanding.</p>
<p><strong>Philosophical Foundations</strong>: <a href="https://en.wikipedia.org/wiki/Michael_Polanyi">Michael Polanyi’s</a> concept of tacit
knowledge explains why software understanding transcends documentation.
Much of what programmers know about their systems exists as personal,
embodied knowledge that cannot be fully articulated. This tacit
knowledge accumulates not just individually but culturally within teams.</p>
<p><strong>Distributed Cognition</strong>: <a href="https://en.wikipedia.org/wiki/Edwin_Hutchins">Edwin Hutchins’</a> framework shows how
cognitive processes extend across multiple agents and artefacts. In
software teams, understanding emerges from the interaction between
individual mental models, shared representations, and external tools.
Recent research on cumulative culture in problem-solving suggests that
software solutions emerge through collective learning across time, with
each generation of developers building upon previous understanding.</p>
<p><strong>Social Learning Theory</strong>: Software development exhibits
characteristics of cumulative culture (per <a href="https://www.drcathicks.com/">Cat Hicks</a> et al.), where
knowledge accumulates through social transmission and collaborative
refinement. Teams naturally develop shared languages, patterns, and
practices that embody collective understanding beyond what any
individual member possesses.</p>
<h3 id="implications-1">Implications</h3>
<p>This Fundamental Force reveals that:</p>
<ul>
<li>Code without its accompanying theory becomes unmaintainable “legacy”
systems</li>
<li>Successful software development requires shared mental models among
team members</li>
<li>Knowledge transfer involves theory reconstruction, not just
information transmission</li>
<li>Teams naturally evolve collective understanding through documented
patterns, shared practices, and cultural transmission</li>
<li>The most successful teams develop mechanisms for preserving and
transmitting understanding across time</li>
<li>AI participation in development introduces new forms of cognitive
partnership where understanding emerges from human-AI dialogue</li>
</ul>
<h2 id="3-the-force-of-mathematical-limits">3: The Force of Mathematical Limits</h2>
<h3 id="summary-2">Summary</h3>
<p>Like the strong and weak nuclear forces, which set hard boundaries on
what is possible in the physical world, mathematical limits in software
define what can and cannot be achieved, regardless of skill or effort.
This force explains why some problems are undecidable, why perfect
verification is impossible, and why trade-offs and heuristics are
necessary. These are not limitations of current technology, but
fundamental boundaries proven by mathematics.</p>
<h4 id="key-insights-1">Key Insights:</h4>
<ul>
<li>Some problems are mathematically undecidable.</li>
<li>Perfect verification is impossible for non-trivial systems.</li>
<li>Heuristics and approximations are not compromises, but necessities.</li>
</ul>
<p>Many people assume that with enough time, effort, and skill, programmers
can make software do anything and fix any problem. But software faces
hard mathematical limits - walls that cannot be climbed no matter how
clever we are. It’s like asking someone to draw a square circle or find
the highest number; these aren’t difficult tasks, they’re impossible
ones. Computer science has mathematically proven that certain things
software simply cannot do, such as perfectly predicting whether any
given program will crash or run forever or automatically finding the
absolute best solution to many scheduling and optimisation problems.</p>
<p>These aren’t limitations of current technology that future advances
will overcome - they’re as fundamental as the laws of physics. This is
why your computer sometimes freezes trying to solve seemingly simple
problems, why software can’t automatically find and fix all its own
bugs, and why programmers often talk about “good enough” solutions
rather than perfect ones. When software companies say they can’t
guarantee their products are completely bug-free or perfectly secure,
they’re not making excuses - they’re acknowledging mathematical
reality. Understanding these limits helps explain why software
development remains difficult and expensive despite decades of
technological advancement.</p>
<h3 id="theoretical-foundation-2">Theoretical Foundation</h3>
<p>Software engineering confronts absolute boundaries defined by
mathematical logic and computation theory. These limits shape what is
possible and guide how principles must be formulated:</p>
<p><a href="https://en.wikipedia.org/wiki/Computability_theory"><strong>Computability Theory</strong></a>: Turing’s halting problem and related
undecidability results prove that fundamental questions about program
behaviour cannot be algorithmically determined. No amount of engineering
sophistication can overcome these limits.</p>
<p><a href="https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems"><strong>Gödel’s Incompleteness Theorems</strong></a>: These establish that no formal
system can be both complete and consistent. Software systems, as formal
structures, inherit these limitations - there will always be true
properties that cannot be formally proven.</p>
<p><a href="https://en.wikipedia.org/wiki/Information_theory"><strong>Information Theory</strong></a>: Shannon’s theorems define minimum complexity
for representing information. Data compression, error correction, and
communication all face theoretical boundaries that constrain software
design.</p>
<p><a href="https://en.wikipedia.org/wiki/Computational_complexity_theory"><strong>Computational Complexity</strong></a>: The P vs NP problem and complexity classes
define which problems can be efficiently solved. Many critical software
engineering tasks (optimisation, verification, testing) face exponential
complexity barriers.</p>
<h3 id="implications-2">Implications</h3>
<p>This Fundamental Force necessitates that:</p>
<ul>
<li>Perfect verification remains impossible for non-trivial systems</li>
<li>Optimisation must target “good enough” rather than optimal solutions</li>
<li>Heuristics and approximations become essential tools rather than
compromises</li>
<li>Human judgement remains irreplaceable for navigating undecidable
territories</li>
</ul>
<h2 id="synthesis-the-foundational-trinity">Synthesis: The Foundational Trinity</h2>
<p>These three Fundamental Forces form an interdependent foundation:</p>
<p><strong>Organised Complexity</strong> creates the challenge that cannot be wished
away through better tools or methods. It represents the irreducible
difficulty of mapping computational processes to real-world problems.
Teams cope with this complexity not through individual brilliance but
through accumulated collective knowledge - patterns, libraries, and
practices that evolve over time.</p>
<p><strong>Cognitive Theory</strong> makes human understanding the central
challenge and resource. Software quality depends on the clarity and
shareability of mental models, which exist not just in individual minds
but distributed across teams and time. Mature teams develop cultural
mechanisms - mentoring, documentation practices, code reviews - that
preserve and transmit this understanding.</p>
<p><strong>Mathematical Limits</strong> define the boundary conditions within which all
software engineering must operate. They transform the discipline from
seeking perfect solutions to navigating trade-offs within fundamental
constraints. Collective exploration of the possibility space often
yields better approximations than individual attempts.</p>
<p>Together, these Fundamental Forces explain why software engineering
differs qualitatively from other engineering disciplines. Physical
engineering works with materials governed by continuous mathematics and
statistical properties. Software engineering manipulates discrete, exact
structures where small changes can have unbounded effects, where
understanding exists primarily in minds rather than blueprints, and
where mathematical impossibility theorems directly constrain practice.</p>
<h2 id="established-principles-explained-through-the-fundamental-forces">Established Principles Explained Through the Fundamental Forces</h2>
<p>Well-known software engineering principles emerge as logical
consequences of these foundational realities:</p>
<h3 id="dry-dont-repeat-yourself">DRY (Don’t Repeat Yourself)</h3>
<p>Every piece of knowledge should have a single, unambiguous
representation in a system. This emerges from <strong>Fundamental Forces 1 and
2</strong>: duplication multiplies the complexity of maintaining mental
models - when the same concept exists in multiple places, developers
must synchronise multiple theories, violating cognitive limits.</p>
<h3 id="yagni-you-arent-gonna-need-it">YAGNI (You Aren’t Gonna Need It)</h3>
<p>Don’t add functionality until needed. This follows from <strong>Fundamental
Forces 1 and 3</strong>: every addition increases complexity exponentially due
to interactions. Given the inability to perfectly predict future needs,
premature abstraction adds complexity without proven value.</p>
<h3 id="conways-law">Conway’s Law</h3>
<p>Organisations design systems mirroring their communication structures. A
direct consequence of <strong>Fundamental Force 2</strong>: since software is
fundamentally theory in human cognitive systems, and theories are built
through communication, communication channel structure inevitably shapes
the shared theory - and thus the software.</p>
<h3 id="single-responsibility-principle">Single Responsibility Principle</h3>
<p>A class/module should have only one reason to change. This arises from
<strong>Fundamental Forces 1 and 2</strong>: multiple responsibilities create
cognitive overhead - humans must maintain multiple, potentially
conflicting theories about what a component does. Limiting
responsibility limits mental model complexity.</p>
<h3 id="fail-fast">Fail Fast</h3>
<p>Systems should detect and report failures immediately. This emerges from
<strong>Fundamental Force 3</strong>: since correctness cannot be proven for complex
systems, failures must be assumed. Early detection minimises error
propagation, keeping debugging tractable despite analytical limitations.</p>
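<p>A minimal sketch of the principle: validate at the boundary and raise immediately, so the failure surfaces where the diagnostic context still exists rather than deep inside the system. The function name is invented for illustration:</p>

```python
def set_retry_limit(limit):
    # Fail fast: reject the bad value at the point of entry, where the
    # caller and the offending input are both still in view.
    if not isinstance(limit, int) or limit < 0:
        raise ValueError(f"retry limit must be a non-negative integer, got {limit!r}")
    return limit
```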
<h3 id="separation-of-concerns">Separation of Concerns</h3>
<p>Different program aspects should be separated into distinct sections.
This follows from all three Fundamental Forces: reduces interaction
complexity (Fundamental Force 1), allows different theories to be held
separately (Fundamental Force 2), and enables partial verification
within mathematical limits (Fundamental Force 3).</p>
<h3 id="openclosed-principle">Open/Closed Principle</h3>
<p>Software entities should be open for extension but closed for
modification. This arises from <strong>Fundamental Force 2</strong>: modifying
existing code requires reconstructing the original theory, which may be
incomplete or lost. Extension allows new theories to be built on stable
foundations without disturbing existing understanding.</p>
<h3 id="liskov-substitution-principle">Liskov Substitution Principle</h3>
<p>Superclass objects should be replaceable with subclass objects without
breaking the system. This emerges from <strong>Fundamental Forces 2 and 3</strong>:
it maintains theoretical consistency - developers can reason about base
classes without knowing all implementations, a partial knowledge
strategy working within the limits of complete system analysis.</p>
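<p>The classic rectangle/square example sketches a violation: the subclass type-checks as a substitute but breaks the behavioural theory that callers rely on:</p>

```python
class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def set_width(self, w):
        self.width = w

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, w):
        # Preserving the square invariant silently changes height too,
        # invalidating code that reasons only with the Rectangle theory.
        self.width = self.height = w

def stretch(rect):
    rect.set_width(10)
    return rect.area()

# stretch(Rectangle(2, 5)) == 50, but stretch(Square(5)) == 100:
# substituting the subclass breaks the caller's reasoning.
```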
<h3 id="principle-of-least-astonishment">Principle of Least Astonishment</h3>
<p>Software should behave as users and developers expect. A direct result
of <strong>Fundamental Force 2</strong>: surprising behaviour forces theory
reconstruction. Predictable behaviour allows existing theories to remain
valid, reducing cognitive load.</p>
<h3 id="loose-coupling-and-high-cohesion">Loose Coupling and High Cohesion</h3>
<p>Components should minimise interdependencies while grouping related
functionality. <strong>Loose coupling</strong> (from Fundamental Force 1) limits the
combinatorial explosion of interactions. <strong>High cohesion</strong> (from
Fundamental Force 2) ensures modules correspond to coherent theories - a
single mental model explains the module’s behaviour.</p>
<h3 id="continuous-integrationdeployment">Continuous Integration/Deployment</h3>
<p>Integrate code frequently and deploy regularly in small increments. This
arises from all three Fundamental Forces: small changes limit complexity
growth (Fundamental Force 1), frequent integration keeps team theories
synchronised (Fundamental Force 2), and small increments make debugging
tractable within analytical limits (Fundamental Force 3).</p>
<h3 id="test-driven-development">Test-Driven Development</h3>
<p>Write tests before implementation code. This follows from <strong>Fundamental
Forces 2 and 3</strong>: tests externalise the theory of what code should do.
Given the inability to prove correctness, tests provide empirical
evidence that theory matches implementation.</p>
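<p>A minimal sketch of the idea, with an invented <code>slugify</code> example: the test is written first as an executable statement of the theory, and the implementation is then written to match it:</p>

```python
import unittest

class SlugifyTheory(unittest.TestCase):
    # Written first: an executable statement of what slugify should do,
    # before any implementation exists.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

# Implementation written afterwards to make the externalised theory pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")
```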
<h3 id="information-hidingencapsulation">Information Hiding/Encapsulation</h3>
<p>Modules should reveal minimal internal details. This emerges from
<strong>Fundamental Forces 1 and 2</strong>: hiding implementation reduces the
complexity others must understand and allows different theories to
coexist - users need only understand interface theory, not
implementation theory.</p>
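<p>A minimal sketch: callers need only the interface theory of a counter; the storage strategy stays hidden and can change without disturbing their mental model:</p>

```python
class Counter:
    """Interface theory: increment() then value() reflects the count."""

    def __init__(self):
        self._count = 0  # implementation detail, hidden by convention

    def increment(self):
        self._count += 1

    def value(self):
        return self._count
```

<p>The internal representation could later become a database row or a shared atomic without changing what callers need to understand.</p>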
<h3 id="premature-optimisation-as-anti-pattern">Premature Optimisation as Anti-Pattern</h3>
<p>Don’t optimise before identifying actual performance problems. This
follows from <strong>Fundamental Forces 3 and 1</strong>: performance characteristics
cannot be predicted analytically, and optimisation adds complexity.
Adding complexity without empirical evidence of need violates both
constraints.</p>
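<p>The empirical alternative is to measure first. A minimal sketch using Python's standard profiler, with two hypothetical pipeline stages standing in for real code:</p>

```python
import cProfile
import pstats
import io

# Two hypothetical stages of a pipeline; which one dominates
# runtime is an empirical question, not an analytical one.
def parse_records(n):
    return [str(i).split(",") for i in range(n)]

def score_records(records):
    return sum(len(r) for r in records)

def pipeline():
    return score_records(parse_records(50_000))

profiler = cProfile.Profile()
profiler.enable()
pipeline()
profiler.disable()

# Only after measuring do we know where optimisation effort pays off.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

<p>Only once the profile shows where time actually goes is the added complexity of an optimisation justified.</p>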
<p>These principles aren’t arbitrary “best practices” but logical
consequences of software’s fundamental nature. Understanding their
theoretical basis explains why violating them causes problems and why
they’re rediscovered independently across teams and cultures.
Principles focusing on <strong>Fundamental Force 1</strong> manage complexity through
decomposition and isolation. Those emphasising <strong>Fundamental Force 2</strong>
ensure cognitive manageability and theory preservation. <strong>Fundamental
Force 3</strong> principles acknowledge analytical limitations and promote
empirical approaches. The most sophisticated principles navigate all
three constraints simultaneously.</p>
<h2 id="common-paths-to-failure">Common Paths to Failure</h2>
<p>Just as established principles emerge from respecting the three
Fundamental Forces, common failure patterns arise from violating or
ignoring these foundational constraints. These recurring disasters in
software engineering are not random misfortunes but predictable
consequences of working against the fundamental nature of software.</p>
<h3 id="the-hero-programmer-dependency">The Hero Programmer Dependency</h3>
<p>Relying on a single brilliant developer who holds all system knowledge
in their head. This violates <strong>Fundamental Force 2</strong> catastrophically:
theory exists in one mind instead of being distributed across the team.
When the hero leaves, the theory dies with them, leaving behind
incomprehensible code. The organisation discovers too late that it
possessed code but not software - the cognitive artefact departed with
the individual.</p>
<h3 id="the-big-rewrite">The Big Rewrite</h3>
<p>The decision to discard existing code and start fresh, assuming the
current system’s problems stem from poor implementation rather than
essential complexity. This violates <strong>Fundamental Force 2</strong>
fundamentally: it mistakes code for software, throwing away years of
accumulated theory and hard-won understanding. The rewrite team must
rediscover all the edge cases, business rules, and subtle interactions
the original code embodied - often taking longer than the original
development and sometimes failing entirely, as Netscape 6 famously
demonstrated.</p>
<h3 id="second-system-syndrome">Second System Syndrome</h3>
<p>The tendency for a successful simple system to be replaced by an
over-engineered, feature-laden version that collapses under its own
weight. This violates <strong>Fundamental Force 1</strong>: success with a simple
system breeds false confidence about managing organised complexity.
Designers assume that if they could handle X complexity, surely 3X is
manageable with better architecture. Brooks identified this pattern
decades ago, yet it recurs because each generation must learn that
complexity grows exponentially, not linearly.</p>
<h3 id="integration-hell">Integration Hell</h3>
<p>Delaying component integration until late in development, only to
discover incompatible assumptions and irreconcilable architectures. This
violates all three Fundamental Forces simultaneously: complexity
compounds invisibly (Fundamental Force 1), team theories diverge without
synchronisation (Fundamental Force 2), and problems become analytically
intractable - debugging requires understanding all components
simultaneously (Fundamental Force 3). The adoption of continuous
integration practices directly addresses this failure pattern.</p>
<h3 id="the-documentation-myth">The Documentation Myth</h3>
<p>Believing that comprehensive documentation can substitute for developer
understanding, leading to extensively documented but unmaintainable
systems. This represents a pure <strong>Fundamental Force 2</strong> violation: it
confuses the externalisation (documentation) with the theory itself.
Documentation can support theory transmission but cannot replace it.
Teams inherit binders of specifications yet cannot modify the system
safely because documentation captures what the code does, not why it
does it or how the pieces form a coherent whole.</p>
<h3 id="analysis-paralysis">Analysis Paralysis</h3>
<p>Attempting to design the perfect system upfront through exhaustive
analysis, never actually building anything. This violates <strong>Fundamental
Force 3</strong> fundamentally: it denies mathematical limits on prediction and
analysis. The belief that sufficient thought can anticipate all
requirements and interactions ignores undecidability results and the
impossibility of perfect foresight. Months or years pass in design while
requirements shift, opportunities vanish, and nothing gets built. The
quest for perfection prevents the empirical learning that actual
implementation provides.</p>
<h3 id="summary-3">Summary</h3>
<p>These failure patterns persist because they stem from intuitive but
incorrect assumptions about software’s nature. They represent attempts
to treat software as simpler than it is (violating Fundamental Force 1),
as existing in artefacts rather than minds (violating Fundamental Force
2), or as analytically tractable when it isn’t (violating Fundamental
Force 3). Understanding these patterns through the lens of the three
Fundamental Forces explains not just why projects fail, but why the same
failures recur despite being well-documented - each generation must
learn to respect the fundamental constraints that define software
engineering.</p>
<h2 id="practical-consequences">Practical Consequences</h2>
<p>From these theoretical foundations flow practical principles:</p>
<ol>
<li><strong>Embrace complexity</strong> rather than deny it - develop tools and
practices suited to organised complexity, recognising that solutions
emerge from collective effort over time</li>
<li><strong>Invest in theory-building</strong> through documentation, dialogue,
knowledge-sharing practices, and cultural transmission mechanisms</li>
<li><strong>Accept imperfection</strong> as mathematical necessity, focusing on
resilience over correctness</li>
<li><strong>Design for comprehension</strong> since human understanding remains the
bottleneck, both for individuals and teams</li>
<li><strong>Respect fundamental limits</strong> by choosing achievable goals over
impossible ideals</li>
<li><strong>Cultivate collective intelligence</strong> through practices that
accumulate and transmit knowledge across team members and
generations</li>
</ol>
<p>These Fundamental Forces suggest that future methodologies -
particularly those involving AI collaboration - must address all three
dimensions: managing organised complexity, sustaining shared theory through
human-AI cognitive partnership, and respecting mathematical boundaries.</p>
<p>The emerging dialogue between human and artificial intelligence
represents not just a new tool but a fundamental evolution in how
software <em>theories</em> are constructed, maintained, and evolved.</p>
<p>The most successful development teams are already those that
intentionally adopt, or organically discover, practices aligned with
these foundational realities: they build pattern libraries, establish
mentoring relationships, create living documentation, and develop
rituals for knowledge sharing - all mechanisms that acknowledge software
development as a fundamentally collective cognitive endeavour operating
within mathematical constraints.</p>
<p>The most successful development teams of the future will be those who
recognise the new form of these realities and apply them to develop and
maintain software using AI.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Just as the Standard Model in physics provides a coherent framework for
understanding the fundamental forces shaping the universe, this
conceptual Standard Model of Software Engineering reveals the deep
structure underlying our discipline. By recognising and respecting the
three Fundamental Forces of organised complexity, cognitive theory, and
mathematical limits, we can better understand why software engineering
is inherently challenging, why certain principles succeed, and why
certain failure patterns recur.</p>
<p>This framework not only explains established practices but can also
guide future innovations, particularly as we integrate AI into our
development processes. Embracing this Standard Model allows us to
navigate the complexities of software engineering with greater clarity
and effectiveness, ultimately leading to more reliable, maintainable,
and successful software systems.</p>
<p><a href="https://capgemini.github.io/engineering/standard-model-forces-of-software-engineering/">The Standard Model: Fundamental Forces of Software Engineering</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on June 11, 2025.</p>https://capgemini.github.io/development/modernising-legacy-applications2025-06-06T00:00:00+01:002025-06-06T00:00:00+01:00Tochukwu (Charles) Ofoegbuhttps://capgemini.github.io/authors#author-tochukwu-charles-ofoegbu
<p>In today’s rapidly evolving technology landscape, enterprises are under growing pressure to innovate,
stay competitive, and reduce operational overhead. At the heart of these demands lies the daunting
challenge of modernising legacy applications. From high costs and vendor lock-ins to fragile
architectures and limited test coverage, modernisation is a multi-faceted journey fraught with
complexity. Before delving further, let’s take a moment to define what we mean by a legacy application.</p>
<h2 id="what-is-a-legacy-application">What Is a Legacy Application?</h2>
<p>According to Google, a legacy application is “a software system that is outdated but still in use,
often built with older technologies and potentially lacking the support and maintenance that newer
systems receive.”</p>
<p>These applications often remain critical to business operations, despite their
aging architecture and limitations. Understanding their nature is key to grasping the challenges
involved in modernising them.</p>
<p>This article explores the real-world challenges engineers and businesses face and why, in
some cases, a full rewrite may be the smartest path forward.</p>
<h2 id="1-regression-risk-the-silent-killer-of-confidence">1. Regression Risk: The Silent Killer of Confidence</h2>
<p>Modernising code is risky. One of the biggest unknowns is whether the migrated application behaves
as expected. Legacy applications, especially those built over decades, often have <strong>low or
non-existent test coverage</strong> or rely heavily on manual testing. When modernising such systems,
ensuring no regression is introduced becomes a serious challenge.</p>
<p>The lack of comprehensive, robust automated tests and the absence of baseline performance
benchmarks make it hard to:</p>
<ul>
<li>Validate critical business logic post-migration</li>
<li>Ensure that changes in infrastructure or environment don’t break core functionality</li>
<li>Refactor or modernise code with confidence</li>
</ul>
<p>Without a solid safety net of tests and regression coverage, teams find it difficult to ensure that
the refactored or modernised version doesn’t introduce <strong>regressions or functional discrepancies</strong>.
For instance, even minor changes — like moving to a new deployment model or database — can cause
silent failures that may surface in production. This slows down delivery and erodes confidence in
the new system.</p>
<h3 id="what-you-can-do">What You Can Do:</h3>
<ul>
<li>Invest early in test automation before or during migration</li>
<li>Design and implement benchmarking processes before migration</li>
<li>Use shadow deployments or canary releases to mitigate risk</li>
<li>Prioritise end-to-end tests for critical workflows</li>
</ul>
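<p>The shadow-deployment idea from the list above can be sketched as routing each request to both implementations, serving the legacy answer, and logging any divergence. The two quote functions below are hypothetical stand-ins for the legacy and modernised systems:</p>

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("shadow")

# Hypothetical stand-ins for the legacy and modernised implementations.
def legacy_quote(amount):
    return round(amount * 1.2, 2)

def modern_quote(amount):
    return round(amount * 1.2, 2)

def handle_request(amount):
    primary = legacy_quote(amount)      # the answer the caller actually sees
    try:
        shadow = modern_quote(amount)   # the candidate runs in the shadow
        if shadow != primary:
            log.warning("discrepancy for %r: legacy=%r modern=%r",
                        amount, primary, shadow)
    except Exception:
        # A shadow failure must never affect the live response.
        log.exception("shadow call failed for %r", amount)
    return primary
```

<p>Callers always receive the legacy result, so discrepancies surface in the logs long before cutover rather than as production regressions.</p>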
<h2 id="2-the-knowledge-gap-bridging-generations-of-tech">2. The Knowledge Gap: Bridging Generations of Tech</h2>
<p>A major and often overlooked challenge is <strong>legacy knowledge loss</strong>. Many legacy systems are written
in outdated languages like <a href="https://en.wikipedia.org/wiki/Perl">Perl</a>, <a href="https://en.wikipedia.org/wiki/COBOL">COBOL</a>,
earlier versions of <a href="https://en.wikipedia.org/wiki/Java">Java</a>, and usually rely on <strong>obsolete tools</strong> like
<a href="https://en.wikipedia.org/wiki/Apache_Subversion">SVN</a>, <a href="https://en.wikipedia.org/wiki/IBM_DevOps_Code_ClearCase">ClearCase</a>, <a href="https://en.wikipedia.org/wiki/IBM_WebSphere">WebSphere</a>, etc. The original developers are often no longer around,
and documentation is often minimal, outdated, or missing. This results in:</p>
<ul>
<li>High onboarding time for new engineers</li>
<li>Increased risk when making changes</li>
<li>Dependence on a shrinking pool of specialists</li>
</ul>
<p>Bridging this gap requires both <strong>upskilling</strong> and, often, <strong>reverse
engineering</strong>. Without clear documentation or source control history, teams may spend months
just understanding what the legacy system actually does before they can begin migration, which
slows down projects and adds risk.</p>
<h2 id="3-legacy-databases-dilemma-old-expensive-and-rigid">3. Legacy Databases Dilemma: Old, Expensive, and Rigid</h2>
<p>One of the thorniest parts of legacy modernisation is dealing with <strong>enterprise databases</strong> like
<a href="https://en.wikipedia.org/wiki/Oracle_Database">Oracle</a>, <a href="https://en.wikipedia.org/wiki/IBM_Db2">DB2</a>,
<a href="https://en.wikipedia.org/wiki/Microsoft_SQL_Server">MS SQL</a>, or <a href="https://en.wikipedia.org/wiki/Sybase">Sybase</a>.
These systems not only lock businesses into costly licenses but also
limit agility. Once the gold standard, these databases now act as heavy anchors, because:</p>
<ul>
<li>Their licensing fees are prohibitive</li>
<li>Integration options are limited</li>
<li>Migration paths are complex</li>
<li>Support and upgrade cycles are expensive</li>
<li>Running on virtualised / cloud environments is restrictive</li>
</ul>
<p>Moving to modern alternatives like <a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html">Amazon Aurora</a>
or open-source RDBMS like <a href="https://www.postgresql.org/">PostgreSQL</a>, <a href="https://www.mysql.com/">MySQL</a> or <a href="https://mariadb.org/">MariaDB</a>
can yield cost savings of up to 50% while improving performance, automation compatibility,
agility, and scalability. Tools such as <a href="https://aws.amazon.com/dms">AWS Database Migration Service</a> and <a href="https://cloud.google.com/database-migration">Google Database
Migration Service</a> simplify the transition, making legacy-to-cloud migrations more feasible than
ever.</p>
<h2 id="4-the-security-paradox-legacy-closed-environments-vs-cloud-realities">4. The Security Paradox: Legacy Closed Environments vs. Cloud Realities</h2>
<p>Legacy systems often run in highly <strong>restricted internal networks</strong> with tightly controlled
environments often with bespoke security models. While this may seem secure, it creates friction.
I recently worked on a modernisation project involving code refactoring, where development was
restricted to a virtual machine environment and required legacy tools like ClearCase that were
only accessible within the VM. Furthermore, permission to install essential development tools within
the VM was limited, making the workflow highly restrictive. Some of the other challenges include:</p>
<ul>
<li>Manual certificate handling, often complicated by incompatible Transport Layer Security (TLS)
versions and the absence of a centralised PKI</li>
<li>Limited TLS support and recurrent issues with hardcoded or expired certificates</li>
<li>Difficulty in implementing modern authentication / authorisation mechanisms (e.g., OAuth2, OpenID
Connect)</li>
<li>Incompatibility with modern DevSecOps practices</li>
</ul>
<p>Modernising entails adjusting to <strong>new security paradigms</strong>:</p>
<ul>
<li>Zero trust architecture</li>
<li>Using <a href="https://en.wikipedia.org/wiki/Automatic_Certificate_Management_Environment">ACME</a>-based automation to automate certificate renewal</li>
<li>Centralising secret and certificate management using tools like <a href="https://www.hashicorp.com/en/products/vault">HashiCorp Vault</a>, <a href="https://azure.microsoft.com/en-us/products/key-vault">Azure Key Vault</a>, or <a href="https://docs.aws.amazon.com/secretsmanager/">AWS Secrets
Manager</a></li>
<li>Offloading TLS to modern proxies like <a href="https://nginx.org/">NGINX</a>, <a href="https://www.envoyproxy.io/">Envoy</a>, or <a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/application-load-balancers.html">AWS ALB</a></li>
<li>IAM roles instead of static users / passwords</li>
<li>Exposure to internet-based risks, requiring strong encryption and logging</li>
</ul>
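<p>As a small illustration of the manual drudgery that ACME-based automation replaces, here is a stdlib-only sketch that checks how close a certificate's <code>notAfter</code> date (in the text format used by Python's <code>ssl</code> module) is to expiry. The 30-day threshold is an assumed policy value:</p>

```python
from datetime import datetime, timezone

# The textual date format used for "notAfter" in ssl.getpeercert() output.
CERT_DATE_FORMAT = "%b %d %H:%M:%S %Y %Z"

def days_until_expiry(not_after, now):
    # Parse the expiry timestamp and treat it as UTC.
    expires = datetime.strptime(not_after, CERT_DATE_FORMAT)
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - now).days

def needs_renewal(not_after, now, threshold_days=30):
    # Flag certificates that expire within the policy window.
    return days_until_expiry(not_after, now) <= threshold_days
```

<p>An ACME client performs this check (and the renewal itself) continuously, which is exactly what removes the "expired certificate in production" class of incident.</p>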
<p>Convincing security and compliance teams to shift from their tried and trusted on-prem policies can be
an uphill battle. Cloud-native platforms provide <strong>built-in security features</strong>, compliance controls,
and <strong>seamless integration with identity providers</strong>, reducing the security management overhead
while improving visibility and threat detection.</p>
<h2 id="5-the-cost-factor-breaking-free-from-licensing-lock-ins">5. The Cost Factor: Breaking Free from Licensing Lock-ins</h2>
<p>Applications built on proprietary software (think Oracle databases, WebLogic servers, or legacy
middleware) can carry cost overheads. One of the most immediate and painful revelations during
legacy modernisation is the inflated cost of licensing.
Oracle databases, for instance, are licensed based on cores or users and can easily run into
six-figure bills annually. By contrast, modern alternatives like PostgreSQL, or managed services
like AWS RDS offer:</p>
<ul>
<li>Pay-as-you-go pricing</li>
<li>No upfront licensing fees</li>
<li>Auto-scaling capabilities</li>
</ul>
<p>Where possible, moving away from legacy enterprise databases like Oracle can offer cost savings
of 50% or more, together with better scalability and native cloud integration
<sup id="fnref:DBXLabs" role="doc-noteref"><a href="#fn:DBXLabs" class="footnote" rel="footnote">1</a></sup>, <sup id="fnref:EnterpriseDB" role="doc-noteref"><a href="#fn:EnterpriseDB" class="footnote" rel="footnote">2</a></sup>, <sup id="fnref:ScaleGrid" role="doc-noteref"><a href="#fn:ScaleGrid" class="footnote" rel="footnote">3</a></sup>.</p>
<h2 id="6-refactor-vs-rewrite-when-its-better-to-start-fresh">6. Refactor vs Rewrite: When It’s Better to Start Fresh</h2>
<p>In many cases, trying to incrementally modernise a monolith is like putting a new engine in a rusted
car. While refactoring and replatforming can reduce migration risk, there are cases where it’s more
efficient to <strong>rewrite the application as cloud-native from scratch</strong>. Benefits include:</p>
<ul>
<li>Clean, modular architecture using microservices or serverless</li>
<li>Modern CI/CD and DevOps integration</li>
<li>Easily testable</li>
<li>Cost-effective and scalable cloud infrastructure</li>
<li>Modern dev stack (easier to hire talent for)</li>
<li>Easier onboarding and documentation</li>
</ul>
<p>Rewriting the application allows organisations to align the system with current business
processes, rather than bending processes to fit legacy constraints. Yes, rewrites come with their
own risks — scope creep, delayed delivery, knowledge gaps — but for systems with outdated tech
stacks, poor code quality, and minimal test coverage, it may be the <strong>faster path to long-term
agility</strong>.</p>
<h2 id="final-thoughts">Final Thoughts</h2>
<p>Modernising legacy applications is not just about code. It’s about cost, people, process, and risk.
From overpriced licenses and fragile test coverage to security hurdles, knowledge gaps and outdated
databases, the challenges are real and non-trivial. But with a strategic mindset, a solid roadmap,
and the courage to consider a full rewrite when necessary, organisations can unlock agility, cost
savings, and long-term sustainability.</p>
<p>If you’re planning a cloud migration or stuck in the middle of one, take a step back and evaluate:
<strong>Are you just moving old problems to a new place, or solving them for good?</strong> The journey isn’t
easy. But staying stuck in the past is far riskier than taking the bold step forward.</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:DBXLabs" role="doc-endnote">
<p><a href="https://www.dbxlabs.com/dollar46-million-cost-savings-legacy-system-migration-to-postgresql-success-story">DBX Labs $46 Million Cost Savings</a> <a href="#fnref:DBXLabs" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:EnterpriseDB" role="doc-endnote">
<p><a href="https://www.enterprisedb.com/blog/open-source-new-data-center-standard">Open Source is the New Data Center Standard</a> <a href="#fnref:EnterpriseDB" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:ScaleGrid" role="doc-endnote">
<p><a href="https://scalegrid.io/blog/postgresql-vs-oracle-difference-in-costs-ease-of-use-functionality">PostgreSQL vs. Oracle: Difference in Costs…</a> <a href="#fnref:ScaleGrid" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
<p><a href="https://capgemini.github.io/development/modernising-legacy-applications/">The Hard Truths of Modernising Legacy Applications: Costs, Complexities, and Cloud-Native Benefits</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on June 06, 2025.</p>https://capgemini.github.io/ai/prompt-driven-development2025-05-23T00:00:00+01:002025-05-23T00:00:00+01:00Julie Vaccalluzzohttps://capgemini.github.io/authors#author-julie-vaccalluzzo
<blockquote>
<p>Vibe coding (also vibecoding) is an AI-dependent programming technique where a person describes a problem in a few sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software, shifting the programmer’s role from manual coding to guiding, testing, and refining the AI-generated source code. Vibe coding is claimed by its advocates to allow even amateur programmers to produce software without the extensive training and skills required for software engineering. The term was introduced by Andrej Karpathy in February 2025 and listed in the Merriam-Webster Dictionary the following month as a “slang & trending” noun.</p>
</blockquote>
<p><a href="https://en.wikipedia.org/wiki/Vibe_coding" title="Wikipedia">Wikipedia</a></p>
<p>To prepare myself for the marathon I had set myself up for – building a Next.js app in one week – I watched a few short-form videos and read some online blogs. I loaded up a whole bunch of YouTube videos – I am visual, so this works for me. I also follow this topic by doomscrolling on TikTok, watching comments fly by on my phone’s Discord notifications, the occasional Substack binge and me-curated X feeds. Side note: it’s dangerous letting social media just feed you stuff. Your tin hat will shine like a lighthouse on a dark night far away from civilisation. It’s like advertising, but instead of making you feel inadequate and wanting to inexplicably buy something, you find yourself angry and questioning your values. Regardless, a lot of the information had already trickled down. Interest is king. Use it to your advantage.</p>
<p>The Capgemini DCX team published an invite to <a href="https://vercel.com/">Vercel</a>’s shiny new AI tool V0. The presentation took place at the Capgemini London office. I was aware of Capgemini’s partnership with Vercel, and this project was ripe for giving V0 a play.</p>
<p>What I learnt from working with Data Scientists on AI projects: your overarching prompt can go a long way in setting you up for success.
Vercel’s V0 has a setting for your whole project. This is where I told it what I wanted from it – how I wanted it to code, what stack I was using, and other bits. My prompt looked like this:</p>
<blockquote>
<p>For all the code you produce, please adhere to the following best practices. If you need to deviate from them, please tell me before you generate anything as we are trying to save energy and usage. I will provide context as instructions as follows:</p>
<ul>
<li>Clean Architecture: Separation of concerns between UI, business logic, and data access</li>
<li>Type Safety: Comprehensive TypeScript types for all components and functions</li>
<li>State Management: Zustand for efficient state management</li>
<li>Form Validation: Client-side validation with detailed error messages</li>
<li>Error Handling: Custom error classes and consistent error handling</li>
<li>Security: CSRF protection, secure cookies, and input validation</li>
<li>Accessibility: ARIA attributes and keyboard navigation support</li>
<li>Responsive Design: Mobile-friendly UI components</li>
<li>Use Material UI for building the app. Use Ubuntu for headings and Roboto for body copy.</li>
</ul>
</blockquote>
<p>I originally had a much larger prompt, but that got too confusing for V0 and I had to cut the chaff and thin it out. I used a web-based LLM, the famous one, the one and only, <a href="http://chat.openai.com/">ChatGPT</a>, to slim it down a bit. The V0 context window is not that large. It is good practice to keep your prompts short and actionable. One step at a time. Try not to give the LLM too many complicated things to do at once; this is where LLMs will fall over for now. You also kind of need a language degree to use this thing, in that you need to be able to communicate and explain what you need concisely, so make sure you are very specific. Say jump and it will jump… most of the time. Also a good idea to tell it what not to do or to only do this, only look at this.</p>
<p>Back to V0, I was very impressed with its rendering-from-design capability using <a href="https://www.figma.com/">Figma</a> URLs. However, if the design includes random colours and artefacts, the LLM can include these or add all your styles as overrides of the theme! I had to restart three times as design and theme colours were updated! The login screen was still on the old design, so all my subsequent pages were overrides of the original theme. Just because it works doesn’t make it right. The perfectionist in me was having a meltdown! How to deal with design changes in an agile way? In short, you can’t – well, not with this tool yet unfortunately, as you cannot upload your project back to the online system and say “update my file to the new design” when your project gets too big. Not very agile, methinks. The ecosystem is new-ish, so watch this space. I wish there was a plugin that you could have with your IDE, kind of how Dreamweaver used to work in the mid-2000s, pushing and pulling updates from the server. As of 14 May 2025, this is <a href="https://youtu.be/Tuu5iC--fcg?si=OrkKPFC74A4-C_v9&t=1000">V0’s number one thing that everyone requested</a> – well, a feature that will sync with your GitHub codebase, so that’s a start to get things cohesive.</p>
<p>I did like that V0 intuitively knew to do things and interpret the design in ways I would not think about – things that I later thought “I’ll keep that, thank you very much”, and could you also include this or that? V0, under the hood, uses many models to build its output. My feeling here is: make sure your designs are machine-readable, and by that I mean be pedantic, specific, and make them sharp-sharp. The better and more precise your designs are, the better your results. Again, this is not very agile, and I am starting to see others in the dev community starting to poke a stick at agile, asking if it’s still relevant – especially in this vibe coding era of prompt-driven development. I would also like to see UX and UI designers actually explain functionality in their designs so that these can be part of the prompt for the feature.</p>
<p>Prompt-driven development is exactly what it sounds like. You prompt, you see what comes out, and then you prompt again until you are happy with it. Coined by <a href="https://andrewships.substack.com/p/prompt-driven-development" title="Andrew Ships">Andrew Miller on his <em>Andrew Ships</em> substack</a> as follows:</p>
<ol>
<li>Developer receives requirements</li>
<li>Developer breaks requirements down into a series of prompts</li>
<li>LLM generates code for each prompt</li>
<li>Developer reviews LLM generated code</li>
<li>Developer submits all changes for review</li>
<li>Another developer reviews and merges changes</li>
</ol>
<p><img src="/images/2025-05-23-prompt-driven-development/prompt-driven-development-life-cycle.png" alt="Prompt Driven Development life cycle" /></p>
<p>GenAI can help people learn and upgrade their skills too. This is very useful with reasoning models like <a href="https://www.anthropic.com/news/claude-3-5-sonnet">Claude Sonnet</a> and now <a href="https://gemini.google.com/">Gemini</a>, apparently – but I think that was a ruse to get me to switch. These models can be picked from the dropdown and come packaged with your GitHub Enterprise licence. The model will explain what it is doing and why.
<img src="/images/2025-05-23-prompt-driven-development/cline-model-selection.png" alt="Cline model selection" />
I have used a prompt as follows:</p>
<blockquote>
<p>Please help me write the code and explain what you are doing and why to help me as a JavaScript developer understand what you are doing, and point out any differences that you feel I may not understand or that would be different in Python programming.</p>
</blockquote>
<p>It’s magic when you are trying to port over your skills and learn something new.</p>
<p>I plugged in my instance of GitHub Copilot into <a href="https://github.com/cline/cline/wiki">Cline</a>. Cline is a plugin that lets you look at your whole project, not just the files that you tell it to look at. So in more cases than not, I used Cline. You have a dropdown list where you can choose what model you want to use. In the overall settings, you can add your pre-prompt – the thing you always want to do – your master prompt. There are two files in your project that control this: <code class="language-plaintext highlighter-rouge">.github/copilot-instructions.md</code> and <code class="language-plaintext highlighter-rouge">.clinerules</code></p>
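<p>Both files hold freeform natural-language instructions rather than structured config. As an illustrative example (my own, not a recommended standard), a <code class="language-plaintext highlighter-rouge">.clinerules</code> file might contain:</p>

```markdown
- Always explain what you changed and why before applying an edit.
- Prefer TypeScript with strict mode enabled; avoid `any`.
- Keep each change small: one feature or fix per task.
- Do not modify generated files or anything under `/vendor`.
```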
<p><img src="/images/2025-05-23-prompt-driven-development/cline-screenshot.png" alt="the cline tab in VS code" /></p>
<p>I also found that giving Cline full read rights made it easy to concentrate on what changes it made; this was super useful later on when I had to debug or fix issues manually the <em>old skool</em> way.
<img src="/images/2025-05-23-prompt-driven-development/auto-read.png" alt="auto read feature" /></p>
<p>It is important to read the commentary as the model reasons. You will not learn anything new if you don’t, and you also want to know what it did and why. Read the code too. Make sure you understand what it does. This comes in useful later when the project is too complicated for an LLM to solve and you have to get down and dirty with the code. There is no point in just accepting everything, as it may just hurt you later. You need to understand what it did.</p>
<p>If you have accepted something and then decide that you don’t want to go down this route, you can always revert to a previous state; much like Git, there are checkpoints. I will warn you though that if you don’t accept the answer and wish you had, manual intervention may be needed.</p>
<p><img src="/images/2025-05-23-prompt-driven-development/check-point.png" alt="checkpoints" /></p>
<p>On the whole, having an LLM was so useful in getting stuff out the door for this project and doing the boring stuff that I would have procrastinated away as much as I could have done otherwise. So useful for learning new programming languages. It’s the friend that you love for their knowledge and patience – that never gets tired of helping you figure stuff out!</p>
<p>Sometimes the LLM got in the way and made me doubt my sanity. For example, Material UI’s <code class="language-plaintext highlighter-rouge"><Box></code> element could not take on a class name implemented via ThemeProvider. Convinced it was an error on my part, I debugged at length (with much frustration) before finding out, through Next.js Material UI bug reports, that <code class="language-plaintext highlighter-rouge"><Box></code> does not accept class names! An LLM won’t tell you this; sometimes you have to do it the old-fashioned way.
In short: treat an LLM like a <em>hooooooman</em> – check that its work makes sense, ask clarifying questions, and understand what it’s doing before accepting its suggestions wholesale. If all else fails, make it make sense. That’s what we are paid for!</p>
<p><a href="https://capgemini.github.io/ai/prompt-driven-development/">Prompt Driven Development</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on May 23, 2025.</p>https://capgemini.github.io/ai/ai-capability-matrix2025-05-16T00:00:00+01:002025-05-16T00:00:00+01:00Sarah Saundershttps://capgemini.github.io/alumni#author-sarah-saunders
<p>As a consultancy, we are expected to have strong and well-formed opinions on all new technologies arriving on the market. So we often get asked, “What should I be doing with AI? Should I be using it? What should I do and what should I avoid?” And the answer is, of course, “It depends”! There are so many factors behind whether or not AI solutions are right for a business. So, a good place to start before trying to answer this question is to be goals-oriented; to know where you are, as a business, and to know where you want to go. Then you can figure out whether any form of AI can be beneficial.</p>
<p>To this end, we are creating a Capability Matrix to help companies check they have the necessary prerequisites in place to benefit from AI. This post will consider Generative AI in the Software Development Lifecycle, as this is my personal speciality. So, to begin, a couple of definitions:</p>
<h2 id="generative-ai">Generative AI</h2>
<p><a href="https://en.wikipedia.org/wiki/Generative_artificial_intelligence">Wikipedia</a> says Generative AI is “a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.” Digging into this a bit deeper, “new data” is slightly misleading; in reality the model will return a sort of “word salad” (or picture salad) of data it has seen before, mixed up according to a probability algorithm that judges whether this is a highly likely answer to the prompt, given the data it was trained on. The more you know about how Gen-AI works, the more impressive it is that it ever comes up with anything useful! But come up with useful things it does. Sometimes.</p>
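<p>The “probability algorithm” idea can be sketched in a few lines: a generative model repeatedly samples the next token from a learned probability distribution. The vocabulary and weights below are invented purely for illustration – real models use billions of parameters, not a lookup table:</p>

```python
import random

# Toy illustration of the "word salad" idea: generation is repeated
# sampling of the next token from learned probabilities.
# This tiny table of follow-on words and weights is entirely made up.
NEXT_TOKEN_PROBS = {
    "the": [("cat", 0.5), ("dog", 0.3), ("model", 0.2)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("barked", 1.0)],
    "model": [("predicted", 1.0)],
}

def generate(start: str, steps: int, rng: random.Random) -> list[str]:
    """Sample a short sequence, one token at a time."""
    tokens = [start]
    for _ in range(steps):
        choices = NEXT_TOKEN_PROBS.get(tokens[-1])
        if not choices:
            break  # no continuation learned for this token
        words, weights = zip(*choices)
        tokens.append(rng.choices(words, weights=weights, k=1)[0])
    return tokens

print(" ".join(generate("the", 3, random.Random(42))))
```

<p>Each run mixes up the same seen-before fragments into a plausible-looking sequence – which is all “new data” really means here.</p>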
<h2 id="the-software-development-lifecycle">The Software Development Lifecycle</h2>
<p>This means the end-to-end process of creating software, from identifying where technology could assist or improve a process, to designing an application to fulfil this business need, to building and testing that application, to deploying, monitoring and supporting the application. Since the adoption of <a href="https://en.wikipedia.org/wiki/Agile_software_development">Agile</a> and <a href="https://en.wikipedia.org/wiki/DevOps">DevOps</a> processes, this is an iterative and automated process that is becoming better understood the more software we build.</p>
<h2 id="where-to-start">Where to Start</h2>
<p>As with any new or disruptive tooling, evaluation and introduction of Gen AI into the software development lifecycle must be approached in a logical and structured way, and must have provable results. We have identified five vectors to consider within an organisation when considering Gen AI tooling:</p>
<p><img src="/images/2025-05-16-ai-capability-matrix/matrix.jpeg" alt="AI Capability Matrix" /></p>
<h3 id="1-the-metrics-vector-capture-your-starting-point">1. The Metrics vector: Capture your starting point</h3>
<p>There is a lot of sales pressure on companies to purchase tools such as Github Copilot, which integrates into a developer’s IDE and gives them “useful” autocomplete suggestions for the code they are writing - these suggestions can be contextual, based on the files the developer has open and what the developer has been up to recently. There is also a chat window, to save the developer context-switching out to a browser or other tool when they want to search for help. Github proudly claims <a href="https://resources.github.com/learn/pathways/copilot/essentials/measuring-the-impact-of-github-copilot/">55% faster task completion</a> for its users, and the tool costs a mere $39 a month per enterprise user. Then there is Microsoft Office 365 with Copilot, which promises to <a href="https://www.microsoft.com/en-us/microsoft-365/copilot/business#Overview">transform the way you work</a> for just $30 per user per month.</p>
<p>The only way to tell whether or not the tools have improved productivity, and hence to prove the return on your AI tooling investment, is to capture metrics on your current productivity by examining your current working practices. You can then run trials and proofs-of-concept with Gen AI tooling to look for changes in these metrics. There are many frameworks available to help you calculate your current performance - from good old-fashioned agile <a href="https://www.agilealliance.org/glossary/velocity/">velocity</a> to the more sophisticated <a href="https://dora.dev/guides/dora-metrics-four-keys/">DORA</a>, <a href="https://www.heartframework.com/">HEART</a> and <a href="https://www.infoq.com/news/2021/03/space-developer-productivity/">SPACE</a> frameworks.</p>
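<p>As an illustrative sketch (not an official DORA tool), two of the four DORA keys – deployment frequency and lead time for changes – can be computed from a log of commit and deploy timestamps. The records below are invented for the example; in practice they would come from your CI/CD system:</p>

```python
from datetime import datetime, timedelta

# Hypothetical log of (commit_time, deploy_time) pairs for one service.
deploys = [
    (datetime(2025, 5, 1, 9), datetime(2025, 5, 1, 17)),
    (datetime(2025, 5, 3, 10), datetime(2025, 5, 5, 12)),
    (datetime(2025, 5, 8, 14), datetime(2025, 5, 9, 9)),
]

period_days = 30
deploy_frequency = len(deploys) / period_days  # deploys per day

# Lead time for changes: how long a commit waits before reaching production.
lead_times = [deploy - commit for commit, deploy in deploys]
mean_lead_time = sum(lead_times, timedelta()) / len(lead_times)

print(f"Deployment frequency: {deploy_frequency:.2f}/day")
print(f"Mean lead time: {mean_lead_time}")
```

<p>Capturing numbers like these <em>before</em> introducing a tool is what makes a before/after comparison meaningful.</p>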
<h3 id="2-the-process-vector-identify-bottlenecks-and-pain-points">2. The Process vector: Identify bottlenecks and pain-points</h3>
<p>Once you have a handle on your current productivity metrics, you can start to identify if there are areas that need improvement. Skipping this step can waste considerable time and resources - for example, if you have accelerated your automation test scripting process by using Generative AI to write tests from your JIRA tickets, but in fact there was only one automated script being written per sprint and your actual delivery bottleneck is a lack of user experience feedback to redesign your front end, you are not going to have any impact on your end-to-end software delivery time.</p>
<p>There are other reasons to focus on a particular stage of the software development lifecycle: for example, if a step is particularly onerous then, even if it isn’t a time-based bottleneck, you may be wasting money on high attrition rates for that role! Again, having the right metrics to understand your current process is key.</p>
<h3 id="3-the-security-vector-consider-your-data">3. The Security vector: Consider your Data</h3>
<p>Data sharing considerations can be a major blocker when choosing Gen AI tooling. It is certainly easier, quicker, and probably cheaper, to access pre-trained large language models (<a href="https://en.wikipedia.org/wiki/Large_language_model">LLM</a>s) as a service via cloud vendors such as Microsoft, OpenAI, and AWS, but even if you do not share contextual files and data with the service, you will still need to send your search queries, and if you are required to completely abstract the query to avoid sharing sensitive data, the efficacy of the Gen AI tooling will be significantly reduced.</p>
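<p>“Abstracting the query” might look something like the sketch below: a redaction pass that substitutes placeholders for sensitive values before anything leaves your network. The patterns and placeholder names are assumptions for demonstration – a real deployment would need a far more rigorous data-classification process:</p>

```python
import re

# Illustrative redaction rules; placeholders and patterns are invented.
REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    (re.compile(r"\b\d{16}\b"), "<CARD_NUMBER>"),
    (re.compile(r"\bACME-\d+\b"), "<PROJECT_CODE>"),  # hypothetical internal ticket IDs
]

def redact(query: str) -> str:
    """Replace sensitive values with neutral placeholders before sending."""
    for pattern, placeholder in REDACTIONS:
        query = pattern.sub(placeholder, query)
    return query

print(redact("Why does build ACME-1234 fail for jane.doe@example.com?"))
```

<p>The trade-off described above is visible immediately: the more aggressively you abstract, the less context the model has to work with.</p>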
<p>It is also possible to host your own LLM and avoid internet access entirely. It is not, however, straightforward to <a href="https://gist.github.com/Birch-san/666a60eacbd9095902f99874622767be">connect tools such as Github Copilot to your own LLM</a> so you may be restricted in how effectively you can use such tools if you host your own LLM.</p>
<p>Some industries may be limited by legal frameworks as to whether they can send their data over the internet to a cloud-hosted model, and some business leaders may be nervous about the implications of sharing data.</p>
<h3 id="4-the-cultural-vector-train-your-employee-base">4. The Cultural vector: Train your Employee Base</h3>
<p>Even Generative AI toolsets need user uptake in order to be effective. Installing the tools alone is not sufficient: users must be trained not only in HOW to use the tools, but also WHEN to use them (and when not to – for example, yes, Github Copilot is quite good at writing unit tests to increase coverage of existing code bases, but you should not drop <a href="https://en.wikipedia.org/wiki/Test-driven_development">TDD</a> as a practice because of this). Collecting and sharing best practice are key, and you may also need to encourage an “AI-First” approach to daily tasks.</p>
<p>If a company’s employees are going to use Generative AI, it is important that they have at least a high-level understanding of how it works. The MOST IMPORTANT lesson is that answers returned from Generative AI cannot be assumed to be “correct”, and so if correctness is required the AI output MUST be thoroughly evaluated. Most mainstream AI tools do present an upfront warning that responses must be analysed, and if employees understand a bit more about the word-salad aspect they will better understand why the warning is there.</p>
<p>There are ethical issues that people will want to discuss and evaluate - is the purpose of this AI tooling to replace enjoyable human roles? Usually the answer to this is “no”, but it is certainly something a company will want to be open about and provide training/coaching/support for employees.</p>
<p>There are also sustainability issues to debate - how expensive are AI calls, in monetary or carbon terms? This is a complex question to answer - for a start it depends on whether you host your tooling or if it’s cloud-hosted/shared; it depends on whether you have a bespoke trained model (training is compute-intensive and <em>very</em> expensive) or if you are using a pre-trained model. Companies should have an idea of this and should, again, make the information transparent to employees.</p>
<h3 id="5-the-tools-vector-evaluation-techniques">5. The Tools vector: Evaluation techniques</h3>
<p>This vector isn’t specific to Gen AI for software engineering - for many processes there are competing software tools on the marketplace, and businesses need a strategy by which to identify the best one. The information to help evaluate available tools will come out of the other vectors mentioned here - cost and data considerations; and which process the tool needs to support.</p>
<p>Capgemini have significant expertise in the software selection process, <a href="https://www.capgemini.com/gb-en/wp-content/uploads/sites/28/2022/12/software_selection_for_automotive_suppliers.pdf">across multiple industries</a>, using a step-by-step evaluation process from identifying candidate solutions to finalising vendor negotiations. <a href="https://www.capgemini.com/gb-en/contact-us/">Contact us</a> to find out more.</p>
<p><a href="https://capgemini.github.io/ai/ai-capability-matrix/">Is your company ready for AI?</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on May 16, 2025.</p>https://capgemini.github.io/sustainability/green-se-playbook2025-03-28T00:00:00+00:002025-03-28T00:00:00+00:00Sarah Saundershttps://capgemini.github.io/alumni#author-sarah-saunders
<p>Defra is a long-standing client for Capgemini, with both organisations sharing a <a href="https://www.capgemini.com/about-us/csr/environmental-sustainability/">commitment to sustainability</a>. Recently we have been focussing on whether we can introduce “Green coding standards” to our teams. Whilst there can be benefits from developers focussing on making each line of code <a href="https://greensoftware.foundation/articles/10-recommendations-for-green-software-development">“greener”</a>, much greater wins can be achieved from using best practices to minimise the infrastructure and resources used in the software development lifecycle. Listed below are the best practices that our teams follow at Defra to help make the application estate as sustainable as possible.</p>
<h2 id="prerequisites">Prerequisites</h2>
<p>Here we list a few common software engineering best practices that are essential to have in place before following this playbook.</p>
<ol>
<li>Automate all infrastructure configuration. Having infrastructure defined as code, so that it can be created and recreated repeatedly, not only benefits application stability but allows unused infrastructure to be torn down with confidence that it can be restored.</li>
<li>Build in observability across the lifecycle. There are certain “red flags” that you can mark as risk factors to always watch out for, such as:
<ul>
<li>Test environment infrastructure costs are higher than production</li>
<li>Builds are taking longer than an hour</li>
</ul>
<p>If you can set up monitors / alerts for these flags, it gives you a good head start in addressing them when they occur.</p>
</li>
<li>Prefer cloud deployments. If you can delegate hardware efficiencies to your cloud provider, they are most likely the specialist in using their infrastructure to maximum efficiency. This need not imply cloud vendor tie-in; we can abstract to layers such as Kubernetes-as-a-service and move, or even balance, applications between clouds without too much difficulty.</li>
</ol>
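<p>The “red flag” checks from step 2 above can be sketched as a simple monitoring rule. The figures here are made up; in practice the inputs would come from your cloud billing API and your CI system rather than being hard-coded:</p>

```python
from datetime import timedelta

def sustainability_red_flags(test_env_cost: float, prod_cost: float,
                             build_duration: timedelta) -> list[str]:
    """Return the sustainability red flags raised by the given measurements."""
    flags = []
    if test_env_cost > prod_cost:
        flags.append("Test environment costs exceed production")
    if build_duration > timedelta(hours=1):
        flags.append("Builds are taking longer than an hour")
    return flags

# Hypothetical monthly figures and a 75-minute build.
print(sustainability_red_flags(5200.0, 4800.0, timedelta(minutes=75)))
```

<p>Wiring checks like this into an alerting tool gives the early warning the playbook describes.</p>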
<h2 id="software-architecture-best-practices">Software Architecture Best Practices</h2>
<p>Thinking sustainable begins at application inception, and it is also at inception that the biggest efficiencies can be realised.</p>
<ol>
<li>Don’t solve problems that aren’t there. If there isn’t a performance issue, don’t architect as if there is one! You may not need that cache/queue/resilience level in the infrastructure.</li>
<li>In the cloud, prefer <a href="https://en.wikipedia.org/wiki/Function_as_a_service">function-as-a-service</a>, then a <a href="https://en.wikipedia.org/wiki/Platform_as_a_service">PaaS</a> that can <a href="https://www.fermyon.com/blog/scale-to-zero-problem">scale-to-zero</a>, then <a href="https://en.wikipedia.org/wiki/Virtual_machine">VMs</a>. There are often good reasons not to use function-as-a-service, but even so you should be looking for the possibility to scale right down to zero for your hosted applications rather than having a constantly-running virtual machine (that you are constantly paying for).</li>
<li>Avoid data duplication. Having a single store of data and making it readily accessible can be more sustainable than having multiple copies and having to keep them in sync.</li>
<li>Minimise message traffic. Passing excessive data over interfaces can also reduce sustainability. Keep messages small and only pass data when necessary.</li>
<li>Consider message and security protocols. Some can be more energy-efficient than others, depending on handshake complexity and encryption levels – although there is often a trade-off to consider. For example, for <a href="https://jwt.io/">JSON Web Tokens</a> (JWTs) the cost of creating and validating the JWT is high as it uses complex encryption, but you only need to make one authentication call – whereas with OAuth you pass a simple token but must make regular authentication calls to validate it. Similarly, SSL uses expensive encryption but is essential for securing internet traffic so can only be safely avoided for communications within a closed network.</li>
<li>Use stateless services. Managing state introduces complexity and cost – avoid as much as possible. Event-based models can help.</li>
<li>Set document retention policies for all supporting documents.</li>
</ol>
<h2 id="software-development-best-practices">Software Development Best Practices</h2>
<p>It is usually true that if you write quality code, you write the most sustainable code possible, because the best practices of both disciplines align. There are some development team best practices that we can highlight though:</p>
<ol>
<li>Keep It Simple. <a href="https://en.wikipedia.org/wiki/KISS_principle">KISS</a> is one of the acronyms that often appear in “clean code” guidebooks, and it plays directly into sustainability. Applications can become over-complicated by trying to code for scenarios that will never exist. Keep a “MVP” mindset and write as little code as possible.</li>
<li>Keep the team Informed. Our project recently calculated that to run a single branch build in the cloud and complete our branch integration test suite costs £40 per day. Once developers know this, they are far more likely to be conservative in the builds they run.</li>
<li>Consider your programming language. Each language has a different energy-efficiency profile – for instance, a <a href="https://www.researchgate.net/publication/320436353_Energy_efficiency_across_programming_languages_how_do_energy_time_and_memory_relate">study showed</a> that a set of problems solved in JavaScript used more energy than the same problems solved with Java.</li>
<li>Use appropriate data structures. Consider what your data is being used for and choose a data structure to match its use case.</li>
<li>Write self-documenting code. Well-written code, stored in a repository in deltas with each commit linked (e.g. via naming convention) to an issue tracking system where requirements are captured, and BDD-style test scripts that copy the language of the requirements to prove delivery, can alleviate the need for a lot of expensive documentation.</li>
<li>Keep tasks small. Successful backlog management and refinement leads to manageable development tasks where the developer can focus on quality code, and the code reviewers can focus on improving the sustainability of the code.</li>
</ol>
<h2 id="software-testing-best-practices">Software Testing Best Practices</h2>
<p>Whilst automating tests is essential to enabling frequent low-risk releases, there is a danger that automation suites become overly large, repetitive and expensive to run. Tests should be reviewed and groomed throughout the application lifecycle, and test strategies should be practical and minimal.</p>
<ol>
<li>Keep to the <a href="https://martinfowler.com/articles/practical-test-pyramid.html">Testing Pyramid</a>. Unit tests are much cheaper to run as they do not require the entire application architecture to be up and running, the way that integration tests do. All functional testing should be confined to unit testing, with pact testing used to confirm dependencies and interfaces between systems.</li>
<li>Fail early. If your unit tests fail, you don’t want to go on to run further tests. So any automated pipelines should stop on the first failure – even if stages are running in parallel.</li>
<li>Only test on change. Tests should be cadenced with application changes, not with calendar dates. There is no need to re-run performance tests weekly if nothing in your environment has changed, for example!</li>
</ol>
<h2 id="software-deployment-best-practices">Software Deployment Best Practices</h2>
<p>Here we discuss all things Platform - assuming cloud deployments and full automation as per the prerequisites.</p>
<ol>
<li>Everything Ephemeral. If infrastructure isn’t being used, tear it down. This is particularly relevant for test environments - we often have many test or “live-like” environments which are left running overnight and at weekends. Review whether this is really necessary. Sometimes it is – you may have certain tests running out-of-hours, but often it is not. This applies to support infrastructure too – build pipeline machines should be created when needed and torn down afterwards. Capgemini can provide templates for completely ephemeral build environments, such as <a href="https://capgemini.github.io/devsecops/platform-for-product-oriented-teams/">CREATE</a> and, for Defra, <a href="https://github.com/DEFRA/ado-pipeline-common?tab=readme-ov-file">ADP</a> for Azure DevOps pipelines.</li>
<li>Automated tear-down. Developer builds and other automated test environment builds should automatically take themselves down once their automation suites have completed.</li>
<li>Review your cloud service plans regularly. Your initial best estimate of what “size” of hardware applications will need should be reviewed over time. Often, we only review when we want to scale up – consider regular reviews with a view to scaling down when hardware or services are not being utilised.</li>
</ol>
<p>We will be evolving this list as we work with our clients, adding best practices that we have seen in use. As usual with any drive for change, awareness is key - having a focus on sustainability for everything we do will help draw out further ideas on how we can improve. Don’t hesitate to add further suggestions!</p>
<p><a href="https://capgemini.github.io/sustainability/green-se-playbook/">A Green Software Engineering Playbook</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on March 28, 2025.</p>https://capgemini.github.io/frontend/transparent-borders2025-03-07T00:00:00+00:002025-03-07T00:00:00+00:00Malcolm Younghttps://capgemini.github.io/authors#author-malcolm-young
<p>It’s quite a common design requirement to have a border on some states of an element but not others – for example, a set of tabs where the currently active tab is highlighted with a top border, or a button where a border is added on hover.</p>
<p>The problem is that adding a border to an element changes its size, so if you add a border on one state of a component, the size will change when the state changes, as you can see from these examples:</p>
<div class="row">
<div class="column small-12 medium-6 small-centered medium-uncentered">
<div class="flex-video">
<iframe src="https://codepen.io/malcomio/full/abrmwjd"></iframe>
</div>
</div>
<div class="column small-12 medium-6 small-centered medium-uncentered">
<div class="flex-video">
<iframe src="https://codepen.io/malcomio/full/oNRzwqY"></iframe>
</div>
</div>
</div>
<p>One simple and elegant solution is to use a transparent border on the other versions of that element, and only change the border colour when the state changes.</p>
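<p>A minimal sketch of the technique (class names and colour are hypothetical): the border is present in every state, reserving its space, and only its colour changes.</p>

```css
/* Reserve space for the border in every state, so the element's size
   never changes -- only the border colour does. */
.tab {
  border-top: 3px solid transparent;
}

.tab.is-active,
.tab:hover {
  border-top-color: #0070ad; /* placeholder highlight colour */
}
```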
<p>This feels like a very old and quite basic piece of advice, so I assumed it must have been written about before, but the only example I’ve found is in <a href="https://stackoverflow.com/questions/39385180/css-adding-border-to-button-on-hover-how-avoiding-a-shifting-of-the-following">this stack overflow answer</a>.</p>
<div class="row">
<div class="column small-12 medium-6 small-centered medium-uncentered">
<div class="flex-video">
<iframe src="https://codepen.io/malcomio/full/GRajEXy"></iframe>
</div>
</div>
<div class="column small-12 medium-6 small-centered medium-uncentered">
<div class="flex-video">
<iframe src="https://codepen.io/malcomio/full/MWdjQyv"></iframe>
</div>
</div>
</div>
<p>There are other ways to approach this, like using an inset box-shadow, or changing the padding to offset the size of the border, but to me that feels unnecessarily complex, and unintuitive enough that I’d want a comment in the code to <a href="https://blog.codinghorror.com/code-tells-you-how-comments-tell-you-why/">explain why</a>.</p>
<p><a href="https://capgemini.github.io/frontend/transparent-borders/">Transparent borders for element states</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on March 07, 2025.</p>https://capgemini.github.io/ai/building-a-rag2025-02-28T00:00:00+00:002025-02-28T00:00:00+00:00Sarah Saundershttps://capgemini.github.io/alumni#author-sarah-saunders
<p>I’ve said before that using the cloud feels, to a certain age of developer, like being a kid in a sweet shop. Talking recently to a colleague of a similar age about how he once wrote his own message bus in C, we agreed the current generation don’t know how lucky they are to consume this software as a service via a few clicks on a beautifully laid out portal! This week I was amazed again at what can be done in a few clicks, whilst taking part in a Capgemini hackathon. The brief was short and open – “build an app with AI” – and the toolset was $300 of credit on the Microsoft Azure portal.</p>
<p>What my team decided to build was a <a href="https://en.wikipedia.org/wiki/Retrieval-augmented_generation">RAG – a retrieval-augmented generative AI solution</a>. We have an email group that’s many, many years old and contains a lot of company knowledge from people who have long-ago retired, so we decided for our solution to “resurrect” them by using the mailbox files as a data source for our RAG. We based our solution on this Azure sample application, and architected as below:</p>
<p><img src="/images/2025-02-28-building-a-rag/architecture.jpg" alt="Azure architecture diagram for RAG" /></p>
<p>For a bit of fun, we also used Gen AI (Microsoft Copilot) to write our hackathon presentation – given a description of the architecture it came up with this masterpiece, which I kind of like:</p>
<p><img src="/images/2025-02-28-building-a-rag/hackathon_ai_arch.jpeg" alt="AI-generated architecture diagram" /></p>
<h2 id="data-source">Data source</h2>
<p>First step was to get the mailbox file into JSON format so it could be uploaded to the Azure portal. Next we had to learn about “vectorisation”.</p>
<p>Many of us are starting to become familiar with vector datastores and huge numbers of vectors being quoted – for example the <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#embeddings">“ada 2” embedding engine</a> that we used produces vectors with 1,536 dimensions. I always describe vector dimensions as “ways to represent the word cat”, for example with one dimension a cat is a cute but annoying furry animal; with 1,536 dimensions it can also be a cat o’ nine tails; a shortcut for concatenation; any subspecies of the genus Felis; a play; a musical; the arch-enemy of a dog; a cartoon… OK I won’t list 1,536.</p>
<p>It turned out to be pretty easy to vectorise our data. We stored it in an Azure storage blob so the cloud could be granted read permissions on it, we spun up an <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/">OpenAI</a> instance and deployed our “ada 2” embedding model in it, we created an <a href="https://learn.microsoft.com/en-us/azure/search/">Azure AI search index</a> (one of my favourite Azure products) and chose “load and vectorise data.” This wizard allowed us to specify our data source and our embedder, and then it went off and did some magic, and then we had vectorised data! As a test, I searched the AI Search index for the term “BBC”. I knew it wasn’t in the data source, but I knew there was a long debate in the data source about a Fawlty Towers TV series, and sure enough the vector returned, with a suitable degree of confidence, the email chain in question. Fantastic.</p>
<p>What is the process carried out by the embedder? It turns the words into numbers, based on how it has been trained, and there are calculations that can be performed on these numbers to figure out how “semantically similar” two words are – so, for example, in English the vectors for the words “coder” and “developer” should produce very similar outputs from the calculation. When I look in my AI search index post-vectorisation, I can see my initial data broken down into chunks, with arrays of arrays of numbers for each chunk. There are different embedding models you can use to vectorise your data, trained on different data sets – this is where specialisms can come in, such as a model trained on medical terms that recognises how they link together. The language of the text is also presumably a very important factor in training the embedder!</p>
<p>We now already have a very powerful machine-learning-augmented search on our data; all we really need to do is add an interface to it. As this is an AI hackathon, we’ll add a natural-language chatbot interface.</p>
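<p>The similarity calculation mentioned above is typically cosine similarity over the embedding vectors. A toy sketch with invented three-dimensional vectors (real “ada 2” embeddings have 1,536 dimensions):</p>

```python
import math

# Cosine similarity: the closer to 1.0, the more "semantically similar"
# the embedder considers the two words. These vectors are made up.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

coder = [0.9, 0.1, 0.3]
developer = [0.8, 0.2, 0.35]
banana = [0.1, 0.9, 0.0]

print(f"coder vs developer: {cosine_similarity(coder, developer):.3f}")
print(f"coder vs banana:    {cosine_similarity(coder, banana):.3f}")
```

<p>A vector search simply returns the stored chunks whose embeddings score highest against the embedded query – which is how “BBC” can match a Fawlty Towers discussion without the literal word appearing.</p>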
<h2 id="chatbot-interface">Chatbot Interface</h2>
<p>You can query the OpenAI instance that we’ve spun up for our hackathon in natural language by installing a “chat module” that will do this automatically, but that will ask the LLM – which I guess has been trained on a huge amount of internet data – directly, and that’s not what we want. For a RAG, we want to restrict the queries to a data set that we know contains the answer we want for our domain. So we build an “orchestrator” application to sit between the user and OpenAI, intercept queries, and then use prompts to force the LLM, with extreme prejudice (and maybe some bribes), to only use the data we want and only reply in the way we expect.</p>
<p>Our user interface is a simple one-page application with a chat box you can type into and submit to our orchestrator layer. The first step for the orchestrator is to use OpenAI to translate the user’s question into a better search term for our AI search index. So when the user sits on the keyboard and writes a misspelt, badly phrased query like “wot r the main chacaraterts in that SOAP with John Cleese in it”, we can clean it up a bit before we search our vectorised data. OpenAI is good at this – it’ll return something like “Fawlty towers main characters” (although it does prefer to spell “Faulty” correctly!) So we build that API call first. The response, if there is any, is then fed into an API call to our AI Search index, which will return the data it has matched plus a probability score that the data is relevant. There’s scope here to filter based on probability score if we’re getting inaccurate responses. Then we take this data and craft another OpenAI call, this time to the chat endpoint. We provide the data we’ve received from AI search, and a strongly worded prompt along the lines of “You are a helpful assistant, you will return a paragraph and bullet points of relevant data, you will ONLY use the provided data to generate your answer, if you cannot find the answer in the provided data then say ‘I don’t know’” and so on. Despite training courses telling you otherwise, prompt engineering is a bit of a dark art, so you may need to experiment with the prompt a bit to get better results.</p>
<p>The response from this query should be in natural language and contain answers only from your data set, so it can be directed straight back to the user, and voila! Your very own RAG application.</p>
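<p>The three-step flow above can be sketched in a few lines of Python. The <code class="language-plaintext highlighter-rouge">llm</code> and <code class="language-plaintext highlighter-rouge">search_index</code> callables stand in for the Azure OpenAI chat endpoint and the AI Search query – their names and signatures are my own assumptions, not a real SDK:</p>

```python
# Sketch of the RAG orchestrator: rewrite the question, search the
# vectorised index, then answer grounded strictly in the retrieved data.
GROUNDING_PROMPT = (
    "You are a helpful assistant. Answer ONLY from the provided data. "
    "If the answer is not in the data, say 'I don't know'.\n\nData:\n{data}"
)

def answer(user_question: str, llm, search_index, min_score: float = 0.5) -> str:
    # Step 1: ask the LLM to turn the raw question into a clean search term.
    search_term = llm("Rewrite this as a short search query: " + user_question)
    # Step 2: query the index; keep only matches above a confidence threshold.
    hits = [doc for doc, score in search_index(search_term) if score >= min_score]
    if not hits:
        return "I don't know"
    # Step 3: ask the chat endpoint, grounded in the retrieved data only.
    return llm(GROUNDING_PROMPT.format(data="\n".join(hits)) +
               "\n\nQuestion: " + user_question)
```

<p>Injecting the two services as callables also makes the orchestrator trivial to test with stubs before wiring in the real endpoints.</p>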
<h2 id="cost-considerations">Cost Considerations</h2>
<p>So what does it cost? From an Azure perspective, you need to host your static web front end and your orchestrator app – there are various ways to do this, you could use a Function App for the orchestrator and a “Static Web App” for your one-page front end, that might be the most cost-efficient.</p>
<p>You only need to deploy the OpenAI embedding module whilst you vectorise your data, but you do need the OpenAI chat interface which is charged per API call so that would be 2 calls per application use. You could skip the “translate-my-question” call if you really wanted to reduce this – although the pricing model does not give much benefit for this.</p>
<p>There is a handy in-portal calculator to help you figure out your OpenAI costs based on your created architecture in Azure. For my hackathon estate, which included an OpenAI chat, the estimate was $1,400 per month per <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/provisioned-throughput">PTU</a> (based on using the chat feature 24 hours a day), and I’d apparently need 15 PTUs – $21,000 a month. Ouch. Lowering usage to perhaps 7 hours a day for 20 working days gives a slightly more acceptable $4,200 a month, but still – costs to be very wary of.<br />
You also need your AI Search instance; this is charged per GB of data stored rather than per API call, so the cost really depends on the size of your RAG data set.</p>
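<p>The arithmetic behind those estimates is simple enough to sanity-check in a few lines. The figures below are the portal estimates quoted above, not current Azure pricing, and the 30-day month is an approximation:</p>

```python
PTU_COST_PER_MONTH = 1400   # portal estimate, USD, assuming 24/7 chat usage
PTUS_NEEDED = 15            # the portal's sizing for my hackathon estate

# Full-time cost: every hour of every day.
full_time = PTU_COST_PER_MONTH * PTUS_NEEDED            # 21,000 USD/month

# Prorate to 7 hours a day for 20 working days out of a ~720-hour month.
office_hours = full_time * (7 * 20) / (24 * 30)

print(f"24/7: ${full_time:,}/month")
print(f"Office hours only: ${office_hours:,.0f}/month")
```

<p>The prorated figure comes out around $4,100, in the same ballpark as the portal’s ~$4,200 estimate.</p>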
<h2 id="security-considerations">Security Considerations</h2>
<p>Whenever you use Software as a Service, you need to be aware of what you are sending to the vendor and what they are contractually able to do with it. In this case, our AI Search data is contractually secure in the index, but we also need to look at the contract for the OpenAI calls – for example, is Microsoft allowed to keep a copy of our query data? There’s a <a href="https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy">terms of service page</a> to help you understand this, with lots of promises about things Microsoft won’t do, but whether those promises are acceptable will be very specific to your use case requirements, so get your reading glasses on.</p>
<p>Then, as for any cloud architecture, you want to be sure that each component has the minimum viable permissions to access each other component and that your internet-exposed APIs and websites are protected by login. For this architecture, we could make the Azure storage blob read-only, give the Azure OpenAI instance contributor permissions to the AI search service, and give the orchestrator service read permissions to the AI search and query permissions to the OpenAI chatbot.</p>
<h2 id="conclusion">Conclusion</h2>
<p>It was pleasingly straightforward to create this RAG, although I am not certain that the cost of adding the chatbot front end significantly increases the value of the product; perhaps just the vectorised data is enough to increase the usefulness of a data source by making it contextually searchable? I think I did succeed quite well in resurrecting all those retired employees though. All their conversations about Fawlty Towers and the value of the Capgemini car lease scheme have been brought back to life in a very convincing and accessible way!</p>
<p><a href="https://capgemini.github.io/ai/building-a-rag/">Creating a Retrieval Augmented generative AI chatbot</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on February 28, 2025.</p>https://capgemini.github.io/cloud/accelerating-aws-native-dev-with-localstak-and-docker2024-11-08T00:00:00+00:002024-11-08T00:00:00+00:00Tochukwu (Charles) Ofoegbuhttps://capgemini.github.io/authors#author-tochukwu-charles-ofoegbu
<p>In this blog post, I’ll walk through how using <a href="https://www.localstack.cloud/">LocalStack</a> with <a href="https://www.docker.com/">Docker</a>
and <a href="https://docs.docker.com/compose/">Docker Compose</a> helped us speed up development
and testing, making the entire process more efficient and seamless.</p>
<p>When building modern cloud-native applications on AWS, having a reliable and cost-effective approach
to development and testing is essential.
For this project, we utilised LocalStack, a platform that simulates AWS services locally. This
enabled us to work more efficiently, speed up development, cut down on costs, and quickly test
integrations without constantly interacting with the actual AWS cloud. LocalStack, along with other
components of the project, was deployed as Docker containers using Docker Compose for
orchestrating the entire environment locally.</p>
<h2 id="the-challenge-cloud-dependency-slows-development">The Challenge: Cloud Dependency Slows Development</h2>
<p>As many developers will attest, working directly with live AWS services can become costly and slow.
Every API call, infrastructure change, or even basic test requires interaction with the cloud,
which can introduce delays. On top of that, maintaining isolation between development and production
environments can be tricky and costly when dealing with real AWS resources.</p>
<p>In particular, this project was fast-paced and required frequent iteration between components. The
constant cycle of making changes, pushing updates, and testing in a live AWS environment introduced
a lot of friction. A local alternative to AWS was essential for reducing the turnaround
time between code changes and testing. The combination of LocalStack with Docker and Docker Compose
provided the perfect solution.</p>
<h2 id="the-solution-localstack-with-docker">The Solution: LocalStack with Docker</h2>
<p><strong><a href="https://www.localstack.cloud/">LocalStack</a></strong> — “A complete, localised AWS environment where developers can build, test,
profile and debug infrastructure and code ahead of deployment to the cloud.” By deploying LocalStack
in a Docker container, we were able to spin up a local version of AWS that was fast, reliable, and
cost-free. Docker made it easy to manage and isolate the LocalStack environment, while Docker Compose
orchestrated the entire environment locally, allowing us to run all components — front-end, back-end,
mocks, and infrastructure, together in one seamless, local environment without the latency, cost, or
need for an internet connection.</p>
<h3 id="project-overview">Project Overview</h3>
<p>The project consisted of four key components:</p>
<ol>
<li><strong>UI Component:</strong> The frontend / user interface of the application, developed with
<a href="https://nodejs.org/en">Node.js</a>, with <a href="https://mozilla.github.io/nunjucks/">Nunjucks</a> as the templating engine.</li>
<li><strong>Backend Component:</strong> The APIs and business logic, built with <a href="https://www.java.com/en/">Java</a>
and the <a href="https://spring.io/projects/spring-boot">Spring Boot</a> framework.</li>
<li><strong>Mock Component:</strong> An HTTP mock service based on <a href="https://www.mock-server.com/">MockServer</a>, used to simulate third-party
or upstream APIs.</li>
<li><strong>Infrastructure/Environment:</strong> The AWS environment was defined in code following the
Infrastructure as Code (IaC) paradigm, using <a href="https://www.terraform.io/">Terraform</a> to provision various AWS services.</li>
</ol>
<p><img src="/images/2024-11-08-aws-native-dev/LocalStack_Project_Illustration.drawio.png" alt="Basic High level illustration of project setup with local stack, docker and docker compose" /></p>
<p>In this project, we used <strong>SQS</strong> for messaging, <strong>S3</strong> for object storage, <strong>ECS</strong> for container
orchestration, <strong>Cognito</strong> for user management, and <strong>CloudWatch</strong> for monitoring and alerting, among
other AWS services. LocalStack allowed us to simulate these services locally, cutting down
on our cloud costs and speeding up development. With Docker and Docker Compose, all services were
integrated into a single, local environment that closely mirrored the cloud environment.
For instance:</p>
<ul>
<li>Messaging flows were tested by sending and receiving messages through <strong>local SQS queues</strong>. This
allowed us to validate the flow of messages from one component to another, ensuring that the
message-handling logic worked as expected.</li>
<li>We uploaded and retrieved files from LocalStack’s simulated <strong>S3 buckets</strong>, ensuring our storage
logic worked as expected without needing to interact with the actual AWS S3 service.</li>
<li>User authorisation and authentication were tested through local <strong>Cognito User Pools</strong>.</li>
<li>Our back-end API integrated with Mock services simulated using <strong>MockServer</strong> running in
Docker, allowing us to test data storage and retrieval workflows without needing the live services
deployed on AWS.</li>
</ul>
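<p>The common thread in all of these flows is pointing the AWS SDK at LocalStack’s edge port instead of the real cloud, which boto3 supports through its <code>endpoint_url</code> parameter. A minimal sketch of how that switch might be wired up – the region, port and dummy credentials here are illustrative defaults, not our project’s actual configuration:</p>

```python
LOCALSTACK_ENDPOINT = "http://localhost:4566"  # LocalStack's default edge port

def client_kwargs(use_localstack: bool) -> dict:
    """Build keyword arguments for a boto3 client, local or real."""
    kwargs = {"region_name": "eu-west-2"}
    if use_localstack:
        # LocalStack accepts any credentials, so placeholders are fine.
        kwargs.update(
            endpoint_url=LOCALSTACK_ENDPOINT,
            aws_access_key_id="test",
            aws_secret_access_key="test",
        )
    return kwargs

# Usage (requires boto3 and a running LocalStack container):
#   import boto3
#   s3 = boto3.client("s3", **client_kwargs(use_localstack=True))
#   s3.create_bucket(Bucket="demo-bucket")
#   sqs = boto3.client("sqs", **client_kwargs(use_localstack=True))
#   sqs.create_queue(QueueName="demo-queue")
print(client_kwargs(use_localstack=True))
```

<p>With the switch isolated in one helper, the same application code runs unchanged against LocalStack locally and real AWS in the pipeline.</p>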
<p>All the code for the various components was managed with <a href="https://git-scm.com/">Git</a> and <a href="https://about.gitlab.com/">GitLab</a> version control,
and we also utilised GitLab’s CI/CD for our build and deployment pipeline management.</p>
<p>Refer to <a href="https://docs.localstack.cloud/user-guide/aws/feature-coverage/">AWS Service Feature Coverage</a>
for an overview of the AWS APIs and services implemented in LocalStack.</p>
<h2 id="localstack-and-docker-enhanced-our-project-in-the-following-ways">LocalStack and Docker enhanced our project in the following ways:</h2>
<h3 id="1-containerised-deployment">1. Containerised Deployment</h3>
<p>By containerising LocalStack, our application’s Backend services, Frontend service, mock components,
and other services, we could deploy the entire stack on any developer’s machine with minimal setup.
All that was required was Docker and Docker Compose. This approach offered consistency across
different environments and eliminated the “works on my machine” issues that often arise when
different developers have different local setups.</p>
<h3 id="2-docker-compose-for-orchestration">2. Docker Compose for Orchestration</h3>
<p>We used Docker Compose to orchestrate our multi-container environment. Each service—whether it was
LocalStack simulating AWS or our front-end and back-end components—was defined in a single
docker-compose.yml file. With one command, we could spin up or tear down the entire environment.
This orchestration allowed us to develop, test, and debug the entire system on a local machine
without having to interact with AWS cloud services.</p>
<p>Here’s an example of the Docker Compose setup. Please note, this is just an illustration and not the
actual configuration.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>version: '3'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
      - "4571:4571"
    environment:
      - SERVICES=s3,sqs,lambda,dynamodb
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
  backend:
    build: ./backend
    depends_on:
      - localstack
    ports:
      - "5000:5000"
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
  mock-service:
    build: ./mock-service
    ports:
      - "4000:4000"
</code></pre></div></div>
<p>This setup allowed all components – LocalStack, the back-end, front-end, and mock services – to run in
harmony, facilitating rapid development and easy integration testing.</p>
<h3 id="3-faster-development-cycle">3. Faster Development Cycle</h3>
<p>By running services locally, we no longer had to wait for deployments or API interactions to happen
in the cloud. We could iterate over changes rapidly, test out new features, and deploy code without
worrying about cloud costs. For example, instead of sending messages through live SQS queues and
interacting with S3 storage on AWS, LocalStack let us simulate these services locally. This allowed
us to run and test our messaging workflows much faster.</p>
<h3 id="4-easy-integration-testing">4. Easy Integration Testing</h3>
<p>One of the biggest challenges in modern cloud applications is ensuring that the various components –
front-end, back-end, messaging, storage – are working harmoniously together. LocalStack made this
process seamless. Since we could emulate all the AWS services locally, we could simulate how our
front-end communicated with the back-end, how our back-end processed data using SQS and stored files
in S3, and even how our infrastructure interacted with external services. This allowed us to test
integrations between various parts of our system without needing to push anything to the cloud.</p>
<h3 id="5-reduced-costs">5. Reduced Costs</h3>
<p>The ability to test AWS-dependent services locally means you’re not accumulating unnecessary cloud
charges. In our project, running constant tests on live AWS services would have significantly
increased the budget. LocalStack helped us minimise these costs by simulating the AWS environment on
our local machines without any cloud interaction. This is particularly useful for small teams or
projects with limited budgets.</p>
<h3 id="6-simplified-infrastructure-setup">6. Simplified Infrastructure Setup</h3>
<p>LocalStack also simplifies the process of setting up and tearing down infrastructure. Since
everything runs locally, we could quickly spin up and shut down environments for testing and
development, reducing the overhead involved in managing cloud resources.</p>
<h3 id="7-parallel-development">7. Parallel Development</h3>
<p>With LocalStack in place, our team could develop multiple components simultaneously. The front-end
devs didn’t need to wait for the back-end APIs to be fully functional in AWS; they could test against
the locally mocked services. Similarly, the infrastructure devs could configure and test various AWS
services locally before pushing changes to the cloud.</p>
<h2 id="downsides-of-localstack">Downsides of LocalStack:</h2>
<h3 id="1-limited-feature-support">1. Limited Feature Support</h3>
<p>Not all AWS services and features may be available or fully supported in LocalStack. Also, new or
complex features may not be fully implemented, which could impact testing and development.</p>
<h3 id="2-compatibility-concerns">2. Compatibility Concerns</h3>
<p>LocalStack may not always be fully compatible with the latest versions of AWS SDKs which could lead
to delays in adopting new features.</p>
<h3 id="3-configuration-complexity">3. Configuration Complexity</h3>
<p>Configuring and setting up LocalStack may require additional effort and technical know-how. It may
not be as straightforward as using the actual cloud services, especially for beginners.</p>
<h2 id="conclusion">Conclusion</h2>
<p>In summary, LocalStack significantly accelerated our project’s development and testing processes. By
simulating a local AWS environment, it allowed us to quickly iterate on our front-end, back-end, and
infrastructure components without incurring high cloud costs or waiting for slow cloud interactions.</p>
<p>I would highly recommend incorporating LocalStack into your workflow for anyone working on projects
that rely on AWS services. Not only will it make your development process faster and more efficient,
but it will also allow you to test AWS integrations with greater confidence, all while keeping your
costs low. If you haven’t tried LocalStack yet, give it a shot - you’ll likely find it as invaluable
as we did!</p>
<p><a href="https://capgemini.github.io/cloud/accelerating-aws-native-dev-with-localstak-and-docker/">Accelerating AWS Native Development with LocalStack & Docker: A Real-World Case Study</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on November 08, 2024.</p>https://capgemini.github.io/ai/use-cases-for-ai2024-09-06T00:00:00+01:002024-09-06T00:00:00+01:00Sarah Saundershttps://capgemini.github.io/alumni#author-sarah-saunders
<h1 id="use-cases-for-generative-ai">Use Cases for Generative AI</h1>
<p>For the past year, the Software Engineering world has been asked about Artificial Intelligence (AI) almost every week: “What should Capgemini’s AI offerings be?”, “Aren’t you worried about the role of Capgemini software engineers now that AI can write code?” In these cases, the questions are specifically about Generative AI. To put that in context, here is a recap of some terms used to describe different implementations of AI.</p>
<h3 id="machine-learning">Machine Learning</h3>
<p>A nice dictionary definition of machine learning from <a href="https://languages.oup.com/">Oxford Languages</a> is: “the use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data.”</p>
<p>The difference between a basic decision tree and machine learning is that with machine learning, a person didn’t sit down and define a set of if-statements for the machine to follow to make decisions (as used, for instance, in the NHS <a href="https://www.ucl.ac.uk/news/2004/dec/map-medicine">Map of Medicine</a> from 2007). Instead, they provided a set of inputs and expected outputs and let the machine’s algorithms figure out how to get from input to output. You could describe it as a bottom-up approach to problem solving rather than a top-down approach. This evokes the differences between <a href="https://en.wikipedia.org/wiki/Isaac_Newton">Newton</a> and <a href="https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz">Leibniz</a> in their attempts to describe the physical world in the 17th century. Newton’s laws are built more like a decision tree, describing the results of forces on objects, whereas Leibniz’s “monads” are more like the algorithms in machine learning, where small, simple rules build up together to create the complex forces we see in action. Both are very useful tools for describing and learning about the physical world, but they may result in different predictions of varying accuracy.</p>
<p>Machine learning algorithms are loosely based on the <a href="https://en.wikipedia.org/wiki/Neuron">neurons</a> in the brain. Each “artificial neuron” is a function that receives one or more inputs, applies weights to these inputs, and sums them to produce an output. Chained together into a “neural network”, these small functions can complete amazing tasks such as image recognition, text generation and predictive analytics.</p>
<p>Machine learning is incredibly useful and can benefit a wide range of industries, from healthcare to travel to finance.</p>
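<p>That description of an artificial neuron (one function per neuron, weighting and summing its inputs) is short enough to write out directly. The weights below are hand-picked for illustration, not learned values:</p>

```python
def neuron(inputs, weights, bias=0.0):
    """An artificial neuron: weight each input, sum them, then activate."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A simple step activation; real networks use smooth functions like
    # sigmoid or ReLU so the weights can be learned by gradient descent.
    return 1 if total > 0 else 0

# A neuron computing logical AND with hand-picked weights:
print(neuron([1, 1], [0.6, 0.6], bias=-1.0))  # 0.6 + 0.6 - 1.0 > 0, so fires
print(neuron([1, 0], [0.6, 0.6], bias=-1.0))  # 0.6 - 1.0 is negative: silent
```

<p>A neural network is essentially millions of these chained together, with the weights adjusted during training rather than chosen by hand.</p>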
<h3 id="generative-ai">Generative AI</h3>
<p>Dictionaries are not so hot on defining generative AI. Perhaps the best example is from <a href="https://www.merriam-webster.com/dictionary/generative%20AI">Merriam-Webster</a>: “artificial intelligence that is capable of generating new content (such as images or text) in response to a submitted prompt (such as a query) by learning from a large reference database of examples”. The important difference here is the generation of new content. There are still machine-learning algorithms at play, but the ask is that they produce something new. The output of a generative AI query isn’t pulled from a list, and it isn’t copied from an article. It is created by the neural network. You MAY get a text answer that is the same as an existing article, but it will have been probabilistically generated by an algorithm that, in simple terms, asks “what word is most likely to come next in this phrase?” Asking the same question twice will not produce the same output.</p>
<p>With machine learning, the output you get is usually a probability – for example, a handwriting recognition system would give a probability percentage that a given input image contains certain words. As such, it isn’t ever completely wrong, because it doesn’t give a definitive answer. Generative AI will look at the various probable outputs and choose one by applying a separate decision algorithm. In this way, it is possible to get answers that are just plain wrong.</p>
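<p>The “what word is most likely to come next” step, and why the same question can yield different answers, can be illustrated with a toy next-word sampler. The candidate words and their scores here are invented for the example:</p>

```python
import math
import random

# Made-up next-word scores for a phrase like "the main characters in ..."
scores = {"Fawlty": 2.0, "the": 1.0, "a": 0.5, "bear": 0.1}

def sample_next_word(scores, temperature=1.0, rng=random):
    """Pick the next word from a softmax distribution over scores.

    Low temperature: almost always the top-scoring word (near-deterministic).
    High temperature: more randomness, so repeated calls can differ.
    """
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(list(scores), weights=weights)[0]

print(sample_next_word(scores, temperature=0.1))  # overwhelmingly "Fawlty"
print(sample_next_word(scores, temperature=2.0))  # could be any candidate
```

<p>The separate sampling step is the “decision algorithm” mentioned above – and it is exactly where the plausible-but-wrong answers come from.</p>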
<h3 id="general-ai">General AI</h3>
<p>Wikipedia says General AI “is a type of AI that matches or surpasses human capabilities across a wide range of cognitive tasks.” To put it simply, General AI does not exist. We don’t currently have an accurate definition of human capabilities, so trying to decide whether a machine can exceed them is impossible!</p>
<h2 id="examining-the-use-cases">Examining the Use Cases</h2>
<p>As mentioned above, Machine Learning solutions are proving vastly useful in automating complex tasks that must make deductions based on huge quantities of data. A notable example is evaluating patient <a href="https://www.nature.com/articles/s43856-022-00199-0">CT scan data in search of cancerous growths</a>. The algorithm does not replace the oncologist, but assists by alleviating an otherwise “repetitive, tedious and burdensome” task. The outputs are not treated as gospel truth, but help the oncologist choose where to focus their time and efforts.</p>
<p>There has understandably been much buzz and excitement about the amazing capabilities of Generative AI this past year, with the broader uptake of <a href="https://openai.com/index/chatgpt/">ChatGPT</a> and <a href="https://openai.com/index/dall-e-2/">Dall-e</a>, for example. Now that the dust has settled, a closer evaluation of Generative AI use cases is required.</p>
<p>Remember this diagram for blockchain use cases? Perhaps it’s time for something similar for Generative AI use cases?</p>
<p><img src="/images/2024-09-06-use-cases-for-ai/flowchain.jpg" alt="Should You Use Blockchain? Flow chart" /></p>
<p>The key thing to remember about Generative AI is that the answers it generates are not necessarily correct, and are not even <a href="https://link.springer.com/article/10.1007/s11023-022-09602-0">intended</a> to be correct. When architecting a solution that involves Generative AI, the pertinent questions are:</p>
<ul>
<li>Does it matter if this generated answer is wrong?</li>
<li>Do I require the same answer to the same question if I ask it twice?</li>
</ul>
<p>If the answer to either of these questions is YES, then you cannot use a standard Generative AI solution without some significant <a href="https://www.forbes.com/sites/forbestechcouncil/2023/10/10/the-power-of-fine-tuning-in-generative-ai/">tuning</a>, which in itself removes the “generative-ness” of your solution.</p>
<p>There are still plenty of scenarios where the answer to the above questions is NO. With a focus on Microservice Development, we will look at the use cases that have been put forward for enhancing the software development lifecycle:</p>
<h3 id="user-interface-design">User Interface Design</h3>
<p>When designing a human-computer interface to improve the user experience, there are best practices but there is no wrong answer. As such, it is a viable candidate for applying Generative AI - the role of an <a href="https://en.wikipedia.org/wiki/Agile_software_development">Agile Software Engineer</a> in interface design is to iteratively put suggestions forward to the system user for their consideration. AI tools could accelerate prototypes, and assist the designer in breaking out of their habitual designs and trying something new. Already, design tools such as <a href="https://www.figma.com/">Figma</a> and <a href="https://v0.dev/">V0 by Vercel</a> have started to put this into practice.</p>
<h3 id="automated-testing">Automated Testing</h3>
<p>A software test suite should allow all routes through the system to be automatically checked when a change is added, to ensure that no previous logic has been broken by the new addition. Defining “all routes” isn’t as straightforward as you may think. There is a great testing joke:</p>
<blockquote>
<p>A tester walks into a bar and orders a beer.<br />
Then they order null beers.<br />
Then they order 99999 beers.<br />
Then they order ytugihoij^%&*( beers.<br />
Then they order a bear.<br />
A customer walks into a bar and asks where the toilets are. The bar explodes.</p>
</blockquote>
<p>Turning up the “<a href="https://platform.openai.com/docs/guides/text-generation/how-should-i-set-the-temperature-parameter">temperature</a>” on Generative AI and asking it to generate test cases could surface scenarios a tester might not have considered. A similar principle is applied to testing technical architectures in Barry O’Reilly’s <a href="https://www.sciencedirect.com/science/article/pii/S1877050922004975">Residuality Theory</a>, whereby he suggests imagining random events to test the robustness of a software architecture. For example, if you architect for the scenario that giant lizards might crawl out of the River Thames and set fire to London, your application will consequently be able to survive events such as power blackouts or train strikes! Generative AI could be used to produce those off-the-wall architectural scenarios that help improve your overall application stability over time.</p>
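<p>As a non-AI baseline for the same idea, the bar-joke inputs can be sketched as a simple randomised generator; a Generative AI test generator is, in effect, doing this with far more imagination. The “order beers” framing and every value below are illustrative:</p>

```python
import random
import string

def awkward_orders(rng=random):
    """Generate bar-joke style edge-case inputs for an 'order beers' API."""
    yield 1                                   # the happy path
    yield None                                # null beers
    yield 99999                               # absurdly large quantity
    yield -1                                  # negative quantity
    # Keyboard-mash garbage, like "ytugihoij^%&*(":
    yield "".join(rng.choices(string.ascii_letters + string.punctuation, k=8))
    yield "a bear"                            # right shape, wrong thing

for order in awkward_orders():
    print(repr(order))
```

<p>The joke’s punchline – the customer asking where the toilets are – is the case no enumeration will ever produce, which is exactly the gap a high-temperature generator is meant to help close.</p>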
<h3 id="writing-code">Writing Code</h3>
<p>There has been a lot published about the ability of Generative AI to <a href="https://youtu.be/qbIk7-JPB2c?t=1771">write code</a>. With applications such as GitHub Copilot and Azure Copilot being very easy to integrate into a developer’s workflow (or to use standalone as a pseudo-developer), this would appear to be a central use case for Generative AI, and some believe that Generative AI will replace the software engineer’s role imminently. What we have seen on Capgemini software projects does not align with this, however – it is not possible to give a junior developer a Copilot licence and expect them to perform like a senior developer, let alone enable businesses to create software themselves without a software engineer present at all. There are a few big and obvious reasons for this:</p>
<ol>
<li>
<p>It only saves time spent writing code<br />
Developers spend surprisingly small amounts of time actually writing code. There are <a href="https://www.microsoft.com/en-us/research/uploads/prod/2019/04/devtime-preprint-TSE19.pdf">various studies</a> showing that between 9% and 67% of time is spent “coding”, depending on the definition, the seniority of the developer, and the workplace environment. Either way, there is a lot of time that a developer spends on tasks that Generative AI code assistants are not designed to help with.
GitHub proudly announced huge <a href="https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/">productivity gains</a> of 55% when using Copilot, but saving 55% of the 9% of time you actually spend coding isn’t as impressive – plus, most of the examples I have seen of using Copilot focus on reducing boilerplate or writing simple games, and these use cases are not particularly relevant to the typical task list of a senior software engineer.</p>
</li>
<li>
<p>It takes value away from the development lifecycle<br />
An often-quoted example of Generative AI writing code is the creation of unit tests. The best practice when developing code is to employ <a href="https://en.wikipedia.org/wiki/Test-driven_development">Test-Driven Development</a> (TDD), a practice which forces the developer to consider their code interfaces before they even write their method implementations. Relying on Generative AI to write unit tests implies they will be written after the code, hence not following TDD practices.
A counter-argument is where a team inherits a codebase which does not have high unit test coverage. Is it safe to allow Generative AI to create the tests? Again, let’s go back to the initial question – does it matter if this generated answer is wrong? Yes, it does matter. The only thing worse than no tests is incorrect tests! It is possible to generate unit tests for an existing code framework <a href="https://symflower.com/en/">using rules</a> rather than Generative AI. In fact, this is true for many Generative AI use cases – a rules-based or machine learning result from a query is more useful than a generated one. What we find with many Generative AI use cases is that the useful component is very often the natural language “chatbot” interface and the way it can translate a natural language query into a search term, rather than the generated result.</p>
</li>
<li>
<p>It’s most useful for the people who don’t want it<br />
It is worth considering who will find a Generative AI code assistant most useful. Senior developers are less likely to use it to generate much code, as they have already found ways to avoid boilerplate, and the difficult work of fine-tuning or bug-fixing a code base is less likely to bring useful suggestions from Copilot. It is still incredibly useful, though, and can save many hours of trawling the Internet. Below is a real-world example of a conversation held with Azure Copilot. The software engineer has identified an issue with the code, and uses Copilot to find a way to code a specific fix. Interestingly, many of the most useful applications of Generative AI coding appear to be helping humans to write <a href="https://en.wikipedia.org/wiki/Regular_expression">regular expressions</a>!</p>
</li>
</ol>
<p><img src="/images/2024-09-06-use-cases-for-ai/copilot-regex.jpg" alt="Copilot writing regular expressions" /></p>
<p>As you can see from the prompt engineering, considerable knowledge is needed to get the code required to fix a problem in a code base, and further knowledge is needed to check whether the answer is relevant. And user beware – if you lose your chat history in Copilot (as will happen with Microsoft Copilot in “Data Protection” mode), you cannot get the same answer a second time by asking the question again, and this can result in many frustrating hours trying to rediscover via an internet search what you briefly saw! So much for time saving!</p>
<p>When interviewed, most senior software engineers will talk about their problem-solving mindset. They enjoy problem solving, and will actively look for problems to solve. As such, a tool which does the “fun bit” of problem solving for them is not going to see great take-up. Regular expressions aside, it is junior developers who will be most likely to take up Generative AI and use it as a crutch. And here we must go back to our initial check: does it matter if the code they generate is wrong? In our software supply chain pipelines we usually have built-in checks to ensure compilation, look at code quality and mandate test coverage, so some “wrong” code will be weeded out by the pipelines, but there is still a huge risk that a junior developer is shipping code whose impact they do not really understand. Currently, hackers have to go to great lengths to get unknown code into production code bases; now it seems we are embracing a tool that will do it for them. Care must be taken here.</p>
<ol>
<li>
<p>It’s only as good as the average of the code it learns from<br />
The data set used by Generative AI to produce code is often the whole of open source. This is a huge and fairly comprehensive dataset – many, many programmers have contributed to open source projects. We have therefore, quite literally, created the world’s most average programmer. This is rarely the target that a business sets for its IT divisions… And indeed, <a href="https://www.gitclear.com/coding_on_copilot_data_shows_ais_downward_pressure_on_code_quality">reports</a> now suggest that a code base maintained with Generative AI degrades in quality over time.</p>
</li>
<li>
<p>It’s just another programming language to learn<br />
There have always been layers of abstraction between humans and machines, be it 10 strings on an abacus to represent decimal numbers, typewriter keys operating little letter printers, or an <a href="https://en.wikipedia.org/wiki/Integrated_development_environment">IDE</a> auto-completing your code bracket pairs. Even Java code compiles to JVM bytecode, which is turned into operating-system-specific instructions by the <a href="https://en.wikipedia.org/wiki/Java_virtual_machine">JVM</a>, and NoCode/LowCode tools offer drag-and-drop integrations. Most of the text-based layers of abstraction have been called “programming languages”, and in that way, is Generative AI very different? It is still part of the chain between the human ideation of the software and the executable software itself. If we all become expert prompt engineers and lose the ability to write JavaScript code from scratch, is that any different from losing the ability to write C++ code because we write JavaScript and let Google’s <a href="https://v8.dev/">V8 engine</a> turn it into machine code for us? Software engineering is still the same skill, but the language used is now the language of prompt engineering. The only difference is that the Generative AI answer varies over time, and hence the process of debugging could get pretty interesting.</p>
</li>
</ol>
<p>In conclusion, there are fewer use cases than one might think for Generative AI, and whilst it will be a great tool for developers and will probably be as game-changing as IDEs were in the 1990s, it isn’t quite the end to software engineering that people are predicting - developers, don’t go hanging up your boots just yet.</p>
<h3 id="content-generation">Content Generation</h3>
<p>One of the often-touted use cases of Generative AI, within and without the Software Development community, is its ability to generate content for blogs and articles. (Perhaps it is also able to read said content and “like” it?)
As an example, we can compare the content of this article with the content generated when we ask Microsoft Copilot to write it, using this prompt:</p>
<p>“Write 2000 words on the use cases for Generative AI in the software development lifecycle. Include a definition of machine learning, generative AI and general AI in the first section. Use a formal but friendly tone. Write with the persona of a cautious software engineer.”</p>
<h2 id="copilot">Copilot</h2>
<blockquote>
<p>Certainly! Let’s delve into the fascinating world of Generative AI and its impact on the software development lifecycle. As a cautious software engineer, I’ll guide you through the nuances while maintaining a friendly tone.</p>
<p>Before we explore the use cases, let’s clarify some terms:</p>
<ol>
<li>Machine Learning (ML): ML is a subset of artificial intelligence (AI) that enables systems to learn from data and improve their performance over time. It involves algorithms that automatically learn patterns and make predictions based on examples.</li>
<li>Generative AI (GenAI): GenAI goes beyond traditional ML by creating new content rather than just predicting existing patterns. It generates novel outputs, such as text, images, or code, based on input data or prompts.</li>
<li>General AI (AGI): AGI refers to an AI system with human-like cognitive abilities, including reasoning, problem-solving, and creativity. While AGI remains an aspiration, GenAI represents a step toward achieving it.</li>
</ol>
<h3 id="code-generation-and-assistance">Code Generation and Assistance:</h3>
<ul>
<li>GenAI tools like ChatGPT and GitHub Copilot assist developers by suggesting code snippets, improving productivity, and reducing manual coding efforts.</li>
<li>However, it’s essential to recognize that GenAI isn’t a silver bullet. It complements human expertise rather than replacing it entirely.</li>
</ul>
<h3 id="content-creation">Content Creation:</h3>
<ul>
<li>GenAI can create documentation, blog posts, and other textual content. It helps developers communicate effectively and maintain consistent documentation.</li>
</ul>
<h3 id="test-script-generation">Test Script Generation:</h3>
<ul>
<li>Beyond user stories and test cases, GenAI can generate test scripts and scenarios.</li>
<li>It identifies edge cases, executes automated tests, and logs defects, enhancing software quality.</li>
</ul>
<h3 id="unit-test-cases">Unit Test Cases:</h3>
<ul>
<li>GenAI assists in writing unit test cases, ensuring robust code coverage.</li>
<li>Developers benefit from faster test case creation and improved code reliability.</li>
</ul>
<h3 id="source-code-generation">Source Code Generation:</h3>
<ul>
<li>While GenAI isn’t writing entire applications (yet!), it can generate boilerplate code, saving time during initial setup.</li>
<li>Developers remain responsible for critical design decisions.</li>
</ul>
<ol>
<li>Quality Control:
<ul>
<li>Always review GenAI-generated content critically. It’s a tool, not a replacement for human judgment.</li>
<li>Verify that the code aligns with best practices, security guidelines, and project requirements.</li>
</ul>
</li>
<li>Ethical Implications:
<ul>
<li>GenAI learns from existing data, which may contain biases. Be cautious about perpetuating bias in your code.</li>
<li>Regularly assess the ethical impact of GenAI-generated content.</li>
</ul>
</li>
<li>Balancing Speed and Quality:
<ul>
<li>GenAI accelerates development, but don’t sacrifice quality for speed.</li>
<li>Maintain a balance—use GenAI where it enhances productivity without compromising reliability.</li>
</ul>
</li>
</ol>
<p>Generative AI holds immense promise for software development. By understanding its capabilities, limitations, and ethical considerations, cautious engineers can leverage it effectively. Remember, GenAI is a valuable ally, but it’s our collective wisdom that truly drives innovation in the software development lifecycle.
As a cautious software engineer, I’ve emphasized practical insights while maintaining a friendly tone. If you have any further questions or need clarification, feel free to ask! 😊
<cite>Source: Conversation with Copilot, 19/06/2024<br />
References given:</cite></p>
<ol>
<li><a href="https://www.thoughtworks.com/insights/articles/generative-ai-software-development-lifecycle-more-than-coding-assistance">Generative AI and the software development lifecycle</a></li>
<li><a href="https://www.architectureandgovernance.com/artificial-intelligence/automating-the-software-development-life-cycle-with-generative-ai/">Automating the Software Development Life Cycle with Generative AI</a></li>
<li><a href="https://geekyants.com/blog/from-concept-to-code-how-generative-ai-is-reshaping-the-software-development-lifecycle">How Generative AI is Reshaping the Software Development Lifecycle: From …</a></li>
<li><a href="https://ijcrt.org/papers/IJCRT2308145.pdf">IMPACT OF GENERATIVE AI ON THE SOFTWARE DEVELOPMENT LIFE CYCLE (SDLC)</a></li>
</ol>
</blockquote>
<p>Not bad, Copilot! However, it’s only 500 words. No amount of re-prompting and scolding would generate more. It seems you still can’t beat humans for their sheer ability to vacillate!</p>
<p><a href="https://capgemini.github.io/ai/use-cases-for-ai/">Use Cases for AI</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on September 06, 2024.</p>https://capgemini.github.io/cloud%20native/ses-email-security2024-07-12T00:00:00+01:002024-07-12T00:00:00+01:00Tom Spencerhttps://capgemini.github.io/authors#author-tom-spencer
<p>In the evolving landscape of digital communication, email spoofing poses a significant threat to the integrity of online identities. As a hobby project, and to practice AWS and software design, I have been developing an application called <a href="https://github.com/tom-spencer-organization/MovieBooker">MovieBooker</a>. This application streamlines the process of booking movie tickets and managing movie programs for both moviegoers and staff.</p>
<p><img src="/images/2024-07-12-ses-email-security/architecture.png" alt="architecture" /></p>
<h2 id="email-integration-with-aws-ses">Email Integration with AWS SES</h2>
<p>As part of the infrastructure for the MovieBooker application, I implemented <a href="https://aws.amazon.com/ses/">Amazon Simple Email Service (SES)</a> to send emails when a user pays for a ticket. However, I encountered a challenge where the messages contained a banner with a “Be careful with this message” warning. This marked the beginning of my quest to unravel the secrets of email security.
<img src="/images/2024-07-12-ses-email-security/be-careful-with-this-message.png" alt="Be careful with this message" /></p>
<h2 id="strengthening-email-security-with-spf-dkim-and-dmarc">Strengthening Email Security with SPF, DKIM, and DMARC</h2>
<p>In order to get rid of the “Be careful with this message” warning, I had to set up Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM) and Domain-based Message Authentication, Reporting, and Conformance (DMARC).</p>
<p>SPF is an email authentication standard that helps protect senders and recipients from spam, spoofing, and phishing. DKIM is a standard email authentication method that adds a digital signature to outgoing messages.
DMARC helps mail administrators prevent hackers and other attackers from spoofing their organization and domain. Spoofing is a type of attack in which the From address of an email message is forged.</p>
<p>Determined to make my application’s emails as trustworthy as possible, I delved into SPF, DKIM, and DMARC. These three tools promised an end-to-end authentication mechanism that would transform my emails into beacons of trust.</p>
<h3 id="what-you-will-learn">What You Will Learn</h3>
<p>I aim to share the valuable insights I gained while implementing a secure end-to-end email authentication system. In this article, you will learn how to harness the power of three essential tools:</p>
<ol>
<li><strong>SPF (Sender Policy Framework)</strong></li>
<li><strong>DKIM (DomainKeys Identified Mail)</strong></li>
<li><strong>DMARC (Domain-based Message Authentication, Reporting, and Conformance)</strong></li>
</ol>
<p>These lessons were acquired through practical application in fortifying the email communication of my application. Join me as I guide you through the process of implementing these robust security measures.</p>
<h3 id="understanding-the-threat-landscape">Understanding the Threat Landscape</h3>
<p>Insecure email communication poses various risks, including phishing attacks that trick recipients into divulging sensitive information, email spoofing for deceptive purposes, and business email compromise (BEC) threats involving unauthorized access to legitimate accounts. Additionally, insecure emails may damage the sender’s domain reputation, result in financial losses, and lead to regulatory compliance violations.</p>
<p>To mitigate these risks, it’s crucial to implement robust email security measures such as encryption, multi-factor authentication, and the adoption of authentication protocols like SPF, DKIM, and DMARC. Educating users on recognizing and avoiding phishing attempts further enhances overall email security.</p>
<h2 id="overview-of-spf-dkim-and-dmarc">Overview of SPF, DKIM, and DMARC</h2>
<h3 id="spf---sender-policy-framework">SPF - Sender Policy Framework:</h3>
<ul>
<li>SPF acts as a set of rules and regulations for email servers, authorizing them to send emails on behalf of a specific domain.</li>
<li>It enables the receiver to determine whether an incoming email is genuinely sent from an authorized server associated with the claimed domain.</li>
</ul>
<h3 id="dkim---domainkeys-identified-mail">DKIM - DomainKeys Identified Mail:</h3>
<ul>
<li>DKIM adds a digital signature to outgoing emails, creating a unique identifier that can be verified by the recipient.</li>
<li>The recipient checks the signature against the public key in the DNS records, ensuring the email is unaltered and genuinely originated from the claimed sender’s domain.</li>
</ul>
<h3 id="dmarc---domain-based-message-authentication-reporting-and-conformance">DMARC - Domain-based Message Authentication, Reporting, and Conformance:</h3>
<ul>
<li>DMARC acts as a protocol that instructs the email receiver on how to handle authenticated emails.</li>
<li>It provides options such as acceptance, marking as spam, quarantine, or rejection based on SPF and DKIM verification results.</li>
</ul>
<h2 id="benefits-of-spf-dkim-and-dmarc">Benefits of SPF, DKIM, and DMARC:</h2>
<ul>
<li>
<p><strong>Mitigates the risk of email spoofing and phishing attacks:</strong>
Ensures only authenticated emails are accepted, reducing the likelihood of malicious activities.</p>
</li>
<li>
<p><strong>Enhances the reputation of your domain:</strong>
Implementing industry-standard email authentication practices contributes to a positive domain reputation.</p>
</li>
<li>
<p><strong>Provides a clear framework for email handling:</strong>
Reduces the chances of legitimate emails being marked as spam, improving communication reliability.</p>
</li>
<li>
<p><strong>Aids in achieving and maintaining regulatory compliance:</strong>
Particularly beneficial in industries with stringent data protection regulations, helping organizations avoid legal consequences and financial penalties.</p>
</li>
<li>
<p><strong>DMARC protects the reputation of the sender’s domain:</strong>
Provides visibility into email authentication results, preventing unauthorized use of the domain in phishing attempts and strengthening overall brand trust.</p>
</li>
<li>
<p><strong>Contributes to the mitigation of data leakage:</strong>
Ensures that sensitive information transmitted via email is secure and has not been tampered with during transit.</p>
</li>
</ul>
<h2 id="establishing-a-secure-communication-channel">Establishing a Secure Communication Channel</h2>
<p>By implementing SPF, DKIM, and DMARC, we establish a formidable defense against unauthorized email activities, offering a secure and reliable communication channel that fosters trust and confidence among users.</p>
<h3 id="what-is-aws-ses">What is AWS SES?</h3>
<p>Amazon Simple Email Service (AWS SES) is a cloud-based email sending service designed to simplify the process of sending emails from applications and websites. Its role extends beyond just delivering emails; AWS SES serves as a comprehensive solution for ensuring secure and reliable email communication.</p>
<h4 id="key-features">Key Features:</h4>
<ul>
<li>
<p><strong>Comprehensive Email Solution:</strong>
AWS SES goes beyond basic email delivery, providing a robust solution for secure communication.</p>
</li>
<li>
<p><strong>Easy Integration with AWS Services:</strong>
Designed for easy integration with other AWS services, making it straightforward to implement and manage SPF, DKIM, and DMARC alongside complementary AWS solutions.</p>
</li>
<li>
<p><strong>Seamless Integration with Authentication Protocols:</strong>
AWS SES seamlessly integrates with SPF, DKIM, and DMARC authentication protocols, providing a strong framework to verify sender identity and prevent email spoofing or phishing attempts.</p>
</li>
<li>
<p><strong>Configurable Email Sending Settings:</strong>
Allows users to configure and customize email sending settings, enabling the implementation of specific security measures tailored to the organization’s needs.</p>
</li>
</ul>
<p>With AWS SES, you can ensure that your email communication is not only efficient but also adheres to industry-leading security practices.</p>
<h2 id="implementing-dkim-with-aws-ses">Implementing DKIM with AWS SES</h2>
<p>To enhance the security of your email communication, you can implement DKIM (DomainKeys Identified Mail) with Amazon Simple Email Service (AWS SES). Use this link to set up DKIM:
<a href="https://docs.aws.amazon.com/ses/latest/dg/send-email-authentication-dkim-easy.html">Setting up Easy DKIM</a></p>
<p><strong>Set Up Public Key in DNS:</strong>
Once DKIM signatures are enabled, you need to set up the public key in your DNS.</p>
<ul>
<li>Retrieve the public key from the AWS SES console.</li>
<li>Add a TXT record to your DNS with the DKIM public key.</li>
</ul>
<p>Example DNS Record:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Name: selector._domainkey.yourdomain.com
Type: TXT
Value: <span class="s2">"v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBA..."</span>
</code></pre></div></div>
<p><img src="/images/2024-07-12-ses-email-security/dns-record.png" alt="DNS record" /></p>
<p><img src="/images/2024-07-12-ses-email-security/custom-email.png" alt="Custom email" /></p>
<p><img src="/images/2024-07-12-ses-email-security/record-sets.png" alt="Record sets" /></p>
<p>These steps will guide you through the process of setting up DKIM for your AWS SES configuration, adding an extra layer of authenticity to your outgoing emails.</p>
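<p>Once the TXT record has propagated, it can help to check what the outside world actually sees. The sketch below composes the DNS name at which a DKIM public key is published; note that <code>selector</code> is a placeholder here – AWS SES Easy DKIM assigns its own selectors, so substitute the name shown in the SES console.</p>

```shell
# Compose the DNS name where a DKIM public key is published.
# DKIM records live at <selector>._domainkey.<domain>.
dkim_record_name() {
  local selector="$1" domain="$2"
  echo "${selector}._domainkey.${domain}"
}

# With network access you could then verify the published record, e.g.:
#   dig TXT "$(dkim_record_name selector yourdomain.com)" +short
dkim_record_name selector yourdomain.com
```

<p>If the <code>dig</code> query returns your <code>v=DKIM1; k=rsa; p=…</code> value, receiving mail servers can find the key and verify your signatures.</p>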
<h2 id="setting-up-spf">Setting up SPF</h2>
<p>Before we configure SPF, it’s essential to understand the key components of an SPF (Sender Policy Framework) record. SPF is a crucial authentication mechanism that helps prevent email spoofing and phishing attacks.
Below is a detailed breakdown of the components in an SPF record.</p>
<h3 id="version-v">Version (v):</h3>
<p>The SPF version is specified using the “v” tag, followed by the version number. For example, “v=spf1” indicates SPF version 1.</p>
<h3 id="mechanisms">Mechanisms:</h3>
<p>Mechanisms are the core components of an SPF record that define the rules for email server authorization. Common mechanisms include:</p>
<ul>
<li><strong>a:</strong> Allows the domain’s A record (IPv4 address) to send mail.</li>
<li><strong>mx:</strong> Allows the domain’s MX record (mail exchange) to send mail.</li>
<li><strong>include:domain.com:</strong> Permits the specified domain to send mail on behalf of the current domain.</li>
<li><strong>ip4:192.168.0.1:</strong> Authorizes the specific IPv4 address to send mail.</li>
<li><strong>all:</strong> Matches every sender; it is placed last and, combined with a qualifier (see below), specifies the default result for servers not matched by any previous mechanism.</li>
</ul>
<h3 id="directives">Modifiers:</h3>
<p>Modifiers refine the behavior of an SPF record. The two defined modifiers are:</p>
<ul>
<li><strong>redirect:</strong> Delegates the SPF check to another domain’s SPF record.</li>
<li><strong>exp:</strong> Points to an explanation string that provides additional information in case of SPF failures.</li>
</ul>
<p>(Note that <strong>exists:</strong>, which checks whether a specified domain resolves to an A record, is technically a mechanism rather than a modifier.)</p>
<h3 id="qualifiers">Qualifiers:</h3>
<p>Qualifiers define the result of a mechanism or directive. There are four possible qualifiers:</p>
<ul>
<li><strong>+ (Pass):</strong> The server is authorized, and the SPF check passes.</li>
<li><strong>- (Fail):</strong> The server is not authorized, and the SPF check fails.</li>
<li><strong>~ (SoftFail):</strong> The server is not authorized, but the SPF check may still pass (often used for testing or gradual SPF deployment).</li>
<li><strong>? (Neutral):</strong> No explicit authorization or denial; the SPF check is inconclusive.</li>
</ul>
<p>Now, armed with an understanding of these components, you can proceed to configure SPF for your domain, strengthening email authentication and security.</p>
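<p>To make the qualifier semantics concrete, here is a toy shell function that reports which result the final <code>all</code> mechanism of a record string would yield. This is purely illustrative – real SPF evaluation is performed by the receiving mail server per RFC 7208, not by string matching.</p>

```shell
# Toy check: extract the qualifier applied to the final "all" mechanism
# of an SPF record string and name the result it produces.
spf_all_qualifier() {
  local record="$1"
  case "$record" in
    *"-all"*) echo "fail" ;;      # hard fail: reject unauthorized servers
    *"~all"*) echo "softfail" ;;  # soft fail: accept but mark suspicious
    *"?all"*) echo "neutral" ;;   # no assertion either way
    *"+all"*|*" all"*) echo "pass" ;;  # everything passes (rarely wise)
    *) echo "none" ;;
  esac
}

spf_all_qualifier "v=spf1 include:_spf.example.com ip4:192.168.0.1 -all"
```

<p>For the example record above this prints <code>fail</code>, meaning servers not covered by the <code>include</code> or <code>ip4</code> mechanisms are rejected outright.</p>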
<h2 id="example-spf-record">Example SPF Record</h2>
<p>Below is an example SPF (Sender Policy Framework) record that demonstrates the use of different mechanisms and qualifiers:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>v=spf1 include:_spf.example.com ip4:192.168.0.1 -all
Name: yourdomain.com
Type: TXT
Value: "v=spf1 include:_spf.example.com ip4:192.168.0.1 -all"
</code></pre></div></div>
<p>The general shape of an SPF record is therefore:
<code class="language-plaintext highlighter-rouge">
"v=spf1 {mechanisms, each optionally prefixed with a qualifier} {qualifier}all"
</code></p>
<p>Follow this link for more information on DNS settings: <a href="https://improvmx.com/guides/configure-your-dns/">DNS settings</a>. I added this TXT entry in Route 53 for SPF support:</p>
<p><img src="/images/2024-07-12-ses-email-security/simple-type-routing.png" alt="Simple type routing" /></p>
<h2 id="configuring-dmarc-policies">Configuring DMARC Policies</h2>
<p>Now, let’s walk through the steps to configure DMARC (Domain-based Message Authentication, Reporting, and Conformance) policies for your domain in Amazon Web Services (AWS) using Route 53:</p>
<ol>
<li>
<p><strong>Log in to Amazon Web Services:</strong>
Go to <a href="https://aws.amazon.com/">https://aws.amazon.com/</a> and log in to your AWS account.</p>
</li>
<li><strong>Navigate to Route 53:</strong>
<ul>
<li>Under “Services,” find and click on “Route 53” under “Network & Content Delivery.”</li>
</ul>
</li>
<li><strong>Access Hosted Zones:</strong>
<ul>
<li>In the “DNS Management” section, select “Hosted Zones.”</li>
</ul>
</li>
<li><strong>Select Your Domain:</strong>
<ul>
<li>From the list of hosted zones, find and click on the domain you want to configure DMARC for.</li>
</ul>
</li>
<li><strong>Create a Record Set:</strong>
<ul>
<li>Click on the “Create Record Set” button.</li>
</ul>
</li>
<li><strong>Configure DMARC Record:</strong>
<ul>
<li>In the “Name” field, type “_dmarc”.</li>
<li>Under “Type,” select “TXT” from the drop-down menu.</li>
</ul>
</li>
</ol>
<p>Example DMARC Record:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Name: _dmarc.yourdomain.com
Type: TXT
Value: "v=DMARC1; p=quarantine; rua=mailto:[email protected]; ruf=mailto:[email protected]"
</code></pre></div></div>
<p>Generate a value for the DMARC record set on this site <a href="https://powerdmarc.com/dmarc-record-generator/">powerdmarc</a></p>
<p><img src="/images/2024-07-12-ses-email-security/power-dmarc-1.png" alt="Power DMARC 1" /></p>
<p><img src="/images/2024-07-12-ses-email-security/power-dmarc-2.png" alt="Power DMARC 2" /></p>
<p>Add the DMARC record to the DNS record in Route 53:</p>
<p><img src="/images/2024-07-12-ses-email-security/dns-record-dmarc.png" alt="DNS Record DMARC" /></p>
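<p>As a quick sanity check on a DMARC value, the sketch below pulls out the policy (<code>p=</code>) tag that tells receivers how to treat failing mail. Again, this is illustrative only; receiving servers parse the full tag list as defined by RFC 7489.</p>

```shell
# Extract the policy (p=) tag from a DMARC TXT record value.
# Tags in a DMARC record are separated by semicolons.
dmarc_policy() {
  local value="$1"
  echo "$value" | tr ';' '\n' | sed -n 's/^[[:space:]]*p=//p'
}

dmarc_policy "v=DMARC1; p=quarantine"
```

<p>For the record configured above this prints <code>quarantine</code>, i.e. mail that fails both SPF and DKIM alignment should be delivered to spam rather than rejected outright.</p>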
<h2 id="conclusion">Conclusion</h2>
<p>The implementation of Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain-based Message Authentication, Reporting, and Conformance (DMARC) within the framework of Amazon Simple Email Service (SES) presents a robust and multifaceted approach to fortifying email security. By embracing these authentication protocols, organizations and individuals can significantly mitigate risks associated with phishing, email spoofing, and unauthorized access.</p>
<ul>
<li>
<p><strong>SPF as the Gatekeeper:</strong>
SPF acts as a gatekeeper, specifying authorized email servers and reducing the likelihood of malicious actors impersonating trusted sources.</p>
</li>
<li>
<p><strong>DKIM for Extra Security:</strong>
DKIM adds an extra layer of security by providing a unique digital signature, ensuring that the integrity of the email remains intact during transmission.</p>
</li>
<li>
<p><strong>DMARC as the Decision-Maker:</strong>
DMARC serves as the wise decision-maker, guiding email receivers on how to handle authenticated emails based on SPF and DKIM verification results.</p>
</li>
<li>
<p><strong>AWS SES Integration:</strong>
AWS SES, as a reliable and scalable cloud-based email sending service, seamlessly integrates these authentication mechanisms. Its features, such as scalability, deliverability, and compatibility with industry standards, make it a suitable choice for organizations looking to enhance their email security posture. Real-time analytics and monitoring tools, combined with cost-effectiveness, ease of integration, and a developer-friendly interface, contribute to a comprehensive solution for secure email delivery.</p>
</li>
</ul>
<p>In adopting SPF, DKIM, and DMARC with AWS SES, organizations not only safeguard their digital identities but also bolster user trust by ensuring that emails are genuine, secure, and delivered with integrity. This strategic combination lays the foundation for a secure communication channel, fostering a resilient defense against the evolving landscape of email-based threats. As the digital realm continues to advance, this approach stands as a testament to proactive and effective email security practices.</p>
<p><strong>The best part is you don’t get the “Be careful with this message” warning anymore 🙂.</strong></p>
<p><a href="https://capgemini.github.io/cloud%20native/ses-email-security/">AWS Simple Email Service Security</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on July 12, 2024.</p>https://capgemini.github.io/development/dcx-react-library-v1-is-here2024-05-24T00:00:00+01:002024-05-24T00:00:00+01:00Daniele Zuricohttps://capgemini.github.io/authors#author-daniele-zurico
<p>In 2022 we announced a new style-agnostic React library called <a href="https://capgemini.github.io/development/dcx-react-library/">dcx-react-library</a> and today, after almost 3 years of hard work and 14 releases, we’re happy to share that version <a href="https://www.npmjs.com/package/@capgeminiuk/dcx-react-library">1.0.0</a> is finally available.</p>
<h2 id="whats-new">What’s new</h2>
<p>We’ve built more than 35 components organised in 3 different categories: forms, utilities and typography, keeping in mind our main principles:</p>
<ul>
<li>fewer dependencies</li>
<li>small bundles</li>
<li>fully tested</li>
<li>AA accessible</li>
</ul>
<p>I’m pretty sure the main question you’ll be asking yourself is: “why did you decide to make a <a href="https://semver.org/">major release version</a>?”
After using the library on our projects, we realised that we had sped up the development process as promised, but because the library is style agnostic by nature, our development team was spending too much time making our components look pixel perfect, exactly as our UX/UI team had designed them. We asked ourselves: “now that we have the foundations, can we do better?” Yes we can! So we decided to implement a design system for most of our components.
Most of our components have been redesigned, without introducing any breaking changes, to add the usage of design tokens. We improved our documentation, adding a new section called design system:
<img src="/images/2024-04-03-dcx-react-library-v1-is-here/dcx-react-library-v1-storybook.jpg" alt="Documentation" /></p>
<p>If the use of tokens has piqued your interest and you are curious to understand how it works, we have created a quick video that illustrates how to start using it:</p>
<div class="small-12 medium-8 large-4 small-centered columns">
<div class="flex-video">
<iframe width="640" height="360" src="https://www.youtube.com/embed/KjYL9fzxdxw" title="dcx-react-library-design-system.mp4" frameborder="0" allowfullscreen=""></iframe>
</div>
</div>
<h2 id="whats-next">What’s next?</h2>
<p>Now that version 1.0.0 is finally available, we will focus on implementing the design system for our remaining components. We’ll continue to implement new components (if your project needs components that we don’t have yet, please don’t be shy: create a request on our <a href="https://github.com/Capgemini/dcx-react-library">github repository</a>) and stay tuned… something else is cooking in the Capgemini labs.</p>
<h2 id="you-want-to-know-more">You want to know more?</h2>
<ul>
<li>If you’d like to know more about the library, feel free to contact <a href="https://capgemini.github.io/authors/#author-daniele-zurico">Daniele Zurico</a>.</li>
<li>If you’re curious to give it a try, it’s publicly available on <a href="https://www.npmjs.com/package/@capgeminiuk/dcx-react-library">npm</a>.</li>
<li>If you’d like to contribute, you will need to request access from our <a href="https://github.com/Capgemini/dcx-react-library">public repository</a>.</li>
<li>If you want to familiarise with all the components we built, take a look at our <a href="https://main--6069a6f47f4b9f002171f8e1.chromatic.com/">storybook documentation</a>.</li>
</ul>
<p><a href="https://capgemini.github.io/development/dcx-react-library-v1-is-here/">DCX React Library v1.0.0 is with us and we’re proud of it</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on May 24, 2024.</p>https://capgemini.github.io/development/automation-scripting-bash2024-05-24T00:00:00+01:002024-05-24T00:00:00+01:00Enamul Alihttps://capgemini.github.io/authors#author-enamul-ali
<p>The need for automation in software engineering, especially in Cloud and Platform Roles is apparent. Manual, repetitive tasks consume time, introduce the possibility of human error and can become tedious for team members. In this blog post, I wish to document my journey of automation using Bash. We’ll look at the process and explore how automation can positively impact our workflows and our team.</p>
<h2 id="automating-simple-commands-with-bash-scripts">Automating Simple Commands with Bash Scripts</h2>
<p>For my first project at Capgemini, I have been working as a Platform Engineer within Cloud & Custom Applications. I quickly recognised the importance of automation in streamlining workflows. I took on the challenge of automating repetitive tasks, in the hopes of saving time and reducing human error.</p>
<p>Within my role, I find myself frequently using the same set of commands. For example, I often type <code class="language-plaintext highlighter-rouge">vault read --field=value <path> | openssl x509 -noout -text</code>, a command commonly used for reading and inspecting certificates stored in HashiCorp Vault. The symbol <code class="language-plaintext highlighter-rouge">|</code> is called a “pipe” in Bash scripting. It allows the output from the command on its left side to be used as input for the command on its right side. For example, in the command above the output of <code class="language-plaintext highlighter-rouge">vault read --field=value <path></code> is passed to <code class="language-plaintext highlighter-rouge">openssl x509 -noout -text</code> for further processing.</p>
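<p>A minimal illustration of the pipe in action, using nothing but standard tools:</p>

```shell
# A pipe (|) connects the stdout of the left command to the stdin of the
# right command. Here sort receives three unsorted lines from printf
# and emits them in alphabetical order.
printf 'banana\napple\ncherry\n' | sort
```

<p>The same principle applies to the <code>vault read … | openssl …</code> command above: the certificate text emitted by Vault becomes the input that OpenSSL decodes.</p>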
<p>This got me thinking: why not create a basic bash script that takes the vault path as an argument, which I could then alias as a command. That way, I could simply run <code class="language-plaintext highlighter-rouge"><alias> <path></code> to fetch the certificate. Not only would this save me time, but it would be easier on the fingers.</p>
<p>So I set about doing this as below:</p>
<ol>
<li>
<p>The first step in creating a script is to create a file ending in .sh, here we will create a file called <code class="language-plaintext highlighter-rouge">vaultcert.sh</code>.</p>
</li>
<li>
<p>Next, open the file in a text editor and define its functionality: The script accepts an argument for the vault path. This argument is passed as the path to the command <code class="language-plaintext highlighter-rouge">vault read --field=value <path> | openssl x509 -noout -text</code>.</p>
</li>
</ol>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="c"># Function to read certificate from Vault</span>
read_certificate<span class="o">()</span> <span class="o">{</span>
<span class="c"># Check if argument is provided</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
</span><span class="nb">echo</span> <span class="s2">"Error: Vault path argument is missing."</span>
<span class="nb">exit </span>1
<span class="k">fi</span>
<span class="c"># Execute the command to read certificate from Vault</span>
vault <span class="nb">read</span> <span class="nt">--field</span><span class="o">=</span>value <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> | openssl x509 <span class="nt">-noout</span> <span class="nt">-text</span>
<span class="o">}</span>
<span class="c"># Invoke the function with the provided argument</span>
read_certificate <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span>
</code></pre></div></div>
<p>3. We must set executable permissions to ensure the script is able to execute. We can do this by running in Linux:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">chmod</span> +x vaultcert.sh
</code></pre></div></div>
<p>4. An alias allows the script to be executed from a single command. We can do this by adding the following to our shell configuration file (e.g. .bashrc or .bash_profile):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">alias </span><span class="nv">vaultcert</span><span class="o">=</span><span class="s1">'/path/to/vaultcert.sh'</span>
</code></pre></div></div>
<p>5. Now we must reload the shell configuration to apply the changes. We can do this by sourcing: <code class="language-plaintext highlighter-rouge">source ~/.bashrc</code> or <code class="language-plaintext highlighter-rouge">source ~/.bash_profile</code>.</p>
<p>6. Finally, confirm that the alias works by running the following command in the terminal:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vaultcert <path>
</code></pre></div></div>
<p>And that’s it – you have your first automated bash script!</p>
<h2 id="automating-more-complex-commands-with-bash-scripts">Automating More Complex Commands with Bash Scripts</h2>
<p>Once I had my first Bash script up and running, I challenged myself to automate more complex tasks. Bash scripting can make tasks such as managing files much easier, and the possibilities for streamlining workflows and automating complex, tedious tasks are endless.</p>
<p>For example, the next automation script I worked on expanded upon the first one: it allows the user to copy a file stored in Vault to their local machine and then write it to a further Vault path. All we have to do is follow the steps above and create a new script:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
copy_write_certificate<span class="o">()</span> <span class="o">{</span>
<span class="c"># Check if arguments are provided</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="nt">-z</span> <span class="s2">"</span><span class="nv">$2</span><span class="s2">"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
</span><span class="nb">echo</span> <span class="s2">"Error: Arguments are missing."</span>
<span class="nb">exit </span>1
<span class="k">fi</span>
<span class="c"># Execute command to copy certificate from Vault to local and write to new destination</span>
vault <span class="nb">read</span> <span class="nt">--field</span><span class="o">=</span>value <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> <span class="o">></span> /path/to/local/file
<span class="c"># Check if the file was copied successfully</span>
<span class="k">if</span> <span class="o">[</span> <span class="nt">-f</span> <span class="s2">"/path/to/local/file"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
</span><span class="nb">echo</span> <span class="s2">"Certificate copied successfully to /path/to/local/file"</span>
<span class="k">else
</span><span class="nb">echo</span> <span class="s2">"Error: Certificate copy failed."</span>
<span class="nb">exit </span>1
<span class="k">fi</span>
<span class="c"># Prompt user to confirm writing the file to the new destination vault path</span>
<span class="nb">read</span> <span class="nt">-p</span> <span class="s2">"Do you want to proceed with writing the file to </span><span class="nv">$2</span><span class="s2"> (y/n): "</span> choice
<span class="k">if</span> <span class="o">[</span> <span class="s2">"</span><span class="nv">$choice</span><span class="s2">"</span> <span class="o">=</span> <span class="s2">"y"</span> <span class="o">]</span> <span class="o">||</span> <span class="o">[</span> <span class="s2">"</span><span class="nv">$choice</span><span class="s2">"</span> <span class="o">=</span> <span class="s2">"Y"</span> <span class="o">]</span><span class="p">;</span> <span class="k">then
</span>vault write <span class="nt">--field</span><span class="o">=</span>value <span class="s2">"</span><span class="nv">$2</span><span class="s2">"</span> <span class="nv">value</span><span class="o">=</span>@/path/to/local/file
<span class="nb">echo</span> <span class="s2">"File successfully written to the new vault path."</span>
<span class="k">else
</span><span class="nb">echo</span> <span class="s2">"Cancelled writing to vault."</span>
<span class="nb">exit </span>1
<span class="k">fi</span>
<span class="o">}</span>
<span class="c"># Invoke the function with the provided arguments</span>
copy_write_certificate <span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> <span class="s2">"</span><span class="nv">$2</span><span class="s2">"</span>
</code></pre></div></div>
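One fragile spot in the script above is the hardcoded /path/to/local/file. A possible refinement — a sketch using only standard tooling, not part of the original script — is to write to a temporary file created with `mktemp` and remove it automatically on exit:

```bash
#!/bin/bash
set -euo pipefail

# Use a private temporary file instead of a hardcoded path, and remove
# it automatically when the script exits (success or failure).
tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT

# Stand-in for: vault read --field=value "$1" > "$tmpfile"
echo "dummy certificate data" > "$tmpfile"

# -s checks the file exists AND is non-empty (stronger than -f alone,
# which would pass even if the vault read produced an empty file)
if [ -s "$tmpfile" ]; then
  echo "Certificate copied successfully to $tmpfile"
else
  echo "Error: Certificate copy failed." >&2
  exit 1
fi
```

This also avoids two users of a shared script clobbering each other's copies, since `mktemp` generates a unique filename per run.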
<p>As demonstrated, Bash scripting opens up endless possibilities. We can increase our team’s efficiency by automating frequently used commands and sharing the scripts. Adding documentation and demos helps facilitate collaboration and boosts productivity across the entire team.</p>
<p>We have explored my journey into Bash automation on my first project here at Capgemini. In my next blog post, we will take this one step further by introducing Python scripting and discussing the use cases for Bash vs Python.</p>
<p>Thanks for reading!</p>
<p><a href="https://capgemini.github.io/development/automation-scripting-bash/">Automation Scripting with Bash</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on May 24, 2024.</p>https://capgemini.github.io/architecture/devoxx-20242024-05-17T00:00:00+01:002024-05-17T00:00:00+01:00Sarah Saundershttps://capgemini.github.io/alumni#author-sarah-saunders
<p>Do queues in the ladies’ toilets signify success at an IT conference? Is a half-hour sit-down worth more than a T-shirt? These and other deep questions pondered.</p>
<p>Capgemini’s Cloud Development team are returning from our annual pilgrimage to <a href="https://devoxx.co.uk">Devoxx UK</a>, the best and biggest annual IT developer conference in the UK. As is becoming usual, a train strike limited my attendance on the Wednesday, but even in a couple of days there has been much to inspire and mull over. This year, instead of a full booth, Capgemini sponsored a corner filled with super-comfy bean bags, and it was the most popular our area has ever been! As always it felt like a very inclusive conference, with a wide range of age groups and nationalities, and a good balance of genders and races present.</p>
<p><img src="/images/2024-05-10-devoxx-uk/beanbags.jpg" alt="Dreaming their own future" /></p>
<h2 id="ai-again">AI Again</h2>
<p>As with last year there was a strong theme of AI across many of the conference talks, but this year the emphasis was more on developer involvement - less “Wow look what this can do”, and more “This is how you can build things”. Some angles were difficult to see a production use-case for - for example, <a href="https://www.devoxx.co.uk/talk/?id=51203">Developing Intelligent Apps with Semantic Kernel</a> showcased how with the Semantic Kernel Java API you can allow generative AI access to call your Java functions whenever it considers them appropriate. Having recently taken Capgemini’s “Responsible AI” training course, I appreciate that generative AI’s decision-making process cannot be considered “correct”, so you could only use this approach when it didn’t matter if the AI got the decision to make the Java call wrong - and that really limits its usage. Deciding when to delete development resources, perhaps? Or rotating a “thought for the day” message? Nothing more important, please! The speaker’s (inevitable) example of roleplaying games seemed safe enough.
More AI concerns surfaced in Harel Avissar’s talk “<a href="https://www.devoxx.co.uk/talk/?id=52402">The State of Malicious ML Models Attacks</a>”. There are now popular repositories such as <a href="https://huggingface.co/">HuggingFace</a> where people can share their pre-trained large language models (LLMs), which once trained are, as Harel pointed out, basically deployable units the same as any other software artifact. As such, they are open to attack in the same way as other pipeline artifacts and need to be signed and secured in the same way. This is quite a new field for attacks, but when scanning the LLMs available to download they did find some examples where LLM pickle (.PKL) files, which can execute arbitrary Python code when loaded, contained malicious “reverse shell attack” code. Something to be aware of.</p>
<h2 id="ai-and-vectors">AI and Vectors</h2>
<p>Quite a few talks focussed on looking at <a href="https://en.wikipedia.org/wiki/Vector_database">vector databases</a>, commonly used in LLMs, and the algorithms needed to search them. Vector types can represent the relationships between concepts across multiple dimensions. Mary Grygleski’s <a href="https://www.devoxx.co.uk/schedule/talk?id=5729">talk</a> on Friday covered how ChatGPT’s vectors represent 1,536 dimensions. This would mean, for example, it could store 1,536 different contexts for the word “cat”. Handy. But not brand new, and not specific to generative AI - Elasticsearch for example has been using a vector database and the <a href="https://www.pinecone.io/learn/series/faiss/hnsw/">HNSW</a> search algorithm to great effect in its fuzzy searches for years. In fact, as I discovered in <a href="https://capgemini.github.io/cloud/create-ai-bot-in-azure/">my last blog post</a>, the powers of fuzzy search and natural language processing are often more real-world useful than generative AI. You don’t usually want a computer system to come up with its own answer to a question, you want it to go away and find a definitive answer!</p>
<h2 id="thoughtful-architecture">Thoughtful Architecture</h2>
<p>Conference regular Andrew Harmel-Law was back with what some have said is his best talk yet - <a href="https://www.devoxx.co.uk/talk/?id=5750">“How we Decide”</a>. I didn’t attend as I had recently seen <a href="https://dev.events/conferences/testing-agile-dev-ops-and-low-code-showcase-kvwjgjk2">Dave Snowden’s</a> talk on “being human in an age of technology”, in which he went into detail on the biological human process of decision-making - for example, did you know that groups of 5 or fewer will always reach consensus? Possibly because this is a common family group size, allowing ancient humans to move quickly in family units. And did you know that only 4 in 5 of your decisions are made by the conscious brain, with the other one coming pretty much straight from the body’s senses? Things like pheromones have a big impact on decision-making when face to face. I was still processing this information so skipped Andrew’s talk for a deep dive into container-based IDEs, but I feel I missed out and am looking forward to the video!
I did attend Barry O’Reilly’s joyous “<a href="https://www.devoxx.co.uk/talk/?id=13710">An Introduction to Residuality Theory”</a>. Any talk that begins with a Douglas Adams quote and goes on to arm the attending developers with some REALLY DIFFICULT questions to put to enterprise architects (especially difficult if you don’t allow them to use the word “magic”) is always going to go down well, and O’Reilly went on to confirm my suspicions that the way we approach software architecture is less science and more art. He pointed out that our architecture diagrams don’t capture time, change or uncertainty - hugely important factors in software engineering - and how good software architects must be really comfortable with the concept of uncertainty. He had lots of great quotes about how Agile is a reaction to the realisation that requirements don’t work for complex systems, but that using Agile and reacting to change when problems arise leads to flaky architectures. His revelation, and the topic of his Ph.D thesis, is that software architecture <em>could</em> be a science, if we use methods from complexity theory. He went on to outline some really excellent ideas about how to robustify your candidate architecture using random simulation and ideas based on Kauffman’s networks and attractors. Plus, what to do if fire-breathing lizards should happen to climb out of the Thames. Forewarned is forearmed!</p>
<h2 id="more-microservices">More microservices</h2>
<p>Microservices haven’t gone away from Devoxx, and this year there were a couple of talks debating / re-debating the monolith v microservice argument. I especially enjoyed Chris Simon’s <a href="https://www.devoxx.co.uk/talk/?id=3678">Modular Monoliths & Microservices - A 4+1 View Continuum</a> in which he advised us all not to bother with the debate as both terms were becoming meaningless! Even a monolith is probably a distributed system. And if you view your client browsers as a scaled-out cluster of your application front end, which you probably should, even the simplest static website becomes a vast distributed network. He recommended moving away from the terms microservice / monolith and instead directly using terms such as “process” (a thing you can start and stop), and “node” for distributed compute, and teams and repositories for code ownership. He talked about the power of Domain Driven Design (DDD) when building up the logical view of your system, and how mapping processes onto bounded contexts can give you a candidate physical architecture, although don’t shy away from multiple processes in a bounded context if, for example, there are asynchronous steps between multiple processes (i.e. a queue). He also gave the quote of the conference when he said people often found they had a service and “distributed the wazoo out of it and we don’t know why”.
He talked about the development view of an architecture and how your code repository should always align with the logical view. If this doesn’t happen you end up with inter-team coupling - behaviours such as “Scrum of Scrums” are a sign that it is becoming very hard to get decisions made, and a clue that your development view isn’t in alignment. He also talked about how we should reconsider our assumptions on the physical view of the architecture - we often jump to the conclusion that a microservice needs its own infrastructure but this may not be the case, and we often assume that a repository must be deployed as a monolith when in fact you can use path filters on your monorepo to deploy separate sections at different times.</p>
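The path-filter idea can be sketched with plain shell tooling (the service names and file list below are illustrative, not from the talk):

```bash
#!/bin/bash
set -euo pipefail

# Illustrative sketch: deploy only the monorepo sections whose files changed.
# In a real pipeline, changed_files would come from something like:
#   git diff --name-only origin/main...HEAD
changed_files="services/billing/src/handler.py
services/billing/README.md
docs/runbook.md"

# Trigger a deployment only for services with changed files
for service in billing auth search; do
  if echo "$changed_files" | grep -q "^services/$service/"; then
    echo "deploying $service"
  fi
done
```

Here only `billing` would be deployed, even though all three services live in the same repository - a monorepo need not be deployed as a monolith.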
<h2 id="fear-of-rust">Fear of Rust</h2>
<p>For something new, I went to see Ayoub Alouane talk about “<a href="https://www.devoxx.co.uk/talk/?id=9001">My discovery of Rust: Why is it a Game Changer?</a>” where he showed an example of Discord’s “Message Marker” app showing regular spikes in CPU that corresponded with garbage collection. The corresponding Rust application had no such spikes. Why? Rust, announces Ayoub, does its memory management at compile time! He went on to demonstrate with a simple loop application how, for each variable, you had to specify the memory details of the variable:</p>
<ul>
<li>Is it mutable?</li>
<li>Is it usable in multiple threads?</li>
<li>Is it lockable?</li>
<li>Should you clone it into threads?</li>
</ul>
<p>All this can be expressed through the syntax of the language. A bit of a pain to learn, but well worth it. A lot of end-of-life C applications are being migrated to Rust, and after this talk I can understand why. He gave another example of rewriting a node.js app in Rust and using 95% less memory and 75% less CPU! OK, game-changing point taken!
This talk of application efficiency fit well with our own Shirley Tarboton’s talk on <a href="https://www.devoxx.co.uk/talk/?id=22223">Decarbonising Code</a>. She gave some examples of easy-to-implement coding practices that can make your application consume fewer resources. It was a well-received talk in the huge auditorium theatre - quite a feat to get people to attend during lunch!</p>
<p><img src="/images/2024-05-10-devoxx-uk/decarbonising-with-shirley.jpg" alt="Decarbonising code with Shirley" /></p>
<h2 id="crashing-and-burning">Crashing and burning</h2>
<p>Last thing Thursday I popped into a talk called <a href="https://www.devoxx.co.uk/talk/?id=22207">Mayday Mark 2! More Software Lessons From Aviation Disasters.</a> As speaker Adele Carpenter pointed out, the Venn diagram of software engineers and aviation geeks is “basically a circle”. Previous aviation disaster talks I’ve been to have focused on ways of working we can both benefit from - for example, psychological safety, failing fast and failing openly.
Adele’s talk, however, was a rather harrowing blow-by-blow account of some major air crashes from the past few decades. There were some important lessons about how much information humans can cope with, how humans react unexpectedly in stressful situations, the importance of familiarity and expectations in UI design, but overall it was a rather sobering affair. Which is just as well, since we all poured straight out of the talk and into the local pub for the famous Devoxx party, complete with IT-branded beers. Cheers for another year!</p>
<h2 id="continuous-change">Continuous Change</h2>
<p>On Friday I went to a talk very relevant to my current project - Chris Simon’s “<a href="https://www.devoxx.co.uk/talk/?id=3677">Winning at Continuous Deployment with the Expand/Contract Pattern</a>”. My project has a fairly typical Continuous Integration pipeline, but we still do production releases via manual intervention on a business schedule. One of the reasons is schema changes which cause backward incompatibility between services - this can cause outages if our server-side (message recipient side) services are updated before the client side (message sender) services - because even if they manage to process the old-style message, they may send a response the new client isn’t expecting. So whilst we have eventual consistency, a few requests may get lost along the way during the service restart.
The Expand/Contract pattern addresses this by putting the onus on developers to create an “interim” server-side application which can accept both old and new client messages, and sends both old and new response formats. So if you used to have an application that took a username and returned OK like this</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
"name": "Sarah Saunders"
}
{
"response": "OK"
}
</code></pre></div></div>
<p>and the new version would take first name / last name and return OK and an ID:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{ "name": {
"first": "Sarah",
"last": "Saunders"
}}
{
"id": "2134-ker-438052u"
}
</code></pre></div></div>
<p>In this scenario, the “interim” application would accept both inputs, and would return an output like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
{
"response": "OK",
"id": "2134-ker-438052u"
}
</code></pre></div></div>
<p>This should allow client applications to talk to the server both before and after they are migrated to the new version. Then, once all clients are migrated, the “Contract” stage of the release occurs and the server can be moved to only accept/respond in a V2 style.</p>
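The “expand” phase can be made concrete with a small sketch: during the interim release, the server always returns a superset payload, so both old and new clients find the field they expect. The field names are taken from the examples above; the shell function itself is illustrative, not from the talk.

```bash
#!/bin/bash
set -euo pipefail

# During the "expand" phase the interim server returns a superset payload:
# v1 clients read "response", v2 clients read "id", and both are satisfied.
build_interim_response() {
  local id="$1"
  printf '{ "response": "OK", "id": "%s" }\n' "$id"
}

build_interim_response "2134-ker-438052u"
```

Once every client has migrated to reading `id`, the “contract” step simply drops the legacy `response` field from the payload.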
<p>I really like this pattern and I don’t think developers would mind writing the extra code for the interim release. It does spell out though, that deployment must be a consideration right from the design stage of an application, and Chris did raise this point at the start - mentioning his other talk goes into more detail about how Test Driven Development (TDD) helps this by defining strong contractual interfaces between components, and is a strong enabler for continuous deployment. I have long been a strong advocate of TDD, and here is yet another reason why getting <a href="https://chat.openai.com/g/g-yBT4X4ldj-unit-test-writer">ChatGPT to write your tests</a> for you is no substitute at all for proper TDD!</p>
<p>In conclusion, a really great conference for affirming that we (Capgemini Cloud Development team) are on the right lines with the software and architectures we are currently building for our clients, our use and understanding of AI aligns well with the global community (or is sometimes a little bit ahead) and we have lots of great lessons to learn to improve our architecture and deployment practices even further. Looking forward to the next year.</p>
<p><a href="https://capgemini.github.io/architecture/devoxx-2024/">Devoxx 2024</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on May 17, 2024.</p>https://capgemini.github.io/development/macos-development-with-swift2024-04-11T00:00:00+01:002024-04-11T00:00:00+01:00Enamul Alihttps://capgemini.github.io/authors#author-enamul-ali
<p><img src="/images/2024-04-08-macos-development-with-swift/hijri-dates-cover.jpeg" alt="Hijri Menu Bar App Cover Image" /><br />
For a first foray into MacOS development, I built a custom menu bar application. The Hijri Menu Bar allows the user to view the current <a href="https://www.webexhibits.org/calendars/calendar-islamic.html">Hijri date</a>. It is convenient, minimal and easy to use. Below, we’ll highlight the development process, looking at the code, build process and challenges faced.</p>
<h2 id="development">Development</h2>
<p>The development process began with setting up the project in Xcode. Swift and SwiftUI were chosen as the foundations for the app. Essential components such as the <code class="language-plaintext highlighter-rouge">AppDelegate</code>, which manages the app’s startup and background tasks, were incorporated, alongside a custom <code class="language-plaintext highlighter-rouge">StatusItemView</code> to handle the UI and what the user would see.</p>
<h3 id="app-delegate">App Delegate</h3>
<p><img src="/images/2024-04-08-macos-development-with-swift/1-macOS-menu-bar-app.png" alt="NSStatusBar Swift Code Snippet" /><br />
The <code class="language-plaintext highlighter-rouge">AppDelegate</code> class is responsible for setting up the <a href="https://developer.apple.com/documentation/appkit/nsstatusitem">status bar item</a>, managing the popover and handling user interactions. It initialises a clickable item in the menu bar with a calendar icon.</p>
<p><img src="/images/2024-04-08-macos-development-with-swift/2-macOS-menu-bar-app.png" alt="NS Popover Swift Code Snippet" /><br />
An <a href="https://developer.apple.com/documentation/appkit/nspopover"><code class="language-plaintext highlighter-rouge">NSPopover</code></a> is essentially a holder to display the <a href="https://developer.apple.com/documentation/appkit/nswindow/1419160-contentview"><code class="language-plaintext highlighter-rouge">contentView</code></a> (a SwiftUI view). This popover will utilise a custom SwiftUI View to display the current Hijri date information.</p>
<p>Now we have a clickable item in the menu bar and a <a href="https://developer.apple.com/design/human-interface-guidelines/popovers">popover</a>. We need to ensure when the item is clicked, the popover will appear on the screen.</p>
<p><img src="/images/2024-04-08-macos-development-with-swift/3-macOS-menu-bar-app.png" alt="Button Swift Code Snippet" /><br />
To achieve this, the <a href="https://developer.apple.com/documentation/swiftui/button"><code class="language-plaintext highlighter-rouge">Button</code></a> <code class="language-plaintext highlighter-rouge">action</code> and <code class="language-plaintext highlighter-rouge">target</code> lines establish the target-action pattern. Clicking the button will now trigger the <code class="language-plaintext highlighter-rouge">togglePopover(_:)</code> function (defined below).<br />
This function decides whether to show or hide the popover, based upon its current state.</p>
<p><img src="/images/2024-04-08-macos-development-with-swift/4-macOS-menu-bar-app.png" alt="Toggle Popover Function Code Snippet" /><br />
The <code class="language-plaintext highlighter-rouge">togglePopover(_:)</code> method is a custom function. It first checks that both the button and the popover are valid. If the popover is already shown, it will close it.<br />
If the popover is hidden, it fires a custom notification <code class="language-plaintext highlighter-rouge">.fetchHijriDates</code>. This will be received later in our <code class="language-plaintext highlighter-rouge">StatusItemView</code> (The SwiftUI view responsible for displaying the dates).</p>
<p>This function also displays the <a href="https://developer.apple.com/documentation/appkit/nspopover/1532113-show">popover relative</a> to the menu bar icon’s bottom edge.</p>
<p><img src="/images/2024-04-08-macos-development-with-swift/5-macOS-menu-bar-app.png" alt="NSApp Activation Policy Code Snippet" /><br />
Applications in MacOS can have <a href="https://developer.apple.com/documentation/appkit/nsapplication/activationpolicy">different activation policies</a> that determine how they behave in the user interface. Setting the activation policy to <code class="language-plaintext highlighter-rouge">accessory</code> mode means that the application will remain active in the background, even if the user closes its window or quits the application. This is useful for menu bar applications that need to remain running without requiring constant user interaction.</p>
<h3 id="user-interface">User Interface</h3>
<p>After configuring the <code class="language-plaintext highlighter-rouge">AppDelegate</code>, the focus shifted to building the user interface. A new file is created called the <code class="language-plaintext highlighter-rouge">StatusItemView</code>. This will be responsible for displaying the current Hijri date in the popover created above.</p>
<p><img src="/images/2024-04-08-macos-development-with-swift/6-macOS-menu-bar-app.png" alt="Status Item View Code Snippet" /><br />
This view manages the date fetching and presenting that to the user. The app starts with state variables which will store the current Hijri dates in Arabic and English.</p>
<p><img src="/images/2024-04-08-macos-development-with-swift/7-macOS-menu-bar-app.png" alt="Text View Code Snippet" /><br />
Next, we utilise <a href="https://developer.apple.com/documentation/swiftui/text"><code class="language-plaintext highlighter-rouge">Text</code></a> views to display the information stored in the state variables (i.e. the current Hijri date in Arabic and English with appropriate formatting.)</p>
<p><img src="/images/2024-04-08-macos-development-with-swift/8-macOS-menu-bar-app.png" alt="HStack Code Snippet" /><br />
SwiftUI offers a horizontal stack <a href="https://developer.apple.com/documentation/swiftui/hstack"><code class="language-plaintext highlighter-rouge">HStack</code></a> which is used here to display a copyright message and an information icon. Tapping the information icon toggles the visibility of an instructions menu (which is coded in a separate <code class="language-plaintext highlighter-rouge">InstructionsView</code>).</p>
<p><img src="/images/2024-04-08-macos-development-with-swift/9-macOS-menu-bar-app.png" alt="VStack Code Snippet" /><br />
The app fetches the latest Hijri date in two ways. When the view first appears on the screen, it grabs the latest date for an accurate display. This is achieved using the <a href="https://developer.apple.com/documentation/swiftui/view/onappear(perform:)"><code class="language-plaintext highlighter-rouge">onAppear</code></a> method. However, this did not refresh the dates subsequently (say, at midnight), which hindered the functionality of the app.</p>
<p>We can achieve this through utilising Swift’s <a href="https://developer.apple.com/documentation/foundation/notificationcenter"><code class="language-plaintext highlighter-rouge">NotificationCenter</code></a> class. Here, a custom notification <code class="language-plaintext highlighter-rouge">fetchHijriDates</code> is triggered from the <code class="language-plaintext highlighter-rouge">AppDelegate</code> every time the popup is shown. The <code class="language-plaintext highlighter-rouge">StatusItemView</code> listens for this notification. When it is received, it fires the custom <code class="language-plaintext highlighter-rouge">fetchHijriDates</code> function (explained next) which will handle fetching the current date.</p>
<p><img src="/images/2024-04-08-macos-development-with-swift/10-macOS-menu-bar-app.png" alt="Fetch Dates Function Code Snippet" /><br />
Behind the scenes, a custom <code class="language-plaintext highlighter-rouge">fetchHijriDates</code> function is responsible for fetching the current Hijri dates. It formats them according to Arabic and English locales and updates the state variables created earlier with those formatted dates. It is called both when the view appears and in response to the <code class="language-plaintext highlighter-rouge">fetchHijriDates</code> notification, discussed earlier.</p>
<p>Here we are able to leverage Apple’s built-in Date and Calendar functionality, using <a href="https://developer.apple.com/documentation/foundation/calendar/identifier/islamic"><code class="language-plaintext highlighter-rouge">Calendar(identifier: .islamicUmmAlQura)</code></a>. This (satisfying) pre-built functionality saves us time and ensures accuracy and uniformity in our date handling.</p>
<p>This is the end of our discussion on the code. <a href="https://github.com/Enamulali/hijri-date">The full code can be found on the GitHub repo</a>.<br />
Below we will look at the rest of the development & build process.</p>
<h2 id="build-process--challenges">Build Process & Challenges</h2>
<p>Building the app in Xcode involved compiling the code, resolving errors and creating a distributable package. This process also involved setting the minimum macOS version, which is 12 for this app.</p>
<h3 id="app-icon">App Icon</h3>
<p><img src="/images/2024-04-08-macos-development-with-swift/512-mac.png" alt="App Icon" /><br />
Creating a visually appealing icon was important. A custom icon was designed and added to Xcode in various sizes to ensure it adapts to different screen resolutions. <a href="https://www.candyicons.com/free-tools/app-icon-assets-generator">I used this free online tool to achieve this</a>.</p>
<h3 id="distribution">Distribution</h3>
<p>Distributing through the Mac App Store requires a paid Apple Developer account, so I chose to make the app freely available on GitHub. Subsequent releases were uploaded via <a href="https://github.com/Enamulali/hijri-date/releases">GitHub Releases</a>.</p>
<h3 id="contemplating-further-features">Contemplating Further Features</h3>
<p>Features like automatic launch at login were initially considered. Launch at login ensures the menu bar icon is always visible (i.e. that the app is started after the user logs in/restarts their machine). However, the focus remained on core functionality and a simple user experience. Instructions are provided to users during installation to manually set launch at login.</p>
<h3 id="creating-dmg-file">Creating DMG File</h3>
<p>To ensure smooth installation, I created a DMG file, using the <a href="https://github.com/create-dmg/create-dmg"><code class="language-plaintext highlighter-rouge">create-dmg</code></a> tool. Clear and concise instructions were included to guide users through the installation process. Using the downloaded DMG file, users can drag the app to their Applications folder, and launch it from there.</p>
<h2 id="conclusion">Conclusion</h2>
<p>This journey, from concept to distribution, has been a valuable learning experience in macOS development. I hope to continuously improve the Hijri Menu Bar app and provide users with a seamless way to view Hijri dates on their Macs.</p>
<p>If you’re interested in trying out the app or contributing to its development, <a href="https://github.com/Enamulali/hijri-date">you can find it on GitHub</a>.</p>
<p><a href="https://capgemini.github.io/development/macos-development-with-swift/">Exploring MacOS Development: Creating a Menu Bar App with Swift & SwiftUI</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on April 11, 2024.</p>https://capgemini.github.io/cloud/create-ai-bot-in-azure2024-01-19T00:00:00+00:002024-01-19T00:00:00+00:00Sarah Saundershttps://capgemini.github.io/alumni#author-sarah-saunders
<p>I recently had the privilege of judging an internal Capgemini hackathon. It was an open brief, but the focus was to be on technology and its application to solve a real-world business problem. The entries were varied and excellent, from a dashboard to assess how warm/busy/accessible the office was so you could decide whether or not it was worth going in, to gamification of training, to improvements for mountain search and rescue teams.
One of the major commonalities across many of the entries was the use of “AI”, where, given our common use of the Azure platform, AI tended to be defined as Azure Cognitive Search (recently renamed <a href="https://azure.microsoft.com/en-us/products/ai-services/ai-search">Azure AI Search</a>) indexing a set of business documents, with a natural language processing layer on top to act as a chatbot. This made me want to have a go myself and see what I could build!</p>
<h2 id="is-it-ai">Is it AI?</h2>
<p>This architecture, for me, isn’t really using the “AI” bits of AI - although if you use ChatGPT (which has been <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/">available as a product in Azure</a> since Microsoft’s investment in OpenAI) there can be some non-deterministic, generative functionality. But it sure is useful, and could probably ease the burden of the HR and support departments of many organisations - possibly even replacing a lot of the staff in those departments. I set out to see if I could build an HR chatbot to handle the kinds of queries a typical HR department employee might deal with. Why HR? Just because everybody hates them?? No…! It’s because of HR’s remit: dealing with the employee lifecycle, and needing to prove that a company acts without bias, makes it a heavily process-driven department. Those processes must be documented, and most of the department’s workload is in dealing with queries about them. Incoming questions are probably not phrased in the same way as the process documentation, so some sort of fuzzy search is required in order to automate the question-answering; for example, translating “how much paid time off do I get when my baby is born?” to “paternity leave allowances” is not a straightforward mapping. This is why previous attempts to automate such departments have failed: language is too complex for simple mappings and decision trees to replace a person on the end of a line - as anyone who has tried to navigate an automated telephone call will tell you. Who hasn’t ended up shouting “I WANT TO SPEAK TO A PERSON” down the line?
But at the end of the day, the workload is simply regurgitating content from a document repository; the hard part is finding the relevant sections - a process well worth automating, since acting as a knowledge base for people who can’t be bothered to read swathes of documentation is a pretty unrewarding job.</p>
<h2 id="the-architecture">The Architecture</h2>
<p>As mentioned, the hackathon had been playing with the Azure cloud, so we’ll keep to this and use Azure’s concepts. These are pretty simply translated to any hyperscaler though, or to open-source alternatives if you want to host your own. For example, on AWS you could use <a href="https://aws.amazon.com/blogs/machine-learning/building-an-nlp-powered-search-index-with-amazon-textract-and-amazon-comprehend/">Textract and Amazon Comprehend</a>, and in the OSS world you’d perhaps use <a href="https://www.nltk.org/">NLTK</a> and <a href="https://lucene.apache.org/">Lucene</a>.</p>
<p>Azure AI Search is a nice tool - a little more than document search, a little less than AI. It can be a bit clunky to get used to, and the price policy is per GB storage which is pretty bizarre - but this can be beneficial if you have query-intensive applications and a small-ish data set of documents. We use it as an exotic database view for one of our applications, and it took us a while to get used to the fuzzy query syntax - it’s not really designed for logical queries, it’s much better at giving you best-guess matches for loose search terms - and as such is well positioned to be the back-end of our HR chatbot.</p>
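<p>As a rough sketch of what that fuzzy matching looks like in practice (a hypothetical example: the service name, index name, key and query term below are placeholders, not from a real deployment), a full-Lucene-syntax query can be issued against the index’s REST endpoint:</p>

```shell
# Hypothetical fuzzy search against an Azure AI Search index.
# <service>, <index> and <query-key> are placeholders.
curl -G "https://<service>.search.windows.net/indexes/<index>/docs" \
  --data-urlencode "api-version=2023-11-01" \
  --data-urlencode "queryType=full" \
  --data-urlencode "search=paternaty~" \
  -H "api-key: <query-key>"
```

<p>With <code class="language-plaintext highlighter-rouge">queryType=full</code>, the trailing <code class="language-plaintext highlighter-rouge">~</code> requests fuzzy matching, so even the misspelt “paternaty” can still surface documents about paternity leave.</p>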
<h2 id="the-method">The Method</h2>
<p>I found a couple of tutorials and quick-starts to create chatbots on my documentation -
<a href="https://github.com/Azure-Samples/azure-search-openai-demo">Azure Search OpenAI demo</a>
or <a href="https://techcommunity.microsoft.com/t5/startups-at-microsoft/build-a-chatbot-to-query-your-documentation-using-langchain-and/ba-p/3833134">Query your documentation using Langchain</a></p>
<p>The issue I found is that it’s all moving quite fast - faster than the tutorials can keep up with. All mention of <a href="https://www.langchain.com/">Langchain</a> has now gone from the Azure portal (although you can still <a href="https://towardsdatascience.com/talk-to-your-sql-database-using-langchain-and-azure-openai-bb79ad22c5e2">write your own</a> Langchain chatbot), and QnA Maker has moved on: we now have <a href="https://language.cognitive.azure.com/">Azure AI Language Studio</a>, where you can add your documents via a “Custom Question Answering” project. This is a type of Azure “Language” resource and can be created via the LoCode/NoCode <a href="https://language.cognitive.azure.com/">Language Studio homepage</a>. The tutorial speedily guides you through a simply-configured web form, although it’s not quite clear what you are actually going to create. Looking at what was deployed after the configuration steps, it sets up an Azure Cognitive Search (AI Search) repository and then enables custom text classification / custom named entity recognition on that repository. The default behaviour appears to be to break the content of your referenced documents into paragraphs and pull out likely titles/subjects. You can then refine this classification by adding new questions and answers, or choosing the best answer for given terms.</p>
<p>The free trial only allows you to upload three sources into your AI search repository. So, for our HR example, I’ve downloaded three HR policy documents from <a href="https://staffsquared.com/free-hr-documents/">this handy online repository</a> and added them into my Custom Question Answering repository. This generates a “Knowledge Base” that I can then publish.</p>
<p><img src="/images/2024-01-08-create-ai-bot-in-azure/upload-docs.jpg" alt="Upload documents into your language knowledge base" /></p>
<p>Here we can see the way that the content has been divided up into major terms and paragraphs that may address those terms. I can edit here, and once it’s published I can generate a Bot to act as the user interface to it.</p>
<p><img src="/images/2024-01-08-create-ai-bot-in-azure/knowledge-base.jpg" alt="knowledge base parsed from documents" /></p>
<p>OK, so now onto creating this Bot. As Bots go, OpenAI’s <a href="https://chat.openai.com">ChatGPT</a> is the real deal: generative AI, pre-trained on vast amounts of English-language text. For most use cases we would have to “turn off” all the fun, generative stuff for our application (see Guardrails below), and it’s probably overkill to use ChatGPT for this demo - plus, it isn’t included in the Azure Free Trial tier, so I will be experimenting with the <a href="https://azure.microsoft.com/en-gb/products/ai-services/ai-bot-service">Azure AI Bot Service</a> instead. It should be sufficient for this fairly small and simple demo.
Cost-wise, the Azure AI Bot has a free tier, but it must be hosted via an Azure Web App whose service plan defaults to S1 (Standard). At £75/month to keep it running, this plan is eating rapidly into my free credit!</p>
<p>Configuring the Bot online is pretty straightforward. The web GUI provides you with a customised template for creating the resources that you will need, creating an App Service Plan to launch an Azure WebApp that will host your Bot. The only config you have to do is enter the key of your Language Resource so that you can create a secure connection between the AI service knowledge base and the chatbot. This isn’t documented, but you can find the key by going back to the Azure Portal home and clicking the green squares “All Resources” view, then selecting your Language resource (the resource where Type = Language) and then selecting the Keys and Endpoint menu item. (There are two keys, so that you can refresh them by rotating them individually and hence avoid downtime. Either one is fine.)</p>
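<p>As an aside, if you have the Azure CLI installed, the same keys can be pulled without clicking through the portal (a sketch; the resource and group names are placeholders for whatever your template created):</p>

```shell
# List the two keys for the Language (Cognitive Services) resource.
# <resource-name> and <resource-group> are placeholders.
az cognitiveservices account keys list \
  --name <resource-name> \
  --resource-group <resource-group>
```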
<h2 id="testing">Testing</h2>
<p>Once your Bot is deployed, you can test it by finding it under All Resources and choosing “Test in Web Chat” from the right hand menu.</p>
<p>I tried with a simple question, that I know is answerable with the content in the documents:
<img src="/images/2024-01-08-create-ai-bot-in-azure/good-answer.jpg" alt="Trial question" /></p>
<p>So far so good. The Bot has successfully found the right bit of my documentation and returned a comprehensive and understandable answer. How about another:</p>
<p><img src="/images/2024-01-08-create-ai-bot-in-azure/bad-answer.jpg" alt="Second question" /></p>
<p>Oh dear. “cannot” is not exactly a strong English sentence! Although it has found the relevant section of the documentation, it has not been able to pull out a contextual answer.
I am not sure if it is the Language Service or the Bot which is struggling with this question. Enabling and examining the logs on the Bot Service isn’t that helpful - it just shows HTTP POST requests going to the Bot framework. The Bot framework should be responsible for breaking down the user’s entered text into logical “intentions” that the back-end question-answerer can respond to, and then delivering the back-end response in a human readable form.
I eventually figure out how to <a href="https://learn.microsoft.com/en-us/azure/ai-services/diagnostic-logging">enable logging on my Language Service</a> and discover the query and response that the Bot has sent to the language service:</p>
<p><img src="/images/2024-01-08-create-ai-bot-in-azure/backend-query.jpg" alt="Bot query to language service" /></p>
<p>I can see that the language service has actually done a reasonable job. It’s identified the right paragraph for the query, but returned just a 38.97% certainty rating that this is the right data. Fair enough. So it seems that the issue is with the Bot being able to pull the right piece of text out of the response. This makes me start to wonder about the “Bot” I have deployed. What is it actually based on? There isn’t much documentation I can find, but you can download the source code, which shows that I have deployed something created by the <a href="https://github.com/microsoft/botbuilder-js">BotBuilder SDK</a>. I should be able to run this locally, but weirdly the Bot JavaScript code in my download seems totally out-of-date with the latest Language Studio API. I have to go back to the drawing board and use one of the <a href="https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/javascript_nodejs/48.customQABot-all-features">later samples</a> and update the code to correctly declare a method asynchronous to get the Bot running locally using the Bot Framework Emulator.</p>
<p>To get it to work using Node.js v18.16.0 and restify ^11.1.0, I had to edit the sample code <a href="https://github.com/microsoft/BotBuilder-Samples/pull/3939/files">index.js line 91</a> to declare the method async or it would not start:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Listen for incoming requests.
server.post('/api/messages', async (req, res) => {
    adapter.processActivity(req, res, async (turnContext) => {
        // Route the message to the bot's main handler.
        await bot.run(turnContext);
    });
});
</code></pre></div></div>
<p>I was then able to run the Bot locally connecting to my Azure-hosted Language via the Azure <a href="https://github.com/Microsoft/BotFramework-Emulator/blob/master/README.md">Bot Framework Emulator</a>. And of course as luck would have it, the latest sample doesn’t return such a poor response! It’s still not perfect, but it’s at least a sentence. See below.</p>
<p><img src="/images/2024-01-08-create-ai-bot-in-azure/via-emulator.jpg" alt="Local Bot Service running in emulator" /></p>
<p>It does also prove that the poor response here was the chatbot interpreting the data from the Language Service. The Language Service will return a field called an AnswerSpan which lists, with a confidence score, the section of the documentation it considers most relevant to the question. In the case of my “dismissal” question, the AnswerSpan returned was:</p>
<p><code class="language-plaintext highlighter-rouge">An employee whose appointment is terminated for any reason will be provided with a written statement of the reasons for the dismissal</code></p>
<p>This text was paired with a confidence score of 0.2880999999999997, or circa 29%. Fair enough. So how the cloud-deployed bot extracted the answer “cannot” from this is a bit of a mystery! The new version of my Bot prints the whole AnswerSpan and is, whilst still not exactly accurate, at least better. So how do I fix it?</p>
<h2 id="customisation">Customisation</h2>
<p>It seems the way to fix up these simple Bots is to go and add a custom question/answer into the Language Service knowledge base. I try adding a specific answer to the question, “Can I appeal against my dismissal?”. I re-publish the knowledge base and try again.</p>
<p><img src="/images/2024-01-08-create-ai-bot-in-azure/fixed-question.jpg" alt="Adding a custom question" /></p>
<p>This looks much better. But it does imply that quite a lot of user testing and customisation will have to take place before this Bot is ready to replace its human counterparts.</p>
<h2 id="guardrails">Guardrails</h2>
<p>One of the things that surprised people about ChatGPT, particularly in its earlier iterations, was that it was not trained to be accurate; it was trained to please the user. This meant it would return inaccurate answers rather than tell you that it didn’t know the answer, as it had gauged higher satisfaction from “lying”! You don’t want your HR chatbot to lie, so you must use the guardrail settings to ensure that it does not. With ChatGPT, guardrails can be set using natural language; for example you can state:</p>
<pre><code class="language- ">{"role": "system", "content": "Assistant is an intelligent chatbot designed to help users answer their tax related questions.
Instructions:
- Only answer questions related to taxes.
- If you're unsure of an answer, you can say "I don't know" or "I'm not sure" and recommend users go to the IRS website for more information. "},
{"role": "user", "content": "When are my taxes due?"}
</code></pre>
<p>This configuration will prevent the chatbot from “making up” an answer if it cannot find a decent response in its repository.
When configuring Azure’s ChatGPT chatbot via the GUI, you achieve the same thing by turning the setting known as “temperature” down to 0. The temperature represents how creative the chatbot can be in getting you an answer. A low temperature results in more “I’m sorry, I don’t know” type answers, but increases the chances that you’ll get an accurate answer - and that you’ll get the same answer when you ask the same question twice!</p>
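<p>For illustration, a request body with the temperature pinned to 0 might look like this (a hedged sketch following the shape of the chat completions API; the messages are invented for this HR example):</p>

```json
{
  "messages": [
    {"role": "system", "content": "Only answer using the HR policy documents. If you are unsure, say you don't know."},
    {"role": "user", "content": "Can I appeal against my dismissal?"}
  ],
  "temperature": 0
}
```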
<h2 id="the-cost">The Cost</h2>
<p>So what does this cost to run in Azure? Depending on your Bot type, the cost can vary wildly. As mentioned, I am running my Language instance and my Bot instance in the free trial tier, so I am only paying for the app service to host them, which is around £75/month. If you were to use an enterprise ChatGPT Bot, costs are over £800/month fixed rate for 40 users, plus 80p per “usage unit” and £20 for any extra users over and above the plan. Still considerably cheaper than making your HR staff deal with these queries, I suppose.
As mentioned, Azure AI Search is priced per GB of data indexed; the free tier runs up to 50 GB, and the Standard tier gives you 25 GB for 27p/hour.</p>
<h2 id="in-conclusion">In Conclusion</h2>
<p>I am impressed with the Azure AI Search offering - it’s powerful and useful; there are so many scenarios where we end up awash with documentation and cannot find the content we need. The chatbots are a varied bunch, but I liked the way you could download the code and run/edit it locally with relative ease. In all, I feel this will be a very common architecture for the business problems of the next year or so.</p>
<p><a href="https://capgemini.github.io/cloud/create-ai-bot-in-azure/">How to (maybe) replace your HR department in 3 easy steps</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on January 19, 2024.</p>https://capgemini.github.io/cloud%20native/spring-cloud-vault-kubernetes2023-07-07T00:00:00+01:002023-07-07T00:00:00+01:00Greg Wolversonhttps://capgemini.github.io/authors#author-greg-wolverson
<p>I have previously written <a href="https://capgemini.github.io/engineering/securing-spring-boot-config-with-kubernetes/">blog posts</a> about securing Spring Boot configuration with standard Kubernetes resources. In this post I’m going to take it a step further with a more productionised pattern of securing Spring Boot microservices with Vault in Kubernetes.</p>
<h2 id="keep-it-secret-keep-it-safe">Keep It Secret, Keep It Safe</h2>
<p>As a famous wizard once said: <em>keep it secret, keep it safe</em>. Whilst this applies to rings and other precious objects, it also applies to the sensitive data we keep within our applications.</p>
<p>Security is paramount in productionised applications, often being one of the more challenging patterns to implement correctly.</p>
<h2 id="not-all-secret-stores-were-created-equal">Not All Secret Stores Were Created Equal</h2>
<p>As I spoke about in my <a href="https://capgemini.github.io/engineering/securing-spring-boot-config-with-kubernetes/#keeping-secrets">previous post</a>, using Kubernetes secrets for storing sensitive data is considered bad practice for two main reasons:</p>
<ol>
<li>The secrets themselves are stored in base64 format, which provides minimal security on its own.</li>
<li>By default, secrets are <a href="https://kubernetes.io/docs/concepts/configuration/secret/">stored unencrypted</a> in the underlying API’s data store (etcd), meaning anyone with API access can retrieve and modify them.</li>
</ol>
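<p>It’s worth seeing just how thin that base64 “protection” is - decoding needs no key at all (a minimal sketch; <code class="language-plaintext highlighter-rouge">cGFzc3dvcmQ=</code> stands in for a value copied out of a Secret manifest):</p>

```shell
# base64 is an encoding, not encryption - anyone can reverse it.
echo 'cGFzc3dvcmQ=' | base64 -d
# prints: password
```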
<p>There are several alternatives to using Kubernetes default secrets, and one of the most widely used tools is <a href="https://www.vaultproject.io/">HashiCorp Vault</a>. Vault is an identity-based secrets and encryption management system that provides encryption services protected by authentication and authorisation mechanisms. This makes it a much more secure way to store sensitive data. Additionally, Vault offers integration and authentication mechanisms <a href="https://developer.hashicorp.com/vault/docs/auth/kubernetes">with Kubernetes</a> out-of-the-box, providing a proven and secure approach to managing secrets within your Kubernetes cluster.</p>
<h2 id="secure-doesnt-mean-complex">Secure Doesn’t Mean Complex</h2>
<p>Whilst being a challenging pattern to get right, security doesn’t need to be complex. Let’s walk through a simple example of how to set up the Kubernetes auth method locally, and retrieve secrets from a Spring Boot application using <a href="https://cloud.spring.io/spring-cloud-vault/reference/html/">Spring Cloud Vault</a>.</p>
<h3 id="configuring-vault">Configuring Vault</h3>
<p>To begin with, we will configure Vault locally. HashiCorp has a <a href="https://helm.releases.hashicorp.com/">set of available helm charts</a> that you can apply, in order to test and work with Vault. For our example, we will be using the <a href="https://github.com/hashicorp/vault-helm">vault helm chart</a>.</p>
<p>We will use <a href="https://github.com/Praqma/helmsman#what-is-helmsman">Helmsman</a> to manage our helm deployments. If you are interested in learning more about Helmsman, I recently wrote a <a href="https://capgemini.github.io/kubernetes/introduction-to-helmsman/">blog post</a> about it.</p>
<p>Our <code class="language-plaintext highlighter-rouge">dev</code> state file looks like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>helmRepos:
  vault: https://helm.releases.hashicorp.com
apps:
  ...
  vault:
    namespace: dev
    enabled: true
    chart: vault/vault
    version: 0.24.1
    valuesFile: values/vault/values-dev.yaml
</code></pre></div></div>
<p>This will tell Helmsman to deploy the <code class="language-plaintext highlighter-rouge">vault</code> helm chart into our local dev namespace using the values file located at <code class="language-plaintext highlighter-rouge">values/vault/values-dev.yaml</code>. The values file contains some simple overriding configuration to enable <a href="https://github.com/hashicorp/vault-helm/blob/main/values.yaml#L746">development mode</a> for Vault. This allows us to experiment with Vault without needing to unseal it or store keys against it. (Note: this should not be done in a production environment.)</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>server:
  dev:
    enabled: true
</code></pre></div></div>
<h4 id="kube-auth-method">Kube Auth Method</h4>
<p>After applying the helmsman state file, we can proceed with configuring the Vault instance. There are two main ways to configure Vault: through the Vault UI or programmatically via the CLI. Since we prefer repeatable processes, having our Vault configuration in code is a better approach. Taking it a step further, we could use the <a href="https://registry.terraform.io/providers/hashicorp/vault/latest/docs">Vault Terraform</a> approach to treat this configuration as infrastructure-as-code. However, that goes beyond the scope of this example.</p>
<h4 id="configuration">Configuration</h4>
<p>Next we need to enable the Kube auth method. The easiest way to do this (programmatically) is via the Vault CLI (which comes pre-installed in the Vault container from the installed helm chart).</p>
<p><code class="language-plaintext highlighter-rouge">vault auth enable kubernetes</code></p>
<p>After enabling this feature, we need to configure the auth method to work with our local kubernetes cluster. There are several ways to configure this, but with the <a href="https://developer.hashicorp.com/vault/docs/auth/kubernetes#kubernetes-1-21">changes introduced in Kubernetes 1.21</a>, there are some documented and recommended approaches. It’s worth reading through the different approaches and understanding their differences. However, for the purpose of this example, we will be using a <a href="https://developer.hashicorp.com/vault/docs/auth/kubernetes#use-local-service-account-token-as-the-reviewer-jwt">local service account as the reviewer JWT</a> because we have Vault running locally in a pod within our cluster.</p>
<p>To enable this configuration, we can run the following command:</p>
<p><code class="language-plaintext highlighter-rouge">vault write auth/kubernetes/config kubernetes_host=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT</code></p>
<p>This configures the Vault auth method to use the service account token running in the Vault pod itself. This works because the Vault pod is running in our local cluster, the <em>same</em> cluster that Vault will be authenticating against later on when we send requests from our sample service. If Vault was running as an externally managed service (which is typical in a production environment), this approach wouldn’t work, and we’d have to configure the auth method using a more robust approach, such as <a href="https://developer.hashicorp.com/vault/docs/auth/kubernetes#use-the-vault-client-s-jwt-as-the-reviewer-jwt">using the Vault client’s JWT as the reviewer token</a> or possibly <a href="https://developer.hashicorp.com/vault/docs/auth/kubernetes#use-the-vault-client-s-jwt-as-the-reviewer-jwt">using long-lived tokens</a>.</p>
<h4 id="roles">Roles</h4>
<p>Now that we have enabled and configured our auth method, we can proceed to add the other important pieces of configuration. Firstly, we need to configure the role against the authentication method. We will create a role that allows our Spring Boot application to retrieve secrets from our Vault instance.</p>
<p><code class="language-plaintext highlighter-rouge">vault write auth/kubernetes/role/demo bound_service_account_names='*' bound_service_account_namespaces=dev policies=spring-boot-demo</code></p>
<p>The role above is called <code class="language-plaintext highlighter-rouge">demo</code>. It is bound to any service account (for finer-grained security you would usually limit this to a specific account) in our <code class="language-plaintext highlighter-rouge">dev</code> namespace, and has a policy attached to it named <code class="language-plaintext highlighter-rouge">spring-boot-demo</code> (more on this later).</p>
<p>Each Kubernetes auth method can have any number of roles created against it. The purpose of these roles is to restrict each integrating service to a specific set of secrets through roles and policies. The <code class="language-plaintext highlighter-rouge">role</code> component of this configuration determines which service(s), bound to which service account(s), can authenticate against this method (the auth aspect). The attached <a href="https://developer.hashicorp.com/vault/docs/concepts/policies">policy</a> determines which secrets those service account(s) (and consequently service(s)) can access.</p>
<h4 id="policies">Policies</h4>
<p>Vault policies define the fine-grained, path-based access to specific secrets held within Vault itself. The policy we’re using for this example looks like this:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>path "kv/spring-boot-demo" {
  capabilities = ["read"]
}

path "kv/spring-boot-demo/dev" {
  capabilities = ["read"]
}

path "auth/token/lookup-self" {
  capabilities = ["read"]
}

path "auth/token/create" {
  capabilities = ["create", "read", "update", "list"]
}
</code></pre></div></div>
<p>This policy gives access to secrets held at <code class="language-plaintext highlighter-rouge">kv/spring-boot-demo</code> and <code class="language-plaintext highlighter-rouge">kv/spring-boot-demo/dev</code>; it also includes some default Vault paths which allow the JWT token lookup to occur during login and authentication. For secret lookups we only need to provide <code class="language-plaintext highlighter-rouge">read</code> access, because our service will only be trying to <code class="language-plaintext highlighter-rouge">get</code> specific secrets, not create or update them.</p>
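<p>One step that’s easy to miss: the policy must actually be written into Vault under the name the role references. Assuming the HCL above is saved as <code class="language-plaintext highlighter-rouge">spring-boot-demo.hcl</code> (a filename chosen for this example), that looks like:</p>

```shell
# Register the policy under the name referenced by the role's `policies` field.
vault policy write spring-boot-demo spring-boot-demo.hcl

# Optionally, confirm what was stored.
vault policy read spring-boot-demo
```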
<p>The following diagram gives a high-level view of how policy look-ups and authorisation occur.</p>
<p><img src="/images/2023-06-21-spring-cloud-vault-kubernetes/vault-policy.png" alt="Vault policy access" /></p>
<h4 id="secrets">Secrets</h4>
<p>Lastly, we need to enable a secrets engine and create a secret for our application to use. For this example, we will be using the <a href="https://developer.hashicorp.com/vault/docs/secrets/kv">Key-Value secrets engine</a>. The following CLI command will enable the KV engine for us, with a name of <code class="language-plaintext highlighter-rouge">kv</code> (this should look familiar from our policy outlined earlier).</p>
<p><code class="language-plaintext highlighter-rouge">vault secrets enable kv</code></p>
<p>Next, we can put a secret into our new kv store:</p>
<p><code class="language-plaintext highlighter-rouge">vault kv put kv/spring-boot-demo/dev admin=password</code></p>
<p>Now that we have our Vault instance configured with the kube auth method, a role, an appropriate policy and secret data, we can integrate a sample application to test it.</p>
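<p>Before wiring up the Spring Boot service, the whole chain can be sanity-checked from a pod in the <code class="language-plaintext highlighter-rouge">dev</code> namespace (a sketch: it assumes the Vault CLI is available in that pod and that <code class="language-plaintext highlighter-rouge">VAULT_ADDR</code> points at the Vault service):</p>

```shell
# Log in via the Kubernetes auth method using this pod's service account token.
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
vault write auth/kubernetes/login role=demo jwt="$JWT"

# Export the client token returned above as VAULT_TOKEN, then read the secret.
vault kv get kv/spring-boot-demo/dev
```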
<h3 id="spring-cloud-vault">Spring Cloud Vault</h3>
<p>To test our Vault configuration and close the loop with our example setup, we will use a Spring Boot microservice, which has endpoint security configured with Spring Security. For this demo, we will be using <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html">actuator</a> which only <a href="https://docs.spring.io/spring-boot/docs/current/reference/html/actuator.html#actuator.endpoints.security">exposes <code class="language-plaintext highlighter-rouge">/health</code> by default for security reasons</a>. Let’s expose some actuator endpoints that could contain sensitive information such as <code class="language-plaintext highlighter-rouge">/env</code> and <code class="language-plaintext highlighter-rouge">/heapdump</code>, and secure them with spring security.</p>
<h4 id="securing-our-endpoints">Securing Our Endpoints</h4>
<p>In Spring Boot it’s fairly straightforward to enable various actuator endpoints. Spring provides a <code class="language-plaintext highlighter-rouge">management</code> config block, which allows developers finer-grained control over which endpoints are exposed, and also which sub-sets of information are exposed at those levels.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>management:
  endpoint:
    ...
    env:
      enabled: true
    heapdump:
      enabled: true
</code></pre></div></div>
<p>We will be enabling the <code class="language-plaintext highlighter-rouge">env</code> and <code class="language-plaintext highlighter-rouge">heapdump</code> endpoints as mentioned above, which means we <em>could</em> be exposing sensitive information about our service if it is not secured correctly.</p>
<p>In order to secure the actuator routes properly we need to implement Spring Security. A simple pattern I like to follow is to split my routes into secure and insecure, allowing pass-through traffic for any non-secure route, and then handling secure routes with appropriate <a href="https://auth0.com/docs/manage-users/access-control/rbac">role-based access controls</a>. Our configuration will look as follows, including a ‘management’ style user for access purposes.</p>
<p><code class="language-plaintext highlighter-rouge">application.yml</code></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>appsecurity:
  management:
    username: ADMIN
    password: ${admin:test}
securedroutes:
  management:
    - "/actuator/shutdown"
    - "/actuator/loggers/**"
    - "/actuator/heapdump"
    - "/actuator/env"
  unprotected:
    - "/actuator/info"
    - "/actuator/prometheus"
    - "/actuator/health/**"
    - "/hello"
</code></pre></div></div>
<p>The config above allows us to use a <a href="https://docs.spring.io/spring-boot/docs/2.0.0.M3/reference/html/howto-properties-and-configuration.html#howto-use-short-command-line-arguments">placeholder value</a> for our management user password. This is useful for unit test purposes where we don’t want to create another <code class="language-plaintext highlighter-rouge">application.yml</code> test resource file. If we don’t supply a value at runtime, the default value of <code class="language-plaintext highlighter-rouge">test</code> will be used.</p>
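<p>At runtime the placeholder can then be supplied on the command line (a sketch; the jar name and password below are illustrative), so the real password never has to live in a file:</p>

```shell
# `--admin=...` populates the ${admin:test} placeholder; omit it and "test" is used.
java -jar spring-boot-demo.jar --admin=s3cr3tPassw0rd
```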
<p>In order for our application to use this configuration, simple configuration properties can be used to map the values to a configuration class:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Getter
@Setter
@Configuration
@ConfigurationProperties(prefix = "securedroutes")
public class SecuredRoutesConfig {
    private String[] management;
    private String[] unprotected;
}
</code></pre></div></div>
<p>Our Spring Security config will configure our application to allow any requests accessing non-secure routes to pass through without any auth checks, whereas any requests to our secured routes will be subject to authentication and authorisation checks. An example of this config is shown below.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>private final SecuredRoutesConfig securedRoutesConfig;
@Value("${appsecurity.management.username}")
private String managementUsername;

@Value("${appsecurity.management.password}")
private String managementPassword;

@Bean
public PasswordEncoder encoder() {
    return new BCryptPasswordEncoder();
}

@Bean
public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
    http
        .formLogin().disable()
        .csrf().disable()
        .authorizeHttpRequests((requests) -> requests
            .requestMatchers(securedRoutesConfig.getUnprotected()).permitAll()
            .requestMatchers(securedRoutesConfig.getManagement()).hasRole(ROLE_MANAGEMENT_USER)
        )
        .httpBasic(withDefaults());
    return http.build();
}

@Bean
public UserDetailsService userDetailsService() {
    UserDetails user =
        User.builder()
            .username(managementUsername)
            .password(encoder().encode(managementPassword))
            .roles(ROLE_MANAGEMENT_USER)
            .build();
    return new InMemoryUserDetailsManager(user);
}
</code></pre></div></div>
<h4 id="configuring-vault-1">Configuring Vault</h4>
<p>In order to configure our Spring Boot service to integrate with Vault, we need two key parts: the Spring Cloud Vault library, and the application configuration to integrate with Vault itself.</p>
<p>Adding the following library to the POM file gives us the full spring-cloud-vault implementation:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code><dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-vault-config</artifactId>
</dependency>
</code></pre></div></div>
<p>And the following configuration enables our application to integrate with Vault:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>spring:
  config:
    import: optional:vault://
  cloud:
    vault:
      enabled: ${vault-enabled:false}
      application-name: spring-boot-demo
      connection-timeout: ${vault-connection-timeout:5000}
      read-timeout: ${vault-read-timeout:15000}
      authentication: KUBERNETES
      kv:
        backend: kv
        enabled: true
        profile-separator: '/'
        application-name: spring-boot-demo
        default-context: spring-boot-demo
        profiles: dev
</code></pre></div></div>
<p>Some of the configuration above might already start to make sense based on how we configured our Vault instance earlier. The main aspects to point out are in the <code class="language-plaintext highlighter-rouge">kv</code> engine configuration:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">backend: kv</code> - this tells Spring Boot the name of the kv secrets engine to lookup in Vault</li>
<li><code class="language-plaintext highlighter-rouge">profile-separator: '/'</code> - this tells Spring Boot the path separator used in the secrets engine, e.g. <code class="language-plaintext highlighter-rouge">kv/</code></li>
<li><code class="language-plaintext highlighter-rouge">application-name: spring-boot-demo</code> - this tells Spring Boot the naming convention of the secret lookup, e.g. <code class="language-plaintext highlighter-rouge">kv/spring-boot-demo</code></li>
<li><code class="language-plaintext highlighter-rouge">profiles: dev</code> - this refers to the active profile Spring Boot is running, as Spring Cloud Vault uses that profile to determine the secret path to use, so dev would give us <code class="language-plaintext highlighter-rouge">kv/spring-boot-demo/dev</code>.</li>
</ul>
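<p>Putting those pieces together, the secret could have been written to Vault with the CLI as below. This is just a sketch: the key name <code class="language-plaintext highlighter-rouge">admin</code> assumes we want it to resolve the <code class="language-plaintext highlighter-rouge">${admin:test}</code> placeholder from earlier, and the value is illustrative.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vault kv put kv/spring-boot-demo/dev admin=my-secret-password
</code></pre></div></div>
<p>Spring Cloud Vault exposes each key/value pair found at that path as a property, so the management password is resolved from Vault rather than falling back to the <code class="language-plaintext highlighter-rouge">test</code> default.</p>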
<h3 id="bringing-it-all-together">Bringing It All Together</h3>
<p>Given that we have a local Vault instance set up and a Spring Boot service to integrate with it, we can deploy our app and test the successful retrieval of secrets to secure our application.</p>
<p>Firstly, we will add our Spring Boot service to our Helmsman desired state file.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apps:
  spring-boot-demo:
    namespace: dev
    enabled: true
    chart: '../service-helm-chart'
    version: 1.0.0
    valuesFile: values/service/values-dev.yaml
    vault:
      ...
</code></pre></div></div>
<p>Then we can apply the updated state file:</p>
<p><code class="language-plaintext highlighter-rouge">helmsman --apply -f dev.yaml</code></p>
<p>Once the new Spring Boot service is running successfully, we can test that the actuator endpoints have been secured properly with the secret we set up in Vault.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>> kubectl get deploy -n dev
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
vault-agent-injector     1/1     1            1           64s
spring-boot-vault-demo   1/1     1            1           65s
</code></pre></div></div>
<p>We can <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod">port-forward</a> to the running pod to establish a localhost connection and conduct some basic cURL tests. When calling a secure endpoint without any authentication using cURL, we should receive a 401 response.</p>
<p><code class="language-plaintext highlighter-rouge">kubectl port-forward deploy/spring-boot-vault-demo 8080:8080 -n dev</code></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>> curl http://localhost:8080/actuator/env -v
* Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /actuator/env HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401
...
</code></pre></div></div>
<p>Now, when we use cURL to call the same endpoint while providing the authentication secret stored in Vault, we should receive a 200 response, along with the JSON payload that outlines the environment properties stored in the service.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>> curl http://localhost:8080/actuator/env --user ADMIN:password -v
* Trying 127.0.0.1:8080...
* Connected to localhost (127.0.0.1) port 8080 (#0)
* Server auth using Basic with user 'ADMIN'
> GET /actuator/env HTTP/1.1
> Host: localhost:8080
> Authorization: Basic QURNSU46cGFzc3dvcmQ=
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200
...
{"activeProfiles":["dev"],"propertySources":[{"name":"server.ports","properties":{"local.server.port":{"value":"******"}}},...
</code></pre></div></div>
<p>And that’s it! All working as expected.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Keeping sensitive information secure in production systems is paramount. With the <a href="https://www.itgovernance.co.uk/blog/data-breaches-and-cyber-attacks-in-2022-408-million-breached-records">vast number of data breaches last year</a>, which caused chaos for those who fell victim, ensuring data security and mitigating attack vectors is critical for engineering robust, well-designed systems. While this post has outlined a simpler approach to integrating a Spring Boot microservice with a secrets management solution, it hopefully demonstrates that it doesn’t have to be incredibly complex to get it right.</p>
<p>You can see all the code to accompany this post <a href="https://github.com/gwolverson/vault-kubernetes-example">over on my github</a>.</p>
<p><a href="https://capgemini.github.io/cloud%20native/spring-cloud-vault-kubernetes/">Keeping Spring Boot Apps Secure With HashiCorp Vault</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on July 07, 2023.</p>https://capgemini.github.io/kubernetes/introduction-to-helmsman2023-05-22T00:00:00+01:002023-05-22T00:00:00+01:00Greg Wolversonhttps://capgemini.github.io/authors#author-greg-wolverson
<p><a href="https://kubernetes.io/">Kubernetes</a> is one of the most popular open-source container orchestration frameworks in use today. It allows you to easily deploy, scale and manage containerised applications. As your applications grow, the number of Kubernetes resources you have to manage increases, and that’s where <a href="https://helm.sh/">Helm</a> comes in. Helm is a package manager for Kubernetes, allowing you to define, install and manage complex Kubernetes clusters at scale. However, unless you want to <a href="https://helm.sh/docs/helm/helm_install/">install</a> all of your <a href="https://helm.sh/docs/topics/charts/">helm charts</a> individually (and possibly manually), there is a need for an automated, infrastructure-as-code approach. Enter <a href="https://github.com/Praqma/helmsman#what-is-helmsman">Helmsman</a>.</p>
<h2 id="the-problem">The Problem</h2>
<p>As mentioned above, in a productionised domain, the set of deployed services and their accompanying resources will grow exponentially. Even when using a package manager like Helm, the sheer amount of deployable resources and packages can become hard to manage.</p>
<p>If you have ten Helm charts to deploy, you could be running ten install and/or upgrade commands to reach the desired cluster state for any given environment. Furthermore, if you have multiple environments (dev, test, preprod, prod etc), you then have ten commands <em>per environment</em> to run - you can quickly see how this could become difficult - not to mention inefficient - to manage.</p>
<h2 id="an-introduction-to-helmsman">An Introduction to Helmsman</h2>
<p>Helmsman is a tool which allows you to define the desired state of your Kubernetes cluster in code, giving you the ability to deploy, upgrade or destroy that state in a single command. Each environment (traditionally a <code class="language-plaintext highlighter-rouge">namespace</code> in Kubernetes) has its own state file, making managing versions and resources across environments much simpler.</p>
<p>As a result of Helmsman encapsulating the state of your cluster(s) in code, you can easily describe the state of any cluster by looking at the Helmsman <a href="https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md#helmsman-desired-state-specification">desired state file</a>. This makes it easier to manage what’s deployed, where and at which version.</p>
<h2 id="a-helmsman-story">A Helmsman Story</h2>
<p>Let’s take an example where we have a service domain which contains four microservices. Each microservice has slightly different resource requirements (CPU/memory) and two of them are required to integrate with a database. In non-production environments (dev, test) they are not required to be highly available, whereas in production environments (preprod, prod) they are.</p>
<h3 id="basic-helm-chart">Basic Helm Chart</h3>
<p>We’ll create a Helm <a href="https://helm.sh/docs/topics/charts/">application chart</a> that can define the Kubernetes resources required for each of our services. Our example service chart will contain some standard Kubernetes resources such as a deployment and network policy.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>metadata:
  environment: replace-me
deployment:
  create: false
  replicas: 1
  name: replace-me
  image: replace-me
  ports:
    - 8080
  resources:
    requests:
      memory: "250Mi"
      cpu: "250m"
    limits:
      memory: "350Mi"
      cpu: "300m"
networkPolicy:
  create: false
  podSelector:
    matchLabels:
      app: replace-me
  policyTypes:
    - Egress
  egress: {}
</code></pre></div></div>
<p>The above is heavily simplified from what a real production chart may look like, but the purpose here is just to give an example to work from later.</p>
<p>Above you can see a <code class="language-plaintext highlighter-rouge">create: false</code> property on each resource. This is a practice I tend to follow when building Helm library charts, as it gives implementing charts the ability to opt in to whichever resources they need, rather than getting them all implemented by default.</p>
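<p>For illustration, this opt-in pattern is typically achieved by wrapping each resource template in a conditional. A simplified sketch of the deployment template might look like this (most of the spec is omitted for brevity):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{{- if .Values.deployment.create }}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.deployment.name }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  # ...selector, pod template etc. omitted...
{{- end }}
</code></pre></div></div>
<p>When <code class="language-plaintext highlighter-rouge">create</code> is left at its default of <code class="language-plaintext highlighter-rouge">false</code>, the template renders nothing at all, so the resource is simply never created.</p>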
<h3 id="microservice-setup">Microservice Setup</h3>
<p>Each microservice will have its own implementation of the base chart shown above. Let’s first use microservice-a as an example, which has no extra resource requirements and no database connectivity.</p>
<p>Chart.yaml</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>---
apiVersion: v2
name: service-a
description: Chart for microservice A
version: 0.1.0
dependencies:
  - name: base
    version: 1.0.0
    repository: "@base-repository"
</code></pre></div></div>
<p>values.yaml</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>base:
  deployment:
    create: true
    replicas: 1
    name: service-a
    image: service-a:1.0.0
</code></pre></div></div>
<p>As you can see above, microservice-a has a very simple implementation of the base chart, mostly using the default values provided.</p>
<p>Now let’s look at microservice-b. This service will have slightly higher resource requirements and will also need egress networking out to a MySQL database (running in a pod in the same namespace).</p>
<p>Chart.yaml</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>---
apiVersion: v2
name: service-b
description: Chart for microservice B
version: 0.1.0
dependencies:
  - name: base
    version: 1.0.0
    repository: "@base-repository"
</code></pre></div></div>
<p>values.yaml</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>base:
  deployment:
    create: true
    replicas: 1
    name: service-b
    image: service-b:1.0.0
    resources:
      requests:
        memory: "500Mi"
        cpu: "350m"
      limits:
        memory: "550Mi"
        cpu: "400m"
  networkPolicy:
    create: true
    podSelector:
      matchLabels:
        app: service-b
    policyTypes:
      - Egress
    egress:
      - to:
          - podSelector:
              matchLabels:
                app: mysql
</code></pre></div></div>
<h3 id="helmsman-implementation">Helmsman Implementation</h3>
<p>Now let’s look at the Helmsman implementation and how it makes dealing with multi-service deployments simpler.</p>
<p>Our very simple Helmsman folder structure will look as follows (showing only service-a and service-b for brevity):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>.
├── dev.yaml
├── test.yaml
├── preprod.yaml
├── prod.yaml
└── values
    ├── service-a
    │   ├── values-dev.yaml
    │   ├── values-test.yaml
    │   ├── values-preprod.yaml
    │   └── values-prod.yaml
    └── service-b
        ├── values-dev.yaml
        ├── values-test.yaml
        ├── values-preprod.yaml
        └── values-prod.yaml
</code></pre></div></div>
<p>Let’s look at a desired state file and one of the values files for each service in a bit more detail to show what’s happening.</p>
<p>As mentioned previously, Helmsman provides a way of describing the desired state for your Kubernetes cluster. In the example we’re using, we’ve got two clusters; non-production (containing dev and test namespaces) and production (containing preprod and prod namespaces).</p>
<p>Let’s take a look at the <code class="language-plaintext highlighter-rouge">dev.yaml</code> state file:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>metadata:
  description: Desired State File for the dev environment

namespaces:
  dev:

helmRepos:
  stable: http://custom-helm-repo-example.com

apps:
  service-a:
    namespace: dev
    enabled: true
    chart: stable/service
    version: 1.0.0
    valuesFile: values/service-a/values-dev.yaml
  service-b:
    namespace: dev
    enabled: true
    chart: stable/service
    version: 1.0.0
    valuesFile: values/service-b/values-dev.yaml
</code></pre></div></div>
<p>There are a few things going on in the above state file definition, so let’s break it down.</p>
<p>The <code class="language-plaintext highlighter-rouge">namespaces</code> property allows you to define the namespace(s) you have or want as part of this state definition. If the namespace(s) don’t exist when you run Helmsman, it will <a href="https://github.com/Praqma/helmsman/blob/master/docs/how_to/namespaces/create.md#create-namespaces">create them</a> for you.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>namespaces:
  dev:
</code></pre></div></div>
<p>The <code class="language-plaintext highlighter-rouge">helmRepos</code> property allows you to <a href="https://github.com/Praqma/helmsman/blob/master/docs/how_to/helm_repos/default.md">define the Helm repositories</a> where your packaged charts are stored. There are several options for chart repositories, such as: default, private (backed by Google, AWS or basic auth) and local.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>helmRepos:
  stable: http://custom-helm-repo-example.com # This doesn't exist, it's just shown for example purposes
</code></pre></div></div>
<p>The <a href="https://github.com/Praqma/helmsman/blob/master/docs/desired_state_specification.md#apps">apps</a> block is the most important block within the example state file shown above: it defines <em>all</em> the services you want deployed as part of this state file. Helmsman is very powerful and provides a lot of configuration options for <a href="https://github.com/Praqma/helmsman/tree/master/docs/how_to">deploying apps and configuring them</a>. In the example above, we’re using a simple app definition for each service.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apps:
  service-a:
    namespace: dev
    enabled: true
    chart: stable/service
    version: 1.0.0
    valuesFile: values/service-a/values-dev.yaml
</code></pre></div></div>
<p>An important property defined above is <code class="language-plaintext highlighter-rouge">valuesFile</code>, which tells Helmsman where, within the Helmsman structure, the values file to be installed as part of this release is located.</p>
<p>As displayed previously, our Helmsman file structure contains the following files:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>└── values
    ├── service-a
    │   └── values-dev.yaml
    └── service-b
        └── values-dev.yaml
</code></pre></div></div>
<p>So when we’re specifying the <code class="language-plaintext highlighter-rouge">valuesFile</code> property as <code class="language-plaintext highlighter-rouge">values/service-a/values-dev.yaml</code>, it is referring to the following location:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>└── values
    ├── service-a
    │   └── values-dev.yaml
</code></pre></div></div>
<p>Now let’s look at the contents of those files - this is where the modularisation within Helmsman really shines.</p>
<p>Earlier on we stated that Service A doesn’t have any additional requirements beyond the standard chart specification, whereas Service B has the additional requirements of higher resources and a connection to a MySQL database. With that in mind, let’s look at the <code class="language-plaintext highlighter-rouge">values-dev.yaml</code> definition for each of these services.</p>
<h4 id="service-a">Service A</h4>
<p>Service A <em>only</em> needs to specify the environment it sits within and some basic information about the deployment: name, image and container port. Everything else is already defined in the base service chart that we’re using (as referenced in the Helmsman <code class="language-plaintext highlighter-rouge">dev.yaml</code> state file).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>metadata:
  environment: dev
deployment:
  create: true
  name: service-a
  image: service-a:1.0.0
  containerPort: 8080
</code></pre></div></div>
<h4 id="service-b">Service B</h4>
<p>Service B on the other hand, needs a bit more configuration to meet requirements.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>metadata:
  environment: dev
deployment:
  create: true
  name: service-b
  image: service-b:1.0.0
  containerPort: 8080
  resources:
    requests:
      memory: "500Mi"
      cpu: "350m"
    limits:
      memory: "550Mi"
      cpu: "400m"
networkPolicy:
  create: true
  podSelector:
    matchLabels:
      app: service-b
  policyTypes:
    - Egress
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: mysql
</code></pre></div></div>
<p>For the Service B <code class="language-plaintext highlighter-rouge">values-dev.yaml</code> file we have specified the environment, deployment and networkPolicy configuration values. This has allowed us to <em>override</em> and add to the values that are defined in the base service chart we’re using as part of this deployment.</p>
<p>As our project grows, we can easily add more services to our desired state file(s), making the management of our environments much simpler than if we had to manage all the helm charts individually.</p>
<h3 id="bringing-it-all-together">Bringing It All Together</h3>
<p>So now we have our example Helmsman project set up, with our desired state file(s) ready to provision services into our cluster. All we need to do now is issue certain Helmsman commands and we’ll have our services running in no time. Ideally, you’d <a href="https://github.com/Praqma/helmsman/blob/master/docs/how_to/deployments/ci.md#run-helmsman-in-ci">run Helmsman from CI pipelines</a>, but that goes beyond the scope of this post. We’ll now take a look at a few of the more widely used commands.</p>
<h4 id="dry-run">Dry Run</h4>
<p>A <em>really</em> useful feature of Helmsman is the ability to use <code class="language-plaintext highlighter-rouge">dry-run</code>. This allows you to point Helmsman at one of your desired state files and do a dry-run installation against your cluster. The benefit of this is you get to see the rendered Kubernetes manifests that would be installed, and can easily verify and validate that the manifests to be installed are correct, without them actually being installed.</p>
<p><code class="language-plaintext highlighter-rouge">helmsman -f dev.yaml --dry-run</code></p>
<h4 id="apply">Apply</h4>
<p>Next up is the <code class="language-plaintext highlighter-rouge">apply</code> command. This applies your desired state file to your Kubernetes cluster, installing all the resources via Helm.</p>
<p><code class="language-plaintext highlighter-rouge">helmsman -f dev.yaml --apply</code></p>
<h4 id="destroy">Destroy</h4>
<p>Another useful command is the <code class="language-plaintext highlighter-rouge">destroy</code> command. This tears down your cluster based on the desired state file - this is useful if you want to tear down environments quickly or nightly to save costs.</p>
<p><code class="language-plaintext highlighter-rouge">helmsman -f dev.yaml --destroy</code></p>
<h2 id="wrapping-up">Wrapping Up</h2>
<p>Although this post has only shown a very simple example project, hopefully you can see how Helmsman is a very useful tool for managing our Kubernetes environments. As service domains grow, so does the number of resources we need to keep track of and implement to keep everything ticking along. Rather than trying to keep a handle on all of those resources manually, it’s better to leverage specific tooling (like Helmsman) to provide consistency, efficiency and a much better developer experience!</p>
<p>Helmsman is just one approach to managing your Kubernetes environments, and is a good entryway to more GitOps-style approaches such as <a href="https://fluxcd.io/">FluxCD</a> or <a href="https://argo-cd.readthedocs.io/en/stable/">ArgoCD</a> (among others).</p>
<p>You can see all the code for an example service scenario like the one described in this post <a href="https://github.com/gwolverson/helmsman-demo">over on my github</a>.</p>
<p><a href="https://capgemini.github.io/kubernetes/introduction-to-helmsman/">Navigating Kubernetes Deployments With Helmsman</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on May 22, 2023.</p>https://capgemini.github.io/development/preparing-for-devoxx2023-05-10T00:00:00+01:002023-05-10T00:00:00+01:00Sarah Saundershttps://capgemini.github.io/alumni#author-sarah-saunders
<p>It’s a big deal preparing for sponsorship of a conference. Each year, the Cloud Development team at Capgemini are proud to sponsor <a href="https://www.devoxx.co.uk">Devoxx UK</a>, the leading developer conference in Britain. What does this involve? A lot more creativity than you might think!</p>
<h2 id="theme">Theme</h2>
<p>Capgemini is a huge global company with global annual goals and missions. We are a relatively small team of 100 or so UK-based software engineers, so aligning our goals with the wider company can be something of a challenge to start with! This year, Capgemini’s purpose was a good start for us: “Unleashing human energy through technology for an inclusive and sustainable future”. The Capgemini brand platform of “Get the future you want” is also good for Devoxx - part of the reason we’re sponsoring the conference is to remind people that we’re always recruiting for new talent and can offer a great place for a software engineer to work and develop themselves.</p>
<p>We knew we wanted to focus on sustainability as it’s a topic close to all our hearts as well as being a <a href="https://www.capgemini.com/be-en/about-us/csr/environmental-sustainability">Capgemini goal</a>. But we didn’t want to make people feel depressed or personally responsible - we wanted to inspire them to make a difference. I recently attended a Capgemini “<a href="https://climatefresk.org/">Climate Fresk</a>” workshop, and whilst it was educational, it was mainly terrifying! The concept of a “Fresk” is that it works rather like a round-table discussion: there are a number of prompt cards around environmental topics such as carbon dioxide levels, deforestation, rising sea levels, weather pattern disruption, sea water acidity levels, plant and animal diversity, CFCs, forest fires and population migration. The cards could be positioned to show which events affected which other topics, and so during the hour we built up a map of the effects of fossil fuel extraction on our planet. Most of the information I had seen before, but I did learn and relearn a couple of things - for example, as more CO<sub>2</sub> is absorbed by the ocean, the pH of the sea water falls and the sea becomes more acidic. This makes it difficult for small sea creatures to form shells, because their shells are made of calcium carbonate, which dissolves in more acidic waters. These tiny creatures are the base of the food chain in the sea, and so depletion in their numbers has a massive effect on the population of larger sea creatures. I came out of the Fresk feeling informed but scared - not an emotion we want people to associate with Capgemini! But neither do we want to shy away from the problems that burning fossil fuels is causing. Instead, we decided to focus on the positives.</p>
<p>This year’s <a href="https://www.capgemini.com/insights/research-library/technovision-2023/">TechnoVision</a> report contains some powerful and brave messages for Capgemini employees - for example, “Do more with less”. This is very often not the easy route, and for Capgemini perhaps not the most profitable route either: it takes courage to tell your clients that the best route forward is not to build any software at all! But we need to recognise that sometimes this is the right answer. Wasteful technology is something we can all do without.</p>
<p>Taking that as a starting point, what about useful tech? How can we make it more carbon efficient? We know that great strides have been made towards generating carbon neutral electricity, for example from renewable energy sources such as wind, waves and sun. We know that many European countries generate quantities of electricity that way when they can - but it depends on factors such as hours of sunlight, wind speed, river levels. How can we know whether the electricity our application farms are using is generated from renewable sources? Turns out there is a way. <a href="https://www.energymonitor.ai/">The Energy Monitor website</a> collects data from 27 European countries (many of which host data centres for major cloud providers such as Amazon and Azure) so that you can see, for a given point in time, which country is producing the most electricity from renewables. What if you could use this information to move your applications to the data centre using the most “green” electricity? Now that’s inspiring!</p>
<p>A lot of what we do as Cloud developers gives us opportunities to make electricity savings by reducing the amount of compute power we use. The fantastic advantage of infrastructure-as-code is that you can safely tear down huge proportions of your infrastructure when you’re not using it - for example, only have your build pipelines running when you actually have something to build. We are contributing to a Capgemini “Green Book” of practices that we can share with our clients, to help them reduce their carbon footprints with minimal impact to their businesses.</p>
<h2 id="swag">Swag</h2>
<p>OK so now we have a phrase to print on our stand (“Get the future you want”), and a theme. What next? We want something to give away that is useful and reusable, that isn’t plastic, that’ll remind people of meeting us. We need some cotton T-shirts! <a href="https://capgemini.github.io/development/the-efficient-cloud-era/">Last year</a> we brought along 40 or so of our Capgemini / Ada Lovelace “I am a Role Model” T-shirts and they went like hot cakes. Unfortunately the million tonnes of paper notepads and sweets that we brought along didn’t go down so well - turns out devs don’t write much and are rather healthier than we’d given them credit for! So we know that T-shirts are the way to go. But what picture can we put on them that developers attending the conference will want to wear?</p>
<p>Recently, Capgemini opened a new “Delivery Centre” in our Holborn office. This is a step back to teams working face to face, appreciating the value of getting people together. For the centre opening, our resident artist had drawn some fantastic images of an octopus busily multi-tasking that had got a lot of attention. Octopuses have started popping up all over the delivery centre and we thought we’d get involved! So we stole an octopus for our T-shirt.</p>
<p><img src="/images/2023-04-28-preparing-for-devoxx/Devoxx_Tshirt_Blank.jpeg" alt="T-shirt design with octopus" /></p>
<p>All we needed now was some text to go with it. I turned to my team to get the best octopus puns and they didn’t disappoint.</p>
<ul>
<li>“Be INK-redibly productive”</li>
<li>“Don’t be a sucker! Develop at octo-speed”</li>
<li>“8-bit computing”</li>
<li>“Octo-pushing delivery forward”</li>
<li>“Un-LIMB-ited potential”</li>
<li>“Kraken’ on with development”</li>
<li>“This is what beak performance looks like”</li>
<li>“Be an octo-coder”</li>
</ul>
<p>OK thanks guys. Enough already. We decided that the un-LIMB-ited potential slogan fitted really well with our concept of “Get the future you want”, and so our T-shirt is a wrap! Now we can start to look forward to the talks on offer at the conference and up-skilling ourselves for the year ahead. Roll on May 10th!</p>
<p><a href="https://capgemini.github.io/development/preparing-for-devoxx/">Preparing for Devoxx</a> was originally published by Capgemini at <a href="https://capgemini.github.io">Capgemini Software Engineering</a> on May 10, 2023.</p>